Efficient alternatives to merge for larger data frames

Problem description:

I am looking for an efficient method (both in terms of computing resources and of learning/implementation effort) to merge two larger data frames (size > 1 million / 300 KB RData file).

"merge" in base R and "join" in plyr appear to use up all my memory, effectively crashing my system.

Example

Load the test data frame

and try

test.merged <- merge(test, test)

test.merged <- join(test, test, type = "all")
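The test data frame linked above is not reproduced here. For a self-contained run, a hypothetical stand-in of similar shape can be built first; the layout below (ten factor columns plus a "TRUE"/"FALSE" character column, and a smaller row count) is an assumption chosen to match the conversion code in the answer that follows:

## Hypothetical stand-in for the linked test data (shape assumed; not the
## original >1 million / 300 KB RData download)
set.seed(1)
n <- 1e5
test <- data.frame(
  lapply(1:10, function(i) factor(sample(1:50, n, replace = TRUE))),
  sample(c("TRUE", "FALSE"), n, replace = TRUE),
  stringsAsFactors = FALSE
)
names(test) <- paste0("V", 1:11)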

-   The following post provides a list of merge and its alternatives:
    How to join data frames in R (inner, outer, left, right)?
    (a base-R sketch of these join types follows this list)

-   The following allows object size inspection:
    https://heuristically.wordpress.com/2010/01/04/r-memory-usage-statistics-variable/
    (a short sketch of such checks also follows this list)
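For quick reference, the join types that post covers map onto the all arguments of base merge(). A minimal sketch, assuming two placeholder data frames df1 and df2 sharing an "id" column:

## Join types with base merge(); df1, df2 and the "id" column are placeholders
inner <- merge(df1, df2, by = "id")                # inner join
outer <- merge(df1, df2, by = "id", all = TRUE)    # full outer join
left  <- merge(df1, df2, by = "id", all.x = TRUE)  # left join
right <- merge(df1, df2, by = "id", all.y = TRUE)  # right join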
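And a minimal sketch of the kind of object-size checks described there, using only standard base/utils functions (test is the data frame from the example above):

## Memory footprint of one input
print(object.size(test), units = "Mb")
## All objects in the workspace, largest first
sort(sapply(ls(), function(x) object.size(get(x))), decreasing = TRUE)
## Trigger garbage collection and report memory in use
gc()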

Anonymous

The obligatory data.table example:

library(data.table)

## Fix up your example data.frame so that the columns aren't all factors
## (not necessary, but shows that data.table can now use numeric columns as keys)
cols <- c(1:5, 7:10)
test[cols] <- lapply(cols, FUN=function(X) as.numeric(as.character(test[[X]])))
test[11] <- as.logical(test[[11]])

## Create two data.tables with which to demonstrate a data.table merge
dt <- data.table(test, key=names(test))
dt2 <- copy(dt)
## Add a distinct non-keyed column to each one, by reference via `:=`
dt[, X := seq_len(.N)]
dt2[, Y := rev(seq_len(.N))]

## Merge them based on the keyed columns (in both cases, all but the last) to ...
## (1) create a new data.table
dt3 <- dt[dt2]
## (2) or (possibly minimizing memory usage) just add column Y from dt2 to dt,
##     by reference; the i. prefix makes it explicit that Y comes from dt2
dt[dt2, Y := i.Y]
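A quick sanity check on the two approaches, assuming the objects built above (tables() is data.table's own summary of all data.tables in memory, with sizes):

## The by-reference update should reproduce the Y column of the full join,
## since both tables share the same key order
identical(dt3$Y, dt$Y)
## Overview of the data.tables in memory
tables()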