Scala: How to split up a dataframe by row number?
I want to split up a dataframe of 2.7 million rows into small dataframes of 100,000 rows each, so I end up with about 27 dataframes, which I want to store as CSV files too.
I have already taken a look at partitionBy and groupBy, but I don't need to worry about any conditions, except that the rows have to be ordered by date. I am trying to write my own code to make this work, but if you know of some Scala (Spark) functions I could use, that would be great!
Thanks for any suggestions!
You could use zipWithIndex from the RDD API (there is no equivalent in SparkSQL, unfortunately), which maps each row to an index ranging from 0 to rdd.count - 1.
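For reference, here is a minimal sketch (purely illustrative, not from the original answer) of what zipWithIndex produces on a plain RDD:
// a tiny RDD just to illustrate zipWithIndex: each element gets paired with its index
val letters = spark.sparkContext.parallelize(Seq("a", "b", "c", "d"))
letters.zipWithIndex.collect()
// Array((a,0), (b,1), (c,2), (d,3))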
So, assuming your dataframe is already sorted accordingly, you would need to go back and forth between the two APIs as follows:
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import spark.implicits._ // needed for the 'id column syntax outside the spark-shell

// creating mock data
val df = spark.range(100).withColumn("test", 'id % 10)

// zipping the data: integer division of the row index by the chunk size yields the chunk id
val partitionSize = 5 // I use 5 here, but you can use 100000 in your case
val zipped_rdd = df.rdd
  .zipWithIndex.map { case (row, id) =>
    Row.fromSeq(row.toSeq :+ id / partitionSize)
  }

// back to a DataFrame, appending the new "partition" column to the schema
val newField = StructField("partition", LongType, false)
val zipped_df = spark
  .createDataFrame(zipped_rdd, df.schema.add(newField))
Let's have a look at the data: we have a new column called partition, which corresponds to the way you want to split your data.
zipped_df.show(15) // 5 rows per partition
+---+----+---------+
| id|test|partition|
+---+----+---------+
| 0| 0| 0|
| 1| 1| 0|
| 2| 2| 0|
| 3| 3| 0|
| 4| 4| 0|
| 5| 5| 1|
| 6| 6| 1|
| 7| 7| 1|
| 8| 8| 1|
| 9| 9| 1|
| 10| 0| 2|
| 11| 1| 2|
| 12| 2| 2|
| 13| 3| 2|
| 14| 4| 2|
+---+----+---------+
// using partitionBy to write the data
zipped_df.write
.partitionBy("partition")
.csv(".../testPart.csv")