Converting an RDD to a DataFrame in PySpark
Problem description:
I am trying to convert my RDD into a DataFrame in PySpark.
My RDD:
[(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]
I want the RDD in the form of a DataFrame:
Index  Name  Number
0      abc   [1,2]
1      def   [4,6,7]
What I tried:
rd2=rd.map(lambda x,y: (y, x[0] , x[1]) ).toDF(["Index", "Name" , "Number"])
But I get an error:
An error occurred while calling
z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 62.0 failed 1 times, most recent failure: Lost task 0.0
in stage 62.0 (TID 88, localhost, executor driver):
org.apache.spark.api.python.PythonException: Traceback (most recent
call last):
Can you tell me where I am going wrong?
Update:
rd2=rd.map(lambda x: (x[1], x[0][0] , x[0][1]))
I now have the RDD in the form:
[(0, 'abc', '1,2'), (1, 'def', '4,6,7')]
To convert it to a DataFrame:
rd2.toDF(["Index", "Name" , "Number"])
It still gives me an error:
An error occurred while calling o2271.showString.
: java.lang.IllegalStateException: SparkContext has been shutdown
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2021)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)
Answer:
RDD.map takes a unary function:
rdd.map(lambda x: (x[1], x[0][0], x[0][1])).toDF(["Index", "Name", "Number"])
so you cannot pass a binary one. (The later IllegalStateException: SparkContext has been shutdown usually means the context was already killed by the earlier failure; restart your Spark session and rerun.)
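The failure can be reproduced in plain Python (no Spark required): map calls its function with each element as a single argument, so a two-argument lambda raises a TypeError. This is a hedged sketch of the mechanism; the names binary and unary are illustrative.

```python
# One element of the RDD: (['abc', '1,2'], 0)
element = (['abc', '1,2'], 0)

binary = lambda x, y: (y, x[0], x[1])        # the original, broken version
unary = lambda x: (x[1], x[0][0], x[0][1])   # the fixed, unary version

try:
    binary(element)  # map would call it with ONE argument -> TypeError
except TypeError:
    print("binary lambda fails: map passes the whole element as one argument")

print(unary(element))  # (0, 'abc', '1,2')
```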
If you want to split the Number string into a list:
rdd.map(lambda x: (x[1], x[0][0], x[0][1].split(","))).toDF(["Index", "Name", "Number"])
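You can sanity-check the answer's lambda on plain Python data before running it on the cluster, since rdd.map applies the same unary function to each element. A minimal sketch (the name to_row is illustrative):

```python
# The RDD's contents as a plain Python list
data = [(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]

# The same unary function passed to rdd.map in the answer
to_row = lambda x: (x[1], x[0][0], x[0][1].split(","))

rows = [to_row(x) for x in data]
print(rows)  # [(0, 'abc', ['1', '2']), (1, 'def', ['4', '6', '7'])]
```

Each tuple then maps cleanly onto the Index, Name, and Number columns that toDF expects.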