Aggregating multiple columns with a custom function in Spark

Problem description:

I was wondering if there is some way to specify a custom aggregation function for Spark DataFrames over multiple columns.

I have a table of the type (name, item, price), like this:

john | tomato | 1.99
john | carrot | 0.45
bill | apple  | 0.99
john | banana | 1.29
bill | taco   | 2.59

I would like to aggregate each item and its cost for each person into a list like this:

john | (tomato, 1.99), (carrot, 0.45), (banana, 1.29)
bill | (apple, 0.99), (taco, 2.59)

Is this possible with DataFrames? I recently learned about collect_list, but it appears to work on only one column.
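
For a single column, collecting works fine, e.g. (a sketch, assuming the table above is loaded as a DataFrame df with columns name, item, price):

import org.apache.spark.sql.functions.collect_list

// Gathers only the item names per person; the prices are lost
df.groupBy("name").agg(collect_list("item") as "items")
// john -> [tomato, carrot, banana], bill -> [apple, taco]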

The easiest way to do this with DataFrames is to first collect the two lists separately, and then use a UDF to zip them together. Something like:

import org.apache.spark.sql.functions.{col, collect_list, udf}
import sqlContext.implicits._

// UDF that zips the two collected lists into a list of (food, price) pairs
val zipper = udf[Seq[(String, Double)], Seq[String], Seq[Double]](_.zip(_))

val df = Seq(
  ("john", "tomato", 1.99),
  ("john", "carrot", 0.45),
  ("bill", "apple", 0.99),
  ("john", "banana", 1.29),
  ("bill", "taco", 2.59)
).toDF("name", "food", "price")

// Collect food and price into parallel lists per name, then zip them together
val df2 = df.groupBy("name").agg(
  collect_list(col("food")) as "food",
  collect_list(col("price")) as "price"
).withColumn("food", zipper(col("food"), col("price"))).drop("price")

df2.show(false)
// +----+---------------------------------------------+
// |name|food                                         |
// +----+---------------------------------------------+
// |john|[[tomato,1.99], [carrot,0.45], [banana,1.29]]|
// |bill|[[apple,0.99], [taco,2.59]]                  |
// +----+---------------------------------------------+
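
If you are on Spark 2.0 or later, the UDF can be avoided entirely by collecting a struct of both columns in a single pass. A minimal sketch, assuming the same df as above (struct and collect_list are standard Spark SQL functions):

import org.apache.spark.sql.functions.{collect_list, struct}

// Collect (food, price) structs per name directly; no intermediate lists or UDF
val df3 = df.groupBy("name")
  .agg(collect_list(struct("food", "price")) as "food")

This yields the same per-name list of (food, price) pairs, since collect_list on Spark 2.x supports struct-typed columns.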