Spark: use reduceByKey instead of groupByKey and mapValues

Problem description:

I have an RDD with duplicate values, in the following format:

[ {key1: A}, {key1: A}, {key1: B}, {key1: C}, {key2: B}, {key2: B}, {key2: D}, ..]

I would like the new RDD to have the following output, getting rid of the duplicates:

[ {key1: [A,B,C]}, {key2: [B,D]}, ..]

I have managed to do this with the following code, by putting the values in a set to get rid of the duplicates:

RDD_unique = RDD_duplicates.groupByKey().mapValues(lambda x: set(x))
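Outside Spark, the effect of `groupByKey().mapValues(set)` can be sketched in plain Python (the helper below is illustrative only, not part of the Spark API):

```python
from collections import defaultdict

def group_by_key_to_sets(pairs):
    """Group (key, value) pairs and deduplicate values per key,
    mimicking RDD.groupByKey().mapValues(lambda x: set(x))."""
    grouped = defaultdict(set)
    for key, value in pairs:
        grouped[key].add(value)  # a set silently drops duplicates
    return dict(grouped)

pairs = [("key1", "A"), ("key1", "A"), ("key1", "B"),
         ("key1", "C"), ("key2", "B"), ("key2", "B"), ("key2", "D")]
unique = group_by_key_to_sets(pairs)  # {'key1': {'A', 'B', 'C'}, 'key2': {'B', 'D'}}
```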

But I am trying to achieve this more elegantly, in a single command:

RDD_unique = RDD_duplicates.reduceByKey(...)

I have not managed to come up with a lambda function that gets me the same result with reduceByKey.

You can do it like this:

data = (sc.parallelize([("key1", "A"), ("key1", "A"), ("key1", "B"),
  ("key1", "C"), ("key2", "B"), ("key2", "B"), ("key2", "D")]))

result = (data
  .mapValues(lambda x: {x})                     # wrap each value in a one-element set
  .reduceByKey(lambda s1, s2: s1.union(s2)))    # union the sets pairwise per key
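The same two steps can be traced without a Spark cluster; the `reduce_by_key` helper below is a minimal, single-machine stand-in for `reduceByKey` (it is illustrative, not the Spark implementation, which combines values on each partition before shuffling):

```python
from functools import reduce
from itertools import groupby

def reduce_by_key(pairs, func):
    """Combine the values of each key pairwise with `func`,
    mimicking RDD.reduceByKey on a single machine."""
    ordered = sorted(pairs, key=lambda kv: kv[0])  # bring equal keys together
    return {key: reduce(func, (v for _, v in group))
            for key, group in groupby(ordered, key=lambda kv: kv[0])}

pairs = [("key1", "A"), ("key1", "A"), ("key1", "B"),
         ("key1", "C"), ("key2", "B"), ("key2", "B"), ("key2", "D")]
# Step 1: mapValues — wrap each value in a one-element set.
wrapped = [(k, {v}) for k, v in pairs]
# Step 2: reduceByKey — union the sets pairwise per key.
result = reduce_by_key(wrapped, lambda s1, s2: s1.union(s2))
```

The reason to prefer this over `groupByKey` is that `reduceByKey` merges values on the map side before the shuffle, so duplicates are collapsed early instead of all being sent over the network.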