
Spark AggregateByKey From pySpark to Scala

Problem description:

I am transferring all my code over to Scala, and I have a function in PySpark that I have little clue how to translate to Scala. Can anybody help and provide an explanation?

The PySpark looks like this:

.aggregateByKey((0.0, 0.0, 0.0),
    lambda (sum, sum2, count), value: (sum + value, sum2 + value**2, count + 1.0),
    lambda (suma, sum2a, counta), (sumb, sum2b, countb): (suma + sumb, sum2a + sum2b, counta + countb))

Edit:

What I have so far is:

val dataSusRDD = numFilterRDD.aggregateByKey((0,0,0), (sum, sum2, count) =>
But what I am having trouble understanding is how to write the rest of this in Scala: how the first function folds each value into the accumulator (sum + value, etc.), how that result is then passed into the second, combining function, and what the proper syntax is for all of it. It's hard to state my trouble coherently; it mostly comes down to not knowing Scala well enough to tell when to use braces versus parentheses versus commas.

Answer:

As @paul suggests, using named functions might make it a bit easier to understand what is going on.

val initialValue = (0.0, 0.0, 0.0)  // zero value: (sum, sum of squares, count)
// seqOp: fold a single value into the running accumulator within a partition
def seqOp(u: (Double, Double, Double), v: Double) = (u._1 + v, u._2 + v * v, u._3 + 1)
// combOp: merge two accumulators coming from different partitions
def combOp(u1: (Double, Double, Double), u2: (Double, Double, Double)) = (u1._1 + u2._1, u1._2 + u2._2, u1._3 + u2._3)
rdd.aggregateByKey(initialValue)(seqOp, combOp)
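
Note that in Scala, aggregateByKey takes two separate argument lists: the zero value goes in the first set of parentheses and the (seqOp, combOp) pair goes in the second, which is why the call reads aggregateByKey(initialValue)(seqOp, combOp). For comparison, here is a minimal sketch of the same thing written with inline anonymous functions, which stays close to the shape of the original PySpark lambdas; it assumes rdd is an RDD[(String, Double)] and the names are only illustrative. It also shows how the aggregated (sum, sum of squares, count) tuple can be turned into a per-key mean and variance with mapValues:

// Sketch only: assumes an existing RDD[(String, Double)] called `rdd`.
val aggregated = rdd.aggregateByKey((0.0, 0.0, 0.0))(
  (acc, v) => (acc._1 + v, acc._2 + v * v, acc._3 + 1.0),   // seqOp, matches the first PySpark lambda
  (a, b) => (a._1 + b._1, a._2 + b._2, a._3 + b._3)         // combOp, matches the second PySpark lambda
)

// Derive per-key mean and variance from (sum, sum of squares, count).
val stats = aggregated.mapValues { case (sum, sum2, count) =>
  val mean = sum / count
  (mean, sum2 / count - mean * mean)
}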