
Chapter 5: Spark Core Programming - RDD Transformation Operators (Key-Value Type) - cogroup


1. Definition

  /*
    * 1. Definition
    *     def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
    *     def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)])
    *             : RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
    *     def cogroup[W1, W2, W3](other1: RDD[(K, W1)],
    *             other2: RDD[(K, W2)],
    *             other3: RDD[(K, W3)],
    *             partitioner: Partitioner)
    *             : RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]
    * 2. Function
    *     Performs a full outer join across two (or more) RDDs of type (K, V) and (K, W),
    *     returning an RDD of (K, (Iterable<V>, Iterable<W>)) in which all elements
    *     sharing the same key are grouped together.
    *
    * 3. How it works
    *     1. Each RDD is grouped by key:
    *              rdd1: key, Iterable<V>
    *              rdd2: key, Iterable<W>
    *              rdd3: key, Iterable<Z>
    *     2. The grouped RDDs are full-outer-joined by key:
    *              rdd1.cogroup(rdd2, rdd3)
    *              Result: key, (Iterable<V>, Iterable<W>, Iterable<Z>)
    * 4. Notes
    *     1. At most three other RDDs can be passed as arguments.
    * */
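
To make the full-outer-join semantics concrete before the three-RDD example below, here is a minimal sketch of the two-RDD variant; the object name cogroupMinimal, the sample data, and the local[*] master are illustrative assumptions, not part of the original example:

  import org.apache.spark.{SparkConf, SparkContext}

  object cogroupMinimal extends App {

    val sc: SparkContext = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("cogroupMinimal"))

    val pairs1 = sc.makeRDD(List((1, "a"), (1, "b"), (2, "c")))
    val pairs2 = sc.makeRDD(List((1, "x"), (3, "y")))

    // Every key from either RDD survives (full-outer-join behavior);
    // a key missing on one side yields an empty Iterable on that side
    pairs1.cogroup(pairs2).collect().foreach(println)
    // Expected output (ordering may vary):
    //   (1,(CompactBuffer(a, b),CompactBuffer(x)))
    //   (2,(CompactBuffer(c),CompactBuffer()))
    //   (3,(CompactBuffer(),CompactBuffer(y)))

    sc.stop()
  }

CompactBuffer is simply the concrete Iterable Spark uses internally; the empty buffers for keys 2 and 3 show the outer-join behavior.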

2. Example

  import org.apache.spark.rdd.RDD
  import org.apache.spark.{SparkConf, SparkContext}

  object cogroupTest extends App {

    val sparkconf: SparkConf = new SparkConf().setMaster("local").setAppName("cogroupTest")

    val sc: SparkContext = new SparkContext(sparkconf)

    // Three pair RDDs with overlapping (and some unique) keys and different partition counts
    val rdd1: RDD[(Int, String)] = sc.makeRDD(List((1, "刘备"), (1, "刘备1"), (2, "张飞"), (3, "关羽"), (4, "曹操"), (5, "赵云"), (7, "孙权")), 2)
    val rdd2: RDD[(Int, String)] = sc.makeRDD(List((1, "蜀国"), (2, "蜀国"), (2, "蜀国1"), (3, "蜀国"), (4, "魏国"), (5, "蜀国"), (6, "吴国")), 3)
    val rdd3: RDD[(Int, String)] = sc.makeRDD(List((1, "蜀国_"), (2, "蜀国_"), (2, "蜀国1_"), (3, "蜀国_"), (4, "魏国_"), (5, "蜀国_"), (16, "吴国_")), 3)

    // cogroup keeps every key that appears in any of the three RDDs;
    // keys 7 (rdd1 only), 6 (rdd2 only), and 16 (rdd3 only) each appear
    // with empty Iterables for the RDDs that lack them
    private val rdd4: RDD[(Int, (Iterable[String], Iterable[String], Iterable[String]))] = rdd1.cogroup(rdd2, rdd3)

    rdd4.collect().foreach(println)

    sc.stop()
  }
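
As a usage note: Spark's own join family (join, leftOuterJoin, rightOuterJoin, fullOuterJoin) is built on top of cogroup, so a full outer join can be derived from it by expanding the per-key Iterables. The helper below is a hypothetical sketch of that idea, not Spark's actual source:

  import org.apache.spark.rdd.RDD
  import scala.reflect.ClassTag

  // Hypothetical helper: derive a full outer join from cogroup by expanding
  // the two per-key Iterables into Option pairs
  def fullOuterViaCogroup[K: ClassTag, V: ClassTag, W: ClassTag](
      left: RDD[(K, V)], right: RDD[(K, W)]): RDD[(K, (Option[V], Option[W]))] =
    left.cogroup(right).flatMapValues[(Option[V], Option[W])] {
      case (vs, ws) if ws.isEmpty => vs.iterator.map(v => (Some(v), None))  // key only on the left
      case (vs, ws) if vs.isEmpty => ws.iterator.map(w => (None, Some(w)))  // key only on the right
      case (vs, ws) => for (v <- vs.iterator; w <- ws.iterator) yield (Some(v), Some(w))  // key on both sides: cross product
    }

Applied to rdd1 and rdd2 from the example above, fullOuterViaCogroup would keep key 7 (only in rdd1) and key 6 (only in rdd2), each with None on the missing side.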
