python - Spark Streaming: parallel reads from Kafka cause duplicate data
Using the code below, I created 6 input DStreams that read from a 6-partition Kafka topic with the direct approach. Even though I specify the same group ID for every stream, I receive each record 6 times. If I create only 3 DStreams, each record is duplicated 3 times, and so on.
from pyspark.streaming.kafka import KafkaUtils

numStreams = 6
kafkaStreams = [KafkaUtils.createDirectStream(ssc, ["send6partitions"], {
        "metadata.broker.list": brokers,
        "fetch.message.max.bytes": "20971520",
        "spark.streaming.blockInterval": "2000ms",
        "group.id": "the-same"},
    valueDecoder=decodeValue, keyDecoder=decode_key) for _ in range(numStreams)]
kvs = ssc.union(*kafkaStreams)
What am I doing wrong here?
Solution:
With the direct approach, you should not create multiple DStreams for a single topic. As the Spark Streaming + Kafka integration guide explains:
Simplified Parallelism: No need to create multiple input Kafka streams and union them. With directStream, Spark Streaming will create as many RDD partitions as there are Kafka partitions to consume, which will all read data from Kafka in parallel. So there is a one-to-one mapping between Kafka and RDD partitions, which is easier to understand and tune.
So create just one DStream, and Spark will consume all of the Kafka partitions in parallel :)
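For illustration, a minimal sketch of the single-stream setup (the broker address is a placeholder, and the default UTF-8 decoders are used instead of the question's custom decodeValue/decode_key):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="single-direct-stream")
ssc = StreamingContext(sc, 2)  # 2-second batch interval (illustrative)

brokers = "localhost:9092"  # placeholder broker list

# One direct stream is enough: Spark creates one RDD partition per
# Kafka partition of "send6partitions" and reads them all in parallel,
# so there is no need to create and union multiple streams.
kvs = KafkaUtils.createDirectStream(
    ssc,
    ["send6partitions"],
    {"metadata.broker.list": brokers,
     "fetch.message.max.bytes": "20971520"})

kvs.pprint()
ssc.start()
ssc.awaitTermination()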
Tags: apache-spark, apache-kafka, streaming, python  Source: https://codeday.me/bug/20191112/2024093.html