Big Data Technology: Kafka
Chapter 6: Integrating Flume with Kafka
6.1 Simple Implementation
1) Configure Flume (jobs/flume-kafka.conf)
# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/data/flume.log
# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2) Start a Kafka consumer
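For example, the console consumer script that ships with Kafka, run from the Kafka installation directory and subscribed to the topic configured on the sink:
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first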
3) From the Flume root directory, start Flume
$ bin/flume-ng agent -c conf/ -n a1 -f jobs/flume-kafka.conf
4) Append data to /opt/module/data/flume.log and watch what the Kafka consumer receives
$ echo hello >> /opt/module/data/flume.log
6.2 Data Separation
0) Requirement: route the data collected by Flume to different Kafka topics based on content:
log data containing wolffy goes to the Kafka topic first;
log data containing root goes to the topic second;
all other data goes to the topic third.
The Kafka sink routes each event to the topic named in its "topic" header, falling back to the configured default topic when the header is absent, so a custom interceptor that sets this header does the routing.
1) Write the Flume Interceptor
package com.wolffy.kafka.flumeInterceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class FlumeKafkaInterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        // 1. Get the event headers
        Map<String, String> headers = event.getHeaders();
        // 2. Get the event body and set the "topic" header based on its content;
        //    the Kafka sink sends the event to the topic named in this header.
        String body = new String(event.getBody(), StandardCharsets.UTF_8);
        if (body.contains("wolffy")) {
            headers.put("topic", "first");
        } else if (body.contains("root")) {
            headers.put("topic", "second");
        }
        // Events with no "topic" header go to the sink's default topic (third).
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
    }

    public static class MyBuilder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new FlumeKafkaInterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
2) Package the interceptor into a JAR and upload it to the lib directory of the Flume installation
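A minimal packaging sketch, assuming a Maven project and a Flume installation at /opt/module/flume (both the install path and the JAR name here are illustrative):
$ mvn clean package
$ cp target/flume-kafka-interceptor-1.0.jar /opt/module/flume/lib/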
3) Configure Flume
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 6666
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = third
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
# Interceptor
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.wolffy.kafka.flumeInterceptor.FlumeKafkaInterceptor$MyBuilder
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
4) Start Kafka consumers
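For example, run one console consumer per topic in three separate terminals, from the Kafka installation directory:
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic second
$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic third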
5) From the Flume root directory, start Flume
$ bin/flume-ng agent -c conf/ -n a1 -f jobs/flume-kafka.conf
6) Write data to port 6666 and check which Kafka consumer receives it
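For example, with netcat (assuming it is installed and Flume runs on the same host):
$ nc localhost 6666
hello wolffy
hello root
hello world
The wolffy line should arrive on the first topic's consumer, the root line on second, and the last line on third.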