|NO.Z.00012|——————————|BigDataEnd|——|Hadoop&PB级数仓.V04|---------------------------------------|PB数仓.v
[BigDataHadoop: Hadoop & PB-Scale Data Warehouse.V04] [BigDataHadoop. PB-Scale Enterprise E-commerce Offline Data Warehouse][|Chapter 2|Hadoop|Member Activity Analysis: Log Data Collection & Agent Configuration & Flume Configuration|]
1. Agent Configuration
### --- Configure the Flume agent
[root@hadoop02 ~]# vim /data/yanqidw/conf/flume-log2hdfs1.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# taildir source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /data/yanqidw/conf/startlog_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /data/yanqidw/logs/start/.*log
# memorychannel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 2000
# hdfs sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/data/logs/start/%Y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = startlog.
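# DataStream writes the event bodies as plain text instead of a SequenceFile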
a1.sinks.k1.hdfs.fileType = DataStream
# File roll policy: roll by size only, when a file reaches 32 MB
a1.sinks.k1.hdfs.rollSize = 33554432
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.idleTimeout = 0
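# Keeping minBlockReplicas at 1 is a commonly used setting to stop HDFS block
# replication events from triggering extra rolls, so the size-based roll above is honored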
a1.sinks.k1.hdfs.minBlockReplicas = 1
# Number of events flushed to HDFS in one batch
a1.sinks.k1.hdfs.batchSize = 1000
# Use the local time for the %Y-%m-%d path escape
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
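~~~ # For reference: the TAILDIR source tracks its read offsets in the positionFile configured above. A sketch of what it typically contains (the inode/pos values here are illustrative, not captured output):
[root@hadoop02 ~]# cat /data/yanqidw/conf/startlog_position.json
[{"inode":67149959,"pos":283582,"file":"/data/yanqidw/logs/start/start0802.log"}]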
2. Tuning the Flume Configuration
### --- Prepare the environment before starting the agent
~~~ # Inspect the local directory layout
[root@hadoop02 ~]# tree /data/
/data/
└── yanqidw
├── conf
│ ├── flume-log2hdfs1.conf
│ ├── flumetest1.conf
│ └── startlog_position.json
├── jars
├── logs
│ ├── data
│ │ └── start0802.log
│ ├── event
│ ├── source
│ └── start
│ └── start0802.log
└── script
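~~~ # If recreating this layout from scratch, the directories can be made in one step (sketch):
[root@hadoop02 ~]# mkdir -p /data/yanqidw/{conf,jars,script,logs/{data,event,source,start}}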
~~~ # Upload the directory to HDFS
[root@hadoop02 ~]# hdfs dfs -mkdir /usr/data/
[root@hadoop02 ~]# hdfs dfs -put /data/yanqidw/* /usr/data/
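~~~ # To confirm the upload, a recursive listing can be used (output omitted; it should mirror the local tree above):
[root@hadoop02 ~]# hdfs dfs -ls -R /usr/data/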
### --- Start the agent
[root@hadoop02 ~]# flume-ng agent --conf-file /data/yanqidw/conf/flume-log2hdfs1.conf \
-name a1 -Dflume.root.logger=INFO,console
### --- Error
~~~ After dropping a log file into /data/yanqidw/logs/, the agent fails with:
~~~ # By default the Flume JVM heap maximum is only 20 MB, which is far too small and must be increased.
java.lang.OutOfMemoryError: GC overhead limit exceeded
### --- Solution:
~~~ Add the following to $FLUME_HOME/conf/flume-env.sh
[root@hadoop02 ~]# vim $FLUME_HOME/conf/flume-env.sh
export JAVA_OPTS="-Xms4000m -Xmx4000m -Dcom.sun.management.jmxremote"
### --- Restart the agent
~~~ # For flume-env.sh to take effect, the configuration directory must also be passed on the command line with --conf
[root@hadoop02 ~]# flume-ng agent --conf /opt/yanqi/servers/flume-1.9.0/conf \
--conf-file /data/yanqidw/conf/flume-log2hdfs1.conf -name a1 \
-Dflume.root.logger=INFO,console
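~~~ # A quick sanity check (sketch): the agent runs as the org.apache.flume.node.Application process, so jps -v should list the new -Xms4000m/-Xmx4000m values
[root@hadoop02 ~]# jps -v | grep Application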
~~~ # Check the log files on HDFS
[root@hadoop02 ~]# hdfs dfs -ls /user/data/logs/start/2021-09-28/startlog..1632808271900.tmp
-rw-r--r-- 5 root supergroup 256333 2021-09-28 13:51 /user/data/logs/start/2021-09-28/startlog..1632808271900.tmp
### --- Copy the file into /data/yanqidw/logs/start/ again
[root@hadoop02 ~]# cp /data/yanqidw/logs/data/start0802.log /data/yanqidw/logs/start/start2.log
[root@hadoop02 ~]# cp /data/yanqidw/logs/data/start0802.log /data/yanqidw/logs/start/start3.log
[root@hadoop02 ~]# hdfs dfs -ls /user/data/logs/start/2021-09-28/
Found 3 items
-rw-r--r-- 5 root supergroup 33691250 2021-09-28 14:05 /user/data/logs/start/2021-09-28/startlog..1632808271900
-rw-r--r-- 5 root supergroup 33691476 2021-09-28 14:06 /user/data/logs/start/2021-09-28/startlog..1632808271901
-rw-r--r-- 5 root supergroup 58189 2021-09-28 14:06 /user/data/logs/start/2021-09-28/startlog..1632808271902.tmp
### --- Flume memory settings and tuning:
~~~ Size the JVM heap to the log volume; 4 GB or more is typical
~~~ Set -Xms and -Xmx to the same value to avoid the performance cost of heap resizing
~~~ Remaining issue: when writing data, Flume uses the local time and ignores the timestamp inside the log events; a possible workaround is sketched below
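~~~ # One possible workaround (a sketch, not part of this setup): Flume's built-in regex_extractor interceptor can copy a millisecond timestamp from the event body into the timestamp header, so the HDFS sink can partition by event time with useLocalTimeStamp turned off. The "time" field name and the regex below are assumptions about the start-log format.
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_extractor
# assumes each event body contains a 13-digit millisecond field such as "time":1632808271900
a1.sources.r1.interceptors.i1.regex = "time":(\\d{13})
a1.sources.r1.interceptors.i1.serializers = s1
a1.sources.r1.interceptors.i1.serializers.s1.name = timestamp
# with the timestamp header populated, the sink no longer needs the local clock
a1.sinks.k1.hdfs.useLocalTimeStamp = false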
===============================END===============================