
Log4j Log Collection and Analysis (Simulated)


Original link: https://class.imooc.com

Under test, create a new directory and mark it as a test source root via Project Structure in the top-right corner of IDEA, then create a LoggerGenerator class:

import org.apache.log4j.Logger;

public class LoggerGenerator {

    private static Logger logger = Logger.getLogger(LoggerGenerator.class.getName());

    public static void main(String[] args) throws InterruptedException {
        int index = 0;
        while (true) {
            // emit one log line per second with an incrementing counter
            Thread.sleep(1000);
            logger.info("current value is " + index++);
        }
    }
}

Also under test, create a resources directory, mark it as test resources, and create log4j.properties:

log4j.rootLogger=INFO,stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n

Run it and the logger prints a new line every second.
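With the ConversionPattern above, the console output should look roughly like this (timestamps will of course differ):

2019-08-10 10:00:00,001 [main] [LoggerGenerator] [INFO] - current value is 0
2019-08-10 10:00:01,004 [main] [LoggerGenerator] [INFO] - current value is 1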

Flume configuration:

#streaming.conf
#name the sources, channels, and sinks of agent1
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=log-sink

#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414

#define channel
agent1.channels.logger-channel.type=memory

#define sink
agent1.sinks.log-sink.type=logger

#wire source and sink to the channel
agent1.sources.avro-source.channels=logger-channel
agent1.sinks.log-sink.channel=logger-channel

Start the agent:

flume-ng agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming.conf \
--name agent1 \
-Dflume.root.logger=INFO,console
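Before wiring log4j into the agent, you can sanity-check that the avro source is listening by sending it a file with Flume's built-in avro client (the file path here is just an example):

echo "hello flume" > /tmp/test.log
flume-ng avro-client \
--conf $FLUME_HOME/conf \
--host localhost \
--port 41414 \
--filename /tmp/test.log

If the agent is up, the event should appear on its console via the logger sink.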

 

To integrate log4j with Flume, modify the log4j configuration file:

log4j.rootLogger=INFO,stdout,flume

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n

log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=hadoop000
log4j.appender.flume.Port=41414
log4j.appender.flume.UnsafeMode=true
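The Log4jAppender class is not part of log4j itself; it ships in Flume's log4j appender artifact, so the client project needs something along these lines on its classpath (the version is an assumption, match it to your Flume install):

<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.6.0</version>
</dependency>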

Next, integrate Flume with Kafka by modifying the Flume configuration file.
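A minimal sketch of such a config, assuming Flume 1.6's Kafka sink and reusing the spark_01 topic and broker list from the Spark Streaming code below (the file name streaming2.conf is made up):

#streaming2.conf
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=kafka-sink

agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414

agent1.channels.logger-channel.type=memory

#Kafka sink: publish events to the spark_01 topic
agent1.sinks.kafka-sink.type=org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic=spark_01
agent1.sinks.kafka-sink.brokerList=192.168.213.8:9092,192.168.213.9:9092,192.168.213.10:9092
agent1.sinks.kafka-sink.batchSize=5
agent1.sinks.kafka-sink.requiredAcks=1

agent1.sources.avro-source.channels=logger-channel
agent1.sinks.kafka-sink.channel=logger-channel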

Finally, connect Kafka to Spark Streaming:

package Kafka2SparkStreaming

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object LowApi_CreateDirectDstream {
  /* low-level (direct) API */
  def main(args: Array[String]): Unit = {
    // appName matches the object name, LowApi_CreateDirectDstream
    val sparkConf: SparkConf = new SparkConf().setAppName("LowApi_CreateDirectDstream").setMaster("local[2]")
    val sc: SparkContext = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    // create the StreamingContext with a 10-second batch interval
    val ssc: StreamingContext = new StreamingContext(sc, Seconds(10))
    ssc.checkpoint("./checkpoint")
    // Kafka parameters
    val kafkaParam: Map[String, String] = Map(
      "metadata.broker.list" -> "192.168.213.8:9092,192.168.213.9:9092,192.168.213.10:9092",
      "group.id" -> "Kafka_Direct")
    // topics to consume; more than one topic can be pulled
    val topics: Set[String] = Set("spark_01")
    // create the direct DStream
    val dstream: InputDStream[(String, String)] = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParam, topics)
    // the message value is the second element of each (key, value) tuple
    val topicData: DStream[String] = dstream.map(_._2)
    // count the records in each batch and print the result
    val result: DStream[Long] = topicData.count()
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
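The KafkaUtils import above comes from the Kafka 0.8 direct-stream connector, so the project needs a dependency along these lines (the version is an assumption; pick the artifact matching your Spark and Scala versions, e.g. spark-streaming-kafka-0-8_2.11 on Spark 2.x):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.11</artifactId>
    <version>1.6.2</version>
</dependency>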

 

Source: https://blog.csdn.net/someInNeed/article/details/99185756