
ELFK 7.x Cluster Deployment


Introduction to ELFK

ELK is the acronym of three products from Elastic: ElasticSearch, Logstash, and Kibana. The Elastic Stack extends this to ElasticSearch, Logstash, Kibana, and Beats. Together these open-source components form a complete, well-integrated solution that fits many scenarios, and ELK is currently a mainstream choice for log systems.

ElasticSearch   A JSON-based distributed search and analytics engine. As the core of ELK it centrally
                stores data and is used to search, analyze, and store logs. It is distributed, scales
                horizontally, supports automatic discovery, and shards indices automatically

Logstash        A dynamic data collection pipeline that ingests data over TCP/UDP/HTTP (and can also
                accept data shipped by Beats), then enriches it or extracts fields. Used here to collect
                logs and parse them into JSON for ElasticSearch

Kibana          A data visualization component that renders the collected data (reports, charts) and
                provides a UI for configuring and managing ELK

Beats           A family of lightweight, single-purpose data shippers that send data from many machines
                to Logstash or ElasticSearch

X-Pack          An extension pack that adds security, alerting, monitoring, reporting, and graph
                features to the Elastic Stack; its advanced features require a paid license

Official site: https://www.elastic.co/cn/ ; Chinese-language guide: https://elkguide.elasticsearch.cn/

Download older releases of the ELK components:

https://www.elastic.co/downloads/past-releases

Grok regular expression reference:

https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

Latest version at the time of writing: 7.10.1


Environment Preparation

Complete the environment preparation on all nodes.

es master/data node + kibana + head         192.168.30.132

es master/data node + logstash              192.168.30.133

es master/data node + filebeat              192.168.30.134

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0

cat >> /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
EOF

[ `grep 'vm.max_map_count' /etc/sysctl.conf |wc -l` -eq 0 ] && echo 'vm.max_map_count=655360' >> /etc/sysctl.conf

sysctl -p

mkdir /software && cd /software             #place the JDK archive under /software
tar xf jdk-11.0.9_linux-x64_bin.tar.gz && mv jdk-11.0.9 /usr/local/jdk

vim /etc/profile

JAVA_HOME=/usr/local/jdk
PATH=$PATH:$JAVA_HOME/bin               #JDK 11 no longer ships jre/, dt.jar, or tools.jar, so jre paths and CLASSPATH are unnecessary
export JAVA_HOME PATH

source !$

java -version


elasticsearch-head

Complete the elasticsearch-head deployment on the head node.

curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker.repo

yum makecache fast

yum install -y docker-ce

systemctl enable docker && systemctl start docker

docker pull mobz/elasticsearch-head:5

docker run -d --name head -p 9100:9100 mobz/elasticsearch-head:5

docker ps

CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
c0ecb506aeb4   mobz/elasticsearch-head:5   "/bin/sh -c 'grunt s…"   3 seconds ago   Up 2 seconds   0.0.0.0:9100->9100/tcp   head

Browse to ip:9100



elasticsearch

Complete the elasticsearch cluster deployment on the es nodes.

rpm -ivh https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.1-x86_64.rpm

vim /etc/elasticsearch/elasticsearch.yml

192.168.30.132

cluster.name: elk               #cluster name; must be identical across the cluster
node.name: elk-132              #this node's name
node.master: true               #eligible to become master
node.data: true                 #holds data
path.data: /var/lib/elasticsearch               #data path
path.logs: /var/log/elasticsearch               #log path
bootstrap.memory_lock: false                #do not lock memory; setting true errors out here
network.host: 192.168.30.132                #listen IP
http.port: 9200                 #http port
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.30.132", "192.168.30.133", "192.168.30.134"]                #master-eligible seed hosts
cluster.initial_master_nodes: ["elk-132", "elk-133", "elk-134"]             #initial master-eligible nodes
discovery.zen.minimum_master_nodes: 2               #legacy 6.x setting, ignored in 7.x
gateway.recover_after_nodes: 3
http.cors.enabled: true             #allow the head plugin to access es
http.cors.allow-origin: "*"

192.168.30.133

cluster.name: elk               #cluster name; must be identical across the cluster
node.name: elk-133              #this node's name
node.master: true               #eligible to become master
node.data: true                 #holds data
path.data: /var/lib/elasticsearch               #data path
path.logs: /var/log/elasticsearch               #log path
bootstrap.memory_lock: false                #do not lock memory; setting true errors out here
network.host: 192.168.30.133                #listen IP
http.port: 9200                 #http port
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.30.132", "192.168.30.133", "192.168.30.134"]                #master-eligible seed hosts
cluster.initial_master_nodes: ["elk-132", "elk-133", "elk-134"]             #initial master-eligible nodes
discovery.zen.minimum_master_nodes: 2               #legacy 6.x setting, ignored in 7.x
gateway.recover_after_nodes: 3
http.cors.enabled: true             #allow the head plugin to access es
http.cors.allow-origin: "*"

192.168.30.134

cluster.name: elk               #cluster name; must be identical across the cluster
node.name: elk-134              #this node's name
node.master: true               #eligible to become master
node.data: true                 #holds data
path.data: /var/lib/elasticsearch               #data path
path.logs: /var/log/elasticsearch               #log path
bootstrap.memory_lock: false                #do not lock memory; setting true errors out here
network.host: 192.168.30.134                #listen IP
http.port: 9200                 #http port
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.30.132", "192.168.30.133", "192.168.30.134"]                #master-eligible seed hosts
cluster.initial_master_nodes: ["elk-132", "elk-133", "elk-134"]             #initial master-eligible nodes
discovery.zen.minimum_master_nodes: 2               #legacy 6.x setting, ignored in 7.x
gateway.recover_after_nodes: 3
http.cors.enabled: true             #allow the head plugin to access es
http.cors.allow-origin: "*"

When machine load is not very high, it is recommended to make every node both master-eligible and a data node, keeping the cluster highly available.
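If load later becomes a concern, roles can instead be split across dedicated nodes. A sketch (assumed values, not part of the original setup) of a 7.x node that is master-eligible only:

```yaml
# dedicated master-eligible node: joins elections but stores no data
node.master: true
node.data: false
node.ingest: false
```

Such a node still needs the cluster.name, discovery, and network settings shown above.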

systemctl enable elasticsearch && systemctl start elasticsearch

curl '192.168.30.132:9200/_cluster/health?pretty'               #check cluster health

{
  "cluster_name" : "elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
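For use in scripts, the status field can be pulled straight out of that JSON. A minimal sketch using grep/cut, with a shortened sample response embedded in place of a live `curl -s '192.168.30.132:9200/_cluster/health'` call:

```shell
# extract the "status" field from a _cluster/health response (sample JSON embedded)
health='{"cluster_name":"elk","status":"green","timed_out":false,"number_of_nodes":3}'
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"    # green
```

In practice jq is nicer for this, but grep/cut avoids an extra dependency on the es nodes.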

curl '192.168.30.132:9200/_cat/master?v'                #show the elected master

id                     host           ip             node
oql8XBNFTqSSaiiImwjiEg 192.168.30.133 192.168.30.133 elk-133

curl '192.168.30.132:9200/_cluster/state?pretty'                #show detailed cluster state



kibana

Complete the kibana deployment on the kibana node.

rpm -ivh https://artifacts.elastic.co/downloads/kibana/kibana-7.10.1-x86_64.rpm

vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.30.132"
elasticsearch.hosts: ["http://192.168.30.132:9200","http://192.168.30.133:9200","http://192.168.30.134:9200"]
kibana.index: ".kibana"
elasticsearch.username: elastic
elasticsearch.password: changeme
i18n.locale: "zh-CN"

systemctl enable kibana && systemctl start kibana

Browse to ip:5601



filebeat

Complete the filebeat deployment on the filebeat node.

rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.1-x86_64.rpm

yum install -y nginx && systemctl start nginx

vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  tail_files: true
  backoff: "1s"
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access
  fields_under_root: true
  multiline.pattern: '\d+\.\d+\.\d+\.\d+'
  multiline.negate: true
  multiline.match: after    
output.logstash:
  hosts: ["192.168.30.133:5040"]
  enabled: true
  worker: 1
  compression_level: 3
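The three multiline settings above make filebeat start a new event at any line containing an IPv4 address and append non-matching lines (continuations) to the previous event. The pattern is an ordinary regex and can be sanity-checked in the shell; the log lines below are fabricated, and `[0-9]` replaces `\d` because POSIX `grep -E` does not support `\d`:

```shell
# filebeat's multiline.pattern, rewritten in POSIX regex form
pattern='[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
new_event='192.168.30.1 - - [29/Dec/2020:10:00:00 +0800] "GET / HTTP/1.1" 200 612'
continuation='    at some continuation line'
echo "$new_event"    | grep -qE "$pattern" && echo "starts a new event"
echo "$continuation" | grep -qE "$pattern" || echo "appended to the previous event"
```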

systemctl enable filebeat && systemctl start filebeat


logstash

Complete the logstash deployment on the logstash node.

rpm -ivh https://artifacts.elastic.co/downloads/logstash/logstash-7.10.1-x86_64.rpm

vim /etc/logstash/logstash.yml

http.host: "192.168.30.133"
http.port: 9600
path.data: /var/lib/logstash
pipeline.ordered: auto
path.logs: /var/log/logstash

vim /etc/logstash/conf.d/nginx.conf

input {
    beats {
        port => 5040
    }
}

filter {
    if [type] == "nginx_access" {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
    }
}

output {
    if [type] == "nginx_access" {
        elasticsearch {
            hosts => ["192.168.30.132:9200","192.168.30.133:9200","192.168.30.134:9200"]
            user => "elastic"
            password => "changeme"
            index => "nginx-log"
        }
    }
}
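`%{COMBINEDAPACHELOG}` parses a combined-format access line (the default nginx/Apache format) into fields such as clientip, verb, response, and bytes. A rough feel for the split can be had with awk on a fabricated sample line; this mimics only the simple whitespace-delimited fields, not the full grok pattern:

```shell
# a fabricated combined-format access log line
line='192.168.30.1 - - [29/Dec/2020:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.61.1"'
clientip=$(echo "$line" | awk '{print $1}')            # grok field: clientip
verb=$(echo "$line" | awk '{print $6}' | tr -d '"')    # grok field: verb
response=$(echo "$line" | awk '{print $9}')            # grok field: response
echo "$clientip $verb $response"    # 192.168.30.1 GET 200
```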

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf -t              #validate the config file

systemctl enable logstash && systemctl start logstash

Request the nginx page a few times to generate log entries, then:

curl '192.168.30.132:9200/_cat/indices?v'

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2020.12.29-000001      18ZOpN6mSiqZYBNzua0rFg   1   1          0            0       416b           208b
green  open   .apm-custom-link                RJ6l1ijjSt2msGVyKQ0yLA   1   1          0            0       416b           208b
green  open   nginx-log                       L4MZcw6BSMCZzbItjlElVg   1   1         11            0    248.4kb        124.2kb
green  open   .kibana_task_manager_1          QrcOB_pNT9Cj_6yGz4-pfg   1   1          5          978    292.4kb        134.9kb
green  open   .apm-agent-configuration        DCbIKU_SSX-nToV5kSxJpA   1   1          0            0       416b           208b
green  open   .kibana-event-log-7.10.1-000001 P8WJ1ayXSPmKUbVkXTA6lw   1   1          1            0     11.2kb          5.6kb
green  open   .kibana_1                       AHRdWUl6QgO6inJ3ypekSg   1   1         45           45      8.5mb          4.2mb
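Because _cat output is whitespace-delimited, individual columns script cleanly with awk. Here docs.count (the 7th column) is read from the nginx-log row, with that row from the listing above embedded as sample data:

```shell
# docs.count is the 7th whitespace-delimited column of _cat/indices output
row='green  open   nginx-log   L4MZcw6BSMCZzbItjlElVg   1   1   11   0   248.4kb   124.2kb'
docs=$(echo "$row" | awk '{print $7}')
echo "$docs"    # 11
```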

Check in head:


In Kibana, open Stack Management → Index Patterns and create an index pattern for the nginx-log index.



x-pack

Complete the x-pack enablement on the es nodes.

cd /usr/share/elasticsearch/

/usr/share/elasticsearch/bin/elasticsearch-certutil ca                #press Enter at each prompt
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12                #press Enter at each prompt
cp elastic-certificates.p12 /etc/elasticsearch/
scp elastic-certificates.p12 192.168.30.133:!$ ; scp elastic-certificates.p12 192.168.30.134:!$

If you set a password when generating the certificates, add it to the elasticsearch keystore:

/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password

/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

vim /etc/elasticsearch/elasticsearch.yml                #append the following

http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type             #required for head
xpack.security.enabled: true                #enable x-pack security
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

chmod 660 /etc/elasticsearch/elastic-certificates.p12

systemctl restart elasticsearch

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive              #set your own passwords; run on any one node

Enter password for [elastic]: elk-2021
Reenter password for [elastic]: elk-2021
Enter password for [apm_system]: elk-2021
Reenter password for [apm_system]: elk-2021
Enter password for [kibana]: elk-2021
Reenter password for [kibana]: elk-2021
Enter password for [logstash_system]: elk-2021
Reenter password for [logstash_system]: elk-2021
Enter password for [beats_system]: elk-2021
Reenter password for [beats_system]: elk-2021
Enter password for [remote_monitoring_user]: elk-2021
Reenter password for [remote_monitoring_user]: elk-2021

curl -u elastic:elk-2021 -XGET 'http://192.168.30.132:9200/_cluster/health?pretty'

{
  "cluster_name" : "elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

vim /etc/kibana/kibana.yml

server.port: 5601
server.host: "192.168.30.132"
elasticsearch.hosts: ["http://192.168.30.132:9200","http://192.168.30.133:9200","http://192.168.30.134:9200"]
kibana.index: ".kibana"
elasticsearch.username: elastic
elasticsearch.password: elk-2021
i18n.locale: "zh-CN"

systemctl restart kibana

Refresh the Kibana page; a login is now required. Sign in with account elastic and password elk-2021.


The basic license in ELFK 7.x never expires, so unlike 6.x there is no need to crack it.

After x-pack is enabled, accessing head requires the account and password:

http://192.168.30.132:9100/?auth_user=elastic&auth_password=elk-2021
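Those auth_user/auth_password query parameters supply head with the Basic auth credentials it needs for elasticsearch, the same mechanism `curl -u` uses. Since Basic auth is just base64 of user:password, the header can also be built by hand:

```shell
# build the Authorization header that `curl -u elastic:elk-2021` would send
token=$(printf 'elastic:elk-2021' | base64)
echo "Authorization: Basic $token"    # Authorization: Basic ZWxhc3RpYzplbGstMjAyMQ==
```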


vim /etc/logstash/logstash.yml

http.host: "192.168.30.133"
http.port: 9600
path.data: /var/lib/logstash
pipeline.ordered: auto
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: elk-2021
xpack.monitoring.elasticsearch.hosts: ["http://192.168.30.132:9200","http://192.168.30.133:9200","http://192.168.30.134:9200"]
xpack.monitoring.collection.interval: 10s

When logstash collects logs and its output section writes to es, the account and password must be updated there as well, or the output will fail.

output {
    elasticsearch {
        hosts => ["192.168.30.132:9200","192.168.30.133:9200","192.168.30.134:9200"]
        user => "elastic"
        password => "elk-2021"
    }}

systemctl restart logstash

When filebeat collects logs and its output section writes to es, the account and password must be updated there as well, or the output will fail.

vim /etc/filebeat/filebeat.yml

output.elasticsearch:
  hosts: ["192.168.30.132:9200","192.168.30.133:9200","192.168.30.134:9200"]
  username: "elastic"
  password: "elk-2021"

systemctl restart filebeat

At this point, the ELFK 7.x cluster deployment is complete.


Source: https://blog.51cto.com/u_10272167/2730549