Rsyslog + Kafka + ELK (Cluster): Configuring the X-Pack Plugin
Company policy requires that access to Kibana in the ELK stack be protected by authentication.
ELK 7.x already ships with X-Pack integrated, so it only needs to be configured and enabled.
This post builds on the setup from the previous post.
1. Generate the certificates
~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
430a218a4513 elasticsearch:7.7.0 "/tini -- /usr/loc..." 3 hours ago Up 3 hours elasticsearch
1a75637e85d6 kafka:2.12-2.5.0 "start-kafka.sh" 3 hours ago Up 3 hours kafka
03a9aa4a458c zookeeper:3.4.13 "/docker-entrypoin..." 3 hours ago Up 3 hours zookeeper
81ba9a81983c logstash:7.7.0 "/usr/local/bin/do..." 27 hours ago Up 8 minutes 5044/tcp, 0.0.0.0:4560->4560/tcp, 9600/tcp logstash
4bc2de14b119 kibana:7.7.0 "/usr/local/bin/du..." 2 days ago Up 7 minutes 0.0.0.0:5601->5601/tcp kibana
~]# docker stop kibana
~]# docker stop logstash    # stop logstash on all three nodes
~]# docker exec -it elasticsearch bash
elasticsearch]# elasticsearch-certutil ca                              # generate the CA certificate
# press Enter at the prompts to accept the default output file and an empty password
elasticsearch]# elasticsearch-certutil cert --ca elastic-stack-ca.p12  # generate the node certificate
# press Enter at the prompts again to accept the defaults
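The same certificates can also be generated without the interactive prompts; a minimal sketch, assuming an empty password for both files as in the run above:
elasticsearch]# elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass ""
elasticsearch]# elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ca-pass "" --out elastic-certificates.p12 --pass ""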
# The certificates are created in the current directory
# They have to be copied to the other nodes so the cluster members can communicate over TLS. Since /usr/share/elasticsearch/data is already mounted from the host, move the certificates into the data directory first to get them out of the container.
elasticsearch]# mv elastic* data/
elasticsearch]# exit
~]# cd /data/es/data
~]# mkdir /data/es/certs
~]# mv /data/es/data/elastic* /data/es/certs/
Copy the certificate files to the other two nodes:
~]# scp -rp /data/es/certs/elastic* user@10.10.27.126:/data/es/certs/
~]# scp -rp /data/es/certs/elastic* user@10.10.27.127:/data/es/certs/
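Depending on how /data/es is owned on the target hosts, the copied certificates may not be readable by the elasticsearch user inside the container; a hedged example of adjusting ownership on each node (UID 1000 is an assumption based on the official image default):
~]# chown -R 1000:1000 /data/es/certs
~]# chmod 640 /data/es/certs/elastic*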
2. Change the built-in user passwords
(1) Update the docker-compose configuration on all three nodes
~]# vi /data/elk/docker-compose.yml    # mount the certificate files into the container
version: '2.1'
services:
  elasticsearch:
    image: elasticsearch:7.7.0
    container_name: elasticsearch
    depends_on:
      - kafka
    environment:
      ES_JAVA_OPTS: -Xms1g -Xmx1g
    network_mode: host
    volumes:
      - /data/es/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/es/data:/usr/share/elasticsearch/data
      - /data/es/certs/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
      - /data/es/certs/elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
......
(2) Update the elasticsearch configuration file on all three nodes
~]# vi /data/es/conf/elasticsearch.yml    # add the authentication settings
cluster.name: es-cluster
network.host: 0.0.0.0
node.name: master1
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
network.publish_host: 10.10.27.125
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["10.10.27.125","10.10.27.126","10.10.27.127"]
cluster.initial_master_nodes: ["10.10.27.125","10.10.27.126","10.10.27.127"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
# By default elastic-certificates.p12 is looked up under /usr/share/elasticsearch/config/ inside the container; an absolute path can also be given instead
# node.name and network.publish_host must be set to each node's own value (the example above is master1)
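If a password was entered when generating the .p12 files (instead of just pressing Enter), it must also be stored in the Elasticsearch keystore on each node; a sketch of what that would look like inside the container:
elasticsearch]# elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
elasticsearch]# elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password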
(3) Recreate and start the elasticsearch containers on all three nodes
master1 ~]# docker stop elasticsearch && docker rm elasticsearch
master1 ~]# cd /data/elk/ && docker-compose up -d elasticsearch
master2 ~]# docker stop elasticsearch && docker rm elasticsearch
master2 ~]# cd /data/elk/ && docker-compose up -d elasticsearch
master3 ~]# docker stop elasticsearch && docker rm elasticsearch
master3 ~]# cd /data/elk/ && docker-compose up -d elasticsearch
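Before setting the passwords, it is worth confirming that the three nodes started cleanly and formed the cluster again, for example by tailing the container logs; once security is enabled, an unauthenticated request to port 9200 should also be rejected:
master1 ~]# docker logs -f elasticsearch
master1 ~]# curl http://10.10.27.125:9200    # expect a 401 security_exception now that authentication is required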
(4) Set the built-in user passwords
master1 ~]# docker exec -it elasticsearch bash
elasticsearch]# elasticsearch-setup-passwords interactive
# output:
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y    # enter y
# Type each password and then repeat it to confirm; the account name is shown in brackets
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
# For convenience in the following steps, all passwords here are set to the same value
# password: Password123
elasticsearch]# exit
# Verify the accounts and passwords configured on the cluster
# Open the following URL in a browser; if it prompts for a username and password, the setup was successful
# http://10.10.27.125:9200/_security/_authenticate?pretty
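The same check can be run from the command line, using the password chosen above:
~]# curl -u elastic:Password123 'http://10.10.27.125:9200/_security/_authenticate?pretty'
~]# curl -u elastic:Password123 'http://10.10.27.125:9200/_cluster/health?pretty'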
3. Update the configuration files
(1) Update the logstash configuration on all three nodes
~]# vi /data/logstash/conf/logstash.conf
......
output {
  elasticsearch {
    hosts => ["10.10.27.125:9200","10.10.27.126:9200","10.10.27.127:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
    user => "elastic"           # Note: the elastic superuser is used here for the demo; for better security create a dedicated account with index-creation privileges, see the link below
    password => "Password123"
  }
  stdout {
    codec => rubydebug
  }
}
# Official guide on setting up a dedicated account: https://www.elastic.co/cn/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash
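As a hedged sketch of the dedicated-account approach described in the linked guide, a writer role and user could be created through the security API and then used in the output block above instead of elastic (the names logstash_writer and logstash_internal and the exact privileges are illustrative and may need adjusting for your indices):
~]# curl -u elastic:Password123 -X POST 'http://10.10.27.125:9200/_security/role/logstash_writer' \
     -H 'Content-Type: application/json' \
     -d '{"cluster":["manage_index_templates","monitor"],"indices":[{"names":["system-log-*"],"privileges":["write","create","create_index"]}]}'
~]# curl -u elastic:Password123 -X POST 'http://10.10.27.125:9200/_security/user/logstash_internal' \
     -H 'Content-Type: application/json' \
     -d '{"password":"Password123","roles":["logstash_writer"]}'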
~]# vi /data/logstash/conf/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://10.10.27.125:9200","http://10.10.27.126:9200","http://10.10.27.127:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: "Password123"
(2) Update the kibana configuration file
~]# vi /data/kibana/conf/kibana.yml    # add the Kibana user and password
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://10.10.27.125:9200","http://10.10.27.126:9200","http://10.10.27.127:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "kibana" # Note: Kibana connects to Elasticsearch with the built-in kibana account, not the elastic superuser
elasticsearch.password: "Password123"
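If keeping the password in plain text in kibana.yml is a concern, it can be stored in the Kibana keystore instead; a sketch, assuming the commands are run inside the kibana container:
kibana]# ./bin/kibana-keystore create
kibana]# ./bin/kibana-keystore add elasticsearch.username
kibana]# ./bin/kibana-keystore add elasticsearch.password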
4. Start the ELK stack
# restart logstash
master1 ~]# docker restart logstash
master2 ~]# docker restart logstash
master3 ~]# docker restart logstash
# restart kibana
master1 ~]# docker restart kibana
5. Access Kibana
Open Kibana at http://10.10.27.125:5601 and log in with the elastic user. After logging in, a Security section appears at the bottom of the Kibana group under Management, with Users and Roles pages, which makes it easy to create additional Kibana users and manage role-based permissions.
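Additional Kibana accounts can also be created through the security API instead of the UI; a hedged example, assuming the built-in kibana_user role is enough for the new account (the username dev_user is illustrative):
~]# curl -u elastic:Password123 -X POST 'http://10.10.27.125:9200/_security/user/dev_user' \
     -H 'Content-Type: application/json' \
     -d '{"password":"ChangeMe123","roles":["kibana_user"],"full_name":"Developer"}'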
Reference: https://www.cnblogs.com/sanduzxcvbnm/p/11427686.html