Installing etcd with Ansible
Inventory groups: /etc/ansible/hosts
[platform-etcd]
172.24.31.22 hostname=platform-etcd-3 ansible_ssh_user=root ansible_ssh_private_key_file=/xx.pem ansible_become=true ansible_become_user=root
172.24.31.24 hostname=platform-etcd-2 ansible_ssh_user=root ansible_ssh_private_key_file=/xx.pem ansible_become=true ansible_become_user=root
172.24.31.25 hostname=platform-etcd-1 ansible_ssh_user=root ansible_ssh_private_key_file=/xx.pem ansible_become=true ansible_become_user=root
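Three members is the smallest cluster that can tolerate a node failure: etcd keeps serving writes only while a quorum of floor(n/2)+1 members is up. A quick sketch of the arithmetic:

```shell
# etcd availability arithmetic: quorum = floor(n/2) + 1
n=3                          # cluster size from the inventory above
quorum=$(( n / 2 + 1 ))      # members that must stay up
tolerated=$(( n - quorum ))  # failures the cluster survives
echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
```

With three members the cluster survives one failure; a fourth member would raise the quorum to 3 without improving fault tolerance, which is why odd cluster sizes are preferred.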
playbook
---
- hosts: platform-etcd
  gather_facts: false
  vars:
    name1: etcd1
    name2: etcd2
    name3: etcd3
    etcd1: 172.24.31.25
    etcd2: 172.24.31.24
    etcd3: 172.24.31.22
    data_dir: /home/service/app/etcd-v3.4.14-linux-amd64/default.etcd
  roles:
    - platform-etcd-cluster
roles/platform-etcd-cluster/tasks/main.yml
---
- name: wget etcd
  command: wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
- name: unpack tarball
  command: tar -zxvf etcd-v3.4.14-linux-amd64.tar.gz -C /home/service/app
- name: copy etcd config to etcd1
  template: src=etcd.conf.yml-etcd1.j2 dest=/home/service/app/etcd-v3.4.14-linux-amd64/etcd.conf.yml
  when: hostname == "platform-etcd-1"
- name: copy etcd config to etcd2
  template: src=etcd.conf.yml-etcd2.j2 dest=/home/service/app/etcd-v3.4.14-linux-amd64/etcd.conf.yml
  when: hostname == "platform-etcd-2"
- name: copy etcd config to etcd3
  template: src=etcd.conf.yml-etcd3.j2 dest=/home/service/app/etcd-v3.4.14-linux-amd64/etcd.conf.yml
  when: hostname == "platform-etcd-3"
- name: copy start.sh
  template: src=start.sh.j2 dest=/home/service/app/etcd-v3.4.14-linux-amd64/start.sh mode=0755
# The start task below never seemed to take effect through Ansible; I had to run the script manually.
#- name: start etcd
#  command: sh /home/service/app/etcd-v3.4.14-linux-amd64/start.sh
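The wget/tar tasks above re-download and re-extract on every run. For reference, a more idempotent sketch using Ansible's built-in `get_url` and `unarchive` modules (same paths as above; not part of the original playbook):

```yaml
- name: download etcd release
  get_url:
    url: https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
    dest: /tmp/etcd-v3.4.14-linux-amd64.tar.gz
- name: unpack etcd
  unarchive:
    src: /tmp/etcd-v3.4.14-linux-amd64.tar.gz
    dest: /home/service/app
    remote_src: yes
    creates: /home/service/app/etcd-v3.4.14-linux-amd64/etcd
```

`creates` skips the extraction once the binary exists, so repeated playbook runs become no-ops.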
templates
The three template files are identical except for which variables they reference; I didn't bother writing a conditional (honestly, I wasn't sure how to).
If any of the parameters are unclear, see my notes on the v3.4 configuration file, translated from the official documentation.
Link: etcd v3.4 configuration flags
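For what it's worth, the three templates could collapse into one by giving each host its own `member_name` and `member_ip` variables in the inventory (hypothetical names, not used in the original post) and rendering a single template:

```yaml
# etcd.conf.yml.j2 -- one template for all members (assumes per-host
# member_name/member_ip vars defined in the inventory)
name: {{ member_name }}
data-dir: {{ data_dir }}
listen-peer-urls: http://{{ member_ip }}:2380
listen-client-urls: http://{{ member_ip }}:2379,http://127.0.0.1:2379
initial-advertise-peer-urls: http://{{ member_ip }}:2380
advertise-client-urls: http://{{ member_ip }}:2379
initial-cluster: etcd1=http://{{ etcd1 }}:2380,etcd2=http://{{ etcd2 }}:2380,etcd3=http://{{ etcd3 }}:2380
```

The three `copy etcd config` tasks would then shrink to a single unconditional `template` task.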
etcd.conf.yml-etcd1.j2
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: {{ name1 }}
# Path to the data directory.
data-dir: {{ data_dir }}
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 8589934592
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://{{ etcd1 }}:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://{{ etcd1 }}:2379,http://127.0.0.1:2379
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://{{ etcd1 }}:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://{{ etcd1 }}:2379
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.24.31.25:2380,etcd2=http://172.24.31.24:2380,etcd3=http://172.24.31.22:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
logger: zap
auto-compaction-retention: '1'
max-request-bytes: 33554432
log-level: debug
etcd.conf.yml-etcd2.j2
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: {{ name2 }}
# Path to the data directory.
data-dir: {{ data_dir }}
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 8589934592
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://{{ etcd2 }}:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://{{ etcd2 }}:2379,http://127.0.0.1:2379
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://{{ etcd2 }}:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://{{ etcd2 }}:2379
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.24.31.25:2380,etcd2=http://172.24.31.24:2380,etcd3=http://172.24.31.22:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
logger: zap
auto-compaction-retention: '1'
max-request-bytes: 33554432
log-level: debug
etcd.conf.yml-etcd3.j2
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: {{ name3 }}
# Path to the data directory.
data-dir: {{ data_dir }}
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 8589934592
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://{{ etcd3 }}:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://{{ etcd3 }}:2379,http://127.0.0.1:2379
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://{{ etcd3 }}:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://{{ etcd3 }}:2379
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.24.31.25:2380,etcd2=http://172.24.31.24:2380,etcd3=http://172.24.31.22:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
logger: zap
auto-compaction-retention: '1'
max-request-bytes: 33554432
log-level: debug
start.sh.j2
Start etcd with --config-file pointing at the configuration file, so you don't have to pass a long list of flags on the command line every time.
nohup /home/service/app/etcd-v3.4.14-linux-amd64/etcd \
--config-file=/home/service/app/etcd-v3.4.14-linux-amd64/etcd.conf.yml \
&
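Rather than nohup, a systemd unit keeps etcd supervised and restarts it after crashes or reboots. A minimal sketch assuming the same install path (not part of the original playbook):

```
# /etc/systemd/system/etcd.service
[Unit]
Description=etcd key-value store
After=network-online.target

[Service]
ExecStart=/home/service/app/etcd-v3.4.14-linux-amd64/etcd --config-file=/home/service/app/etcd-v3.4.14-linux-amd64/etcd.conf.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the service can be started with `systemctl enable --now etcd`.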
Results
I ran the playbook several times, so I had commented out the earlier installation tasks.
Checking cluster status from the command line
[root@cnhydabpdc0e-31-25 etcd-v3.4.14-linux-amd64]# ./etcdctl version
etcdctl version: 3.4.14
API version: 3.4
[root@cnhydabpdc0e-31-25 etcd-v3.4.14-linux-amd64]# ./etcdctl --endpoints=172.24.31.25:2379 member list
7015b0cc1bec8639, started, etcd1, http://172.24.31.25:2380, http://172.24.31.25:2379, false
b6bd8889d6452725, started, etcd2, http://172.24.31.24:2380, http://172.24.31.24:2379, false
bfe8c4af74b4db8f, started, etcd3, http://172.24.31.22:2380, http://172.24.31.22:2379, false
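The plain `member list` output is easy to check mechanically in a script. A sketch that counts started members (the here-doc reproduces the output captured above so the parsing runs anywhere):

```shell
# Count members reported as "started" in `etcdctl member list` output
members=$(cat <<'EOF'
7015b0cc1bec8639, started, etcd1, http://172.24.31.25:2380, http://172.24.31.25:2379, false
b6bd8889d6452725, started, etcd2, http://172.24.31.24:2380, http://172.24.31.24:2379, false
bfe8c4af74b4db8f, started, etcd3, http://172.24.31.22:2380, http://172.24.31.22:2379, false
EOF
)
started=$(printf '%s\n' "$members" | grep -c ', started,')
echo "started members: $started"
```

For a healthy three-node cluster this prints `started members: 3`; anything less means a member is still unstarted or unreachable.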
[root@cnhydabpdc0e-31-25 etcd-v3.4.14-linux-amd64]# ./etcdctl --endpoints=172.24.31.25:2379,172.24.31.24:2379,172.24.31.22:2379 endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.24.31.25:2379 | 7015b0cc1bec8639 | 3.4.14 | 20 kB | false | false | 51 | 10 | 10 | |
| 172.24.31.24:2379 | b6bd8889d6452725 | 3.4.14 | 20 kB | true | false | 51 | 10 | 10 | |
| 172.24.31.22:2379 | bfe8c4af74b4db8f | 3.4.14 | 20 kB | false | false | 51 | 10 | 10 | |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Appendix: creating a root user and role, then enabling authentication. Once `auth enable` succeeds, subsequent requests must authenticate, e.g. with `--user root:etcd`.
./etcdctl --endpoints=http://127.0.0.1:2379 user add root:etcd
./etcdctl --endpoints=http://127.0.0.1:2379 role add root
./etcdctl --endpoints=http://127.0.0.1:2379 user grant-role root root
./etcdctl --endpoints=http://127.0.0.1:2379 user list
./etcdctl --endpoints=http://127.0.0.1:2379 role list
./etcdctl --endpoints=http://127.0.0.1:2379 auth enable
Source: https://blog.csdn.net/qq522044637/article/details/118424970