Automated Operations with Saltstack
23.1 Automated operations
As Linux operations work has evolved, the shortcomings of the traditional, manual approach have become apparent:
1. It is inefficient, because most of the work is done by hand;
2. It is tedious and error-prone;
3. The same tasks are repeated day after day;
4. There is no standardized process;
5. Scripts proliferate and become hard to manage.
These drawbacks become especially obvious once a company grows. Automated operations was proposed precisely to solve these problems of the traditional approach.
There are currently three mainstream automation tools: Puppet, Saltstack and Ansible, with Ansible being the most widely used.
Puppet: official site www.puppetlabs.com. Written in Ruby, C/S architecture, cross-platform; it can manage configuration files, users, cron jobs, packages, system services, and more. Available as a free community edition and a paid enterprise edition; the enterprise edition supports graphical configuration.
Saltstack: official site www.saltstack.com. Written in Python, C/S architecture, cross-platform; lighter than Puppet and very fast at remote command execution. It is easier to configure and use than Puppet and can do almost everything Puppet can.
Ansible: official site www.ansible.com. A simpler automation tool that needs no agent installed on the clients. Written in Python; it supports batch system configuration, batch application deployment, and batch command execution.
23.2 Saltstack
Installing Saltstack
Since Saltstack uses a C/S architecture, both a server side and a client side need to be installed.
Prepare two machines: one as the server with IP 192.168.100.150, and one as the client with IP 192.168.100.160.
- Set the hostname and the hosts file:
Server
# hostnamectl status    # check the current hostname
   Static hostname: localhost.localdomain
Transient hostname: status
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 8ceda531da354b5387daf27c67adbbfd
           Boot ID: 4cbda1ce199d4c46a4383e818061c2df
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.el7.x86_64
      Architecture: x86-64
# hostnamectl set-hostname lzx    # set the hostname to lzx; takes effect after a reboot
# vim /etc/hosts    # add the following lines
192.168.100.150 lzx
192.168.100.160 lzx1
Client
# hostnamectl status
   Static hostname: localhost.localdomain
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 8ceda531da354b5387daf27c67adbbfd
           Boot ID: 079465a872604f6b9b8c97d3a5a3eb31
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.el7.x86_64
      Architecture: x86-64
# hostnamectl set-hostname lzx1    # set the hostname to lzx1; takes effect after a reboot
# vim /etc/hosts    # add the following lines
192.168.100.150 lzx
192.168.100.160 lzx1
In production, editing the hosts file machine by machine wastes time once there are many hosts; instead, set up an internal DNS so hostnames resolve automatically and the hosts file no longer needs to be maintained.
- Download and install Saltstack:
Server
# yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm    # install the Saltstack repository
# yum install -y salt-master salt-minion    # the server needs both salt-master and salt-minion
Client
# yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm    # install the Saltstack repository
# yum install -y salt-minion    # the client only needs salt-minion
- Edit the configuration files:
Server
# vim /etc/salt/minion
master: lzx
# systemctl start salt-master    # start salt-master; the master listens on ports
# systemctl start salt-minion    # start salt-minion; the minion does not listen on any port
# ps aux |grep salt*
root       275  0.0  0.0      0     0 ?        S<   Sep10   0:00 [xfsalloc]
root      1247  0.7  4.0 389144 40656 ?        Ss   02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1256  0.0  1.9 306184 19988 ?        S    02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1261  0.0  3.4 469800 34212 ?        Sl   02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1262  0.0  3.3 388140 33952 ?        S    02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1265  0.6  4.4 402084 44972 ?        S    02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1266  0.0  3.4 388992 34576 ?        S    02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      1267  4.5  3.5 765628 35064 ?        Sl   02:17   0:04 /usr/bin/python /usr/bin/salt-master
root      1268  2.0  4.8 487020 48604 ?        Sl   02:17   0:02 /usr/bin/python /usr/bin/salt-master
root      1269  2.0  4.8 487024 48600 ?        Sl   02:17   0:02 /usr/bin/python /usr/bin/salt-master
root      1270  2.1  4.8 487024 48584 ?        Sl   02:17   0:02 /usr/bin/python /usr/bin/salt-master
root      1277  1.9  4.8 487032 48624 ?        Sl   02:17   0:02 /usr/bin/python /usr/bin/salt-master
root      1278  1.9  4.8 487032 48616 ?        Sl   02:17   0:02 /usr/bin/python /usr/bin/salt-master
root      1279  0.1  3.5 462876 35004 ?        Sl   02:17   0:00 /usr/bin/python /usr/bin/salt-master
root      2480 11.3  2.1 313776 21416 ?        Ss   02:19   0:00 /usr/bin/python /usr/bin/salt-minion
root      2483 39.6  4.2 567384 42536 ?        Sl   02:19   0:01 /usr/bin/python /usr/bin/salt-minion
root      2491  0.0  2.0 403992 20080 ?        S    02:19   0:00 /usr/bin/python /usr/bin/salt-minion
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   740/sshd
tcp        0      0 0.0.0.0:4505       0.0.0.0:*          LISTEN   1261/python
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN   840/master
tcp        0      0 0.0.0.0:4506       0.0.0.0:*          LISTEN   1267/python
tcp6       0      0 :::22              :::*               LISTEN   740/sshd
tcp6       0      0 ::1:25             :::*               LISTEN   840/master
The master listens on ports 4505 and 4506: 4505 is used to publish messages, and 4506 is used to transfer data between the master and the minions.
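If a firewall sits between the master and the minions, these two ports must be reachable on the master. A minimal sketch for CentOS 7, assuming firewalld is the active firewall (adjust accordingly if iptables is used directly):

# open the Salt master ports (4505: publisher, 4506: request server); run on the master
firewall-cmd --permanent --add-port=4505-4506/tcp
firewall-cmd --reload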
Client
# vim /etc/salt/minion
master: lzx
# systemctl start salt-minion    # start salt-minion; the minion does not listen on any port
# ps aux |grep salt
root      1256  0.9  2.1 313776 21412 ?        Ss   02:14   0:00 /usr/bin/python /usr/bin/salt-minion
root      1259  3.5  4.2 567388 42460 ?        Sl   02:14   0:00 /usr/bin/python /usr/bin/salt-minion
root      1267  0.0  2.0 403992 20084 ?        S    02:14   0:00 /usr/bin/python /usr/bin/salt-minion
Configuring Saltstack authentication
The master and the minions need a secure channel to communicate, so the traffic has to be encrypted; this requires configuring authentication, which is based on key pairs.
When a minion starts for the first time, it generates a key pair under /etc/salt/pki/minion/ and sends its public key to the master. The master accepts that public key with the salt-key tool; once accepted, the key is stored in the /etc/salt/pki/master/minions/ directory. At the same time the minion receives the master's public key and stores it in the /etc/salt/pki/minion/ directory as minion_master.pub. This whole exchange is driven by the salt-key tool.
# ls /etc/salt/pki/master/    # generated when the service starts
master.pem  master.pub  minions  minions_autosign  minions_denied  minions_pre  minions_rejected
# ls /etc/salt/pki/minion/
minion.pem  minion.pub
Run on the server
# salt-key -a lzx1    # accept the key of the specified minion; -a takes the minion ID
The following keys are going to be accepted:
Unaccepted Keys:      # prompted on the first acceptance
lzx1
Proceed? [n/Y] y      # type y
Key for minion lzx1 accepted.
# salt-key
Accepted Keys:
lzx1                  # shown in green, meaning the key has been accepted
Denied Keys:
Unaccepted Keys:
lzx
Rejected Keys:
# ls /etc/salt/pki/master/minions
lzx1
# salt-key -A    # -A accepts all pending keys
The following keys are going to be accepted:
Unaccepted Keys:
lzx
Proceed? [n/Y] y
Key for minion lzx accepted.
# salt-key
Accepted Keys:
lzx
lzx1
Denied Keys:
Unaccepted Keys:
Rejected Keys:
Check on the client
# ls /etc/salt/pki/minion/
minion_master.pub  minion.pem  minion.pub    # a new file minion_master.pub has appeared: the master's public key
More salt-key options:
-a    accept the key of the specified host
-A    accept all pending keys
-r    reject the key of the specified host
-R    reject all pending keys
-d    delete the key of the specified host
-D    delete all keys
-y    skip the interactive prompt, as if y had been typed
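For example, to revoke a minion's key without being prompted and then let that minion re-register, something like the following should work (a sketch; after the key is deleted, restarting salt-minion on the host makes it resubmit its public key right away):

# on the master: delete lzx1's key, skipping the confirmation prompt
salt-key -d lzx1 -y
# on lzx1: restart the minion so it submits a new public key
systemctl restart salt-minion
# back on the master: accept the newly submitted key
salt-key -a lzx1 -y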
Executing commands remotely
Run on the server
# salt '*' test.ping    # '*' matches every accepted minion; a single minion can also be targeted
lzx1:
    True
lzx:
    True
# salt 'lzx1' test.ping    # target a single minion
lzx1:
    True
# salt '*' cmd.run "hostname"    # cmd.run takes an ordinary shell command
lzx:
    lzx
lzx1:
    lzx1
[root@lzx ~]# salt '*' cmd.run "ls"
lzx:
    anaconda-ks.cfg
lzx1:
    anaconda-ks.cfg
# salt -L 'lzx,lzx1' test.ping    # -L targets a list of minions
lzx1:
    True
lzx:
    True
# salt -E 'lzx[0-9]*' test.ping    # -E targets minions by regular expression
lzx1:
    True
lzx:
    True
Targeting by grains (with -G) and by pillar (with -I) is also supported; both are covered below.
grains
grains are pieces of information collected when the minion starts, such as the operating system type, network card IPs, kernel version, and CPU architecture.
# salt 'lzx' grains.ls    # list all grains keys
lzx:
    - SSDs
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - disks
    - dns
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - swap_total
    - systemd
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion
# salt 'lzx' grains.items #查看所有grains项目及具体的值lzx: ---------- SSDs: biosreleasedate: 05/19/2017 biosversion: 6.00 cpu_flags: - fpu - vme - de - pse - tsc - msr - pae - mce - cx8 - apic - sep - mtrr - pge - mca - cmov - pat - pse36 - clflush - mmx - fxsr - sse - sse2 - ss - syscall - nx - pdpe1gb - rdtscp - lm - constant_tsc - arch_perfmon - nopl - xtopology - tsc_reliable - nonstop_tsc - eagerfpu - pni - pclmulqdq - ssse3 - cx16 - pcid - sse4_1 - sse4_2 - x2apic - movbe - popcnt - tsc_deadline_timer - aes - xsave - rdrand - hypervisor - lahf_lm - abm - 3dnowprefetch - fsgsbase - tsc_adjust - smep - invpcid - mpx - rdseed - smap - clflushopt - xsaveopt - xsavec - arat cpu_model: Intel(R) Pentium(R) Gold G5400 CPU @ 3.70GHz cpuarch: x86_64 disks: - sda - sr0 dns: ---------- domain: ip4_nameservers: - 8.8.8.8 - 4.4.4.4 ip6_nameservers: nameservers: - 8.8.8.8 - 4.4.4.4 options: search: sortlist: domain: fqdn: lzx fqdn_ip4: - 192.168.100.150 fqdn_ip6: - fe80::b6f9:83f6:f7f2:ece0 gid: 0 gpus: |_ ---------- model: SVGA II Adapter vendor: unknown groupname: root host: lzx hwaddr_interfaces: ---------- ens33: 00:0c:29:42:1c:de lo: 00:00:00:00:00:00 id: lzx init: systemd ip4_gw: 192.168.100.2 ip4_interfaces: ---------- ens33: - 192.168.100.150 lo: - 127.0.0.1 ip6_gw: False ip6_interfaces: ---------- ens33: - fe80::b6f9:83f6:f7f2:ece0 lo: - ::1 ip_gw: True ip_interfaces: ---------- ens33: - 192.168.100.150 - fe80::b6f9:83f6:f7f2:ece0 lo: - 127.0.0.1 - ::1 ipv4: - 127.0.0.1 - 192.168.100.150 ipv6: - ::1 - fe80::b6f9:83f6:f7f2:ece0 kernel: Linux kernelrelease: 3.10.0-693.el7.x86_64 kernelversion: #1 SMP Tue Aug 22 21:09:27 UTC 2017 locale_info: ---------- defaultencoding: UTF-8 defaultlanguage: en_US detectedencoding: UTF-8 localhost: lzx lsb_distrib_codename: CentOS Linux 7 (Core) lsb_distrib_id: CentOS Linux machine_id: 8ceda531da354b5387daf27c67adbbfd manufacturer: VMware, Inc. master: lzx mdadm: mem_total: 976 nodename: lzx num_cpus: 1 num_gpus: 1 os: CentOS os_family: RedHat osarch: x86_64 oscodename: CentOS Linux 7 (Core) osfinger: CentOS Linux-7 osfullname: CentOS Linux osmajorrelease: 7 osrelease: 7.4.1708 osrelease_info: - 7 - 4 - 1708 path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin pid: 2483 productname: VMware Virtual Platform ps: ps -efHww pythonexecutable: /usr/bin/python pythonpath: - /usr/bin - /usr/lib64/python27.zip - /usr/lib64/python2.7 - /usr/lib64/python2.7/plat-linux2 - /usr/lib64/python2.7/lib-tk - /usr/lib64/python2.7/lib-old - /usr/lib64/python2.7/lib-dynload - /usr/lib64/python2.7/site-packages - /usr/lib/python2.7/site-packages pythonversion: - 2 - 7 - 5 - final - 0 saltpath: /usr/lib/python2.7/site-packages/salt saltversion: 2018.3.2 saltversioninfo: - 2018 - 3 - 2 - 0 selinux: ---------- enabled: False enforced: Disabled serialnumber: VMware-56 4d 8f ab 54 0e 33 71-8f 77 4e 46 94 42 1c de server_id: 933042753 shell: /bin/sh swap_total: 2047 systemd: ---------- features: +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN version: 219 uid: 0 username: root uuid: ab8f4d56-0e54-7133-8f77-4e4694421cde virtual: VMware zfs_feature_flags: False zfs_support: False zmqversion: 4.1.4
grains are not dynamic: the values are collected once when the minion starts and are not refreshed in real time. The information grains collect can be used for configuration management, and custom grains are also supported.
- Define custom grains on the client:
# vim /etc/salt/grains
role: nginx
env: test
# systemctl restart salt-minion    # restart salt-minion so the new grains are picked up
- Check on the server:
# salt '*' grains.item role env    # query specific grains keys
lzx1:
    ----------
    env:
        test
    role:
        nginx
lzx:
    ----------
    env:
    role:
# lzx1 has the two custom grains defined above, while lzx does not
# salt -G role:nginx cmd.run 'hostname'    # -G uses a grains key:value pair as the targeting condition
lzx1:
    lzx1
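Grains can also be used as targets in the top file, so states are assigned by role rather than by minion ID. A minimal sketch (the nginx.sls referenced here is hypothetical and not part of this tutorial):

# /srv/salt/top.sls: match minions by the custom "role" grain
base:
  'role:nginx':
    - match: grain      # interpret the target as a grain key:value pair
    - nginx             # apply a hypothetical nginx.sls to those minions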
pillar
pillar is different from grains: it is defined on the master, per minion. Sensitive data such as passwords can be stored in pillar, and it can also be used to define variables.
- On the server:
# vim /etc/salt/master
pillar_roots:        # must start at column 1, otherwise Salt reports an error
  base:              # indented two spaces relative to the line above, otherwise an error
    - /srv/pillar    # indented two more spaces, otherwise an error
# systemctl restart salt-master
# ls /srv/pillar
ls: cannot access /srv/pillar: No such file or directory
# mkdir !$
mkdir /srv/pillar
# vim /srv/pillar/test.sls    # add the content below; the .sls suffix makes the files easy to identify
conf: /etc/123.conf
# vim /srv/pillar/top.sls     # add the content below; top.sls is the overall entry point
base:
  'lzx1':
    - test
- Verify:
# salt '*' saltutil.refresh_pillar    # refresh pillar data on the minions
lzx1:
    True
lzx:
    True
# salt '*' pillar.item conf    # query a specific pillar key
lzx1:
    ----------
    conf:
        /etc/123.conf    # the value defined in test.sls above
lzx:
    ----------
    conf:
Pillar can likewise be used as a targeting condition, for example:
# salt -I 'conf:/etc/123.conf' cmd.run "w"    # -I uses a pillar key:value pair as the targeting condition
lzx1:
     04:26:11 up  5:01,  1 user,  load average: 0.00, 0.01, 0.05
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    root     pts/1    192.168.33.1     01:16   38:51   0.07s  0.07s -bash
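Beyond targeting, pillar values are usually consumed inside state files through Jinja, which is what makes them useful as per-minion variables. A minimal sketch reusing the conf key defined above (this conf_test.sls and its fallback default are hypothetical, not part of the original setup):

# /srv/salt/conf_test.sls: render a pillar value into a state
{% set conf_path = salt['pillar.get']('conf', '/etc/default.conf') %}

conf_file:
  file.managed:
    - name: {{ conf_path }}              # path taken from pillar, with a fallback default
    - source: salt://test/123/1.txt      # reuses the template file from the file-management example below
    - user: root
    - mode: 600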
Installing and configuring httpd with salt
- Edit the configuration on the server:
# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/
# systemctl restart salt-master
# mkdir /srv/salt ; cd /srv/salt
# vim top.sls
base:
  '*':
    - httpd
# vim httpd.sls
httpd-service:
  pkg.installed:
    - names:          # packages to install
      - httpd
      - httpd-devel
  service.running:
    - name: httpd     # service to manage
    - enable: True    # enable the service at boot; service.running keeps it started
- Install httpd:
# salt 'lzx' state.highstate #读取top.sls文件,并执行里面定义的sls文件^Z[1]+ Stopped salt 'lzx' state.highstate# ps aux |grep yumroot 2649 8.0 2.7 327940 27612 ? S 03:47 0:00 /usr/bin/python /usr/bin/yum --quiet --assumeyes check-update --setopt=autocheck_running_kernel=false root 2681 0.0 0.0 112704 964 pts/0 R+ 03:47 0:00 grep --color=auto yum# fgsalt 'lzx' state.highstate lzx: ---------- ID: httpd-service Function: pkg.installed Name: httpd Result: True Comment: The following packages were installed/updated: httpd Started: 03:47:49.037900 Duration: 18426.528 ms Changes: ---------- httpd: ---------- new: 2.4.6-80.el7.centos.1 old: httpd-tools: ---------- new: 2.4.6-80.el7.centos.1 old: mailcap: ---------- new: 2.1.41-2.el7 old: ---------- ID: httpd-service Function: pkg.installed Name: httpd-devel Result: True Comment: The following packages were installed/updated: httpd-devel Started: 03:48:07.535416 Duration: 7518.31 ms Changes: ---------- apr-devel: ---------- new: 1.4.8-3.el7_4.1 old: apr-util-devel: ---------- new: 1.5.2-6.el7 old: cyrus-sasl: ---------- new: 2.1.26-23.el7 old: cyrus-sasl-devel: ---------- new: 2.1.26-23.el7 old: cyrus-sasl-lib: ---------- new: 2.1.26-23.el7 old: 2.1.26-21.el7 expat-devel: ---------- new: 2.1.0-10.el7_3 old: httpd-devel: ---------- new: 2.4.6-80.el7.centos.1 old: libdb: ---------- new: 5.3.21-24.el7 old: 5.3.21-20.el7 libdb-devel: ---------- new: 5.3.21-24.el7 old: libdb-utils: ---------- new: 5.3.21-24.el7 old: 5.3.21-20.el7 openldap: ---------- new: 2.4.44-15.el7_5 old: 2.4.44-5.el7 openldap-devel: ---------- new: 2.4.44-15.el7_5 old: ---------- ID: httpd-service Function: service.running Name: httpd Result: True Comment: Service httpd has been enabled, and is running Started: 03:48:16.528163 Duration: 204.492 ms Changes: ---------- httpd: True Summary for lzx ------------ Succeeded: 3 (changed=3)Failed: 0 ------------ Total states run: 3 Total run time: 26.149 s# ps aux |grep httpdroot 2903 0.0 0.4 224020 4980 ? Ss 03:48 0:00 /usr/sbin/httpd -DFOREGROUND apache 2904 0.0 0.2 224020 2960 ? S 03:48 0:00 /usr/sbin/httpd -DFOREGROUND apache 2905 0.0 0.2 224020 2960 ? S 03:48 0:00 /usr/sbin/httpd -DFOREGROUND apache 2906 0.0 0.2 224020 2960 ? S 03:48 0:00 /usr/sbin/httpd -DFOREGROUND apache 2907 0.0 0.2 224020 2960 ? S 03:48 0:00 /usr/sbin/httpd -DFOREGROUND apache 2908 0.0 0.2 224020 2960 ? S 03:48 0:00 /usr/sbin/httpd -DFOREGROUND root 3100 0.0 0.0 112704 964 pts/0 R+ 03:51 0:00 grep --color=auto httpd
httpd was installed successfully and is running.
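state.highstate applies everything referenced from top.sls. To apply just one state file without going through the top file, state.sls can be used instead; a sketch, assuming the httpd.sls above lives directly under /srv/salt:

# apply only /srv/salt/httpd.sls to the minion lzx, ignoring top.sls
salt 'lzx' state.sls httpd

# dry run: show what would change without actually applying it
salt 'lzx' state.sls httpd test=True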
Managing files
- Edit the configuration on the server:
# vim test.sls
file_test:                            # arbitrary state ID
  file.managed:
    - name: /tmp/lzx.com              # target file on the minion
    - source: salt://test/123/1.txt   # template file on the master; salt:// maps to /srv/salt
    - user: root                      # owner of the file on the minion
    - group: root                     # group of the file on the minion
    - mode: 600                       # permissions of the file on the minion
# mkdir -p test/123/
# cp /etc/inittab test/123/1.txt      # make sure the template file exists on the master
# vim top.sls
base:
  '*':
    - test
- Verify:
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/lzx.com
      Result: True
     Comment: File /tmp/lzx.com updated
     Started: 04:20:06.214914
    Duration: 140.874 ms
     Changes:
              ----------
              diff:
                  New file

Summary for lzx1
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 140.874 ms
Check on lzx1
# ls -lt /tmp/lzx.com
-rw------- 1 root root 511 Sep 12 04:20 /tmp/lzx.com    # the file exists; owner, group, permissions and timestamp all match
# cat !$
cat /tmp/lzx.com
# inittab is no longer used when using systemd.
#
# ADDING CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
#
# Ctrl-Alt-Delete is handled by /usr/lib/systemd/system/ctrl-alt-del.target
#
# systemd uses 'targets' instead of runlevels. By default, there are two main targets:
#
# multi-user.target: analogous to runlevel 3
# graphical.target: analogous to runlevel 5
#
# To view current default target, run:
# systemctl get-default
#
# To set a default target, run:
# systemctl set-default TARGET.target
#
This is how the master manages files on the minions.
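In practice a managed file is often tied to the service that consumes it, so the service restarts whenever the file changes. A minimal sketch that combines file.managed with the httpd state from the previous section (the httpd.conf source path on the master is hypothetical):

# a hypothetical extension of /srv/salt/httpd.sls: restart httpd when its config changes
httpd-conf:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf   # assumed to exist under /srv/salt/httpd/

httpd-service:
  service.running:
    - name: httpd
    - enable: True
    - watch:
      - file: httpd-conf                # restart the service if this file changes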
Managing directories
- Edit the configuration on the server:
# vim test_dir.sls
file_dir:                        # arbitrary state ID
  file.recurse:
    - name: /tmp/testdir         # target directory on the minion
    - source: salt://test/123    # template directory on the master
    - user: root                 # owner on the minion
    - file_mode: 640             # permissions for files pushed to the minion
    - dir_mode: 750              # permissions for directories pushed to the minion
    - mkdir: True                # create the target directory automatically if it does not exist
    - clean: True                # files/directories deleted on the master are also deleted on the minion; without it they are left in place
# vim top.sls
base:
  '*':
    - test
    - test_dir
- Verify:
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/lzx.com
      Result: True
     Comment: File /tmp/lzx.com is in the correct state
     Started: 04:40:43.047724
    Duration: 88.949 ms
     Changes:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 04:40:43.136910
    Duration: 122.871 ms
     Changes:
              ----------
              /tmp/testdir/1.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640

Summary for lzx1
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time: 211.820 ms
Check on lzx1
# ls -l /tmp/testdir/
total 4
-rw-r----- 1 root root 511 Sep 12 04:40 1.txt
# ls -ld /tmp//testdir/
drwxr-x--- 2 root root 19 Sep 12 04:40 /tmp//testdir/
- Continue testing on the server:
# cd test
# ls
123
# mkdir abc
# touch aaa.txt
# ls
123  aaa.txt  abc
# mv abc/ 123/
# mv aaa.txt 123/
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/lzx.com
      Result: True
     Comment: File /tmp/lzx.com is in the correct state
     Started: 05:28:02.256768
    Duration: 71.776 ms
     Changes:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 05:28:02.328907
    Duration: 182.866 ms
     Changes:
              ----------
              /tmp/testdir/aaa.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640

Summary for lzx1
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time: 254.642 ms
# ls -l
total 0
drwxr-xr-x 3 root root 45 Sep 12 05:26 123
# tree .
.
└── 123
    ├── 1.txt
    ├── aaa.txt
    └── abc

2 directories, 2 files
Check on lzx1
# tree /tmp/testdir/
/tmp/testdir/
├── 1.txt
└── aaa.txt

0 directories, 2 files
The abc/ directory was not synced, because salt does not sync empty directories.
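If empty directories do need to be synchronized, file.recurse has an include_empty option for that. A minimal sketch of the change to test_dir.sls (this option is not used in the rest of this tutorial):

file_dir:
  file.recurse:
    - name: /tmp/testdir
    - source: salt://test/123
    - include_empty: True    # also create directories that contain no files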
- Test again on the server:
# touch 123/abc/2.txt
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/lzx.com
      Result: True
     Comment: File /tmp/lzx.com is in the correct state
     Started: 05:36:01.999480
    Duration: 112.723 ms
     Changes:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 05:36:02.112490
    Duration: 200.572 ms
     Changes:
              ----------
              /tmp/testdir/abc:
                  ----------
                  /tmp/testdir/abc:
                      New Dir
              /tmp/testdir/abc/2.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640

Summary for lzx1
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time: 313.295 ms
Check on lzx1
# tree /tmp/testdir/
/tmp/testdir/
├── 1.txt
├── aaa.txt
└── abc
    └── 2.txt

1 directory, 3 files
This time the abc/ directory was synced: it only contains an empty file, but as long as the directory itself is not empty it gets synchronized.
Managing remote commands
- Edit the configuration on the server:
# cd ..
# vim shell_test.sls
shell_test:                       # arbitrary state ID
  cmd.script:
    - source: salt://test/1.sh    # script to execute
    - user: root                  # user that runs the script
# vim test/1.sh                   # add the content below
#!/bin/bash
touch /tmp/111.txt
if [ ! -d /tmp/1233 ]
then
    mkdir /tmp/1233
fi
# vim top.sls
base:
  '*':
    - shell_test
- Verify:
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: shell_test
    Function: cmd.script
      Result: True
     Comment: Command 'shell_test' run
     Started: 21:17:49.175979
    Duration: 80.257 ms
     Changes:
              ----------
              pid:
                  1099
              retcode:
                  0
              stderr:
              stdout:

Summary for lzx1
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  80.257 ms
Check on lzx1
# ls -lt /tmp
total 4
drwxr-xr-x 2 root   root    6 Sep 12 21:17 1233
-rw-r--r-- 1 root   root    0 Sep 12 21:17 111.txt
srwx------ 1 mongod mongod  0 Sep 12 20:40 mongodb-27019.sock
drwx------ 3 root   root   17 Sep 12 20:40 systemd-private-a58fd593274d46159203958328d8889f-chronyd.service-rue1z2
drwxr-x--- 3 root   root   45 Sep 12 05:36 testdir
-rw------- 1 root   root  511 Sep 12 04:20 lzx.com
The 1233 directory was created, which shows the script ran successfully.
Managing cron jobs
- Edit the configuration on the server:
# vim cron_test.sls
cron-test:
  cron.present:
    - name: /bin/touch /tmp/www.txt    # the command to schedule; use an absolute path
    - user: root                       # user whose crontab gets the job
    - minute: '*'
    - hour: 20
    - daymonth: '*'
    - month: '*'
    - dayweek: '*'
The * characters must be wrapped in single quotes. As an alternative, cron entries can also be managed with the file.managed module; a sketch of that approach is given at the end of this section.
# vim top.sls
base:
  '*':
    - cron_test
- Verify:
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: cron-test
    Function: cron.present
        Name: /bin/touch /tmp/www.txt
      Result: True
     Comment: Cron /bin/touch /tmp/www.txt already present
     Started: 21:57:03.577338
    Duration: 140.875 ms
     Changes:

Summary for lzx1
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time: 140.875 ms
Check on lzx1
# crontab -l
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/www.txt
* 20 * * * /bin/touch /tmp/www.txt    # the cron job is present
The two comment lines above the job must not be modified, otherwise the master can no longer manage the minion's cron entries.
- Remove the cron job:
# vim cron_test.sls
cron-test:
  cron.absent:
    - name: /bin/touch /tmp/www.txt
- Verify:
# salt 'lzx1' state.highstate
lzx1:
----------
          ID: cron-test
    Function: cron.absent
        Name: /bin/touch /tmp/www.txt
      Result: True
     Comment: Cron /bin/touch /tmp/www.txt removed from root's crontab
     Started: 22:02:36.094802
    Duration: 287.551 ms
     Changes:
              ----------
              root:
                  /bin/touch /tmp/www.txt

Summary for lzx1
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 287.551 ms
Check on lzx1
# crontab -l
# Lines below here are managed by Salt, do not edit    # the touch /tmp/www.txt job is gone
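As mentioned earlier, cron entries can also be shipped as plain files instead of using cron.present. A minimal sketch that drops a file into /etc/cron.d via file.managed (the www-cron source file on the master is hypothetical):

# /srv/salt/cron_file.sls: manage a cron entry as a file under /etc/cron.d
cron-file:
  file.managed:
    - name: /etc/cron.d/www-cron
    - source: salt://test/www-cron    # assumed to contain: * 20 * * * root /bin/touch /tmp/www.txt
    - user: root
    - group: root
    - mode: 644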
Other commands
cp.get_file
Copy a file from the master to the minions:
# ls test
123  1.sh
# salt '*' cp.get_file salt://test/1.sh /tmp/123.sh    # copy test/1.sh to the minions; the copy fails if the file does not exist on the master
lzx1:
    /tmp/123.sh
lzx:
    /tmp/123.sh
# ls /tmp
123.sh  systemd-private-340efaf7bd0145018059f8fc09350b64-chronyd.service-02xcwz  systemd-private-340efaf7bd0145018059f8fc09350b64-httpd.service-KEEBwq
cp.get_dir
Copy a directory from the master to the minions:
# ls test/123/
1.txt  aaa.txt  abc
# salt '*' cp.get_dir salt://test/123 /tmp    # copy a directory; the copy fails if the directory does not exist on the master
lzx1:
    - /tmp/123/1.txt
    - /tmp/123/aaa.txt
    - /tmp/123/abc/2.txt
lzx:
    - /tmp/123/1.txt
    - /tmp/123/aaa.txt
    - /tmp/123/abc/2.txt
# ls /tmp/123
1.txt  aaa.txt  abc
Note that cp.get_dir creates the directory on the minion automatically, so do not append the directory name to /tmp; writing /tmp/123 as the destination would produce a /tmp/123/123 directory.
salt-run manage.up
Show the minions that are alive:
# salt-run manage.up    # list the minions that are currently up
- lzx
- lzx1
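The manage runner has companions for the opposite view; on the same master, something like the following shows unreachable minions and a combined summary:

# list minions that are not responding
salt-run manage.down

# show both up and down minions in one report
salt-run manage.status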
salt '*' cmd.script
Run a shell script stored on the master from the command line:
# salt '*' cmd.script salt://test/1.sh    # run a shell script that lives on the master
lzx1:
    ----------
    pid:
        1567
    retcode:
        0
    stderr:
    stdout:
lzx:
    ----------
    pid:
        6436
    retcode:
        0
    stderr:
    stdout:
Using salt-ssh
salt-ssh does not require key acceptance for the clients, and the clients do not need salt-minion installed; it works somewhat like expect.
- Install salt-ssh:
# yum install -y salt-ssh
- Edit the configuration file:
# vim /etc/salt/roster
lzx:
  host: 192.168.100.150
  user: root            # connect as root
  passwd: lzxlzxlzx     # root password
lzx1:
  host: 192.168.100.160
  user: root
  passwd: lzxlzxlzx
- Run salt-ssh:
# salt-ssh --key-deploy '*' -r 'w'    # the first run pushes the local public key to the targets; afterwards the passwords can be removed from the roster
lzx1:
    ----------
    retcode:
        254
    stderr:
    stdout:
        The host key needs to be accepted, to auto accept run salt-ssh with the -i flag:
        The authenticity of host '192.168.33.160 (192.168.33.160)' can't be established.
        ECDSA key fingerprint is SHA256:ZPsmO+OKZbdZegsq2gmGWqhl7fBOCbSljaF/K159vMQ.
        ECDSA key fingerprint is MD5:a6:61:af:87:78:db:c7:38:69:1b:ac:bc:e1:fa:51:7e.
        Are you sure you want to continue connecting (yes/no)?
lzx:
    ----------
    retcode:
        254
    stderr:
    stdout:
        The host key needs to be accepted, to auto accept run salt-ssh with the -i flag:
        The authenticity of host '192.168.33.150 (192.168.33.150)' can't be established.
        ECDSA key fingerprint is SHA256:ZPsmO+OKZbdZegsq2gmGWqhl7fBOCbSljaF/K159vMQ.
        ECDSA key fingerprint is MD5:a6:61:af:87:78:db:c7:38:69:1b:ac:bc:e1:fa:51:7e.
        Are you sure you want to continue connecting (yes/no)?
The run above did not succeed because on the first connection we still need to answer yes to the SSH host-key prompt.
# ssh lzx
The authenticity of host 'lzx (192.168.33.150)' can't be established.
ECDSA key fingerprint is SHA256:ZPsmO+OKZbdZegsq2gmGWqhl7fBOCbSljaF/K159vMQ.
ECDSA key fingerprint is MD5:a6:61:af:87:78:db:c7:38:69:1b:ac:bc:e1:fa:51:7e.
Are you sure you want to continue connecting (yes/no)? yes    # type yes
Warning: Permanently added 'lzx' (ECDSA) to the list of known hosts.
root@lzx's password:    # enter the password
Last login: Wed Sep 12 21:04:51 2018 from 192.168.33.1
# ssh lzx1
The authenticity of host 'lzx1 (192.168.33.160)' can't be established.
ECDSA key fingerprint is SHA256:ZPsmO+OKZbdZegsq2gmGWqhl7fBOCbSljaF/K159vMQ.
ECDSA key fingerprint is MD5:a6:61:af:87:78:db:c7:38:69:1b:ac:bc:e1:fa:51:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lzx1' (ECDSA) to the list of known hosts.
root@lzx1's password:
Last failed login: Wed Sep 12 22:41:43 EDT 2018 from 192.168.33.150 on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Wed Sep 12 22:02:36 2018
# salt-ssh --key-deploy '*' -r 'w'
lzx1:
    ----------
    retcode:
        0
    stderr:
    stdout:
         22:42:55 up  1:39,  1 user,  load average: 0.00, 0.01, 0.05
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    192.168.33.1     21:04   32:55   0.01s  0.01s -bash
lzx:
    ----------
    retcode:
        0
    stderr:
    stdout:
         22:42:55 up  1:39,  1 user,  load average: 0.02, 0.04, 0.05
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    192.168.33.1     21:04    7.00s  0.62s  0.01s /usr/bin/python /usr/bin/salt-ssh --key-deploy * -r w
This time the command runs without problems.
- Check when the key was last changed:
On lzx
# ls -l ~/.ssh/authorized_keys
-rw------- 1 root root 390 Sep 12 22:37 /root/.ssh/authorized_keys
# date
Wed Sep 12 22:45:06 EDT 2018
On lzx1
# ls -l ~/.ssh/authorized_keys
-rw------- 1 root root 390 Sep 12 22:37 /root/.ssh/authorized_keys
# cat !$
cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQGI/0xyhDa4MfvIl2DSEjVvH7ktk/PmKul6RFJwpKXa/U78BTyXIOnyOu2CPgFBuUX6AOPMXswzHYoiSQP49D6R2n9Hv8g1gQ54Y8H6zAUQXGjHFcoSH5HPwXGgS3TMWXc3tzL6k0acqKE2e13cacgswV8qmHODXJADEkXy1rmruTHbFDI17V3cfc66uWkS0NMjlajj0of+G/9kGLCp1z0YlFxsWiMVQ9LdxZzXCeXALpa53p2Tyj5B80HBD88MDxaIBkJagAK8G+Sr9ksZ6y6TEYPW6DcujxqIOuckxZ7DSQz4hxONAjEwFF2/B7hzPD8A/elXD2hnf3rNDnS9bz root@lzx
- Remove the passwords from the roster:
# vim /etc/salt/roster
lzx:
  host: 192.168.100.150
  user: root
lzx1:
  host: 192.168.100.160
  user: root
# salt-ssh --key-deploy '*' -r 'w'
lzx:
    ----------
    retcode:
        0
    stderr:
    stdout:
         22:58:11 up  1:54,  1 user,  load average: 0.01, 0.05, 0.05
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    192.168.33.1     21:04    3.00s  0.72s  0.00s /usr/bin/python /usr/bin/salt-ssh --key-deploy * -r w
lzx1:
    ----------
    retcode:
        0
    stderr:
    stdout:
         22:58:11 up  1:54,  1 user,  load average: 0.00, 0.03, 0.05
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    192.168.33.1     21:04    3:07   0.02s  0.02s -bash
With the passwords removed, the command still works, because the public key has already been pushed to the clients.
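salt-ssh is not limited to raw commands via -r; it can also run execution and state modules over SSH. Assuming the /srv/salt tree from the earlier sections is still in place, something like the following should work:

# run an execution module over SSH
salt-ssh '*' test.ping

# apply the states referenced by top.sls over SSH (no salt-minion required on the target)
salt-ssh 'lzx1' state.highstate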