Enterprise Operations in Practice -- Deploying a k8s High-Availability Cluster
Author: Internet
haproxy load balancing + k8s high-availability cluster
Project overview:
In the earlier k8s work we only operated against a single k8s master node; once that node goes down, k8s can no longer perform any further deployment or management work. In this project, haproxy is configured in front of the k8s master hosts to provide load balancing, and three k8s master hosts are used to make the k8s cluster highly available.
Architecture diagram:
Project preparation:
Prepare six virtual machines, server5 through server10, running rhel7.6, with the firewall and selinux disabled on all of them:
- server5/server6: provide haproxy load balancing for the k8s high-availability cluster
- server7/server8/server9: master nodes of the k8s high-availability cluster
- server10: k8s worker test node
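Putting the pieces together: kubectl and the worker node reach the control plane only through the VIP held by the haproxy pair, which balances connections across the three apiservers. Roughly:

kubectl / worker (server10)
          |
          v
VIP 172.25.9.100:6443   (pcs group: vip + haproxy, active on server5 or server6)
          |
          v
haproxy (tcp mode, roundrobin)
    |            |            |
server7      server8      server9    <- kube-apiserver :6443 on each master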
haproxy load balancing configuration
server5 and server6 handle the load-balancing side and provide the load-balancing service for the k8s master hosts.
First configure name resolution and the repository files, then install the haproxy-related packages.
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.9.254 foundation39.ilt.example.com
172.25.9.1 server1 hyl.westos.org
172.25.9.2 server2
172.25.9.3 server3
172.25.9.4 server4
172.25.9.5 server5
172.25.9.6 server6
172.25.9.7 server7
172.25.9.8 server8
172.25.9.9 server9
172.25.9.10 server10
Edit the repository file and copy it to the other hosts
vim dvd.repo
cat dvd.repo
[dvd]
name=dvd
baseurl=http://172.25.9.254/rhel7.6
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.9.254/rhel7.6/addons/HighAvailability
gpgcheck=0
----------------------------------------------
cat docker.repo
[docker]
name=docker-ce
baseurl=http://172.25.9.254/docker-ce
gpgcheck=0
The steps above must be carried out on every host; the files can also be pushed out with scp.
On server5 and server6, install the pcs/pacemaker cluster components (which will manage haproxy) and enable pcsd at boot
yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl enable --now pcsd.service
Set a password for the hacluster user (created by the pcs package) and authenticate the cluster nodes
passwd hacluster
pcs cluster auth server5 server6
On server5, set up the cluster named mycluster
pcs cluster setup --name mycluster server5 server6
Start the pcs cluster and enable it on all nodes
pcs cluster start --all
pcs cluster enable --all
Disable fencing (stonith) to clear the warning
pcs property set stonith-enabled=false
crm_verify -L -V
Once it is disabled, pcs status no longer shows the warning.
Create the VIP 172.25.9.100; the service will be reached through this VIP
pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.9.100 op monitor interval=30s
ip addr
Check the pcs status
pcs status
Install the haproxy service and edit its configuration file
yum install -y haproxy
vim /etc/haproxy/haproxy.cfg
# Listen on port 80 to expose the load-balancer status page
listen stats *:80
stats uri /status
...
...
# Frontend listens on port 6443 in tcp mode
frontend main *:6443
mode tcp
default_backend app
backend app
# Backend nodes k8s1/k8s2/k8s3; health-check port 6443 on each node's IP
balance roundrobin
mode tcp
server k8s1 172.25.9.7:6443 check
server k8s2 172.25.9.8:6443 check
server k8s3 172.25.9.9:6443 check
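Before handing the service over to the cluster, the configuration can be syntax-checked (an optional extra step):
haproxy -c -f /etc/haproxy/haproxy.cfg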
Start the haproxy service
systemctl start haproxy
Check that the listening port is open
netstat -antlp|grep :6443
Repeat the same steps on server6; after the access test, stop the haproxy service so that the haproxy instance being handed over to the cluster is stopped.
scp /etc/haproxy/haproxy.cfg server6:/etc/haproxy/
systemctl stop haproxy.service
Add the haproxy service as a resource in the pcs cluster
pcs resource create haproxy systemd:haproxy op monitor interval=60s
pcs status
Put the vip and haproxy resources into one group so they always run on the same node
pcs resource group add hagroup vip haproxy
pcs status
Access in a browser:
http://172.25.9.5/status
Do the same on server6 so that, together with server5, it provides a highly available haproxy load balancer.
Web access test:
http://172.25.9.6/status
k8s high-availability cluster deployment
Install the k8s high-availability cluster on server7/server8/server9:
server7:
Check name resolution
vim /etc/hosts
(the /etc/hosts entries are identical to those shown above for server5/server6)
Check the repository file
cat /etc/yum.repos.d/docker.repo
[docker]
name=docker-ce
baseurl=http://172.25.9.254/docker-ce
gpgcheck=0
Deploy docker
yum install -y docker-ce
systemctl enable --now docker
Edit daemon.json to switch the cgroup driver to systemd and point the registry at the local harbor registry
cd /etc/docker/
vim daemon.json
cat daemon.json
{
  "registry-mirrors": ["https://hyl.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Restart docker so it picks up the new configuration
systemctl restart docker
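A quick check that the cgroup driver change took effect:
docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd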
Log in to the docker registry and copy the prepared registry certificates to the k8s cluster hosts.
To clear the warnings shown in docker info, set the bridge sysctls and reload them:
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system
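These settings only apply when the br_netfilter module is loaded; if the warnings persist, loading it explicitly and re-running sysctl should help (an extra step not shown in the original write-up):
modprobe br_netfilter
sysctl --system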
Copy the configuration files to the other k8s master hosts
scp daemon.json server8:/etc/docker/
scp daemon.json server9:/etc/docker/
Test-pull the busybox image from the harbor registry to verify the registry is configured correctly
docker pull busybox
Load the ipvs kernel module and install ipvsadm; kube-proxy will run in IPVS mode, which reduces the pressure on iptables.
modprobe ip_vs
lsmod |grep ip_vs
yum install -y ipvsadm
ipvsadm -l
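modprobe only lasts until the next reboot; to load the module automatically at boot, a modules-load.d entry can be added (an optional extra, not part of the original steps):
echo ip_vs > /etc/modules-load.d/ipvs.conf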
Install k8s from the prepared rpm packages
tar zxf kubeadm-1.21.3.tar.gz
cd packages/
yum install -y *
Copy the k8s installation packages to server8 and server9
scp -r packages/ server8:
scp -r packages/ server9:
Enable and start the kubelet service
systemctl enable --now kubelet
Generate the kubeadm init configuration file and edit it
kubeadm config print init-defaults > kubeadm-init.yaml
vim kubeadm-init.yaml
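The edited file is not shown in the original; a minimal sketch of the fields that typically need changing for this setup might look like the following, where the advertiseAddress and the imageRepository path are assumptions for this environment (the controlPlaneEndpoint must be the haproxy VIP, the podSubnet must match flannel, and kube-proxy is switched to IPVS):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.25.9.7              # this master's own IP (assumption)
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.3
controlPlaneEndpoint: "172.25.9.100:6443"   # the haproxy VIP, not a single master
imageRepository: hyl.westos.org/k8s         # local harbor project (assumption)
networking:
  podSubnet: 10.244.0.0/16                  # matches the flannel network used below
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs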
Disable the swap partition
swapoff -a
vim /etc/fstab    # comment out the line that enables swap at boot
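Instead of editing by hand, a one-liner can comment out the swap entry (assuming the line contains the word "swap"):
sed -i '/swap/ s/^/#/' /etc/fstab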
Pre-pull the images to verify that the kubeadm init file can pull from the private harbor registry
kubeadm config images pull --config kubeadm-init.yaml
Initialize k8s
kubeadm init --config kubeadm-init.yaml --upload-certs
Initialization output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 172.25.9.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1f48896bb91d5c6ccf2a322321701eab74b73dba3e9e875d96f800894ac1fc18 \
--control-plane --certificate-key 47b29e52516f862eba711a162167913a27e109ccdd2888e089fe7e301de9697c
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.25.9.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1f48896bb91d5c6ccf2a322321701eab74b73dba3e9e875d96f800894ac1fc18
Continue with the next step as instructed by the output
export KUBECONFIG=/etc/kubernetes/admin.conf
Enable command-line completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
Check the status of the system pods
kubectl get pod -n kube-system
kubectl get pod -n kube-system -o wide
Some pods are not Running yet because no network plugin has been installed.
Add the network plugin and change its backend mode to host-gw in the manifest
vim kube-flannel.yml
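The change is in the kube-flannel ConfigMap's net-conf.json, switching the backend Type from the default vxlan to host-gw (the Network value has to match the podSubnet used at init):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }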
Apply the manifest and check again; all pods return to normal
kubectl apply -f kube-flannel.yml
kubectl get pod -n kube-system
Add server8 and server9 to server7's k8s high-availability cluster
server8/server9:
Join server7's high-availability cluster as additional control-plane nodes
kubeadm join 172.25.9.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1f48896bb91d5c6ccf2a322321701eab74b73dba3e9e875d96f800894ac1fc18 \
    --control-plane --certificate-key 47b29e52516f862eba711a162167913a27e109ccdd2888e089fe7e301de9697c
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "source <(kubectl completion bash)" >> ~/.bashrc #添加命令行补齐功能,方便后续操作
After they join successfully, check the node status on server7
kubectl get nodes
You can see that all three master hosts are READY.
Test 1: high availability for pod management
Run a pod through the current master host; after this master host goes down, the pod can still be inspected and managed from the other master hosts.
kubectl run demo --image=myapp:v1
kubectl get pod -o wide
curl 10.244.3.2
Shut down the server7 host.
The pod is found to have moved to the server8 node; the k8s high-availability cluster has been built successfully.
Test 2: high availability for k8s worker nodes
Add server10 as a k8s worker node, initially served through the server7 master; when the server7 k8s host is shut down, the node automatically switches over to server8 and keeps running normally.
Copy server9's docker installation and configuration files to server10
scp docker.repo server10:/etc/yum.repos.d/
scp -r certs.d/ daemon.json server10:/etc/docker/
scp /etc/sysctl.d/docker.conf server10:/etc/sysctl.d/
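On server10 itself the same preparation as on the masters is still needed before joining; a condensed sketch of those steps (all shown above for the masters):

yum install -y docker-ce
systemctl enable --now docker
sysctl --system                    # apply the bridge settings copied above
cd packages/ && yum install -y *   # kubeadm/kubelet/kubectl (assumes packages/ was also copied to server10)
systemctl enable --now kubelet
swapoff -a                         # and comment out swap in /etc/fstab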
Once server10 is prepared as a k8s worker node, join it to the cluster; this command comes from the output of the k8s cluster initialization.
kubeadm join 172.25.9.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1f48896bb91d5c6ccf2a322321701eab74b73dba3e9e875d96f800894ac1fc18
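If the original token has expired (kubeadm tokens are valid for 24 hours by default), a fresh worker join command can be printed on any master:
kubeadm token create --print-join-command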
Shut down the server7 k8s host: the node automatically switches over to server8 and keeps running normally.
Then shut down server8 as well: the nodes can no longer be queried from server9. The reason is that with three k8s master hosts the cluster can tolerate at most one of them being down; etcd needs a majority (2 of 3 members) to keep quorum, so the remaining two keep the cluster highly available.
Bring the server7 and server8 master hosts back up and the pods return to normal.
Finally, test the high availability of the haproxy load balancer itself: put server5 into standby and the service automatically moves over to server6.
Source: https://blog.csdn.net/weixin_45233090/article/details/119487710