Kubernetes 1.21.1 Cluster (1 Master, 2 Workers) Environment Setup
Cluster Overview
OS & Kernel | IP | Role | kubeadm version | Docker version
---|---|---|---|---
CentOS 7, 3.10.0-1160.25.1.el7.x86_64 | 192.168.56.186 | Master | 1.21.1 | 20.10.7
CentOS 7, 3.10.0-1160.25.1.el7.x86_64 | 192.168.56.187 | Worker | 1.21.1 | 20.10.7
CentOS 7, 3.10.0-1160.25.1.el7.x86_64 | 192.168.56.188 | Worker | 1.21.1 | 20.10.7
Official References
Docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
GitHub: https://github.com/kubernetes/kubeadm
Requirements
Checks on each node
MAC address uniqueness
You can use ip link or ifconfig -a to get the MAC addresses of the network interfaces:
# verify MAC addresses
ifconfig -a
product_uuid uniqueness
You can check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid.
Hardware devices generally have unique addresses, but some virtual machines may share them. Kubernetes uses these values to uniquely identify the nodes in a cluster.
# verify product_uuid
sudo cat /sys/class/dmi/id/product_uuid
Let iptables see bridged traffic
Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter; to load it explicitly, run sudo modprobe br_netfilter.
[root@w2 ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
For iptables on your Linux nodes to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
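As an optional sanity check, you can confirm the module is loaded and that the sysctl values took effect:
# both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
lsmod | grep br_netfilter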
Consistent system versions
Kernel version
Docker requires the CentOS kernel version to be above 3.10:
$ uname -r
3.10.0-1160.25.1.el7.x86_64
System prerequisites
# disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# set iptables ACCEPT rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# sync the clock
yum install ntpdate -y && ntpdate time.windows.com
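As an optional check that these settings are in place:
systemctl is-active firewalld   # should print: inactive
getenforce                      # should print: Permissive
free -m                         # the Swap line should show 0 total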
Network configuration
External network access
Each node needs NAT networking enabled so it can reach the external network through the host.
Configure the NIC subnet
Prepare your CentOS 7 virtual machines according to your own environment.
Make sure the machines can ping each other, i.e. they are on the same network.
You can also do the following:
Enable Host-Only mode for the VMs; by default this uses the 192.168.56.0/24 subnet.
Configure a virtual NIC on the same subnet (ifcfg-enp0s8) on all three machines.
# change into the NIC configuration directory
[root@bogon network-scripts]# cd /etc/sysconfig/network-scripts/
[root@bogon network-scripts]# ls
ifcfg-enp0s8 ifdown-ippp ifdown-Team ifup-ib ifup-ppp init.ipv6-global
ifcfg-enp0s9 ifdown-ipv6 ifdown-TeamPort ifup-ippp ifup-routes network-functions
ifcfg-lo ifdown-isdn ifdown-tunnel ifup-ipv6 ifup-sit network-functions-ipv6
ifdown ifdown-post ifup ifup-isdn ifup-Team
ifdown-bnep ifdown-ppp ifup-aliases ifup-plip ifup-TeamPort
ifdown-eth ifdown-routes ifup-bnep ifup-plusb ifup-tunnel
ifdown-ib ifdown-sit ifup-eth ifup-post ifup-wireless
Machine 186
[root@bogon network-scripts]# vi ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=2f812198-74d6-4d6a-9fcd-8f6d1058b686
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.186
GATEWAY=192.168.56.1
NETMASK=255.255.255.0
Machine 187
[root@bogon network-scripts]# vi ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=2f812198-74d6-4d6a-9fcd-8f6d1058b687
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.187
GATEWAY=192.168.56.1
NETMASK=255.255.255.0
Machine 188
[root@bogon network-scripts]# vi ifcfg-enp0s8
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s8
UUID=2f812198-74d6-4d6a-9fcd-8f6d1058b688
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.188
GATEWAY=192.168.56.1
NETMASK=255.255.255.0
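After editing each file, restart networking so the static address takes effect (assuming CentOS 7's network service manages the NICs):
systemctl restart network
ip addr show enp0s8   # confirm the configured IP is applied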
Edit the hosts file
Set /etc/hosts on all three machines:
vi /etc/hosts
192.168.56.186 m
192.168.56.187 w1
192.168.56.188 w2
Set the hostname, i.e. an alias to use in place of the IP:
# set the hostname on 186
sudo hostnamectl set-hostname m
# set the hostname on 187
sudo hostnamectl set-hostname w1
# set the hostname on 188
sudo hostnamectl set-hostname w2
Test with ping:
ping m
ping w1
ping w2
Install Docker and the kube components on every node
Update and install dependencies
Run on all three machines:
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
Install Docker
Official reference
Official install guide: https://docs.docker.com/engine/install/centos/
Install Docker 20.10.7 on every machine.
01 Install the required dependencies
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
02 Configure the Docker repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[Also configure an Aliyun registry mirror accelerator here]
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["这边替换成自己的实际地址"]
}
EOF
sudo systemctl daemon-reload
03 Install Docker
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io
04 Start Docker and enable it at boot
sudo systemctl start docker && sudo systemctl enable docker
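To verify the installation (optional):
docker version                # client and server should both report 20.10.7
systemctl is-enabled docker   # should print: enabled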
Install kubeadm, kubelet, and kubectl
Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
Install kubeadm & kubelet & kubectl
# install, enable kubelet at boot, and start kubelet
yum install -y kubectl-1.21.1 kubelet-1.21.1 kubeadm-1.21.1
systemctl enable kubelet
systemctl start kubelet
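As an optional check, confirm the tool versions (note: kubelet will keep restarting until kubeadm init runs; that is expected):
kubeadm version -o short           # v1.21.1
kubectl version --client --short   # Client Version: v1.21.1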
Set Docker and Kubernetes to the same cgroup driver
# docker: add the following line to /etc/docker/daemon.json
vi /etc/docker/daemon.json
"exec-opts": ["native.cgroupdriver=systemd"],
systemctl restart docker
# kubelet: if this reports that the directory does not exist, that is fine too; just continue
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Pull the images via Aliyun
Check which images kubeadm uses
Note that they are all hosted on registries that are hard to reach from mainland China:
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.1
k8s.gcr.io/kube-controller-manager:v1.21.1
k8s.gcr.io/kube-scheduler:v1.21.1
k8s.gcr.io/kube-proxy:v1.21.1
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
Work around the unreachable registries
Pull the images from the Aliyun mirror, then re-tag them with the names Kubernetes expects.
Create a kubeadm.sh script that pulls each image, re-tags it, and removes the original:
vi kubeadm.sh
#!/bin/bash
set -e

KUBE_VERSION=v1.21.1
KUBE_PAUSE_VERSION=3.4.1
ETCD_VERSION=3.4.13-0
CORE_DNS_VERSION=1.8.0

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
    docker pull $ALIYUN_URL/$imageName
    docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
    docker rmi $ALIYUN_URL/$imageName
done

# kubeadm 1.21 expects coredns under the coredns/ sub-path with a v prefix
docker tag k8s.gcr.io/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

echo 'Images downloaded; listing them:'
docker images | grep k8s.gcr.io
echo 'Now running: kubeadm config images list'
kubeadm config images list
echo 'Compare the image names and tags.'
Run the script and check the images:
# run the script
sh ./kubeadm.sh
Then compare the image names and tags against kubeadm's list.
Master initialization
Note: this step is performed on the master node only.
Initialize the master node
Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Now that all the images from kubeadm config images list are available locally, run the following command.
Remember to save the kubeadm join line printed at the end of a successful init.
# run init
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.21.1 --apiserver-advertise-address=192.168.56.186
# if it does not succeed the first time, reset the cluster state with kubeadm reset and repeat the steps above
# a successful init prints the following, which you will need later
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.186:6443 --token 71h03b.psh05o31axvp09jg \
--discovery-token-ca-cert-hash sha256:5ed285e40f048e923d4a0e06dfeaac7f3ffcf20bed6402ee3fec1ee4b42d14d8
Record your kubeadm join command:
kubeadm join 192.168.56.186:6443 --token 71h03b.psh05o31axvp09jg \
--discovery-token-ca-cert-hash sha256:5ed285e40f048e923d4a0e06dfeaac7f3ffcf20bed6402ee3fec1ee4b42d14d8
To start using the cluster, follow the printed instructions:
# as a non-root user, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# as the root user, you can instead run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Now check that it worked with kubectl cluster-info.
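Expected output looks roughly like this:
kubectl cluster-info
Kubernetes control plane is running at https://192.168.56.186:6443
CoreDNS is running at https://192.168.56.186:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy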
Verify the pods
After a short wait you can see that components such as etcd, controller-manager, and scheduler have been installed as pods.
Note: coredns has not started; the network plugin still needs to be installed.
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-558bd4d5db-7zfht 0/1 Pending 0 5m56s
coredns-558bd4d5db-95k88 0/1 Pending 0 5m56s
etcd-m 1/1 Running 0 6m9s
kube-apiserver-m 1/1 Running 0 6m9s
kube-controller-manager-m 1/1 Running 0 6m9s
kube-proxy-nfvcr 1/1 Running 0 5m56s
kube-scheduler-m 1/1 Running 0 6m8s
# the coredns pods above are Pending; they will start once the network plugin is installed
Health check
curl -k https://localhost:6443/healthz
ok
Deploy the Calico network plugin
Choosing a network plugin: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Calico network plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/
Install Calico, again on the master node.
# install calico into k8s
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
# confirm that calico installed successfully
kubectl get pods --all-namespaces -w
Join the worker nodes to the cluster
Remember the information printed at the end of the master node initialization?
Take that kubeadm join command and run it on the worker nodes:
kubeadm join 192.168.56.186:6443 --token 71h03b.psh05o31axvp09jg \
--discovery-token-ca-cert-hash sha256:5ed285e40f048e923d4a0e06dfeaac7f3ffcf20bed6402ee3fec1ee4b42d14d8
Run the above command on worker01 and worker02.
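If the token has since expired (tokens are valid for 24 hours by default), generate a fresh join command on the master first:
kubeadm token create --print-join-command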
Then check the cluster from the master node:
kubectl get nodes
NAME   STATUS   ROLES                  AGE     VERSION
m      Ready    control-plane,master   19m     v1.21.1
w1     Ready    <none>                 3m6s    v1.21.1
w2     Ready    <none>                 2m41s   v1.21.1
A node's STATUS starts as NotReady while the cluster is still establishing connections; wait a moment and watch the progress with:
kubectl get pods --all-namespaces -w
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-76bf499b46-mxv6q 0/1 Pending 0 47s
kube-system calico-node-fn8mx 0/1 Init:0/3 0 48s
kube-system calico-node-mpmpt 0/1 Init:0/3 0 48s
kube-system calico-node-v4hpf 0/1 Init:0/3 0 48s
kube-system coredns-558bd4d5db-7zfht 0/1 Pending 0 46m
kube-system coredns-558bd4d5db-95k88 0/1 Pending 0 46m
kube-system etcd-m 1/1 Running 0 46m
kube-system kube-apiserver-m 1/1 Running 0 46m
kube-system kube-controller-manager-m 1/1 Running 0 46m
kube-system kube-proxy-2svlm 1/1 Running 0 21m
kube-system kube-proxy-nfvcr 1/1 Running 0 46m
kube-system kube-proxy-tpqlk 1/1 Running 0 21m
kube-system kube-scheduler-m 1/1 Running 0 46m
tigera-operator tigera-operator-86c4fc874f-kw2dd 1/1 Running 0 31m
Once every pod's status becomes Running, the nodes turn Ready.
When everything is Ready, the cluster has been set up successfully.
Verification
Deploy pods to verify the cluster installation
Define a manifest, e.g. pod_nginx_rs.yaml:
cat > pod_nginx_rs.yaml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
Create the pods from the pod_nginx_rs.yaml file:
kubectl apply -f pod_nginx_rs.yaml
replicaset.apps/nginx created
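You can also check the ReplicaSet that owns these pods (READY reaches 3 once the pods are running):
kubectl get rs nginx
# NAME    DESIRED   CURRENT   READY   AGE
# nginx   3         3         3       2m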
Check the pods:
kubectl get pods
# wait about a minute for results like the following
NAME READY STATUS RESTARTS AGE
nginx-2zl2r 0/1 Pending 0 66s
nginx-74x6h 0/1 Pending 0 66s
nginx-wbwjx 0/1 Pending 0 66s
# after about 2 minutes, check the runtime details
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-2zl2r 1/1 Running 0 3m18s 192.168.80.195 w2 <none> <none>
nginx-74x6h 1/1 Running 0 3m18s 192.168.80.196 w2 <none> <none>
nginx-wbwjx 1/1 Running 0 3m18s 192.168.190.66 w1 <none> <none>
# view the detailed status
kubectl describe pod nginx
Verify nginx
# from inside the cluster, each of the following three commands returns a result
curl 192.168.80.195
curl 192.168.80.196
curl 192.168.190.66
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Delete the pods
kubectl delete -f pod_nginx_rs.yaml
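A final check confirms the pods are gone:
kubectl get pods
# No resources found in default namespace.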