Binary Installation of a k8s Cluster (Part 2)
Deploying kube-controller-manager
Cluster plan
Hostname | Role | IP |
---|---|---|
hdss7-21.host.com | controller-manager | 10.4.7.21 |
hdss7-22.host.com | controller-manager | 10.4.7.22 |
Create the startup script /opt/kubernetes/server/bin/kube-controller-manager.sh (on hdss7-21 and hdss7-22)
- Edit the kube-controller-manager.sh script
```sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
```
- Grant execute permission and create the log directory
```sh
chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
mkdir -p /data/logs/kubernetes/kube-controller-manager
```
- Create the supervisor config /etc/supervisord.d/kube-controller-manager.ini (the program name below matches hdss7-21; use kube-controller-manager-7-22 on hdss7-22)
```ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh    ; the program (relative uses PATH, can take args)
numprocs=1                                                       ; number of process copies to start (def 1)
directory=/opt/kubernetes/server/bin                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                   ; start at supervisord start (default: true)
autorestart=true                                                 ; restart at unexpected quit (default: true)
startsecs=30                                                     ; number of secs prog must stay running (def. 1)
startretries=3                                                   ; max # of serial start failures (default 3)
exitcodes=0,2                                                    ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                  ; signal used to kill process (default TERM)
stopwaitsecs=10                                                  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                        ; setuid to this UNIX account to run the program
redirect_stderr=true                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                     ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                         ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                      ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                      ; emit events on stdout writes (default false)
```
- Start controller-manager
```sh
supervisorctl update
```
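To confirm the process stayed up past startsecs, check supervisor; the status line below is illustrative:

```sh
supervisorctl status
# kube-controller-manager-7-21     RUNNING   pid 12345, uptime 0:01:02
```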
Deploying kube-scheduler (hdss7-21, hdss7-22)
Cluster plan
Hostname | Role | IP |
---|---|---|
hdss7-21.host.com | kube-scheduler | 10.4.7.21 |
hdss7-22.host.com | kube-scheduler | 10.4.7.22 |
Create the startup script
- Create /opt/kubernetes/server/bin/kube-scheduler.sh
```sh
#!/bin/sh
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
```
- Grant execute permission and create the log directory
```sh
chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
mkdir -p /data/logs/kubernetes/kube-scheduler
```
- Create the supervisor config /etc/supervisord.d/kube-scheduler.ini (use kube-scheduler-7-22 on hdss7-22)
```ini
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh             ; the program (relative uses PATH, can take args)
numprocs=1                                                       ; number of process copies to start (def 1)
directory=/opt/kubernetes/server/bin                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                   ; start at supervisord start (default: true)
autorestart=true                                                 ; restart at unexpected quit (default: true)
startsecs=30                                                     ; number of secs prog must stay running (def. 1)
startretries=3                                                   ; max # of serial start failures (default 3)
exitcodes=0,2                                                    ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                  ; signal used to kill process (default TERM)
stopwaitsecs=10                                                  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                        ; setuid to this UNIX account to run the program
redirect_stderr=true                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                     ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                         ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                      ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                      ; emit events on stdout writes (default false)
```
- Start kube-scheduler
```sh
supervisorctl update
```
Check cluster status
- etcd, kube-apiserver, kube-controller-manager, and kube-scheduler are now all deployed, so we can check the health of the cluster.
- Create a symlink for kubectl
```sh
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
```
- Check cluster health
```sh
kubectl get cs    # cs is short for componentstatuses
```
```
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```
Deploying the Node Components
Deploying the kubelet service
Cluster plan
Hostname | Role | IP |
---|---|---|
hdss7-21.host.com | kubelet | 10.4.7.21 |
hdss7-22.host.com | kubelet | 10.4.7.22 |
Issue the kubelet certificate (on hdss7-200)
- Edit the certificate signing request file /opt/certs/kubelet-csr.json
{ "CN": "k8s-kubelet", "hosts": [ "127.0.0.1", "10.4.7.10", "10.4.7.21", "10.4.7.22", "10.4.7.23", "10.4.7.24", "10.4.7.25", "10.4.7.26", "10.4.7.27", "10.4.7.28" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ] }
- Issue the certificate
```sh
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
```
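If signing succeeded, `cfssl-json -bare kubelet` writes the key pair and CSR next to the request file:

```sh
ls kubelet*
# kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem
```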
Copy the certificates to /opt/kubernetes/server/bin/cert on hdss7-21 and hdss7-22
- Copy the certificate and private key to hdss7-21 and hdss7-22; note that the private key's permissions must be 600
```sh
scp hdss7-200:/opt/certs/kubelet.pem .
scp hdss7-200:/opt/certs/kubelet-key.pem .
```
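A minimal follow-up for the permission note above, run in /opt/kubernetes/server/bin/cert after the copy:

```sh
chmod 600 kubelet-key.pem
ls -l kubelet-key.pem    # should show -rw-------
```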
- Create the kubeconfig; run the following commands from the /opt/kubernetes/server/bin/conf directory (on hdss7-21 only)
- set-cluster
```sh
kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kubelet.kubeconfig
```
- set-credentials
```sh
kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig
```
- set-context
```sh
kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
```
- switch-context
```sh
kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
```
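To sanity-check the generated kubeconfig before distributing it (output shape is illustrative):

```sh
kubectl config get-contexts --kubeconfig=kubelet.kubeconfig
# CURRENT   NAME            CLUSTER   AUTHINFO   NAMESPACE
# *         myk8s-context   myk8s     k8s-node
```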
- Create the resource file /opt/kubernetes/server/bin/conf/k8s-node.yaml. It binds the user k8s-node to the ClusterRole system:node, which grants nodes authenticating as that user the permissions needed to join the k8s cluster as compute nodes.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
```
- Apply it
```sh
kubectl create -f k8s-node.yaml
```
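A quick check that the binding exists (the AGE value is illustrative):

```sh
kubectl get clusterrolebinding k8s-node
# NAME       AGE
# k8s-node   12s
```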
- Copy kubelet.kubeconfig from hdss7-21 to hdss7-22 (run on hdss7-22)
```sh
cd /opt/kubernetes/server/bin/conf
scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
```
Prepare the pause base image (on hdss7-200)
- Pull the pause image
```sh
docker pull kubernetes/pause
```
- Tag it and push it to your own Harbor registry
```sh
docker tag f9d5de079539 harbor.od.com/public/pause:latest
docker push harbor.od.com/public/pause:latest
```
- The pause image is tiny, so a container can be started from it very quickly. It starts before a pod's business containers, mainly to initialize the pod's environment (it holds the namespaces the other containers join).
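Once kubelet is running pods (later in this section), the effect is visible on a node: every pod gets one extra pause container holding its namespaces. A rough illustration, assuming the docker runtime's usual k8s_POD_ container naming:

```sh
docker ps | grep pause
# ...   harbor.od.com/public/pause:latest   "/pause"   ...   k8s_POD_nginx-ds-xxxxx_default_...
```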
Create the kubelet startup script (on both hdss7-21 and hdss7-22)
- Create the kubelet.sh startup script (the example below is for hdss7-22; on hdss7-21 set --hostname-override to hdss7-21.host.com)
```sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override hdss7-22.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet
```
- Create the log and data directories, and grant kubelet.sh execute permission
```sh
chmod +x /opt/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
```
- Create the /etc/supervisord.d/kube-kubelet.ini config (use kube-kubelet-7-22 on hdss7-22)
```ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh                    ; the program (relative uses PATH, can take args)
numprocs=1                                                       ; number of process copies to start (def 1)
directory=/opt/kubernetes/server/bin                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                   ; start at supervisord start (default: true)
autorestart=true                                                 ; restart at unexpected quit (default: true)
startsecs=30                                                     ; number of secs prog must stay running (def. 1)
startretries=3                                                   ; max # of serial start failures (default 3)
exitcodes=0,2                                                    ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                  ; signal used to kill process (default TERM)
stopwaitsecs=10                                                  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                        ; setuid to this UNIX account to run the program
redirect_stderr=true                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                     ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                         ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                      ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                      ; emit events on stdout writes (default false)
```
- Start kubelet
```sh
supervisorctl update
```
- Check whether the nodes have joined the cluster
```sh
kubectl get node
```
```
NAME                STATUS   ROLES    AGE    VERSION
hdss7-21.host.com   Ready    <none>   102s   v1.15.2
hdss7-22.host.com   Ready    <none>   80s    v1.15.2
```
- Add roles to the nodes
```sh
kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
kubectl label node hdss7-22.host.com node-role.kubernetes.io/node=
```
```
NAME                STATUS   ROLES         AGE     VERSION
hdss7-21.host.com   Ready    master,node   5m58s   v1.15.2
hdss7-22.host.com   Ready    master,node   5m36s   v1.15.2
```
The nodes now carry roles: both of them act as master and node at the same time.
Deploying kube-proxy (its main job is to connect the pod network with the cluster network)
Cluster plan
Hostname | Role | IP |
---|---|---|
hdss7-21.host.com | kube-proxy | 10.4.7.21 |
hdss7-22.host.com | kube-proxy | 10.4.7.22 |
Issue the kube-proxy certificate (on the ops host hdss7-200.host.com)
- Create the JSON config for the certificate signing request, /opt/certs/kube-proxy-csr.json
{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ] }
- Generate the certificate
```sh
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
```
Copy the certificate to each compute node (hdss7-21, hdss7-22) and create the kubeconfig
- Copy the certificate and private key; keep the private key's permissions at 600
```sh
scp hdss7-200:/opt/certs/kube-proxy-client.pem .
scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .
```
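Same permission fix as for the kubelet key, run in the cert directory:

```sh
chmod 600 kube-proxy-client-key.pem
```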
- Create the kubeconfig; run the following commands from the /opt/kubernetes/server/bin/conf directory (on hdss7-21 only)
- set-cluster
```sh
kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=kube-proxy.kubeconfig
```
- set-credentials
```sh
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
```
- set-context
```sh
kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
```
- switch-context
```sh
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
```
- Copy the generated kube-proxy.kubeconfig to /opt/kubernetes/server/bin/conf on hdss7-22 (run on hdss7-22)
```sh
scp hdss7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
```
Create the ipvs.sh script and run it on hdss7-21 and hdss7-22 (to load the ipvs-related kernel modules)
- vi /root/ipvs.sh
```sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ]; then
    /sbin/modprobe $i
  fi
done
```
- Grant execute permission, run the script, and check which modules were loaded
```sh
chmod +x /root/ipvs.sh
/root/ipvs.sh
lsmod | grep ip_vs
```
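If the script worked, lsmod lists ip_vs plus its scheduler modules; a rough sketch of the expected shape (sizes and use counts will differ):

```sh
lsmod | grep ip_vs
# ip_vs_nq               12516  0
# ip_vs                 145458  2 ip_vs_nq
# nf_conntrack          139264  1 ip_vs
```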
Create the kube-proxy startup script (on both hdss7-21 and hdss7-22)
- Create the /opt/kubernetes/server/bin/kube-proxy.sh startup script (the example below is for hdss7-21; change --hostname-override on hdss7-22)
```sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override hdss7-21.host.com \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
```
- Grant execute permission and create the log directory
```sh
chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
mkdir -p /data/logs/kubernetes/kube-proxy
```
- Create the /etc/supervisord.d/kube-proxy.ini config file (use kube-proxy-7-22 on hdss7-22)
```ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh                 ; the program (relative uses PATH, can take args)
numprocs=1                                                       ; number of process copies to start (def 1)
directory=/opt/kubernetes/server/bin                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                   ; start at supervisord start (default: true)
autorestart=true                                                 ; restart at unexpected quit (default: true)
startsecs=30                                                     ; number of secs prog must stay running (def. 1)
startretries=3                                                   ; max # of serial start failures (default 3)
exitcodes=0,2                                                    ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                  ; signal used to kill process (default TERM)
stopwaitsecs=10                                                  ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                        ; setuid to this UNIX account to run the program
redirect_stderr=true                                             ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                     ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                         ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                      ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                      ; emit events on stdout writes (default false)
```
- Start it and check the status
```sh
supervisorctl update
supervisorctl status
```
- Check whether ipvs is in effect
Install ipvsadm and list the virtual servers; output like the following means kube-proxy is configured correctly:
```sh
yum install -y ipvsadm
ipvsadm -Ln
```
```
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0
  -> 10.4.7.22:6443               Masq    1      0          0
```
Here 192.168.0.1:443 is the ClusterIP of the default kubernetes service, balanced across the two apiservers with the nq scheduler.
Verify the cluster works
- Create the /root/nginx-ds.yaml resource file
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.od.com/public/nginx:v1.7.9
        ports:
        - containerPort: 80
```
- Test
```sh
kubectl create -f nginx-ds.yaml
```
- Check the status
```sh
kubectl get pods -o wide
```
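With -o wide, each pod shows an IP from the 172.7.0.0/16 pod CIDR; curling one from a node is a simple end-to-end check. The IP below is hypothetical, substitute whatever `get pods` prints:

```sh
curl -I 172.7.21.2
# HTTP/1.1 200 OK
# Server: nginx/1.7.9
```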
- Delete
```sh
kubectl delete -f nginx-ds.yaml
```
- Get cluster status
```sh
kubectl get cs
```