
Installing a k8s v1.18.0 Cluster with kubeadm


1 Installing a k8s Cluster with kubeadm

1.1 Environment Preparation

Configure the local hosts file:

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.2 master
192.168.1.3 node01
192.168.1.4 node02

Configure passwordless SSH login from the master to the two node machines:

[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub master
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub node01
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub node02
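
To confirm passwordless login works, each node can be asked for its hostname in one loop; every host should answer without a password prompt:

for h in master node01 node02; do ssh $h hostname; done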

1.2 Container Runtime

Container runtime: to actually run containers inside pods, k8s delegates to a container runtime. Here we use Docker.

It must be installed and configured on all three nodes.

(1) Configure the Aliyun yum repository

# Step 1: install some required system utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository definition
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start the Docker service
sudo service docker start

# Note:
# Only the stable channel is enabled by default. Other channels (for example
# the test builds) can be enabled by editing the repo file:
# vim /etc/yum.repos.d/docker-ce.repo
#   under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific version of Docker CE:
# Step 1: list the available versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION, e.g. 17.03.0.ce-1.el7.centos above):
# sudo yum -y install docker-ce-[VERSION]

Here Docker is pinned to 19.03, the latest version validated for this kubeadm release (a newer Docker only triggers a pre-flight warning, as the error log further below shows):

yum install docker-ce-19.03.13-3.el7.x86_64 -y

Install some utility packages:

yum install net-tools.x86_64 wget lrzsz telnet tree nmap sysstat dos2unix bind-utils -y

 

(2) Basic Docker configuration

registry-mirrors: an Aliyun registry mirror address for faster image pulls; an Aliyun account is required to obtain your personal mirror URL.

exec-opts: sets the cgroup driver to systemd, as the k8s cluster requires (it must match the kubelet's cgroup driver).

 

[root@master ~]# cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}

    

(3) Start Docker

systemctl start docker

systemctl enable docker

docker version

docker info
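
To verify that the cgroup driver from daemon.json took effect, one quick check (docker info accepts a Go-template format flag):

docker info --format '{{.CgroupDriver}}'

This should print systemd; if it prints cgroupfs, restart Docker so the new daemon.json is picked up.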

1.3 Install kubeadm, kubectl, and kubelet

kubeadm: the cluster bootstrapping tool

kubectl: the command-line client

kubelet: the agent that must run on every node; it starts pods and monitors their status

(1) Configure sysctl so bridged traffic passes through iptables

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
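
On some systems these keys do not exist until the br_netfilter kernel module is loaded; if sysctl --system complains about missing keys, a sketch of the fix:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload the module on boot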

(2) Configure the Aliyun Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install (on all nodes)

yum install -y kubelet-1.18.0-0.x86_64 kubeadm-1.18.0-0.x86_64 kubectl-1.18.0-0.x86_64

Enable kubelet to start at boot:

systemctl enable kubelet.service
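
A quick sanity check that all three components landed at the expected version:

kubeadm version -o short
kubelet --version
kubectl version --client --short

All three should report v1.18.0.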

1.4 Deploy the Cluster

kubeadm pulls its images from Google's registry (k8s.gcr.io) by default, which is unreachable from networks in mainland China, so prepare the Docker images in advance.

Section 1.3 installed kubeadm 1.18.0, but kubeadm init resolves the newest stable 1.18 patch release (v1.18.15, as the init log below shows), so the images to prepare are v1.18.15.

The image bundle can be obtained (paid) from: https://www.sealyun.com/goodsDetail?type=cloud_kernel&name=kubernetes

Upload it to all nodes:

[root@myhost01 src]# ls 
kube  kube1.18.15.tar.gz
[root@myhost01 ~]# docker load -i /opt/app/src/kube/images/images.tar
[root@myhost01 ~]# docker images 
REPOSITORY                           TAG        IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.18.15   6947b0d99ceb   11 days ago     117MB
k8s.gcr.io/kube-controller-manager   v1.18.15   4b3915bbba95   11 days ago     162MB
k8s.gcr.io/kube-apiserver            v1.18.15   21e89bb12d33   11 days ago     173MB
k8s.gcr.io/kube-scheduler            v1.18.15   db6167a559ba   11 days ago     95.3MB
fanux/lvscare                        latest     38af5ed07c1b   4 weeks ago     14.9MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   11 months ago   683kB
k8s.gcr.io/coredns                   1.6.7      67da37a9a360   12 months ago   43.8MB
k8s.gcr.io/etcd                      3.4.3-0    303ce5db0e90   15 months ago   288MB
calico/node                          v3.8.2     11cd78b9e13d   17 months ago   189MB
calico/cni                           v3.8.2     c71c24a0b1a2   17 months ago   157MB
calico/kube-controllers              v3.8.2     de959d4e3638   17 months ago   46.8MB
calico/pod2daemon-flexvol            v3.8.2     96047edc008f   17 months ago   9.37MB

[root@myhost01 ~]# scp /opt/app/src/kube/images/images.tar root@myhost02:/opt/app/src/kube/images
images.tar 100% 1030MB 132.8MB/s 00:07
[root@myhost01 ~]# scp /opt/app/src/kube/images/images.tar root@myhost03:/opt/app/src/kube/images
images.tar


[root@myhost02 ~]# docker load -i /opt/app/src/kube/images/images.tar
[root@myhost03 ~]# docker load -i /opt/app/src/kube/images/images.tar
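
If the paid bundle is not an option, a commonly used alternative (a sketch, not what this walkthrough used) is to have kubeadm pull everything from the Aliyun mirror of the Google registry; the Calico images would still need separate docker pull commands:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.15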

 

Running kubeadm init can fail the pre-flight checks:

[root@master ~]# kubeadm init --pod-network-cidr 172.16.0.0/16 
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:

Disable swap:

[root@master ~]# swapoff -a

Also comment out the swap mount line in /etc/fstab; otherwise the kubelet service will fail to start after a reboot.

[root@myhost01 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Wed Nov 20 16:58:21 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=16ab7706-1680-4322-894d-4d0e69b6fc04 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
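
To perform both steps non-interactively on every node, a sketch (the sed comments out any fstab line mentioning swap, so review the file afterwards):

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab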

 

Initialize the cluster. Output like the following means it succeeded:

[root@myhost01 ~]# kubeadm init --pod-network-cidr 172.16.0.0/16 
I0124 22:19:22.450912    9791 version.go:252] remote version is much newer: v1.20.2; falling back to: stable-1.18
W0124 22:19:23.414525    9791 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.15
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [myhost01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [myhost01 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [myhost01 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0124 22:19:26.893646    9791 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0124 22:19:26.894232    9791 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.002420 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node myhost01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node myhost01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eszovk.7zrv5czhh0q9mv7w
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.2:6443 --token eszovk.7zrv5czhh0q9mv7w \
    --discovery-token-ca-cert-hash sha256:23df01d81c8ec153db14eac60157cd3166f30701985301ca6bd1dd150d1633dd 
[root@myhost01 ~]# mkdir .kube/
[root@myhost01 ~]# cp -i /etc/kubernetes/admin.conf .kube/config 
[root@myhost01 ~]# kubectl get pods -A 
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-l6mlf           0/1     Pending   0          3m48s
kube-system   coredns-66bff467f8-lkcbf           0/1     Pending   0          3m48s
kube-system   etcd-myhost01                      1/1     Running   0          4m
kube-system   kube-apiserver-myhost01            1/1     Running   0          4m
kube-system   kube-controller-manager-myhost01   1/1     Running   0          4m
kube-system   kube-proxy-r99vs                   1/1     Running   0          3m49s
kube-system   kube-scheduler-myhost01            1/1     Running   0          4m
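
A side note on the join command: the bootstrap token printed by kubeadm init expires after 24 hours. If nodes are joined later, generate a fresh join command on the master:

kubeadm token create --print-join-command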

Run the join command printed by kubeadm init on the other nodes to add them to the cluster:

[root@myhost02 ~]# kubeadm join 192.168.1.2:6443 --token eszovk.7zrv5czhh0q9mv7w     --discovery-token-ca-cert-hash sha256:23df01d81c8ec153db14eac60157cd3166f30701985301ca6bd1dd150d1633dd
W0124 22:32:32.730122    9531 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

[root@myhost02 ~]# systemctl start kubelet 
[root@myhost02 ~]# systemctl enable kubelet 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

The kubelet configuration file is /var/lib/kubelet/config.yaml.

The nodes have joined, but their status is NotReady:

[root@myhost01 ~]# kubectl get nodes 
NAME       STATUS     ROLES    AGE     VERSION
myhost01   NotReady   master   18m     v1.18.0
myhost02   NotReady   <none>   5m23s   v1.18.0
myhost03   NotReady   <none>   6s      v1.18.0

1.5 Install the Network Plugin: CNI

CoreDNS will not run properly until the CNI plugin is up.

The CNI network plugin provides:

  node-to-node connectivity

  pod-to-pod connectivity

  node-to-pod connectivity

Download the Calico manifest:

[root@myhost01 yaml]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

The manifest references image version v3.8.9; change it to v3.8.2, because the images we preloaded are all v3.8.2.
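
A sketch of the edit with sed; the second command is an assumption worth verifying: the v3.8 manifest sets CALICO_IPV4POOL_CIDR to 192.168.0.0/16, while this cluster was initialized with --pod-network-cidr 172.16.0.0/16, so the two should be made to agree:

sed -i 's/v3.8.9/v3.8.2/g' calico.yaml
sed -i 's#192.168.0.0/16#172.16.0.0/16#g' calico.yaml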

Apply the manifest:

[root@myhost01 yaml]# kubectl apply -f calico.yaml

After the CNI plugin is up, the expected state is:

(1) All pods are running

(2) All nodes report Ready

(3) The cluster is healthy
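
A quick way to verify all three (a sketch; kubectl get cs reads componentstatuses, which 1.18 still serves):

kubectl get pods -A
kubectl get nodes
kubectl get cs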

 

To view resource types and their short names:

[root@myhost01 yaml]# kubectl api-resources

 

1.6 kubectl Command Completion

[root@myhost01 yaml]# rpm -qa | grep completion 
bash-completion-2.1-8.el7.noarch
[root@myhost01 yaml]# kubectl completion bash > /etc/profile.d/kubectl.sh 
[root@myhost01 yaml]# source /etc/profile.d/kubectl.sh        
[root@myhost01 yaml]# vim /root/.bashrc 
[root@myhost01 yaml]# cat /root/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
source /etc/profile.d/kubectl.sh
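
If you also alias k to kubectl, completion can be attached to the alias as well, using the function that kubectl completion bash defines (a standard snippet from the kubectl documentation):

alias k=kubectl
complete -F __start_kubectl k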

 
