
Kubernetes Cluster Deployment (Part 1): A Simple Deployment


1. Introduction to Kubernetes

While Docker was rapidly taking off as a high-level container engine, container technology had already been in use inside Google for many years: the Borg system was running and managing many thousands of container applications.

The Kubernetes project grew out of Borg. It distills the essence of Borg's design and incorporates the experience and lessons learned from operating that system.

Kubernetes provides a higher level of abstraction over compute resources: it composes containers into well-defined units and delivers the resulting application services to users.

Benefits of Kubernetes:

It hides resource management and failure handling, so users only need to focus on developing their applications.

Services are highly available and highly reliable.

Workloads can run on clusters built from many thousands of machines.

A Kubernetes cluster consists of the node agent kubelet and the master components (APIs, scheduler, etc.), all built on top of a distributed storage system.

Kubernetes is made up of the following core components (the sketch after this list shows how to inspect them on a running cluster):

etcd: stores the state of the entire cluster

apiserver: the single entry point for all resource operations; also provides authentication, authorization, access control, API registration and discovery

controller manager: maintains the cluster's state, e.g. failure detection, automatic scaling, rolling updates

scheduler: handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policy

kubelet: manages the container lifecycle, and is also responsible for volumes (CVI) and networking (CNI)

Container runtime: manages images and actually runs Pods and containers (CRI)

kube-proxy: provides in-cluster service discovery and load balancing for Services
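
Once the cluster set up below is running, most of these components show up as Pods in the kube-system namespace; a quick way to inspect them (a sketch, output omitted; componentstatuses is deprecated but still reports scheduler/controller-manager/etcd health in 1.21):

[root@server1 ~]# kubectl get pods -n kube-system -o wide
[root@server1 ~]# kubectl get componentstatuses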

Besides the core components, there are also a number of recommended add-ons:

kube-dns: provides DNS for the whole cluster
Ingress Controller: provides an external entry point for services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters spanning availability zones
Fluentd-elasticsearch: provides cluster log collection, storage and querying

Kubernetes' design and functionality follow a layered architecture, much like Linux:

Core layer: Kubernetes' most essential functionality; exposes APIs for building higher-level applications and provides a plugin-based application execution environment internally

Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)

Management layer: system metrics (infrastructure, container and network metrics), automation (automatic scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)

Interface layer: the kubectl command-line tool, client SDKs and cluster federation

Ecosystem: the large ecosystem for container cluster management and scheduling that sits above the interface layer, falling into two categories:
Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, configuration and management of the cluster itself, etc.

2. Deploying Kubernetes

Disable SELinux and the iptables firewall on every cluster node; a sketch of the typical commands follows.
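
A minimal sketch, assuming RHEL/CentOS 7 nodes where firewalld is the active firewall (run on server1 through server4):

[root@server1 ~]# setenforce 0
[root@server1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@server1 ~]# systemctl disable --now firewalld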

The virtual machines also need internet access; on the host machine (foundation7), enable masquerading so the VMs can NAT out through it:

[root@foundation7 ~]# firewall-cmd --list-all
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br0 enp0s25 wlp3s0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	
[root@foundation7 ~]# firewall-cmd --permanent --add-masquerade 
Warning: ALREADY_ENABLED: masquerade
success
[root@foundation7 ~]# firewall-cmd --reload 
success
[root@foundation7 ~]# firewall-cmd --list-all
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: br0 enp0s25 wlp3s0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Four virtual machines are needed.

The following steps are the same on server1, server2, server3 and server4.

Enable Docker to start on boot:

[root@server1 ~]# systemctl enable --now docker

On server1, edit /etc/docker/daemon.json (pointing registry-mirrors at the private registry reg.westos.org and setting the cgroup driver to systemd, so Docker and the kubelet use the same driver), copy it to the other VMs, and restart the Docker service:

[root@server1 ~]# cat /etc/docker/daemon.json 
{ 
 "registry-mirrors": ["https://reg.westos.org"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@server1 ~]# scp /etc/docker/daemon.json server2:/etc/docker/daemon.json 
daemon.json                                                                        100%   99   121.8KB/s   00:00    
[root@server1 ~]# scp /etc/docker/daemon.json server3:/etc/docker/daemon.json 
daemon.json                                                                        100%   99   106.2KB/s   00:00    
[root@server1 ~]# scp /etc/docker/daemon.json server4:/etc/docker/daemon.json 
root@server4's password: 
daemon.json                                                                        100%   99   120.3KB/s   00:00    
[root@server1 ~]# systemctl daemon-reload 
[root@server1 ~]# systemctl restart docker
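
After the restart, it is worth confirming that Docker is now using the systemd cgroup driver, which is what the kubelet will expect with this setup (a quick check; exact output varies by Docker version):

[root@server1 ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd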

Disable the swap partition:

swapoff -a
Comment out the swap entry in /etc/fstab.

[root@server1 ~]# swapoff -a
[root@server1 ~]# vim /etc/fstab 
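
After the edit, the swap line in /etc/fstab should be commented out; a hypothetical example (the device path will differ on your machines):

#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0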


Configure the package repository on all four VMs:

[root@server1 ~]# vim /etc/yum.repos.d/k8s.repo
[root@server1 ~]# cat /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@server1 ~]# scp /etc/yum.repos.d/k8s.repo server2:/etc/yum.repos.d/
k8s.repo                                                                           100%  130   196.1KB/s   00:00    
[root@server1 ~]# scp /etc/yum.repos.d/k8s.repo server3:/etc/yum.repos.d/
k8s.repo                                                                           100%  130   171.1KB/s   00:00    
[root@server1 ~]# scp /etc/yum.repos.d/k8s.repo server4:/etc/yum.repos.d/
root@server4's password: 
k8s.repo                                                                           100%  130   152.8KB/s   00:00    

Install the packages on all four VMs and enable kubelet to start on boot:

[root@server1 ~]# yum install -y kubelet kubeadm kubectl

[root@server1 ~]# systemctl enable --now kubelet

View the default kubeadm configuration:

[root@server1 ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
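
If you want to change any of these defaults (advertiseAddress, imageRepository, kubernetesVersion, and so on), one option is to dump them to a file, edit it, and hand it to kubeadm init with --config instead of passing flags; a sketch (this walkthrough simply uses flags below):

[root@server1 ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@server1 ~]# vim kubeadm-init.yaml
[root@server1 ~]# kubeadm init --config kubeadm-init.yaml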

List the required images:

[root@server1 ~]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.3
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.3
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.3
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.3
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:v1.8.0

Pull the images:

[root@server1 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/google_containers/coredns:v1.8.0": output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.0 not found: manifest unknown: manifest unknown
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
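
The coredns pull fails because the Aliyun mirror publishes the image as coredns:1.8.0, without the v prefix that kubeadm asks for. A common workaround (an assumption about how the missing image was obtained here) is to pull the unprefixed tag and retag it:

[root@server1 ~]# docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
[root@server1 ~]# docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 \
	registry.aliyuncs.com/google_containers/coredns:v1.8.0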

Tag the pulled images for the private registry:

[root@server1 packages]# docker images | grep ^registry.aliyuncs.com |awk '{print $1":"$2}'|awk -F/ '{system("docker tag "$0" reg.westos.org/k8s/"$3"")}'

Push the images to the private registry:

[root@server1 ~]# docker images |grep ^reg.westos.org/k8s|awk '{system("docker push "$1":"$2"")}'
The push refers to repository [reg.westos.org/k8s/kube-apiserver]
79365e8cbfcb: Layer already exists 
3d63edbd1075: Layer already exists 
16679402dc20: Layer already exists 
v1.21.3: digest: sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5 size: 949
The push refers to repository [reg.westos.org/k8s/kube-scheduler]
9408d6c3cfbd: Layer already exists 
3d63edbd1075: Layer already exists 
16679402dc20: Layer already exists 
v1.21.3: digest: sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0 size: 949
The push refers to repository [reg.westos.org/k8s/kube-proxy]
8fe09c1d10f0: Layer already exists 
48b90c7688a2: Layer already exists 
v1.21.3: digest: sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b size: 740
The push refers to repository [reg.westos.org/k8s/kube-controller-manager]
46675cd6b26d: Layer already exists 
3d63edbd1075: Layer already exists 
16679402dc20: Layer already exists 
v1.21.3: digest: sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b size: 949
The push refers to repository [reg.westos.org/k8s/pause]
915e8870f7d1: Layer already exists 
3.4.1: digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d size: 526
The push refers to repository [reg.westos.org/k8s/coredns]
69ae2fbf419f: Pushed 
225df95e717c: Pushed 
v1.8.0: digest: sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61 size: 739
The push refers to repository [reg.westos.org/k8s/etcd]
bb63b9467928: Layer already exists 
bfa5849f3d09: Layer already exists 
1a4e46412eb0: Layer already exists 
d61c79b29299: Layer already exists 
d72a74c56330: Layer already exists 
3.4.13-0: digest: sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a size: 1372
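
The awk one-liners above retag every registry.aliyuncs.com/google_containers/* image as reg.westos.org/k8s/* and push it. The same idea written as a plain shell loop, for readability (a sketch under the same naming assumptions):

[root@server1 ~]# for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^registry.aliyuncs.com'); do
>   new=reg.westos.org/k8s/${img##*/}    # keep only name:tag, swap in the private registry prefix
>   docker tag $img $new && docker push $new
> done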


Initialize the cluster:

[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.westos.org/k8s
[init] Using Kubernetes version: v1.21.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local server1] and IPs [10.96.0.1 172.25.7.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost server1] and IPs [172.25.7.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost server1] and IPs [172.25.7.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.003026 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node server1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node server1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p8n9r4.9cusb995n7bnohgy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.25.7.1:6443 --token p8n9r4.9cusb995n7bnohgy \
	--discovery-token-ca-cert-hash sha256:237abdcd84f8ad748c8a2e1555da9af8c5153c1145def48b662c42b998bd87bf 

Adding nodes to the cluster
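
To add server2, server3 and server4 as worker nodes, run the kubeadm join command printed by kubeadm init above on each of them (the token and CA certificate hash are specific to this cluster):

[root@server2 ~]# kubeadm join 172.25.7.1:6443 --token p8n9r4.9cusb995n7bnohgy \
	--discovery-token-ca-cert-hash sha256:237abdcd84f8ad748c8a2e1555da9af8c5153c1145def48b662c42b998bd87bf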

Configure kubectl:

[root@server1 ~]# useradd kubeadm
[root@server1 ~]# vim /etc/sudoers
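
The sudoers change itself is not shown; a typical entry granting the new kubeadm user passwordless sudo would look like this (an assumed example, not taken from the original screenshot):

kubeadm  ALL=(ALL)       NOPASSWD: ALL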


[root@server1 ~]# mkdir -p $HOME/.kube
[root@server1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@server1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure kubectl command completion:

[root@server1 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
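
Open a new shell (or source ~/.bashrc) for completion to take effect. As a final sanity check, list the control-plane node and system Pods; note that the node will report NotReady until a pod network add-on matching the 10.244.0.0/16 CIDR above (e.g. flannel) is applied:

[root@server1 ~]# kubectl get nodes
[root@server1 ~]# kubectl get pods -n kube-system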
