Notes on installing a Kubernetes cluster with KubeOperator using containerd as the container runtime
1. When containerd is chosen as the container runtime during installation, Docker is not installed on the cluster node hosts at all.
Check the installed containerd version:
[root@jdd-k8s-worker-2 ~]# ctr --version
ctr github.com/containerd/containerd v1.6.0
[root@jdd-k8s-worker-2 ~]# ctr version
Client:
Version: v1.6.0
Revision: 39259a8f35919a0d02c9ecc2871ddd6ccf6a7c6e
Go version: go1.17.2
Server:
Version: v1.6.0
Revision: 39259a8f35919a0d02c9ecc2871ddd6ccf6a7c6e
UUID: 084a860a-0956-47ff-864c-f918cff9cb1e
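The runtime choice can also be confirmed through the CRI and from the cluster side. A minimal check, assuming kubectl is configured somewhere (for example on a master host); on this cluster the runtime column should read containerd://1.6.0:
# crictl version              # reports the runtime name (containerd) and version over the CRI socket
# kubectl get nodes -o wide   # the CONTAINER-RUNTIME column shows containerd://<version> for each node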
View the contents of the default containerd configuration file:
# cat /etc/containerd/config.toml
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
[grpc]
address = "/run/containerd/containerd.sock"
uid = 0
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
[debug]
address = ""
uid = 0
gid = 0
level = ""
[metrics]
address = ""
grpc_histogram = false
[cgroup]
path = ""
[plugins]
[plugins.cgroups]
no_prometheus = false
[plugins.cri]
stream_server_address = "127.0.0.1"
stream_server_port = "0"
enable_selinux = false
sandbox_image = "registry.kubeoperator.io:8082/kubeoperator/pause:3.5"
stats_collect_period = 10
systemd_cgroup = false
enable_tls_streaming = false
max_container_log_line_size = 16384
[plugins.cri.containerd]
snapshotter = "overlayfs"
no_pivot = false
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = ""
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = ""
runtime_engine = ""
runtime_root = ""
[plugins.cri.cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
[plugins.cri.registry]
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."10.16.16.110:8082"]
endpoint = [
"http://10.16.16.110:8082"
]
[plugins.cri.registry.mirrors."10.16.16.110:8083"]
endpoint = [
"http://10.16.16.110:8083"
]
[plugins.cri.registry.mirrors."registry.kubeoperator.io:8082"]
endpoint = [
"http://10.16.16.110:8082"
]
[plugins.cri.registry.mirrors."registry.kubeoperator.io:8083"]
endpoint = [
"http://10.16.16.110:8083"
]
[plugins.cri.registry.mirrors."docker.io"]
endpoint = [
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
]
[plugins.cri.registry.mirrors."gcr.io"]
endpoint = [
"https://gcr.mirrors.ustc.edu.cn"
]
[plugins.cri.registry.mirrors."k8s.gcr.io"]
endpoint = [
"https://gcr.mirrors.ustc.edu.cn/google-containers/"
]
[plugins.cri.registry.mirrors."quay.io"]
endpoint = [
"https://quay.mirrors.ustc.edu.cn"
]
[plugins.cri.x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins.diff-service]
default = ["walking"]
[plugins.linux]
shim = "containerd-shim"
runtime = "runc"
runtime_root = ""
no_shim = false
shim_debug = false
[plugins.opt]
path = "/opt/containerd"
[plugins.restart]
interval = "10s"
[plugins.scheduler]
pause_threshold = 0.02
deletion_threshold = 0
mutation_threshold = 100
schedule_delay = "0s"
startup_delay = "100ms"
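This is containerd's version-1 configuration layout. If the file is edited (for example the mirror entries above), a couple of standard containerd and systemd commands help verify and apply the change; shown here as a sketch:
# containerd config default      # print the built-in default configuration for comparison
# containerd config dump         # print the configuration the running daemon has actually loaded
# systemctl restart containerd   # apply edits made to /etc/containerd/config.toml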
Note that the file contains several registry mirror (accelerator) entries; they can be tested:
[root@jdd-k8s-worker-2 ~]# crictl pull k8s.gcr.io/pause:3.6
FATA[0030] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "k8s.gcr.io/pause:3.6": failed to resolve reference "k8s.gcr.io/pause:3.6": pulling from host gcr.mirrors.ustc.edu.cn failed with status code [manifests 3.6]: 403 Forbidden
The pull of k8s.gcr.io/pause:3.6 is actually redirected to the configured mirror gcr.mirrors.ustc.edu.cn, but that site responds with 403 Forbidden, so the pull fails.
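If the USTC gcr mirror keeps returning 403, one workaround is to point the k8s.gcr.io mirror entry at another mirror that still serves these images. The endpoint below (registry.aliyuncs.com/google_containers) is only an assumed example of such a mirror, following the same path-style endpoint pattern already used in the file; substitute whichever mirror is reachable from your network:
# vi /etc/containerd/config.toml    # replace the k8s.gcr.io mirror entry, e.g.:
        [plugins.cri.registry.mirrors."k8s.gcr.io"]
          endpoint = [
            "https://registry.aliyuncs.com/google_containers"
          ]
# systemctl restart containerd      # reload the registry configuration
# crictl pull k8s.gcr.io/pause:3.6  # retry the pull through the new mirror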
2. View the CNI network plugin configuration directory
[root@jdd-k8s-worker-2 net.d]# pwd
/etc/cni/net.d
[root@jdd-k8s-worker-2 net.d]# ll
total 8
-rw-r--r--. 1 root root 670 Jul 1 11:28 10-calico.conflist
-rw-------. 1 root root 2767 Jul 1 15:33 calico-kubeconfig
[root@jdd-k8s-worker-2 net.d]# cat 10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "jdd-k8s-worker-2",
      "mtu": 1440,
      "ipam": {
        "type": "calico-ipam"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}
From this we can tell that the node host just runs plain containerd and brings no networking of its own; Pod networking is provided by the Calico CNI plugin deployed by the Kubernetes cluster.
Looking at the CNI plugin binary directory, the executables there are likewise the Calico-related binaries installed by the cluster (a quick verification follows the listing below):
[root@jdd-k8s-worker-2 bin]# pwd
/opt/cni/bin
[root@jdd-k8s-worker-2 bin]# ll
total 213144
-rwxr-xr-x. 1 root root 3990548 Jul 1 11:28 bandwidth
-rwxr-xr-x. 1 root root 4671647 May 14 2020 bridge
-rwsr-xr-x. 1 root root 47026188 Jul 1 11:28 calico
-rwsr-xr-x. 1 root root 47026188 Jul 1 11:28 calico-ipam
-rwxr-xr-x. 1 root root 12124326 May 14 2020 dhcp
-rwxr-xr-x. 1 root root 5945760 May 14 2020 firewall
-rwxr-xr-x. 1 root root 3357992 Jul 1 11:28 flannel
-rwxr-xr-x. 1 root root 4174394 May 14 2020 host-device
-rwxr-xr-x. 1 root root 3402808 Jul 1 11:28 host-local
-rwsr-xr-x. 1 root root 47026188 Jul 1 11:28 install
-rwxr-xr-x. 1 root root 4314598 May 14 2020 ipvlan
-rwxr-xr-x. 1 root root 3472123 Jul 1 11:28 loopback
-rwxr-xr-x. 1 root root 4389622 May 14 2020 macvlan
-rwxr-xr-x. 1 root root 3924908 Jul 1 11:28 portmap
-rwxr-xr-x. 1 root root 4590277 May 14 2020 ptp
-rwxr-xr-x. 1 root root 3392826 May 14 2020 sbr
-rwxr-xr-x. 1 root root 2885430 May 14 2020 static
-rw-r--r--. 1 root root 4555575 Jul 1 11:28 tags.txt
-rwxr-xr-x. 1 root root 3622648 Jul 1 11:28 tuning
-rwxr-xr-x. 1 root root 4314446 May 14 2020 vlan
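A quick way to verify that Calico's IPAM really hands out the Pod addresses is to inspect a Pod sandbox through the CRI; a rough sketch, where <POD-ID> stands for whatever ID crictl pods prints on this node:
# crictl pods                                   # list Pod sandboxes and their IDs
# crictl inspectp <POD-ID> | grep -A2 '"ip"'    # the Pod IP comes out of the calico-ipam pool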
3. View images on a cluster node host
[root@jdd-k8s-worker-2 ~]# ctr images ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
[root@jdd-k8s-worker-2 ~]# crictl images ls
IMAGE TAG IMAGE ID SIZE
docker.io/library/nginx alpine cc44224bfe208 10.2MB
k8s.gcr.io/kube-proxy v1.22.8 c1cfbd59f7747 105MB
k8s.gcr.io/pause 3.5 ed210e3e4a5ba 686kB
registry.kubeoperator.io:8082/kubeoperator/pause 3.5 ed210e3e4a5ba 686kB
registry.kubeoperator.io:8082/calico/cni v3.21.4 f1de15d70851b 80.5MB
registry.kubeoperator.io:8082/calico/node v3.21.4 c59896fc7ca44 74MB
registry.kubeoperator.io:8082/calico/pod2daemon-flexvol v3.21.4 ab768d7a914ff 9.23MB
registry.kubeoperator.io:8082/kubeoperator/ingress-nginx-controller v1.1.1 2461b2698dcd5 104MB
registry.kubeoperator.io:8082/kubeoperator/k8s-dns-node-cache 1.17.0 3a187183b3a8c 56.8MB
registry.kubeoperator.io:8082/kubeoperator/kube-bench v0.6.8 43684c5de97d2 26.7MB
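ctr images ls comes back empty because ctr defaults to containerd's "default" namespace, while everything pulled through the CRI (which kubelet and crictl talk to) lives in the k8s.io namespace. The same images appear once the namespace is passed explicitly:
# ctr namespaces ls         # typically lists "default" and "k8s.io"
# ctr -n k8s.io images ls   # shows the same images that crictl listed above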
4. View the running containers on a cluster node host
[root@jdd-k8s-worker-2 ~]# ctr container ls
CONTAINER IMAGE RUNTIME
[root@jdd-k8s-worker-2 ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
2dc6db8bc454e cc44224bfe208 17 minutes ago Running nginx 0 e51bbae3f9b02
054efcebc2ee1 3a187183b3a8c 4 hours ago Running node-cache 0 00e195d4e2a4e
339e50b2fb546 2461b2698dcd5 4 hours ago Running ingress-nginx-controller 0 4aaecb99b1417
f2762f54bd9aa c59896fc7ca44 4 hours ago Running calico-node 0 6b632de8647df
2ac9e9b24f505 c1cfbd59f7747 4 hours ago Running kube-proxy 0 11ff7c85696e1
Note: as the image and container listings show, plain ctr appears to see nothing while crictl shows all of the cluster's images and containers, so crictl is the tool to reach for on these nodes; ctr only shows them when pointed at the CRI's containerd namespace, as shown below.
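The same namespace rule applies to containers, and for day-to-day debugging crictl covers the familiar docker-style sub-commands. A few examples, with <CONTAINER-ID> as a placeholder:
# ctr -n k8s.io containers ls        # containers created through the CRI
# crictl ps -a                       # also include exited containers
# crictl logs <CONTAINER-ID>         # view a container's logs
# crictl exec -it <CONTAINER-ID> sh  # open a shell in a running container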
5. Contents of the crictl configuration file
# cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
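Only the runtime endpoint is set here. A slightly fuller /etc/crictl.yaml using the fields crictl documents (the values below are just common examples, adjust as needed):
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false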