Binary Installation of Kubernetes 1.14.3
Beginners can spin up a k8s cluster quickly with kubeadm, but a binary installation is still very useful for learning Kubernetes: it walks you systematically through each cluster component, the certificates, and so on.
I. Environment Preparation
Three servers, 2 vCPUs and 2 GB RAM each:

Hostname     Components                                                      IP
k8s-master   kube-apiserver, kube-controller-manager, kube-scheduler, etcd   10.1.24.103
k8s-node1    kubelet, kube-proxy, docker, flannel, etcd                      10.1.24.104
k8s-node2    kubelet, kube-proxy, docker, flannel, etcd                      10.1.24.105
OS: CentOS Linux release 7.4.1708
kubernetes: 1.14.3
docker: 18.09.7
etcd: v3.3.13
flannel: v0.11.0
II. Initialize the Environment
1. Disable firewalld and SELinux
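A minimal sketch of the commands (run on every node; the sed line assumes the stock /etc/selinux/config):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config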
2. Configure sysctl on all nodes
#cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
#sysctl -p /etc/sysctl.d/k8s.conf
If this reports:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
then load the br_netfilter module and run sysctl again:
#modprobe br_netfilter
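To have the module come back after a reboot, it can also be registered with systemd's modules-load mechanism:
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf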
3. Update /etc/hosts
#vi /etc/hosts
10.1.24.103 k8s-master
10.1.24.104 k8s-node1
10.1.24.105 k8s-node2
# scp /etc/hosts 10.1.24.104:/etc/hosts
# scp /etc/hosts 10.1.24.105:/etc/hosts
4. Enable IPVS (kube-proxy supports IPVS from Kubernetes 1.11 onward); run this on every node
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
5. Disable swap (since 1.8, Kubernetes requires swap to be off; with the default configuration kubelet will refuse to start otherwise)
# swapoff -a
Also comment out the swap entry in /etc/fstab so it is not remounted on boot.
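One way to comment it out in place (a sketch; the pattern assumes a standard fstab, so double-check the file afterwards):
# sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab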
6. Install docker on both node machines
# wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum install docker-ce -y
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://010686ec.m.daocloud.io
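The packages do not start the daemon; enable and start it now so the later flannel integration has a running service to reconfigure:
# systemctl enable docker && systemctl start docker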
III. Self-Signed TLS Certificates
Component        Required certificates
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, ca-key.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem (used by administrators to access the cluster)
The certificates here are generated with CFSSL, an open-source PKI/TLS toolkit from CloudFlare written in Go. CFSSL provides a command-line tool plus an HTTP API service for signing, verifying, and bundling TLS certificates. With it you can run an internal CA for obtaining and managing certificates. Running a CA requires a CA certificate and the corresponding private key; anyone who holds the private key can issue certificates as that CA, so protecting the key is critical.
1. Install the CFSSL tools
# curl -s -L -o /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# curl -s -L -o /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# curl -s -L -o /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# chmod +x /bin/cfssl*
2. Generate the CA certificate and private key
Create a file ca-csr.json:
[root@k8s-master k8s-ssl]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Field meanings (JSON does not allow comments, so keep these out of the actual file): CN is the Common Name, which browsers use to validate whether a site is legitimate, usually a domain name; C is the country, L the city, ST the province, O the organization (company), and OU the organizational unit (department).
Generate the CA certificate ca.pem, the CA private key ca-key.pem, and the CSR (certificate signing request):
[root@k8s-master k8s-ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master k8s-ssl]# ls
ca.csr ca-csr.json ca-key.pem ca.pem
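Optionally inspect the new CA with the cfssl-certinfo tool installed earlier:
# cfssl-certinfo -cert ca.pem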
3. Configure the signing policy, which defines what kinds of certificates this CA may issue
[root@k8s-master k8s-ssl]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
The kubernetes profile specifies what its certificates may be used for: "signing" means a certificate can sign other certificates (the generated ca.pem carries CA=TRUE); "server auth" lets a client use this CA to verify certificates presented by servers; "client auth" lets a server use this CA to verify certificates presented by clients.
Note: there is one default policy and one profile here; multiple profiles may be defined.
4. Generate the server certificate
The hosts list must include every IP and hostname the certificate will be presented for. It should also contain the first IP of the service CIDR configured later (10.10.10.1 from 10.10.10.0/24), which in-cluster clients use to reach kube-apiserver; it is added below.
[root@k8s-master k8s-ssl]# cat server-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.1.24.103",
"10.1.24.104",
"10.1.24.105",
"kubernetes",
"k8s-node1",
"k8s-master",
"k8s-node2",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangzhou",
"ST": "Guangzhou",
"O": "k8s",
"OU": "System"
}
]
}
[root@k8s-master k8s-ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
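It is worth confirming that every entry of the hosts list landed in the certificate's SAN field, for example with openssl:
# openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'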
5. Generate the admin certificate (note that O must be the lowercase system:masters so the certificate maps to the built-in RBAC group)
[root@k8s-master k8s-ssl]# cat admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangzhou",
"ST": "Guangzhou",
"O": "System:masters",
"OU": "System"
}
]
}
[root@k8s-master k8s-ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
6. Generate the kube-proxy certificate
[root@k8s-master k8s-ssl]# cat kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangzhou",
"ST": "Guangzhou",
"O": "k8s",
"OU": "System"
}
]
}
[root@k8s-master k8s-ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
7. Sync the certificates to all nodes
[root@k8s-master ~]# scp -r /root/k8s-ssl 10.1.24.104:/root/
[root@k8s-master ~]# scp -r /root/k8s-ssl 10.1.24.105:/root/
IV. Deploy the etcd Cluster
To keep things manageable, all Kubernetes-related files live under a single directory tree; create it on all three nodes:
#mkdir -p /data/kubernetes/{cfg,bin,ssl,etcd}
#mv k8s-ssl/*.pem /data/kubernetes/ssl/
1、下载etcd二进制文件
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
# tar -zxf etcd-v3.3.13-linux-amd64.tar.gz
# mv etcd-v3.3.13-linux-amd64/etcd* /data/kubernetes/bin/
2. Create the etcd configuration file
[root@k8s-master etcd]# cat /data/kubernetes/cfg/etcd.conf
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data/kubernetes/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.1.24.103:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.1.24.103:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.1.24.103:2380"
ETCD_INITIAL_CLUSTER="etcd01=https://10.1.24.103:2380,etcd02=https://10.1.24.104:2380,etcd03=https://10.1.24.105:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-clusters"
ETCD_ADVERTISE_CLIENT_URLS="https://10.1.24.103:2379"
3. Create the etcd.service systemd unit
[root@k8s-master etcd]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/data/kubernetes/cfg/etcd.conf
ExecStart=/data/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=${ETCD_INITIAL_CLUSTER_STATE} \
--cert-file=/data/kubernetes/ssl/server.pem \
--key-file=/data/kubernetes/ssl/server-key.pem \
--peer-cert-file=/data/kubernetes/ssl/server.pem \
--peer-key-file=/data/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/data/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/data/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
4. Sync to the other nodes
After copying, edit /data/kubernetes/cfg/etcd.conf on each node: set ETCD_NAME to etcd02 (node1) and etcd03 (node2), and change every listen/advertise URL to that node's own IP.
# scp /usr/lib/systemd/system/etcd.service 10.1.24.104:/usr/lib/systemd/system/
# scp /data/kubernetes/cfg/etcd.conf 10.1.24.104:/data/kubernetes/cfg/etcd.conf
# scp /data/kubernetes/ssl/* 10.1.24.104:/data/kubernetes/ssl/
# scp /usr/lib/systemd/system/etcd.service 10.1.24.105:/usr/lib/systemd/system/
# scp /data/kubernetes/cfg/etcd.conf 10.1.24.105:/data/kubernetes/cfg/etcd.conf
# scp /data/kubernetes/ssl/* 10.1.24.105:/data/kubernetes/ssl/
5. Start etcd and verify
Run on all nodes:
# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
# /data/kubernetes/bin/etcdctl --ca-file=/data/kubernetes/ssl/ca.pem --cert-file=/data/kubernetes/ssl/server.pem --key-file=/data/kubernetes/ssl/server-key.pem --endpoints="https://10.1.24.103:2379,https://10.1.24.104:2379,https://10.1.24.105:2379" cluster-health
member 4559b31692ea7db0 is healthy: got healthy result from https://10.1.24.103:2379
member 67398f74fa475c04 is healthy: got healthy result from https://10.1.24.104:2379
member a7a2743d12c023e2 is healthy: got healthy result from https://10.1.24.105:2379
cluster is healthy
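The member list can be checked with the same TLS flags (one endpoint is enough):
# /data/kubernetes/bin/etcdctl --ca-file=/data/kubernetes/ssl/ca.pem --cert-file=/data/kubernetes/ssl/server.pem --key-file=/data/kubernetes/ssl/server-key.pem --endpoints="https://10.1.24.103:2379" member list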
V. Deploy the flannel Network
The first company I worked at ran flannel in production, and for the workloads we had, flannel was enough. That cluster had been installed with kubeadm and was using the udp backend by default; here we use vxlan as the backend instead.
1、下载flannel二进制包
# mkdir /data/kubernetes/flannel
# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
# tar -zxf flannel-v0.11.0-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /data/kubernetes/bin/
2. Write the cluster Pod network segment into etcd
flannel stores the network configuration of the whole cluster in etcd (or via the Kubernetes API), so write the Pod network segment into etcd:
[root@k8s-master ssl]# etcdctl --ca-file=./ca.pem --cert-file=./server.pem --key-file=./server-key.pem --endpoints="https://10.1.24.103:2379,https://10.1.24.104:2379,https://10.1.24.105:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
Read it back with get:
[root@k8s-master ssl]# etcdctl --ca-file=./ca.pem --cert-file=./server.pem --key-file=./server-key.pem --endpoints="https://10.1.24.103:2379,https://10.1.24.104:2379,https://10.1.24.105:2379" get /coreos.com/network/config
3. Create the flanneld configuration file and its systemd unit (on each node)
[root@k8s-node1 ssl]# cat /data/kubernetes/cfg/flanneld.conf
FLANNEL_OPTIONS="--etcd-endpoints=https://10.1.24.103:2379,https://10.1.24.104:2379,https://10.1.24.105:2379 -etcd-cafile=/data/kubernetes/ssl/ca.pem -etcd-certfile=/data/kubernetes/ssl/server.pem -etcd-keyfile=/data/kubernetes/ssl/server-key.pem"
[root@k8s-node1 ssl]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/data/kubernetes/cfg/flanneld.conf
ExecStart=/data/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/data/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-node1 ssl]# systemctl daemon-reload
[root@k8s-node1 ssl]# systemctl enable flanneld
[root@k8s-node1 ssl]# systemctl start flanneld
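If flanneld started cleanly it leases a subnet, records it in /run/flannel/subnet.env, and creates a flannel.1 VXLAN interface; both make quick sanity checks:
# cat /run/flannel/subnet.env
# ip -d link show flannel.1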
4. Configure docker to start inside the flannel-assigned subnet
[root@k8s-node1 ssl]# mv /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service_back
[root@k8s-node1 ssl]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
[root@k8s-node1 ssl]# systemctl daemon-reload
[root@k8s-node1 ssl]# systemctl restart docker
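docker0 should now sit inside this node's flannel subnet (interface names assumed to be the defaults):
# ip addr show flannel.1 | grep inet
# ip addr show docker0 | grep inet
Once every node is done, containers on different nodes should be able to reach each other over the overlay.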
VI. Create kubeconfig Files for the Nodes
Starting with 1.4, Kubernetes supports TLS bootstrapping, in which kube-apiserver issues TLS certificates to clients, so a certificate no longer has to be generated by hand for every client; currently the feature only covers kubelet certificates.
First, download the kubectl tool on the master and put it on the PATH:
[root@k8s-master bin]# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.3/bin/linux/amd64/kubectl
[root@k8s-master bin]# chmod +x kubectl && mv kubectl /usr/local/bin/
[root@k8s-master ~]# kubectl version
1. Create the kubelet bootstrapping kubeconfig file
a. Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /data/kubernetes/ssl/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@k8s-master ssl]# cat token.csv
9bef21ed1af7bacf6a197a5e26aafbe0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
b. Configure the parameters
export KUBE_APISERVER="https://10.1.24.103:6443"
Set the cluster parameters:
kubectl config set-cluster kubernetes \
--certificate-authority=/data/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig
Set the client credentials:
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig
Set the context:
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig
Switch to the default context:
kubectl config use-context default --kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig
--embed-certs=true embeds the certificate-authority certificate into the generated bootstrap.kubeconfig.
No client certificate or key is specified in the credentials here; kube-apiserver generates them later during bootstrapping.
2. Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/data/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/data/kubernetes/ssl/kube-proxy.pem \
--client-key=/data/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig
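Both files just generated are referenced by kubelet and kube-proxy on the nodes (the paths below follow the layout used throughout this guide), so copy them over:
# scp /data/kubernetes/cfg/bootstrap.kubeconfig /data/kubernetes/cfg/kube-proxy.kubeconfig 10.1.24.104:/data/kubernetes/cfg/
# scp /data/kubernetes/cfg/bootstrap.kubeconfig /data/kubernetes/cfg/kube-proxy.kubeconfig 10.1.24.105:/data/kubernetes/cfg/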
VII. Deploy the Master Components
1. Download the server binaries
# wget https://dl.k8s.io/v1.14.3/kubernetes-server-linux-amd64.tar.gz
# tar -zxf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kube-apiserver kube-controller-manager kube-scheduler /data/kubernetes/bin/ -a
2. Deploy kube-apiserver
[root@k8s-master system]# cat /data/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.1.24.103:2379,https://10.1.24.104:2379,https://10.1.24.105:2379 \
--insecure-bind-address=127.0.0.1 \
--insecure-port=8080 \
--bind-address=10.1.24.103 \
--secure-port=6443 \
--advertise-address=10.1.24.103 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/data/kubernetes/ssl/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/data/kubernetes/ssl/server.pem \
--kubelet-https=true \
--tls-private-key-file=/data/kubernetes/ssl/server-key.pem \
--client-ca-file=/data/kubernetes/ssl/ca.pem \
--service-account-key-file=/data/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/data/kubernetes/ssl/ca.pem \
--etcd-certfile=/data/kubernetes/ssl/server.pem \
--etcd-keyfile=/data/kubernetes/ssl/server-key.pem"
[root@k8s-master system]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-apiserver.conf
ExecStart=/data/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3. Deploy kube-scheduler
[root@k8s-master system]# cat /data/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
[root@k8s-master system]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/data/kubernetes/cfg/kube-scheduler.conf
ExecStart=/data/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
4. Deploy kube-controller-manager
[root@k8s-master system]# cat /data/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/data/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/data/kubernetes/ssl/ca-key.pem \
--root-ca-file=/data/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/data/kubernetes/ssl/ca-key.pem"
[root@k8s-master system]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/data/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
5. Start the services
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl enable kube-scheduler
# systemctl enable kube-controller-manager
# systemctl start kube-apiserver && systemctl start kube-scheduler && systemctl start kube-controller-manager
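With no kubeconfig present, kubectl talks to the insecure listener on 127.0.0.1:8080 configured above, so the control plane can be checked directly on the master; all three components should report Healthy:
# kubectl get cs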
VIII. Deploy the Node Components
1. Deploy kubelet (the files below are from k8s-node1; on k8s-node2 replace 10.1.24.104 with 10.1.24.105)
[root@k8s-node1 system]# cat /data/kubernetes/cfg/kubelet.conf
OPTS="--logtostderr=true \
--v=4 \
--address=10.1.24.104 \
--hostname-override=10.1.24.104 \
--kubeconfig=/data/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/data/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false"
[root@k8s-node1 system]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/data/kubernetes/cfg/kubelet.conf
ExecStart=/data/kubernetes/bin/kubelet $OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
2. Deploy kube-proxy (same note: adjust --hostname-override per node)
[root@k8s-node1 system]# cat /data/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.1.24.104 \
--kubeconfig=/data/kubernetes/cfg/kube-proxy.kubeconfig"
[root@k8s-node1 system]# cat kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/data/kubernetes/cfg/kube-proxy.conf
ExecStart=/data/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3. Start the services
# systemctl daemon-reload && systemctl enable kubelet kube-proxy && systemctl start kubelet kube-proxy
At this point the kubelet-bootstrap user has no permission to access the cluster, so create a clusterrolebinding that binds the user to the system:node-bootstrapper clusterrole:
[root@k8s-master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
View the CSR on the master and approve it:
[root@k8s-master ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-oK6pjySuqUUncfHsq9VYF9jg69rtQPRy4r_VNIv4-DM 56s kubelet-bootstrap Pending
[root@k8s-master ~]# kubectl certificate approve node-csr-oK6pjySuqUUncfHsq9VYF9jg69rtQPRy4r_VNIv4-DM
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
10.1.24.104 Ready <none> 6s v1.14.3
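As a final smoke test, a throwaway deployment exercises scheduling, the flannel overlay, and kube-proxy in one pass (names and replica count are arbitrary; kubectl run still creates a Deployment on 1.14, though this generator is deprecated):
# kubectl run nginx --image=nginx --replicas=2
# kubectl expose deployment nginx --port=80 --type=NodePort
# kubectl get pods -o wide
# kubectl get svc nginx
Curling any node's IP on the allocated NodePort should return the nginx welcome page.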