
Installing a ClickHouse (CK) cluster on k8s


Install the k8s cluster

         1. Download the required images and RPM packages to the local machine, then distribute and install the RPMs on every node with the Ansible playbook below.

         

vi rpm_install_playbook.yaml 

- hosts: k8s-all
  remote_user: admin
  vars:
  - name: "rpm_install"
  tasks:
  - name: "copy_docker_rpms"
    copy: src=/opt/dockers_rpm dest=/opt/
    become: yes
  - name: "Install those rpms: docker deps"
    become: yes
    shell: rpm -ivh /opt/dockers_rpm/dep/*.rpm
  - name: "Install those rpms: docker"
    become: yes
    shell: rpm -ivh /opt/dockers_rpm/*.rpm
  - name: "copy_k8s_rpms"
    copy: src=/opt/k8s_rpm dest=/opt/
    become: yes
  - name: "Install those rpms: k8s deps"
    become: yes
    shell: rpm -ivh /opt/k8s_rpm/dep/*.rpm
  - name: "Install those rpms: k8s"
    become: yes
    shell: rpm -ivh /opt/k8s_rpm/*.rpm
  - name: "copy_other_rpms"
    copy: src=/opt/other_rpm dest=/opt/
    become: yes
  - name: "Install those rpms: other rpms"
    become: yes
    shell: rpm -ivh /opt/other_rpm/*.rpm
rpm_install_playbook.yaml

       ansible-playbook rpm_install_playbook.yaml
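
       The playbook targets the host group k8s-all and connects as the admin user, so both have to exist in the Ansible inventory. A minimal sketch (the hostnames and IPs below are assumptions, adjust them to the real nodes):

# inventory.ini (pass with -i if it is not the default inventory)
[k8s-all]
master-1 ansible_host=192.168.30.101
master-2 ansible_host=192.168.30.102
master-3 ansible_host=192.168.30.103
worker-1 ansible_host=192.168.30.111

       ansible-playbook -i inventory.ini rpm_install_playbook.yaml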

       

k8s master high availability

      1. Pick three nodes and install keepalived. The keepalived.conf for each node is shown below.

global_defs {
   router_id master-1
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.200
    }
}
master-1
global_defs {
   router_id master-2
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.200
    }
}
master-2
global_defs {
   router_id master-3
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.30.200
    }
}
master-3

   Note that 192.168.30.200 is the virtual IP (VIP): it must be in the same subnet as the cluster and must not already be assigned to any other physical machine. ens160 is the device name of the physical network interface.
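
   Once the three keepalived.conf files are in place (normally under /etc/keepalived/), start keepalived on all three nodes and confirm the VIP is bound on the current MASTER. A quick check, assuming keepalived was installed as a systemd service from the RPMs above:

systemctl enable --now keepalived
ip addr show ens160 | grep 192.168.30.200   # should appear on master-1 while it holds the VIP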

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
imageRepository: registry.aliyuncs.com/google_containers 
controlPlaneEndpoint: 192.168.30.200:6443
networking:
  podSubnet: 10.244.0.0/16 
  serviceSubnet: 10.96.0.0/12 
kubeadm.conf

  Edit the initialization configuration above on master-1. With keepalived, controlPlaneEndpoint must be set to the virtual IP address.

Single-node master

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
imageRepository: registry.aliyuncs.com/google_containers 
networking:
  podSubnet: 10.244.0.0/16 
  serviceSubnet: 10.96.0.0/12 
kubeadm.conf

     Initialize the master node

      kubeadm init --config kubeadm.conf
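
      When kubeadm init finishes it prints the remaining steps; on master-1 the usual kubeconfig setup for a non-root user is:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

      It also prints the kubeadm join command (token and CA cert hash) used in the following sections.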

      

      

Join the standby master nodes to the k8s cluster

        Log in to master-2 and master-3 and have them join as control-plane nodes:
        kubeadm join 192.168.30.200:6443 --token g55zwf.wu671xiryl2c0k7z --discovery-token-ca-cert-hash sha256:2b6c285bdd34cc5814329d5ba8cec3302d53aa925430330fb35c174565f05ad0 --control-plane
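
        A join with --control-plane only succeeds if the control-plane certificates from master-1 are already present on master-2 and master-3. Instead of copying /etc/kubernetes/pki by hand, one common alternative is to let kubeadm distribute them (a sketch; the certificate key below is a placeholder printed by the first command):

# on master-1: upload the certs as a secret and print the decryption key
kubeadm init phase upload-certs --upload-certs

# on master-2 / master-3: add the printed key to the join command
kubeadm join 192.168.30.200:6443 --token g55zwf.wu671xiryl2c0k7z \
    --discovery-token-ca-cert-hash sha256:2b6c285bdd34cc5814329d5ba8cec3302d53aa925430330fb35c174565f05ad0 \
    --control-plane --certificate-key <key-printed-above>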

        If you want to run kubectl on master-2 and master-3 as well, the cluster admin credential file also needs to be copied into ~/.kube/config on those nodes:

        mkdir -p ~/.kube
        cp -i /etc/kubernetes/admin.conf ~/.kube/config

Join the worker nodes to the k8s cluster

       kubeadm join 192.168.30.99:6443 --token g55zwf.wu671xiryl2c0k7z --discovery-token-ca-cert-hash sha256:2b6c285bdd34cc5814329d5ba8cec3302d53aa925430330fb35c174565f05ad0
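
       The bootstrap token in the join command is only valid for 24 hours by default. If it has expired, print a fresh worker join command on a master node (with keepalived the endpoint would be the VIP 192.168.30.200:6443 rather than a single master's address):

kubeadm token create --print-join-command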

Install the k8s cluster network plugin

      kubectl apply -f calico.yaml
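
      After applying calico, the calico pods should reach Running and the nodes should move from NotReady to Ready:

kubectl get pods -n kube-system -o wide
kubectl get nodes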

      

 Install the clickhouse-operator

       kubectl create -f  clickhouse-operator-install-bundle.yaml
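
       The install bundle normally deploys the operator and its CRDs into the kube-system namespace; check that both are in place before creating a cluster:

kubectl get pods -n kube-system | grep clickhouse-operator
kubectl get crd | grep clickhouse.altinity.com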

       

 Install the ClickHouse cluster pods
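
      With the operator running, a ClickHouse cluster is created by applying a ClickHouseInstallation resource. A minimal sketch (the name and the shard/replica counts here are assumptions; replication across multiple replicas additionally requires a zookeeper section):

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "ck-demo"
spec:
  configuration:
    clusters:
      - name: "ck"
        layout:
          shardsCount: 2      # number of shards
          replicasCount: 1    # replicas per shard
ck-cluster.yaml

      kubectl apply -f ck-cluster.yaml
      kubectl get chi                        # status should eventually become Completed
      kubectl get pods | grep chi-ck-demo    # one pod per shard/replica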

      

Source: https://www.cnblogs.com/yxh168/p/16547931.html