
Installing rook-ceph in a Kubernetes cluster

Author: 互联网 (Internet)

# Clone the specified release branch
git clone --single-branch --branch v1.6.3 https://github.com/rook/rook.git

# Enter the examples directory
cd rook/cluster/examples/kubernetes/ceph

# Create the CRDs and common resources first; all pods will be created in the rook-ceph namespace
kubectl create -f crds.yaml
kubectl create -f common.yaml

# Deploy the Rook operator
kubectl create -f operator.yaml
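Before moving on, it may be worth waiting for the operator to come up. A minimal check (the deployment name and label match Rook's default manifests, but verify against your cluster):

```shell
# Wait for the operator deployment to become ready, then list its pod.
kubectl -n rook-ceph rollout status deploy/rook-ceph-operator --timeout=300s
kubectl -n rook-ceph get pod -l app=rook-ceph-operator
```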


# The images below are used by rook-ceph but are normally pulled from k8s.gcr.io,
# which may be unreachable due to network restrictions. Pull them from a mirror
# instead and re-tag them to the names rook-ceph expects.

# k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
docker pull antmoveh/csi-snapshotter:v4.0.0
docker tag antmoveh/csi-snapshotter:v4.0.0 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0

# k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4
docker pull antmoveh/csi-provisioner:v2.0.4
docker tag antmoveh/csi-provisioner:v2.0.4 k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4

# k8s.gcr.io/sig-storage/csi-resizer:v1.0.1
docker pull antmoveh/csi-resizer:v1.0.1
docker tag antmoveh/csi-resizer:v1.0.1 k8s.gcr.io/sig-storage/csi-resizer:v1.0.1

# k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
docker pull antmoveh/csi-attacher:v3.0.2
docker tag antmoveh/csi-attacher:v3.0.2 k8s.gcr.io/sig-storage/csi-attacher:v3.0.2

# k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
docker pull antmoveh/csi-node-driver-registrar:v2.0.1
docker tag antmoveh/csi-node-driver-registrar:v2.0.1 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
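The five pull/tag pairs above can be collapsed into a loop. A sketch with a dry-run guard, so the commands are printed before anything is actually pulled (the antmoveh/* mirror names are taken from the list above and are this walkthrough's choice, not an official mirror):

```shell
# DRY_RUN=1 (the default) only prints the docker commands; set DRY_RUN=0 to run them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

for img in csi-snapshotter:v4.0.0 csi-provisioner:v2.0.4 csi-resizer:v1.0.1 \
           csi-attacher:v3.0.2 csi-node-driver-registrar:v2.0.1; do
  run docker pull "antmoveh/${img}"
  run docker tag "antmoveh/${img}" "k8s.gcr.io/sig-storage/${img}"
done
```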

# Create the Rook Ceph cluster
kubectl create -f cluster.yaml

# Deploy the Ceph toolbox CLI pod
# The Ceph cluster starts with authentication enabled by default, so logging into a Ceph component pod will not let you query cluster state or run CLI commands. Deploy the Ceph toolbox instead:
kubectl create -f toolbox.yaml

# Exec into the ceph tools container (the pod-name suffix is random; yours will differ)
kubectl -n rook-ceph exec -it pod/rook-ceph-tools-fc5f9586c-6ff7r -- bash
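Because the toolbox pod name carries a random suffix, it can also be resolved with a label selector instead of copied by hand. A sketch assuming the `app=rook-ceph-tools` label from the default toolbox manifest:

```shell
# Look up the toolbox pod by label, then exec into it.
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- bash
```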

# Check the ceph cluster status
ceph status
  cluster:
    id:     b0228f2b-d0f4-4a6e-9c4f-9d826401fac2
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            1 osds down
            1 host (1 osds) down
            Degraded data redundancy: 1 pg undersized
 
  services:
    mon: 3 daemons, quorum a,b,c (age 52s)
    mgr: a(active, since 98m)
    osd: 3 osds: 2 up (since 4m), 3 in (since 54m)
 
  task status:
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     1 active+undersized
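The `mons are allowing insecure global_id reclaim` warning is Ceph's response to CVE-2021-20288. Once all clients are confirmed to be updated, it can be cleared from inside the toolbox pod; check the Ceph health-checks documentation for your release before disabling it:

```shell
# Disallow insecure global_id reclaim. Run inside the toolbox pod, and only
# after verifying that every client has been updated.
ceph config set mon auth_allow_insecure_global_id_reclaim false
```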

#Deployment is complete at this point. Check the pods in the rook-ceph namespace: you should see operator, mgr, agent, discover, mon, osd, and tools pods, with the osd-prepare pods in Completed state and the rest Running.
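The pod check described above amounts to:

```shell
# Expect operator/mgr/mon/osd/tools pods to be Running
# and the osd-prepare jobs to be Completed.
kubectl -n rook-ceph get pod
```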

#There are several ways to expose the dashboard; pick whichever suits your environment:
https://github.com/rook/rook/blob/master/Documentation/ceph-dashboard.md

#After cluster.yaml is applied, Rook automatically creates the Ceph Dashboard pod and service. The dashboard Service defaults to type ClusterIP; change it to NodePort to expose it outside the cluster.
kubectl edit svc rook-ceph-mgr-dashboard -n rook-ceph
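If you prefer a non-interactive change over `kubectl edit`, the same switch to NodePort can be applied as a patch (a sketch; the service name comes from the default Rook manifests):

```shell
# Change the dashboard service type from ClusterIP to NodePort in one command.
kubectl -n rook-ceph patch svc rook-ceph-mgr-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
```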

# Alternatively, create the dedicated external HTTPS service:
kubectl create -f dashboard-external-https.yaml

Access URL; note that it must be https, plain http will not work:
https://192.168.10.215:32111

The default username is:
admin

To retrieve the password, run the following command:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
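The pipeline above just base64-decodes the `password` field of the secret. As a local illustration with a made-up value (`c2VjcmV0UGFzcw==` is a sample string, not a real dashboard password):

```shell
# Decode a sample base64 string the same way the secret's password is decoded.
encoded='c2VjcmV0UGFzcw=='
echo "$encoded" | base64 --decode && echo
```

This prints `secretPass`.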

On the Ceph Dashboard home page, click the gear icon to change the admin password.


To be continued.

Source: https://www.cnblogs.com/sanduzxcvbnm/p/14842496.html