
Ceph Common Commands Reference (Cluster Edition)


Start and stop Ceph daemons

systemctl restart|stop|start ceph-mon@host

systemctl restart|stop|start ceph-osd@id

[root@ecos75r018-meijia-31-161 ~]# systemctl restart ceph-mon@ecos75r018-meijia-31-161
[root@ecos75r018-meijia-31-161 ~]# systemctl status ceph-mon@ecos75r018-meijia-31-161
● ceph-mon@ecos75r018-meijia-31-161.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-12-14 14:49:37 CST; 14s ago
 Main PID: 369889 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ecos75r018-meijia-31-161.service
           └─369889 /usr/bin/ceph-mon -f --cluster ceph --id ecos75r018-meijia-31-161 --setuser ceph --setgroup ceph
Dec 14 14:49:37 ecos75r018-meijia-31-161 systemd[1]: Started Ceph cluster monitor daemon.
Dec 14 14:49:37 ecos75r018-meijia-31-161 systemd[1]: Starting Ceph cluster monitor daemon...
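
The same pattern works for any daemon type (ceph-mds@, ceph-mgr@, and so on). As a small sketch, assuming the stock Ceph systemd units are installed, you can also act on every daemon of one type, or all of them at once, through the packaged targets:

systemctl restart ceph-osd.target   # all OSDs on this host
systemctl restart ceph.target       # all Ceph daemons on this host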


Check cluster health

ceph health

[root@ecos75r018-meijia-31-161 ~]# ceph health
HEALTH_OK
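
Because the output is a single line starting with HEALTH_OK, HEALTH_WARN, or HEALTH_ERR, ceph health is easy to use in scripts. A minimal sketch (the echo stands in for whatever alerting you actually use):

status=$(ceph health)
[ "$status" = HEALTH_OK ] || echo "Ceph reports: $status"   # replace echo with your alert action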


Check cluster status

ceph -s

[root@ecos75r018-meijia-31-161 ~]# ceph -s
  cluster:
    id:     f60e6370-14ff-44cc-b99c-70b17df8549c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ecos75r018-meijia-31-161 (age 7m)
    mgr: ecos75r018-meijia-31-161(active, since 4d)
    mds: cephfs_mj:1 {0=ecos75r018-meijia-31-161=up:active}
    osd: 3 osds: 3 up (since 4d), 3 in (since 2M)

  task status:
    scrub status:
        mds.ecos75r018-meijia-31-161: idle

  data:
    pools:   4 pools, 240 pgs
    objects: 51.56k objects, 199 GiB
    usage:   597 GiB used, 2.4 TiB / 3.0 TiB avail
    pgs:     240 active+clean
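
Like most ceph subcommands, ceph -s also accepts -f json for machine-readable output. A sketch, assuming jq is available and the JSON layout of this Ceph release:

ceph -s -f json | jq -r '.health.status'   # prints e.g. HEALTH_OK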


Watch cluster status in real time

ceph -w

[root@ecos75r018-meijia-31-161 ~]# ceph -w
  cluster:
    id:     f60e6370-14ff-44cc-b99c-70b17df8549c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ecos75r018-meijia-31-161 (age 9m)
    mgr: ecos75r018-meijia-31-161(active, since 4d)
    mds: cephfs_mj:1 {0=ecos75r018-meijia-31-161=up:active}
    osd: 3 osds: 3 up (since 4d), 3 in (since 2M)

  task status:
    scrub status:
        mds.ecos75r018-meijia-31-161: idle

  data:
    pools:   4 pools, 240 pgs
    objects: 51.56k objects, 199 GiB
    usage:   597 GiB used, 2.4 TiB / 3.0 TiB avail
    pgs:     240 active+clean
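
After printing the status block, ceph -w keeps following the cluster log until interrupted. If you only want recent cluster log entries without watching, recent releases also provide:

ceph log last 20   # the 20 most recent cluster log entries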


View detailed cluster health

ceph health detail
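
On a healthy cluster this prints only HEALTH_OK; otherwise it lists one line per failing health check. To keep an eye on it during maintenance, a simple sketch using the standard watch utility:

watch -n 5 ceph health detail   # refresh every 5 seconds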


Check Ceph storage usage

ceph df

[root@ecos75r018-meijia-31-161 ~]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       3.0 TiB     2.4 TiB     594 GiB      597 GiB         19.43
    TOTAL     3.0 TiB     2.4 TiB     594 GiB      597 GiB         19.43

POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    rbd_pool             3     197 GiB      51.54k     594 GiB     23.48       645 GiB
    ceph_pool            4         0 B           0         0 B         0       645 GiB
    cephfs_data          5         0 B           0         0 B         0       645 GiB
    cephfs_metadata      6     8.3 KiB          22     1.5 MiB         0       645 GiB

The output covers two levels: cluster-wide usage (RAW STORAGE) and per-pool usage (POOLS).
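
For additional per-pool columns (quotas and the like; the exact set varies by release), there is also a detailed variant:

ceph df detail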


Purge data and uninstall packages

ceph-deploy purge hostname

ceph-deploy purgedata hostname

purge removes the generated configuration and data files under /var/lib/ceph and also uninstalls the Ceph packages; purgedata only removes the configuration and data files under /var/lib/ceph.

Neither command wipes the OSD disks.
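
To wipe the OSD disks as well so they can be re-provisioned, ceph-deploy has a separate subcommand; a sketch in ceph-deploy 2.x syntax (hostname and /dev/sdb are placeholders for your environment):

ceph-deploy disk zap hostname /dev/sdb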


Create a user and key

ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/ceph.client.admin.keyring

ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' -o /etc/ceph/ceph.client.admin.keyring

Both forms create a user named client.admin; the only difference is redirecting stdout with > versus writing the keyring file directly with -o.
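
To read the entry back later (for example, to confirm the caps took effect), you can fetch it with:

ceph auth get client.admin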


Create a user and key for osd.0 (the cluster authentication file)

ceph auth get-or-create osd.0 mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-0/keyring
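
Note that get-or-create returns the existing key if the entity already exists. To change the capabilities of an existing entity afterwards, ceph auth caps overwrites them; an illustrative sketch (the caps shown are just an example):

ceph auth caps osd.0 mon 'allow profile osd' osd 'allow *'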


List the authenticated users in the cluster and their keys

ceph auth list

[root@ecos75r018-meijia-31-161 ~]# ceph auth list
installed auth entries:

mds.ceph-ecos75r018-meijia-31-161
    key: AQAcHrtfvD9pHhAAD0MVY0t/W7wyy5YJHGeH6A==
    caps: [mds] allow *
    caps: [mgr] profile mds
    caps: [mon] profile mds
    caps: [osd] allow *
mds.ecos75r018-meijia-31-161
    key: AQDVcLdfIJUAAxAAZl6Exsdh4chF5+Nbti84yA==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
osd.0
    key: AQDcvTNf9KNpHhAATrdapsznJfcyS0iYLW8bKw==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQDtvTNfUd2zGBAAoUkCZfPbo58tUsehEky6HQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQAuEEZfpD7xKBAAxjbnOEeBbYZ/5HT+1P9aIQ==
    caps: [mgr] allow profile osd
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQDmrzNfNwbqCBAAkWCeTEyKnKH1ZsyYS4KVQw==
    caps: [mds] allow *
    caps: [mgr] allow *
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQDmrzNfDizqCBAAECze/Hibrqz2nDwdRZdCUA==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
    key: AQDmrzNfjUXqCBAA5pscEZ2rf/1F4kAZSSYcZw==
    caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
    key: AQDmrzNfv17qCBAAplYPD3S0fDrKs1AHTinCug==
    caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
    key: AQDmrzNfA3rqCBAACxy1rqD2XPIc/knLqzFqug==
    caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
    key: AQDmrzNfXZPqCBAANXqHy2NjPWwt268mU88Czw==
    caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
    key: AQDmrzNfIrDqCBAAX2KRv1DUfAdhWb6E801How==
    caps: [mon] allow profile bootstrap-rgw
mgr.ecos75r018-meijia-31-161
    key: AQC7vTNfoQfjBxAA6kMB3hTOQgxzPdHyRpXPMw==
    caps: [mds] allow *
    caps: [mon] allow profile mgr
    caps: [osd] allow *
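
To inspect a single entity instead of dumping everything:

ceph auth get osd.0                # entity, key and caps
ceph auth print-key client.admin   # just the base64 key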


Delete an authenticated user from the cluster

ceph auth del osd.0
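
In recent releases the same operation is also spelled with rm; either way, a re-created daemon will need a fresh keyring:

ceph auth rm osd.0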


View a daemon's runtime configuration

ceph daemon {daemon-name} config show | more

[root@ecos75r018-meijia-31-161 ~]# ceph daemon mon.ecos75r018-meijia-31-161 config show | more
{
    "name": "mon.ecos75r018-meijia-31-161",
    "cluster": "ceph",
    "admin_socket": "/var/run/ceph/ceph-mon.ecos75r018-meijia-31-161.asok",
    "admin_socket_mode": "",
    "auth_client_required": "cephx",
    "auth_cluster_required": "cephx",
    "auth_debug": "false",
    "auth_mon_ticket_ttl": "43200.000000",
    "auth_service_required": "cephx",
    "auth_service_ticket_ttl": "3600.000000",
    "auth_supported": "",
    "bdev_aio": "true",
    "bdev_aio_max_queue_depth": "1024",
    "bdev_aio_poll_ms": "250",
    "bdev_aio_reap_max": "16",
    "bdev_async_discard": "false",
    "bdev_block_size": "4096",
    "bdev_debug_aio": "false",
    "bdev_debug_aio_log_age": "5.000000",
    "bdev_debug_aio_suicide_timeout": "60.000000",
    "bdev_debug_inflight_ios": "false",
    "bdev_enable_discard": "false",
    "bdev_inject_crash": "0",
    "bdev_inject_crash_flush_delay": "2",
    "bdev_nvme_retry_count": "-1",
    "bdev_nvme_unbind_from_kernel": "false",
    "bluefs_alloc_size": "1048576",
    "bluefs_allocator": "bitmap",
    "bluefs_buffered_io": "false",
    "bluefs_compact_log_sync": "false",
    "bluefs_log_compact_min_ratio": "5.000000",
    "bluefs_log_compact_min_size": "16777216",
    "bluefs_max_log_runway": "4194304",
    "bluefs_max_prefetch": "1048576",
    "bluefs_min_flush_size": "524288",
--More--
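
The same admin socket interface can read or change a single option at runtime (changes made this way do not persist across a daemon restart). A sketch; the option name is just an example:

ceph daemon mon.ecos75r018-meijia-31-161 config get mon_allow_pool_delete
ceph daemon mon.ecos75r018-meijia-31-161 config set mon_allow_pool_delete true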


Find where the Ceph logs are written

ceph-conf --name mds.ecos75r018-meijia-31-161 --show-config-value log_file

[root@ecos75r018-meijia-31-161 ~]# ceph-conf --name mds.ecos75r018-meijia-31-161 --show-config-value log_file
/var/log/ceph/ceph-mds.ecos75r018-meijia-31-161.log
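
ceph-conf resolves an option the same way the daemon would, so the same form works for other values too, for example the admin socket path:

ceph-conf --name mon.ecos75r018-meijia-31-161 --show-config-value admin_socket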


Source: https://blog.51cto.com/15080020/2654737