
Ceph RBD Addressing (Downloading RBD Block Files)


1. Ceph RBD and RGW Addressing (Downloading RBD Block / Object Storage Files)

1.1. Index Storage

Ceph stores its index data in omap, the key/value store attached to each RADOS object: for example, RBD image metadata lives in the omap of the rbd_header object, and the RGW bucket index lives in the omap of the .dir.<bucket_id> index objects (see the command examples in section 1.3).

1.2. RBD Addressing
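The body of this section is missing in the source; as a placeholder, here is a minimal sketch of how a format-v2 RBD image is addressed in RADOS. The pool name test001 comes from the osdmap dump below, while the image name img01 and the image id 10226b8b4567 are hypothetical. The image name resolves to an internal id via the rbd_id.<name> object, the image metadata lives in the omap of rbd_header.<id>, and the data is striped across objects named rbd_data.<id>.<object-number>.

# 1. Resolve the image name to its internal id (stored, length-prefixed,
#    in the rbd_id object; strings extracts the readable part).
rados -p test001 get rbd_id.img01 /tmp/img_id
strings /tmp/img_id   # -> e.g. 10226b8b4567 (hypothetical id)

# 2. Image metadata (size, order, object_prefix, features, ...) sits in
#    the omap of the rbd_header object.
rados -p test001 listomapvals rbd_header.10226b8b4567

# 3. Data objects are named rbd_data.<id>.<16-hex-digit object number>;
#    downloading them in order reassembles the block file.
rados -p test001 ls | grep rbd_data.10226b8b4567
rados -p test001 get rbd_data.10226b8b4567.0000000000000000 /tmp/chunk0

Note that object numbers absent from the listing correspond to unwritten (all-zero) regions of the image, so gaps must be zero-filled when reassembling.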

1.3. Quick Notes

As a side note, jotting down the usage of rados listomapvals and the other omap commands:
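A minimal sketch of the common rados omap subcommands, assuming a hypothetical pool test001 and object obj01:

# Write, list, read, and delete omap key/value pairs on an object
# (pool and object names are hypothetical).
rados -p test001 setomapval obj01 key1 value1    # set one key
rados -p test001 listomapkeys obj01              # list keys only
rados -p test001 listomapvals obj01              # list keys and values
rados -p test001 getomapval obj01 key1           # read one key's value
rados -p test001 rmomapkey obj01 key1            # delete a key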

1.4. RGW Addressing
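This section's body is also missing in the source; as a hedged sketch, an RGW object is located in three steps: the bucket's metadata yields its internal bucket_id, the bucket index is the omap of the .dir.<bucket_id> object in the index pool, and the data lives in the data pool under names prefixed with the bucket_id. The bucket name bkt01 and object name file01 are hypothetical; the pool names come from the osdmap dump below.

# 1. Get the bucket's internal id from its metadata.
radosgw-admin bucket stats --bucket=bkt01 | grep '"id"'

# 2. The bucket index is the omap of .dir.<bucket_id> in the index
#    pool; its keys are the object names in the bucket.
rados -p default.rgw.buckets.index listomapkeys .dir.<bucket_id>

# 3. Object data is stored in the data pool as <bucket_id>_<object>.
rados -p default.rgw.buckets.data ls | grep file01
rados -p default.rgw.buckets.data get <bucket_id>_file01 /tmp/file01

This only covers small objects: larger ones are split into a head object plus tail (shadow or multipart) objects, which have to be fetched and concatenated as well.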

1.5. Data Recovery Approach

1.5.1. Scenario

After a power outage in the machine room, data was corrupted, the omap was damaged, and the services could not start.
The cluster could not serve reads or writes, and the RBD metadata could not be read either.

1.5.2. Approach

Build a new cluster, import the metadata, then import the data (a sketch of the export/import step follows the osdmap dump below).

# Extract the CRUSH map from a saved osdmap file
osdmaptool osdmap.43__0_641716DC__none --export-crush /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt
# Print the osdmap
osdmaptool --print osdmap.43__0_641716DC__none
#---- output:
epoch 43
fsid 8685ec71-96a6-413a-9e4d-ff47071dc4f5
created 2020-12-22 16:57:37.845173
modified 2021-04-04 15:51:46.729929
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 6
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release luminous

pool 1 '.rgw.root' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 7 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 2 'test001' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 40 flags hashpspool stripe_width 0
	removed_snaps [1~3]
pool 3 'default.rgw.control' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 21 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.meta' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 23 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.log' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 25 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.buckets.index' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 28 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.buckets.data' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 31 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.buckets.non-ec' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 34 owner 18446744073709551615 flags hashpspool stripe_width 0 application rgw

max_osd 1
osd.0 up   in  weight 1 up_from 42 up_thru 42 down_at 41 last_clean_interval [37,40) 9.134.1.121:6801/12622 9.134.1.121:6802/12622 9.134.1.121:6803/12622 9.134.1.121:6804/12622 exists,up 59ebf8d5-e7f7-4c46-8e05-bac5140eee89
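
A minimal sketch of the export/import step, assuming ceph-objectstore-tool is run against stopped OSDs; the PG id 2.0 and the data paths are hypothetical:

# On the damaged cluster: list PGs on the stopped OSD, then export one.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 2.0 --op export --file /tmp/pg2.0.export

# On the new cluster: import the PG into a stopped OSD, then start it.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op import --file /tmp/pg2.0.export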

This looks feasible, but the relationship between PGs and the osdmap, and the omap versioning issues, do not look easy to resolve; revisit this when a concrete scenario or requirement comes up.

Source: https://blog.csdn.net/Moolight_shadow/article/details/119276840