Ceph V4.0 Environment Setup and Recommendations (3)
Linux OS Ceph Configuration (Minimum)

1) Ceph OSD server:
   - Volume storage: 1x storage drive per daemon.
   - block.db: Optional, but recommended. 1x SSD, NVMe, or Optane partition or logical volume per daemon. Size it at 4% of block.data for BlueStore (see the ceph-volume sketch after this list).
   - block.wal: Optional. 1x SSD, NVMe, or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it is faster than the block.db device.
2) ceph-mon:
   - Monitor disk: Optionally, 1x SSD disk for the monitor's RocksDB data.
3) ceph-mgr
4) ceph-radosgw
5) ceph-mds:
   - Disk: 2 MB per daemon, plus any space required for logging, which might vary depending on the configured log levels.
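To make the block.db and block.wal guidance concrete, here is a minimal sketch using ceph-volume's lvm subcommand to prepare a BlueStore OSD. All device paths and sizes below are hypothetical assumptions for illustration, not values from this guide.

    # Hypothetical layout for one OSD:
    #   /dev/sdb       - 4 TB data drive (block.data)
    #   /dev/nvme0n1p1 - ~160 GB NVMe partition for block.db
    #                    (4% of 4 TB block.data = ~160 GB)
    #   /dev/nvme0n1p2 - ~10 GB NVMe partition for block.wal; only
    #                    useful if faster than the block.db device
    ceph-volume lvm prepare --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1 \
        --block.wal /dev/nvme0n1p2

    # The prepare step reports the new OSD's ID and FSID; use them
    # to activate it:
    ceph-volume lvm list
    ceph-volume lvm activate <osd-id> <osd-fsid>

If --block.wal is omitted, BlueStore keeps the WAL alongside block.db, which is why a separate WAL device only pays off when it is faster than the DB device.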
Containerized Ceph Configuration (Minimum):
1) ceph-osd-container:
   - OSD storage: 1x storage drive per OSD container. Cannot be shared with the OS disk.
   - block.db: Optional, but recommended. 1x SSD, NVMe, or Optane partition or logical volume per daemon. Size it at 4% of block.data for BlueStore.
   - block.wal: Optional. 1x SSD, NVMe, or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it is faster than the block.db device.
2) ceph-mon-container:
   - Monitor disk: Optionally, 1x SSD disk for the monitor's RocksDB data.
3) ceph-mgr-container
4) ceph-radosgw-container
5) ceph-mds-container
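In a containerized deployment each daemon runs in its own container, so disk checks go through the container runtime. Below is a minimal sketch assuming a Docker-based deployment and an OSD container named ceph-osd-0; both the runtime and the container name are assumptions (podman-based setups work the same way with podman in place of docker).

    # List the running Ceph daemon containers (names are assumed to
    # follow a ceph-<daemon> pattern, as used by ceph-ansible)
    docker ps --filter "name=ceph"

    # Inspect the OSD's BlueStore layout (block, block.db, block.wal)
    # from inside the assumed OSD container
    docker exec ceph-osd-0 ceph-volume lvm list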
Ceph Client Architecture:
Source: https://www.cnblogs.com/arcing/p/14031223.html