
MFS Distributed Storage System: Deployment, File Writing and Recovery, and Storage Classes



1. System Deployment

1.1 Introduction

MFS (MooseFS) is a fault-tolerant network distributed file system. It spreads data across multiple physical servers while presenting a single unified resource to the user.
An MFS deployment consists of: a master server, which manages the metadata; one or more chunk servers, which store the actual data; optional metalogger servers, which keep metadata backups; and clients, which mount the file system over the network.

1.2 Repository Configuration and Package Installation

[root@server1 ~]# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
MooseFS.repo  redhat.repo  westos.repo
[root@server1 yum.repos.d]# vim MooseFS.repo  ## set gpgcheck=0
[root@server1 yum.repos.d]# cat MooseFS.repo 
[MooseFS]
name=MooseFS $releasever - $basearch
baseurl=http://ppa.moosefs.com/moosefs-3/yum/el7
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
enabled=1
[root@server1 yum.repos.d]# yum install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli -y  ## install the master-side rpm packages


Copy the repo file to the other two nodes:
[root@server1 yum.repos.d]# scp MooseFS.repo server2:/etc/yum.repos.d/
[root@server1 yum.repos.d]# scp MooseFS.repo server3:/etc/yum.repos.d/

[root@server2 ~]# yum install moosefs-chunkserver -y
[root@server3 ~]# yum install moosefs-chunkserver -y


1.3 Master Deployment and Service Startup

[root@server1 ~]# cd /etc/mfs/
[root@server1 mfs]# ls
mfsexports.cfg  mfsexports.cfg.sample  mfsmaster.cfg  mfsmaster.cfg.sample  mfstopology.cfg  mfstopology.cfg.sample
[root@server1 mfs]# vim /etc/hosts  ## add mfsmaster as an alias for server1
192.168.0.1 server1 mfsmaster

[root@server1 mfs]# systemctl start moosefs-master.service    ## start the master service
[root@server1 mfs]# yum install -y net-tools
[root@server1 mfs]# systemctl start moosefs-cgiserv.service   ## start the CGI service (web interface on port 9425)
[root@server1 mfs]# netstat -antlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      3486/mfsmaster      
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      3486/mfsmaster      
tcp        0      0 0.0.0.0:9421            0.0.0.0:*               LISTEN      3486/mfsmaster      
tcp        0      0 0.0.0.0:9425            0.0.0.0:*               LISTEN      3521/python2 


1.4 Chunk Server Deployment and Service Startup

The chunk servers are where the data is actually stored, so we first need to set aside a dedicated area for storage. In production, a separate disk is usually added as the storage mount point rather than putting the data on the system's root partition. This gives good isolation: if the server fails, the disk can simply be pulled out and attached to another host, so the data is not lost to a system failure.

[root@server2 ~]# vim /etc/hosts 
[root@server3 ~]# vim /etc/hosts 
192.168.0.1 server1 mfsmaster

Add a 10G disk to each of server2 and server3, partition it, create a file system, and configure automatic mounting at boot:
[root@server2 ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x6e6b68bd.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): p

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x6e6b68bd

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048    20971519    10484736   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


[root@server2 ~]# mkfs.xfs /dev/vdb1 ## create the file system
[root@server2 ~]# mkdir /mnt/chunk1  ## create the mount point
[root@server2 ~]# mount /dev/vdb1 /mnt/chunk1
[root@server2 ~]# df
[root@server2 ~]# chown mfs.mfs /mnt/chunk1/  ## give ownership of the storage directory to the mfs user and group
[root@server2 ~]# blkid  ## look up the device UUID, used below to configure automatic mounting at boot
/dev/vda1: UUID="1d134f29-d02d-457b-abaa-cf78a7eec47f" TYPE="xfs" 
/dev/vda2: UUID="jSgJQ8-fAD0-cFik-Z6eS-sPLT-h3uy-z1GWhM" TYPE="LVM2_member" 
/dev/mapper/rhel-root: UUID="89519b58-bab3-4eba-ab05-c49369c7a6a2" TYPE="xfs" 
/dev/mapper/rhel-swap: UUID="8a430600-99c4-490a-873e-a1a1b63dc580" TYPE="swap" 
/dev/vdb1: UUID="2400ec14-c22c-408d-af57-143cec7877e1" TYPE="xfs" 
[root@server2 ~]# vim /etc/fstab   ## configure automatic mounting at boot
UUID="2400ec14-c22c-408d-af57-143cec7877e1"  /mnt/chunk1   xfs     defaults        0 0
[root@server2 ~]# umount /mnt/chunk1/
[root@server2 ~]# mount -a
[root@server2 ~]# df
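The fstab entry added above always has the same shape. As a sketch (the helper name is invented for illustration), a small shell function can generate such a line from the UUID reported by blkid and the mount point:

```shell
# fstab_line is a hypothetical helper, not part of MooseFS: it prints an
# fstab entry of the same shape as the one added above, for a given UUID
# and mount point.
fstab_line() {
    printf 'UUID="%s"  %s  xfs  defaults  0 0\n' "$1" "$2"
}

fstab_line 2400ec14-c22c-408d-af57-143cec7877e1 /mnt/chunk1
```

Combined with blkid, the whole step becomes `fstab_line "$(blkid -s UUID -o value /dev/vdb1)" /mnt/chunk1 >> /etc/fstab`, verified with `mount -a` as shown above.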
[root@server2 mfs]# systemctl start moosefs-chunkserver.service  ## start the chunk server service
[root@server2 ~]# netstat -antlp  ## check the listening ports
[root@server1 mfs]# netstat -antlp

[root@server2 ~]# cd /etc/mfs/
[root@server2 mfs]# ls
mfschunkserver.cfg  mfschunkserver.cfg.sample  mfshdd.cfg  mfshdd.cfg.sample
[root@server2 mfs]# vim mfshdd.cfg  ## point the chunk server at its storage directory
/mnt/chunk1
[root@server2 mfs]# systemctl reload moosefs-chunkserver.service  

server2 opens a random local port to establish the connection to server1's listening port.

server3 gets the same configuration as server2:
[root@server3 ~]# mkdir /mnt/chunk2
[root@server3 ~]# fdisk /dev/vdb
[root@server3 ~]# mkfs.xfs /dev/vdb1
[root@server3 ~]# blkid
[root@server3 ~]# vim /etc/fstab 
UUID="00e4eedf-238d-44f0-bacd-d9141985ddf0" /mnt/chunk2 xfs defaults 0 0
[root@server3 ~]# mount -a 
[root@server3 ~]# df
[root@server3 ~]# chown mfs.mfs /mnt/chunk2/
[root@server3 ~]# cd /etc/mfs/
[root@server3 mfs]# vim mfshdd.cfg
/mnt/chunk2
[root@server3 mfs]# systemctl start moosefs-chunkserver


1.5 Client Deployment

[root@foundation Desktop]# curl "http://ppa.moosefs.com/MooseFS-3-el8.repo" > /etc/yum.repos.d/MooseFS.repo
[root@foundation Desktop]# vim /etc/yum.repos.d/MooseFS.repo
[root@foundation Desktop]# cat /etc/yum.repos.d/MooseFS.repo
[MooseFS]
name=MooseFS $releasever - $basearch
baseurl=http://ppa.moosefs.com/moosefs-3/yum/el8
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
enabled=1
[root@foundation Desktop]# yum install moosefs-client -y
[root@foundation Desktop]# rpm -qa | grep moosefs
moosefs-client-3.0.115-1.rhsystemd.x86_64

[root@foundation Desktop]# vim /etc/hosts   ## add the mfsmaster entry pointing at server1
192.168.0.1 mfsmaster

[root@foundation Desktop]# cd /etc/mfs/
[root@foundation mfs]# ls
mfsmount.cfg  mfsmount.cfg.sample
[root@foundation mfs]# vim mfsmount.cfg
/mnt/mfs
[root@foundation mfs]# mkdir /mnt/mfs
[root@foundation mfs]# mfsmount  ## mount the MFS storage
mfsmaster accepted connection with parameters: read-write,restricted_ip,admin ; root mapped to root:root

[root@foundation mfs]# df  ## the ~20G total is the combined capacity of the two 10G chunk servers
mfs#mfsmaster:9421                  20948992    590592  20358400   3% /mnt/mfs 


1.6 Storage Tests from the Client

[root@foundation ~]# cd /mnt/mfs
[root@foundation mfs]# ls
[root@foundation mfs]# mkdir dir1
[root@foundation mfs]# mkdir dir2
[root@foundation mfs]# mfsgetgoal dir1 ## the default goal is 2: two copies of the data, here one on server2 and one on server3
dir1: 2
[root@foundation mfs]# mfsgetgoal dir2
dir2: 2
[root@foundation mfs]# mfssetgoal -r 1 dir1 ## the number of copies can be set per file or directory
dir1:
 inodes with goal changed:                       1
 inodes with goal not changed:                   0
 inodes with permission denied:                  0
[root@foundation mfs]# mfsgetgoal dir1
dir1: 1
[root@foundation mfs]# cp /etc/passwd dir1/
[root@foundation mfs]# cp /etc/fstab dir2/
[root@foundation mfs]# cd dir1
[root@foundation dir1]# mfsfileinfo passwd  ## show where the file's chunks are stored
passwd:
	chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
		copy 1: 192.168.0.3:9422 (status:VALID)  ## the single copy was placed on server3
[root@foundation dir1]# cd ..
[root@foundation mfs]# cd dir2
[root@foundation dir2]# ls
fstab
[root@foundation dir2]# mfsfileinfo fstab
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
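The goal is simply the replica count, so the raw disk space a file consumes across the cluster is roughly its size times its goal. A minimal sketch of that arithmetic (the helper name is invented for illustration):

```shell
# raw_usage is a hypothetical helper, shown only to illustrate what the
# goal means for capacity: raw cluster usage is roughly size * goal.
raw_usage() {
    echo $(( $1 * $2 ))
}

raw_usage $((100 * 1024 * 1024)) 2   # a 100M file at goal 2 -> 209715200 bytes raw
```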

[root@server3 mfs]# systemctl stop moosefs-chunkserver ## when one storage node goes down, copies on the other node are unaffected

[root@foundation dir2]# mfsfileinfo fstab
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
[root@foundation dir2]# cd ..
[root@foundation mfs]# cd dir1
[root@foundation dir1]# mfsfileinfo passwd
passwd:
	chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
		no valid copies !!!


2. File Writing and Recovery

2.1 File Writing

MFS stores data in chunks of at most 64MiB each; if a file is larger than that, MFS allocates multiple chunks to hold it.

[root@foundation dir2]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.175626 s, 597 MB/s
[root@foundation dir2]# mfsfileinfo bigfile 
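The 100M file above therefore spans two chunks, since a chunk holds at most 64 MiB. The arithmetic can be sketched as follows (chunk_count is an invented name, not a MooseFS command):

```shell
# chunk_count is a hypothetical helper illustrating the allocation rule:
# MooseFS chunks hold at most 64 MiB, so a file of a given size in bytes
# needs ceil(size / 64 MiB) chunks (integer ceiling division below).
chunk_count() {
    chunk=$((64 * 1024 * 1024))
    echo $(( ($1 + chunk - 1) / chunk ))
}

chunk_count $((100 * 1024 * 1024))   # the 100M bigfile above -> 2
```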


2.2 File Recovery

[root@foundation mfs]# cd dir1
[root@foundation dir1]# ls
passwd
[root@foundation dir1]# rm -fr passwd  ## delete the passwd file
[root@foundation dir1]# ls
[root@foundation dir1]# cd /mnt/
[root@foundation mnt]# mkdir mfsmeta ## create the metadata mount point
[root@foundation mnt]# cd mfsmeta/
[root@foundation mfsmeta]# ls
[root@foundation mfsmeta]# mfsmount -m /mnt/mfsmeta/
mfsmaster accepted connection with parameters: read-write,restricted_ip  ## the -m option mounts the metadata file system
[root@foundation mfsmeta]# ls
[root@foundation mfsmeta]# cd ..
[root@foundation mnt]# cd mfsmeta/
[root@foundation mfsmeta]# ls
sustained  trash
[root@foundation mfsmeta]# cd trash/
[root@foundation trash]# ls | wc -l
4097
[root@foundation trash]# find . -name "*passwd*"
./004/00000004|dir1|passwd
[root@foundation trash]# cd 004/
[root@foundation 004]# ls
'00000004|dir1|passwd'   undel
[root@foundation 004]# mv '00000004|dir1|passwd' undel/
[root@foundation 004]# cd /mnt/mfs/dir1
[root@foundation dir1]# ls ## the deleted file has been restored
passwd
[root@foundation dir1]# mfsfileinfo passwd 
passwd:
	chunk 0: 0000000000000001_00000001 / (id:1 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
[root@foundation dir1]# mount
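Entries in the trash directory encode the file's inode number and its original path, joined by '|'. A small helper (a hypothetical name, for illustration only) can recover the original path from an entry name:

```shell
# trash_to_path is a hypothetical helper: a trash entry name such as
# '00000004|dir1|passwd' carries the inode first, then the path components.
# Dropping the inode field and turning the remaining '|' separators into
# '/' recovers the original path.
trash_to_path() {
    echo "$1" | cut -d'|' -f2- | tr '|' '/'
}

trash_to_path '00000004|dir1|passwd'   # -> dir1/passwd
```

To actually restore the file, the entry is still moved into the per-directory undel subdirectory, exactly as shown above.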


3. Storage Classes

The master's memory consumption depends mainly on the number of files in the distributed file system rather than their total size, and its CPU load comes mainly from user operations. To make better use of system resources, we create storage classes to manage how and where data is stored.

3.1 Adding a Chunk Server Node

On server4:
[root@server2 yum.repos.d]# scp MooseFS.repo server4:/etc/yum.repos.d/
[root@server4 ~]# cat /etc/yum.repos.d/MooseFS.repo 
[MooseFS]
name=MooseFS $releasever - $basearch
baseurl=http://ppa.moosefs.com/moosefs-3/yum/el7
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
enabled=1
[root@server4 ~]# yum install moosefs-chunkserver -y

[root@server4 ~]# mkdir /mnt/chunk3
[root@server4 ~]# vim /etc/hosts 
192.168.0.1 server1 mfsmaster
[root@server4 ~]# vim /etc/mfs/mfshdd.cfg
/mnt/chunk3
[root@server4 ~]# chown mfs.mfs /mnt/chunk3/
[root@server4 ~]# systemctl start moosefs-chunkserver
[root@server4 ~]# vim /etc/mfs/mfschunkserver.cfg
[root@server4 ~]# systemctl reload moosefs-chunkserver


3.2 Setting and Viewing a File's Storage Class: Example 1

Single labels. On server2, 3 and 4, edit the chunk server configuration file and add a LABELS entry:

[root@server2 mfs]# pwd
/etc/mfs
[root@server2 mfs]# ls
mfschunkserver.cfg  mfschunkserver.cfg.sample  mfshdd.cfg  mfshdd.cfg.sample
[root@server2 mfs]# vim mfschunkserver.cfg
LABELS = A
[root@server2 mfs]# systemctl reload moosefs-chunkserver


[root@foundation mfs]# cd dir2
[root@foundation dir2]# ls
bigfile  fstab
[root@foundation dir2]# mfsscadmin create 2A class_2A ## create a storage class; 2A means two copies, both on nodes labelled A
storage class make class_2A: ok
[root@foundation dir2]# mfsscadmin create AB class_AB ## AB means one copy, on a node carrying both labels A and B
storage class make class_AB: ok
[root@foundation dir2]# mfsfileinfo fstab 
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
[root@foundation dir2]# mfssetsclass class_2A fstab  ## assign the storage class to the file
fstab: storage class: 'class_2A'
[root@foundation dir2]# mfsfileinfo fstab ## check where the copies now live
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.4:9422 (status:VALID)
[root@foundation dir2]# mfsfileinfo bigfile 
bigfile:
	chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
	chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
[root@foundation dir2]# mfssetsclass class_AB bigfile 
bigfile: storage class: 'class_AB'
[root@foundation dir2]# mfsfileinfo bigfile 
bigfile:
	chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
		copy 1: 192.168.0.3:9422 (status:VALID)
	chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
		copy 1: 192.168.0.3:9422 (status:VALID)
[root@foundation dir2]# mfsscadmin create A,B classAB ## A,B means two copies: one on a node labelled A, one on a node labelled B
storage class make classAB: ok
[root@foundation dir2]# mfssetsclass classAB bigfile 
bigfile: storage class: 'classAB'
[root@foundation dir2]# mfsscadmin delete class_AB  ## delete the storage class
storage class remove class_AB: ok
[root@foundation dir2]# mfsfileinfo bigfile  ## bigfile is 100M, so it is stored as two chunks
bigfile:
	chunk 0: 0000000000000003_00000001 / (id:3 ver:1)
		copy 1: 192.168.0.3:9422 (status:VALID)
		copy 2: 192.168.0.4:9422 (status:VALID)
	chunk 1: 0000000000000004_00000001 / (id:4 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)


3.3 Example 2

Change the labels on server2, 3 and 4:
[root@server2 mfs]# vim mfschunkserver.cfg 
LABELS = A S  ##server2
[root@server2 mfs]# systemctl reload moosefs-chunkserver.service 

server3: LABELS = A B S H  
server4: LABELS = A H 

[root@foundation mfs]# cd dir2
[root@foundation dir2]# mfsscadmin create AS,BS  class_ASBS  ## two copies: one on a node labelled both A and S, one on a node labelled both B and S
storage class make class_ASBS: ok
[root@foundation dir2]# mfssetsclass class_ASBS fstab  ## store the file according to this storage class
fstab: storage class: 'class_ASBS'
[root@foundation dir2]# mfsfileinfo fstab 
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
[root@foundation dir2]# mfsscadmin create BS,2A[S+H] class4 ## one copy on a B+S node, plus two copies on A nodes that also carry S or H ([S+H] means S or H)
storage class make class4: ok
[root@foundation dir2]# mfssetsclass class4 fstab 
fstab: storage class: 'class4'
[root@foundation dir2]# mfsfileinfo fstab 
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
		copy 3: 192.168.0.4:9422 (status:VALID)
[root@foundation dir2]# mfsscadmin create -C 2AS -K AS,BS -A AH,BH -d 30 class5  ## create storage class class5: -C = at creation, two copies on A+S nodes; -K = while kept, one copy on an A+S node and one on a B+S node; -A = archive locations, one copy on A+H and one on B+H; -d 30 = archive after 30 days
storage class make class5: ok
[root@foundation dir2]# mfssetsclass class5 fstab 
fstab: storage class: 'class5'
[root@foundation dir2]# mfsfileinfo fstab 
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)


[root@server3 mfs]# systemctl stop moosefs-chunkserver
[root@server4 mfs]# systemctl stop moosefs-chunkserver

[root@foundation dir2]# mfsfileinfo fstab  ## with server3 and server4 down, only the copy on server2 remains available
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)

[root@server3 mfs]# systemctl start moosefs-chunkserver
[root@server4 mfs]# systemctl start moosefs-chunkserver
[root@foundation dir2]# cp /etc/redhat-release .
[root@foundation dir2]# mfsgetgoal .
.: 2
[root@foundation dir2]# mfsgetgoal redhat-release 
redhat-release: 2
[root@foundation dir2]# dd if=/dev/zero of=redhat-release bs=1M count=100 ## overwrite with 100M of zeros, so the file now needs two chunks
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.20569 s, 510 MB/s
[root@foundation dir2]# mfsfileinfo fstab 
fstab:
	chunk 0: 0000000000000002_00000001 / (id:2 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.3:9422 (status:VALID)
[root@foundation dir2]# mfsfileinfo redhat-release 
redhat-release:
	chunk 0: 0000000000000006_00000001 / (id:6 ver:1)
		copy 1: 192.168.0.3:9422 (status:VALID)
		copy 2: 192.168.0.4:9422 (status:VALID)
	chunk 1: 0000000000000007_00000001 / (id:7 ver:1)
		copy 1: 192.168.0.2:9422 (status:VALID)
		copy 2: 192.168.0.4:9422 (status:VALID)


Source: https://blog.csdn.net/weixin_45777669/article/details/116095885