
The GlusterFS File System


一、Related Concepts

二、Choosing a Version and Volume Type

1、Choosing a version

2、Default volume type

The volume used throughout this walkthrough is of type Replicate (note that `gluster volume create` without a `replica` count actually creates a plain Distribute volume):

gluster volume info  gv0

Volume Name: gv0
Type: Replicate
Volume ID: 12fc6557-7b14-4f14-b316-a35b0a2740e2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/bricks/brick1/gv0
Brick2: node2:/bricks/brick1/gv0
Brick3: node3:/bricks/brick1/gv0
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

  

三、Deployment

This example uses glusterfs 9.5:

gluster --version
glusterfs 9.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

1、Deployment requirements

Have at least two nodes (this walkthrough uses three: node1, node2, and node3)

Note: GlusterFS stores its dynamically generated configuration files at /var/lib/glusterd, if at any point in time GlusterFS is unable to write to these files it will at minimum cause erratic behaviour for your system, or worse take your system offline completely. It is advisable to create separate partitions for directories such as /var/log to ensure this does not happen.
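A quick sanity check for the note above (a sketch; it assumes the package is already installed so the directories exist) is to confirm which filesystems back the state and log directories, and how much space they have:

```shell
# Show the filesystem and free space behind glusterd's state directory
# and the log directory; ideally /var/log sits on its own partition.
df -h /var/lib/glusterd /var/log
```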

2、Format and mount the bricks

On all nodes, format the brick device and create the mount point:

mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /bricks/brick1

vi /etc/fstab
/dev/sdb1 /bricks/brick1 xfs defaults 1 2

mount -a && mount
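After `mount -a`, the brick mount can be verified on each node (paths as in the fstab entry above):

```shell
# Confirm the XFS brick filesystem is mounted where fstab says it is.
grep /bricks/brick1 /proc/mounts
df -h /bricks/brick1
```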

3、Install the software

On all servers:

Set up the repository:

cat /etc/yum.repos.d/glusterfs.repo
[gs]
name=glusterfs
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-9/
failovermethod=priority
enabled=0
gpgcheck=0

Install (the repo above has `enabled=0`, so it must be enabled explicitly):

yum install -y --enablerepo=gs glusterfs-server

Enable and start the service:

# systemctl enable glusterd
ln -s '/usr/lib/systemd/system/glusterd.service' '/etc/systemd/system/multi-user.target.wants/glusterd.service'

# systemctl start glusterd
# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-04-12 05:39:37 EDT; 3s ago
     Docs: man:glusterd(8)
  Process: 2602 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2603 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2603 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Apr 12 05:39:37 node1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Apr 12 05:39:37 node1 systemd[1]: Started GlusterFS, a clustered file-system server.

4、Configure the trusted pool

From "node1"

# gluster peer probe node2
peer probe: success

# gluster peer probe node3
peer probe: success

Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
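Once both probes succeed, membership can be double-checked from any pool member (hostnames node2/node3 are the ones used above):

```shell
# Verify the trusted pool from node1; node2 and node3 should show
# State: "Peer in Cluster (Connected)".
gluster peer status
# pool list also includes the local node itself.
gluster pool list
```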

5、Set up a GlusterFS volume

On node1, node2, and node3:

mkdir /bricks/brick1/gv0

From any single server:

# gluster volume create gv0 replica 3 node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0 node3:/bricks/brick1/gv0
volume create: gv0: success: please start the volume to access data

# gluster volume start gv0
volume start: gv0: success

Confirm that the volume shows "Started":

gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 12fc6557-7b14-4f14-b316-a35b0a2740e2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/bricks/brick1/gv0
Brick2: node2:/bricks/brick1/gv0
Brick3: node3:/bricks/brick1/gv0
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Note: If the volume is not started, clues as to what went wrong will be in log files under /var/log/glusterfs on one or both of the servers - usually in etc-glusterfs-glusterd.vol.log
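On recent releases such as 9.x, the glusterd log usually lives at /var/log/glusterfs/glusterd.log rather than the older etc-glusterfs-glusterd.vol.log name (an assumption to verify on your install); a quick way to inspect it:

```shell
# Show the most recent glusterd log entries when a volume fails to start.
tail -n 50 /var/log/glusterfs/glusterd.log
```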

6、Testing the GlusterFS volume

# mount -t glusterfs node1:/gv0 /mnt
# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done

First, check the mount point:

# ls -lA /mnt | wc -l

You should see the 100 files (note that `ls -l` also prints a `total` line, so `wc -l` reports 101). Next, check the brick directory on each server:

# ls -lA /bricks/brick1/gv0

You should see 100 files on each server, because the volume replicates every file to all three bricks. In a distribute-only volume (not detailed here), each of the three bricks would instead hold roughly a third of the files.
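The client-side and brick-side checks can be put side by side in a small sketch (the paths are the ones used in this walkthrough):

```shell
# count_entries DIR: number of entries (including dotfiles) in DIR.
count_entries() { ls -A "$1" | wc -l; }

# With replica 3 every brick holds a full copy, so the client view and
# each brick should report (roughly) the same count; the brick also
# contains the internal .glusterfs directory.
echo "client sees: $(count_entries /mnt) entries"
echo "brick holds: $(count_entries /bricks/brick1/gv0) entries"
```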

 

SpecialInterestGroup/Storage/gluster-Quickstart - CentOS Wiki: the CentOS wiki deployment guide (on CentOS 7, the highest deployable GlusterFS version is 9)

四、Common Commands

1、Viewing help

gluster --help
 peer help                - display help for peer commands
 volume help              - display help for volume commands
 volume bitrot help       - display help for volume bitrot commands
 volume quota help        - display help for volume quota commands
 snapshot help            - display help for snapshot commands
 global help              - list global commands

2、Viewing the volume commands

gluster volume help

gluster volume commands
========================

volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
volume barrier <VOLNAME> {enable|disable} - Barrier/unbarrier file operations on a volume
volume clear-locks <VOLNAME> <path> kind {blocked|granted|all}{inode [range]|entry [basename]|posix [range]} - Clear locks held on path
volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force] - create a new volume of specified type with mentioned bricks
volume delete <VOLNAME> - delete volume specified by <VOLNAME>
volume get <VOLNAME|all> <key|all> - Get the value of the all options or given option for volume <VOLNAME> or all option. gluster volume get all all is to get all global options
volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [summary | split-brain] |split-brain {bigger-file <FILE> | latest-mtime <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]} |granular-entry-heal {enable | disable}] - self-heal commands on volume specified by <VOLNAME>
volume help - display help for volume commands
volume info [all|<VOLNAME>] - list information of all volumes
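A few of the commands above in a typical volume lifecycle, using the gv0 volume from this walkthrough (stop and delete are destructive, so they are only a sketch):

```shell
# Query a single option, then the volume's brick and process status.
gluster volume get gv0 nfs.disable
gluster volume status gv0
# A volume must be stopped before it can be deleted.
gluster volume stop gv0
gluster volume delete gv0
```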


GFS Distributed File System (lxmy's blog, CSDN)

GlusterFS 4.1 Version Selection and Deployment (linux ops, kancloud.cn)

Install | Gluster (official site)


Two package-hosting subdomains of centos.org:

Index of /7.9.2009/storage/x86_64 (vault.centos.org): carries GlusterFS versions up to 8

Index of /centos/7/storage/x86_64/gluster-9 (buildlogs.centos.org)

Source: https://www.cnblogs.com/dgp-zjz/p/16135161.html