CentOS 7 GlusterFS Notes
By Dag J, on December 3rd, 2017
My first time using CentOS. I've been wanting to try it for a while since it's a well-respected enterprise OS, so I thought it was a good fit for my first Gluster experience: see if I can get it up and running for scalable storage, and hopefully get a taste of a new GNU/Linux system along the way. My very first GNU/Linux experience was Red Hat 6.1, so this should be fun; package management has come a LONG way since then. I want to let RAID take care of the fault tolerance and aim for distribution and space.
First, some initial personal-choice CentOS stuff:
# echo 'set bell-style none' >> ~/.inputrc
# yum check-update (or just '-y update')
# yum install nano NetworkManager-tui NetworkManager-wifi pciutils usbutils -y
# yum remove firewalld -y
# nano /etc/systemd/logind.conf
^ HandleLidSwitch=ignore
# systemctl restart systemd-logind

NOTE: Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster servers. Apart from these ports, you need to open one port for each brick starting from port 49152 (instead of 24009 onwards as with previous releases). The brick port assignment scheme is now compliant with IANA guidelines. For example: if you have five bricks, you need ports 49152 to 49156 open.
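If you'd rather keep firewalld than remove it like I did above, something like this should open the ports from the note (a sketch assuming five bricks; adjust the 49152-49156 range to your brick count):

# firewall-cmd --permanent --add-port=24007-24008/tcp
# firewall-cmd --permanent --add-port=24007-24008/udp
# firewall-cmd --permanent --add-port=49152-49156/tcp
# firewall-cmd --reload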
Install and start GlusterFS:
# yum install centos-release-gluster -y
# yum install glusterfs-server -y
# yum install samba samba-client samba-common samba-vfs-glusterfs selinux-policy-targeted -y
# systemctl enable glusterd
# systemctl start glusterd
# systemctl status glusterd
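A quick sanity check after the install, worth doing on every node since mixing Gluster versions between peers tends to cause trouble:

# gluster --version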
Adding and removing peers and getting status:
# gluster peer probe [IP]
# gluster peer detach [IP]
# gluster peer status
# gluster volume status

NOTE: If using hostnames instead of IPs, I'll start by putting the hostnames of all nodes in /etc/hosts on every node.
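For the /etc/hosts approach, something like this on every node (gluster1/gluster2 and the addresses are made-up examples):

10.0.0.181 gluster1
10.0.0.182 gluster2

^ Then probe by name instead: gluster peer probe gluster2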
Preparing bricks for volume (with persistent block device naming/UUID):
# mkfs.xfs -i size=4096 /dev/sdb1
# mkdir -p /bricks/brick1
# blkid (find UUID)
# nano /etc/fstab
^ UUID="[UUID]" /bricks/brick1 xfs defaults 0 2
# mount -a && mount

NOTE: Bus-based naming like /dev/sdb1 may change when using multiple HBAs or SATA controllers; a UUID is persistent.
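If the disk is new and has no partition table yet, a minimal parted sketch to create /dev/sdb1 (assuming /dev/sdb is dedicated to the brick; triple-check the device name before running this):

# parted /dev/sdb mklabel gpt
# parted -a optimal /dev/sdb mkpart primary xfs 0% 100%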
Setting up, and deletion of GlusterFS volumes:
# mkdir /bricks/brick1/gv0 (on all nodes)
# gluster volume create gv0 replica 2 transport tcp [IP1]:/bricks/brick1/gv0 [IP2]:/bricks/brick1/gv0
# gluster volume start gv0
# gluster volume info
# gluster volume stop gv0
# gluster volume delete gv0

Available GlusterFS volume types:
Distributed (for maximum space): 1G + 1G = 2G
Replicated (for high availability): 1G + 1G = 1G
Striped (for large files): 1G + 1G = 2G
Distributed and Replicated: (1G+1G) + (1G+1G) = 2G
Distributed and Striped: (1G+1G) + (1G+1G) = 4G
Distributed, Replicated and Striped: [(1G+1G)+(1G+1G)] + [(1G+1G)+(1G+1G)] = 4G

^ "replica 2" means copy to 2 different bricks, e.g. one per node. Just remove that to create a distributed volume instead.
NOTE: You only need to manage the volume from a single node.
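For example, the pure distributed (maximum space) version of the same volume just drops the replica count:

# gluster volume create gv0 transport tcp [IP1]:/bricks/brick1/gv0 [IP2]:/bricks/brick1/gv0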
Sharing via Samba to my Windows 10 clients:
NOTE: Once a new GlusterFS volume is created/started, it is added to the Samba configuration file automatically as a gluster-<Volume_name> file share.

# nano /etc/samba/smb.conf
[gluster-gv0]
comment = GlusterFS Pool
vfs objects = glusterfs
glusterfs:volume = gv0
glusterfs:logfile = /var/log/samba/glusterfs-gv0.%M.log
glusterfs:loglevel = 7
read only = no
path = /
guest ok = no
kernel share modes = no
client min protocol = SMB2
client max protocol = SMB3

# gluster volume set gv0 stat-prefetch off
# gluster volume set gv0 server.allow-insecure on
# gluster volume set gv0 storage.batch-fsync-delay-usec 0
# systemctl restart glusterd.service
# smbpasswd -a <new_samba_user>
# setsebool -P samba_share_fusefs on
# setsebool -P samba_load_libgfapi on
# systemctl restart smb.service
# systemctl restart nmb.service

Remember to set suitable user rights on shared folders.
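Before heading to the Windows box, the share can be sanity-checked from the server itself with smbclient (installed by the samba-client package above):

# smbclient -L localhost -U <new_samba_user>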
Mounting the volume locally to set rights etc:
# mount -t glusterfs [IP1]:/gv0 /whatever
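Once mounted, rights are set like on any other filesystem. A sketch with a made-up subfolder and placeholder user (match the user to your smbpasswd user above):

# mkdir /whatever/share
# chown -R <new_samba_user>:<new_samba_user> /whatever/share
# chmod -R 770 /whatever/share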
Check SMB3 dialect being used on Windows clients:
PS C:\WINDOWS\system32> Get-SmbConnection -ServerName 10.0.0.181

ServerName ShareName   UserName          Credential          Dialect NumOpens
---------- ---------   --------          ----------          ------- --------
10.0.0.181 gluster-gv0 DJ-GAMER-PC\thron MicrosoftAccount\dj 3.1.1   1
Replacing a brick in a distributed volume:
# gluster volume status
# gluster volume add-brick gv0 [IP_new]:/bricks/brick1/gv0
# gluster volume remove-brick gv0 [IP_old]:/bricks/brick1/gv0 start
# gluster volume remove-brick gv0 [IP_old]:/bricks/brick1/gv0 status (wait for completion)
# gluster volume remove-brick gv0 [IP_old]:/bricks/brick1/gv0 commit
# gluster volume status

If the node is offline and you need to force it:
# gluster volume remove-brick gv0 [IP_old]:/bricks/brick1/gv0 force
^ You can now detach the peer it belongs to if needed.
NOTE: Above I'm assuming one brick per server. Adapt host/brick paths as needed.
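Worth noting: after add-brick on a distributed volume, existing files don't migrate to the new brick on their own; a rebalance takes care of that:

# gluster volume rebalance gv0 start
# gluster volume rebalance gv0 status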
Replacing a brick in a replicated volume is a bit more involved, and since I won't be doing it much I'll just refer to the docs below instead.