{"id":13053,"date":"2022-06-03T23:45:04","date_gmt":"2022-06-03T20:45:04","guid":{"rendered":"https:\/\/kifarunix.com\/?p=13053"},"modified":"2024-03-09T15:45:54","modified_gmt":"2024-03-09T12:45:54","slug":"setup-replicated-glusterfs-volume-on-ubuntu","status":"publish","type":"post","link":"https:\/\/kifarunix.com\/setup-replicated-glusterfs-volume-on-ubuntu\/","title":{"rendered":"Setup Replicated GlusterFS Volume on Ubuntu"},"content":{"rendered":"\n
Follow through this tutorial to learn how to set up a replicated GlusterFS volume on Ubuntu. There are different types of Volume architectures<\/a> that you may want to consider, such as distributed, replicated, distributed replicated, dispersed, and distributed dispersed volumes.<\/p>\n\n\n\n A replicated GlusterFS volume provides reliability and data redundancy, and cushions against data loss in case one of the bricks gets damaged. This is because exact copies of the data are stored on all the bricks that make up the volume. You need at least two bricks to create a volume with 2 replicas, or a minimum of three bricks to create a volume with 3 replicas<\/em>.<\/p>\n\n\n\n It is recommended to use at least 3 bricks in order to avoid the issue with split brain<\/a>.<\/p>\n\n\n\n In our deployment architecture, we are using three storage servers, each with an extra storage partition attached apart from the root partition.<\/p>\n\n\n\n Before you can proceed;<\/p>\n\n\n\n Install GlusterFS Server on Ubuntu<\/a><\/p>\n\n\n\n Open GlusterFS Ports on Firewall<\/a><\/p>\n\n\n\n A storage pool is a cluster of storage nodes which provide bricks to the storage volume.<\/p>\n\n\n\n To create the GlusterFS trusted storage pool (TSP), run the command below from any one of the nodes, replacing SERVER<\/strong> with the hostname of the node being probed.<\/p>\n\n\n\n For example, to create the trusted storage pool containing Node 02 and Node 03 from Node 01;<\/p>\n\n\n\n If all is well, you should get a successful probe; To get the status of the TSP peers;<\/p>\n\n\n\n Sample output;<\/p>\n\n\n\n First of all, ensure the storage drive is mounted. 
In our setup, we are using a non-root partition, To create a replicated GlusterFS storage volume, use the For example, to create a replicated storage volume using the three nodes, replace the name of the volume, Sample command output;<\/p>\n\n\n\n Once you have created the volume, start it so that you can begin storing data in it.<\/p>\n\n\n\n The command, Sample command output;<\/p>\n\n\n\n Check the volume status;<\/p>\n\n\n\n Check the volume information;<\/p>\n\n\n\n In order for the clients to connect to the volumes created, you need to open the respective brick port on each node's firewall. The ports are shown using the Similarly, ensure that the nodes can communicate with each other on these ports<\/strong><\/p>\n\n\n\n For example, if you are using UFW, on Node 01, allow clients and the other Gluster nodes to connect to port 49152\/tcp by running the commands below;<\/p>\n\n\n\n On Node 02, allow clients and the other Gluster nodes to connect to port 50073\/tcp by running the commands below;<\/p>\n\n\n\n On Node 03, allow clients and the other Gluster nodes to connect to port 60961\/tcp by running the commands below;<\/p>\n\n\n\n As an example, we are using Ubuntu systems as GlusterFS clients.<\/p>\n\n\n\n Thus, install GlusterFS client<\/a> and proceed as follows to mount the replicated GlusterFS volume.<\/p>\n\n\n\n Ensure the client can resolve the Gluster nodes' hostnames.<\/p>\n\n\n\n Create the mount point.<\/p>\n\n\n\n Mount the replicated volume. If using domain names, ensure they are resolvable.<\/p>\n\n\n\n Run the df command to check the mounted filesystems.<\/p>\n\n\n\n From other clients, you can mount the volume via another node;<\/p>\n\n\n\n To auto-mount the volume on system boot, you need to add the line below to To test the replication, create test files on the client. Since the volume is replicated, the files will be stored on all the bricks. 
See the example below;<\/p>\n\n\n\n If you check on node01, node02 and node03, they should all contain the same files.<\/p>\n\n\n\n On node02,<\/p>\n\n\n\n On node03,<\/p>\n\n\n\n That concludes our guide on setting up a replicated GlusterFS volume on Ubuntu.<\/p>\n\n\n\n Easily Install and Configure Samba File Server on Ubuntu 22.04<\/a><\/p>\n\n\n\n Install Couchbase Server on Ubuntu 22.04\/Ubuntu 20.04<\/a><\/p>\n\n\n\n\n
Setup Replicated GlusterFS Volume on Ubuntu<\/h3>\n\n\n\n
Create GlusterFS Trusted Storage Pool<\/h3>\n\n\n\n
gluster peer probe SERVER<\/em><\/code><\/pre>\n\n\n\n
gluster peer probe gfs02<\/code><\/pre>\n\n\n\n
gluster peer probe gfs03<\/code><\/pre>\n\n\n\n
peer probe: success<\/code><\/strong>.<\/p>\n\n\n\n
gluster peer status<\/code><\/pre>\n\n\n\n
Number of Peers: 2\n\nHostname: gfs02\nUuid: b81803a8-893a-499e-9a87-6bac00a62822\nState: Peer in Cluster (Connected)\n\nHostname: gfs03\nUuid: 88cf40a0-d458-4080-8c7a-c3cddbce86c0\nState: Peer in Cluster (Connected)\n<\/code><\/pre>\n\n\n\n
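Beyond peer status, the whole pool, including the local node, can be listed from any member. A small sketch; the awk filter assumes the default tabular output of `gluster pool list`:

```shell
# List every node in the trusted storage pool, including the local node
gluster pool list

# Count how many pool members report as Connected (skips the header row)
gluster pool list | awk 'NR > 1 && /Connected/ { n++ } END { print n+0 }'
```

On a healthy three-node pool like this one, the count should print 3.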
Create Replicated GlusterFS Storage Volume<\/h3>\n\n\n\n
\/dev\/sdb1<\/code><\/strong> mounted under
\/gfsvolume<\/code><\/strong>.<\/p>\n\n\n\n
df -hTP \/gfsvolume\/<\/code><\/pre>\n\n\n\n
Filesystem Type Size Used Avail Use% Mounted on\n\/dev\/sdb1 ext4 3.9G 24K 3.7G 1% \/gfsvolume<\/code><\/pre>\n\n\n\n
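If the drive is not yet mounted, it can be prepared along these lines. This is a sketch that assumes `/dev/sdb1` is the spare partition and ext4 is the desired filesystem, as in this setup; note that `mkfs.ext4` wipes the partition, so only run it on an empty disk:

```shell
# Create an ext4 filesystem on the spare partition (destroys any data on it)
mkfs.ext4 /dev/sdb1

# Create the mount point and mount the partition
mkdir -p /gfsvolume
mount /dev/sdb1 /gfsvolume

# Persist the mount across reboots
echo '/dev/sdb1 /gfsvolume ext4 defaults 0 0' >> /etc/fstab
```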
gluster volume create<\/code><\/strong> command, whose CLI syntax is shown below;<\/p>\n\n\n\n
gluster volume create <NEW-VOLNAME> [[replica <COUNT> \\\n[arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] \\\n[disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] \\\n<NEW-BRICK> <TA-BRICK>... [force]<\/code><\/pre>\n\n\n\n
replicated_volume<\/code><\/strong> as well as the node hostnames accordingly;<\/p>\n\n\n\n
gluster volume create replicated_volume<\/strong> replica 3 transport tcp gfs01:\/gfsvolume\/gv0 \\\ngfs02:\/gfsvolume\/gv0 \\\ngfs03:\/gfsvolume\/gv0<\/code><\/pre>\n\n\n\n
volume create: replicated_volume: success: please start the volume to access data<\/code><\/pre>\n\n\n\n
Start GlusterFS Volume<\/h3>\n\n\n\n
gluster volume start VOLUME_NAME<\/code>, can be used to start the volume.<\/p>\n\n\n\n
gluster volume start replicated_volume<\/code><\/pre>\n\n\n\n
volume start: replicated_volume: success<\/code><\/pre>\n\n\n\n
gluster volume status<\/code><\/pre>\n\n\n\n
Status of volume: replicated_volume\nGluster process TCP Port RDMA Port Online Pid\n------------------------------------------------------------------------------\nBrick gfs01:\/gfsvolume\/gv0 49152 0 Y 2050 \nBrick gfs02:\/gfsvolume\/gv0 50073 0 Y 16260\nBrick gfs03:\/gfsvolume\/gv0 60961 0 Y 1421 \nSelf-heal Daemon on localhost N\/A N\/A Y 2071 \nSelf-heal Daemon on gfs03 N\/A N\/A Y 1438 \nSelf-heal Daemon on gfs02 N\/A N\/A Y 16277\n \nTask Status of Volume replicated_volume\n------------------------------------------------------------------------------\nThere are no active volume tasks\n<\/code><\/pre>\n\n\n\n
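The per-brick TCP ports needed for the firewall rules later on can also be pulled out of this output programmatically; a sketch, assuming the default `gluster volume status` column layout shown above:

```shell
# Print each brick and the TCP port it listens on
gluster volume status replicated_volume | awk '/^Brick/ { print $2, $3 }'
```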
gluster volume info<\/code><\/pre>\n\n\n\n
\nVolume Name: replicated_volume\nType: Replicate\nVolume ID: 4f522843-13df-4042-b73a-ff6722dc9891\nStatus: Started\nSnapshot Count: 0\nNumber of Bricks: 1 x 3 = 3\nTransport-type: tcp\nBricks:\nBrick1: gfs01:\/gfsvolume\/gv0\nBrick2: gfs02:\/gfsvolume\/gv0\nBrick3: gfs03:\/gfsvolume\/gv0\nOptions Reconfigured:\ntransport.address-family: inet\nstorage.fips-mode-rchecksum: on\nnfs.disable: on\nperformance.client-io-threads: off\n<\/code><\/pre>\n\n\n\n
Open GlusterFS Volumes Ports on Firewall<\/h3>\n\n\n\n
gluster volume status<\/code><\/strong> command above.<\/p>\n\n\n\n
ufw allow from <Client-IP-or-Network> to any port 49152 proto tcp comment \"GlusterFS Client Access\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node02 IP> to any port 49152 proto tcp comment \"GlusterFS Node02\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node03 IP> to any port 49152 proto tcp comment \"GlusterFS Node03\"<\/code><\/pre>\n\n\n\n
ufw allow from <Client-IP-or-Network> to any port 50073 proto tcp comment \"GlusterFS Client Access\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node01-IP> to any port 50073 proto tcp comment \"GlusterFS Node01\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node03 IP> to any port 50073 proto tcp comment \"GlusterFS Node03\"<\/code><\/pre>\n\n\n\n
ufw allow from <Client-IP-or-Network> to any port 60961 proto tcp comment \"GlusterFS Client Access\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node01 IP> to any port 60961 proto tcp comment \"GlusterFS Node01\"<\/code><\/pre>\n\n\n\n
ufw allow from <Node02 IP> to any port 60961 proto tcp comment \"GlusterFS Node02\"<\/code><\/pre>\n\n\n\n
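The brick ports above were assigned dynamically, which is why every node needs a different rule. As an alternative, glusterd can be pinned to a predictable port range so that a single firewall rule covers all bricks. This is a sketch of the relevant options, added inside the existing `volume management` block of `/etc/glusterfs/glusterd.vol` on each node (the `base-port` and `max-port` options are available in recent GlusterFS releases; the range shown is only an example, and glusterd must be restarted on each node after the change):

```
option base-port 49152
option max-port 49251
```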
Mount Replicated GlusterFS Volume on Clients<\/h3>\n\n\n\n
mkdir \/mnt\/gfsvol<\/code><\/pre>\n\n\n\n
mount -t glusterfs gfs01:\/replicated_volume \/mnt\/gfsvol\/<\/code><\/pre>\n\n\n\n
df -hTP \/mnt\/gfsvol\/<\/code><\/pre>\n\n\n\n
Filesystem Type Size Used Avail Use% Mounted on\ngfs01:\/replicated_volume fuse.glusterfs 3.9G 41M 3.7G 2% \/mnt\/gfsvol<\/code><\/pre>\n\n\n\n
mount -t glusterfs gfs02:\/replicated_volume \/mnt\/gfsvol\/<\/code><\/pre>\n\n\n\n
\/etc\/fstab<\/code>.<\/p>\n\n\n\n
gfs01:\/replicated_volume \/mnt\/gfsvol glusterfs defaults,_netdev 0 0<\/code><\/pre>\n\n\n\n
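The entry above pins the client to gfs01 for fetching the volume file at mount time, so the mount fails at boot if gfs01 happens to be down, even though the data also lives on the other replicas. Recent GlusterFS FUSE clients support a `backup-volfile-servers` mount option listing fallback nodes; a sketch of such an fstab entry, assuming your client version supports the option:

```
gfs01:/replicated_volume /mnt/gfsvol glusterfs defaults,_netdev,backup-volfile-servers=gfs02:gfs03 0 0
```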
mkdir \/mnt\/gfsvol\/Test-dir\ntouch \/mnt\/gfsvol\/Test-dir\/{test-file,test-file-two}<\/code><\/pre>\n\n\n\n
ls \/gfsvolume\/gv0\/Test-dir\/\ntest-file test-file-two<\/code><\/pre>\n\n\n\n
ls \/gfsvolume\/gv0\/Test-dir\/\ntest-file test-file-two<\/code><\/pre>\n\n\n\n
ls \/gfsvolume\/gv0\/Test-dir\/\ntest-file test-file-two<\/code><\/pre>\n\n\n\n
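Replica health can also be confirmed from any of the nodes: if a brick was offline while clients wrote data, the self-heal daemon copies the missing files back once the brick returns. A quick sketch; the awk summary assumes the usual `heal info` output format with one `Number of entries:` line per brick:

```shell
# Per-brick list of files pending heal; all counts should be 0 on a healthy volume
gluster volume heal replicated_volume info

# Sum the pending-heal entries across all bricks
gluster volume heal replicated_volume info \
  | awk '/^Number of entries:/ { total += $NF } END { print total+0 }'
```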
Other tutorials<\/h3>\n\n\n\n