How to configure GlusterFS with a volume replicated over 2 nodes
The server setup:

To install the required packages run on both servers:

$ sudo apt-get install glusterfs-server

If you want a more up-to-date version of GlusterFS, you can add the following repository for buster:

$ echo deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/8/LATEST/Debian/buster/amd64/apt buster main | sudo tee /etc/apt/sources.list.d/gluster.list
Note that `sudo echo ... > file` would not work here: the redirection is performed by your unprivileged shell, not by sudo, which is why `tee` is used instead.
For other distributions and releases, check the GlusterFS download site.

Now, from one of the servers, probe the other to join it to the trusted pool:

server1$ sudo gluster peer probe <server2_IP_addr>
You should see the following output: peer probe: success

You can check the status from any of the hosts with:

$ sudo gluster peer status

Now we need to create the volume where the data will reside. To do this, run the following command:

$ sudo gluster volume create SOMENAME replica 2 transport tcp <server1_IP_addr>:/mnt/gfs <server2_IP_addr>:/mnt/gfs
Here /mnt/gfs is the brick directory where the data will be stored on each node, and SOMENAME is the name of the volume you are creating.
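Since the create command has several moving parts, it can help to assemble it from variables and review it before running anything. A minimal sketch (the volume name, brick path, and the 192.0.2.x addresses are placeholders you would substitute with your own values):

```shell
#!/bin/sh
# Build the "gluster volume create" command from parameters so it can be
# reviewed before it is actually run against the cluster.
build_create_cmd() {
    volname=$1; brick=$2; ip1=$3; ip2=$4
    echo "gluster volume create $volname replica 2 transport tcp $ip1:$brick $ip2:$brick"
}

# Print the assembled command for review; run it with sudo once it looks right.
build_create_cmd SOMENAME /mnt/gfs 192.0.2.11 192.0.2.12
```

This prints the full command instead of executing it, so a typo in a brick path is caught by eye rather than by the cluster.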

If this has been successful, you should see:

Creation of volume SOMENAME has been successful. Please start the volume to access data.

As the message indicates, we now need to start the volume:

$ sudo gluster volume start SOMENAME

As a final test, to make sure the volume is available, run gluster volume info.

$ sudo gluster volume info
Your GlusterFS volume is ready and will maintain replication across two nodes.
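If you script the setup, you can check the info output for the Started state instead of eyeballing it. A small sketch that parses `gluster volume info`-style text from stdin (the sample lines piped in below are only illustrative of the output format):

```shell
#!/bin/sh
# Read "gluster volume info" output on stdin and report whether the volume
# is started.  Real usage: sudo gluster volume info SOMENAME | volume_started
volume_started() {
    grep -q '^Status: Started' && echo "started" || echo "not started"
}

# Illustrative sample of the info output format:
printf 'Volume Name: SOMENAME\nType: Replicate\nStatus: Started\n' | volume_started
# prints: started
```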

If you want to restrict access to the volume, you can use the following command:

$ sudo gluster volume set SOMENAME auth.allow <client1_ip>,<client2_ip>

If you need to remove the restriction at any point, you can type:

$ sudo gluster volume set SOMENAME auth.allow "*"

The client setup:

Install the needed packages with:

$ sudo apt-get install glusterfs-client

To mount the volume at boot, edit the fstab file:

$ sudo nano /etc/fstab
And append the following to it:
server1:/somename /some/mount/point glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
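Appending to /etc/fstab by hand is easy to duplicate if you run through the steps twice, so a small idempotent helper can guard against that. A sketch, using the same example line as above ("server1", "somename", and the mount point are placeholders):

```shell
#!/bin/sh
# Idempotently append the GlusterFS mount line to an fstab-style file:
# the line is only added if an identical one is not already present.
GLUSTER_LINE='server1:/somename /some/mount/point glusterfs defaults,_netdev,backupvolfile-server=server2 0 0'

add_fstab_line() {
    fstab=$1
    grep -qxF "$GLUSTER_LINE" "$fstab" || echo "$GLUSTER_LINE" >> "$fstab"
}

# For the real thing (as root): add_fstab_line /etc/fstab
```

Running the helper twice against the same file leaves only one copy of the line in place.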

Alternatively, you can mount the volume using a volume config file:

Create a volume config file for your GlusterFS client at /etc/glusterfs/datastore.vol with the following content:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume somename
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume somename
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Finally, edit /etc/fstab to add this config file and its mount point:

/etc/glusterfs/datastore.vol /some/mount/point glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
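Once mounted, a quick check can confirm the mount point is actually backed by a filesystem, which is handy from cron or a monitoring script. A sketch using mountpoint(1) from util-linux (/some/mount/point is the placeholder path from above):

```shell
#!/bin/sh
# Report whether a directory is an active mount point.
check_mount() {
    if mountpoint -q "$1"; then
        echo "mounted"
    else
        echo "not mounted"
    fi
}

check_mount /some/mount/point
```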

That's it. Enjoy ;)

Inspiration:
http://edoceo.com/howto/glusterfs
http://blog.bobbyallen.me/2013/01/26/creating-a-highly-available-file-server-cluster-for-a-web-farm-using-ubuntu-12-04-lts/
http://how-to.linuxcareer.com/configuration-of-high-availability-storage-server-using-glusterfs
http://www.howtoforge.com/high-availability-storage-with-glusterfs-on-debian-lenny-automatic-file-replication-across-two-storage-servers-p2
http://www.howtoforge.com/high-availability-storage-with-glusterfs-on-ubuntu-9.10-automatic-file-replication-mirror-across-two-storage-servers
http://blogs.reliablepenguin.com/2013/09/05/glusterfs-cluster-with-ubuntu-on-rackspace-cloud
http://www.blah-blah.ch/it/how-to-s/glusterfs/

Thou shalt not steal!

If you want to use this information on your own website, please remember: copying it wholesale is always stealing, and you should be ashamed of yourself! Have at least the decency to write your own text and comments, run the commands on your own servers, and provide your own output, not mine!

Or at least link back to this website.
