Creating a File Server with 3 Debian 12 Servers Using GlusterFS

Introduction

In this guide, we will walk you through the process of building a highly available and distributed file server using GlusterFS on three Debian 12 servers. This setup ensures data redundancy, scalability, and high availability, making it ideal for enterprise environments that require reliable file storage solutions.

We will configure three nodes with IP addresses 192.168.10.61, 192.168.10.62, and 192.168.10.63, which will work together as a unified system to provide seamless access to shared files. GlusterFS will handle data replication and distribution across the servers, ensuring uninterrupted file access even during server failures. By the end of this guide, you will have a robust and scalable file server ready for deployment in your infrastructure.

1.- Configure the File Servers

Step 1:
Update and Configure Hosts
On all servers (cluster01, cluster02, cluster03) and the client, edit /etc/hosts:

Add the following lines:

192.168.10.61 cluster01
192.168.10.62 cluster02
192.168.10.63 cluster03

Step 2:
Update and install basic packages:

apt update && apt upgrade -y
apt install -y ufw

Step 3:
Allow the necessary traffic on the firewall:

ufw allow 24007/tcp           # glusterd (management)
ufw allow 24008/tcp           # glusterd (management/RDMA)
ufw allow 20048/tcp           # NFS mountd
ufw allow 20048/udp
ufw allow 2049/tcp            # NFS
ufw allow 111/tcp             # rpcbind/portmapper
ufw allow 111/udp
ufw allow 49152:49251/tcp     # GlusterFS brick ports (one per brick)
ufw allow ssh
ufw allow from 192.168.10.0/23    # trust the local subnet
ufw enable
ufw reload
ufw status

Step 4:
Install GlusterFS on All Servers
On cluster01, cluster02, and cluster03:

apt install glusterfs-server -y

Enable and start the GlusterFS service:

systemctl enable --now glusterd

Verify the service status:

systemctl status glusterd

Step 5:
Set Up the GlusterFS Cluster
On the cluster01 server, add the other two servers to the cluster:

gluster peer probe cluster02
gluster peer probe cluster03

Verify the cluster status:

gluster peer status

Expected output:

Number of Peers: 2

Hostname: cluster02
Uuid: 780feb30-f1c4-4ab5-aced-6739eb7acc36
State: Peer in Cluster (Connected)

Hostname: cluster03
Uuid: 7edbdf45-0c46-438a-855f-8ae5f9735156
State: Peer in Cluster (Connected)
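
You can also confirm membership from any node with gluster pool list, which, unlike gluster peer status, includes the local node itself:

```shell
# Runs on any cluster node; lists all peers plus localhost
gluster pool list
```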

Step 6:
Prepare the storage for GlusterFS:
On each server, create a directory that will serve as the GlusterFS brick:

mkdir -p /data/vpbx
chown -R gluster /data/vpbx
chmod -R 755 /data/vpbx

Step 7:
On cluster01:
Create a replicated volume named vol1:

gluster volume create vol1 replica 3 transport tcp cluster01:/data/vpbx cluster02:/data/vpbx cluster03:/data/vpbx force

This sets up a replicated volume: every file is written to all three bricks, so changes made on one node are propagated to the other two. The force flag is needed here because the bricks sit on the root filesystem; for production use, GlusterFS recommends placing bricks on a dedicated partition.
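Before and after starting the volume, it is worth checking its state; both commands can be run from any cluster node:

```shell
gluster volume info vol1      # Type should read "Replicate", Number of Bricks 1 x 3 = 3
gluster volume status vol1    # once the volume is started: all bricks should be online
```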

Start the volume:

gluster volume start vol1

2.- Configuring the GlusterFS Client

Step 1:
Mount the Volume on the Client
On client:
a.- Install the GlusterFS client:

apt install glusterfs-client -y

b.- Create the Mount Point:
On the client, create a directory where the GlusterFS volume will be mounted:

mkdir -p /mnt/vpbx
chmod -R 755 /mnt/vpbx

c.- Ensure Proper Name Resolution

nano /etc/hosts
192.168.10.61 cluster01
192.168.10.62 cluster02
192.168.10.63 cluster03

d.- Mount the GlusterFS Volume:
Mount the volume using all servers for load balancing:

mount -t glusterfs cluster01:/vol1,cluster02:/vol1,cluster03:/vol1 /mnt/vpbx

e.- To ensure the volume is mounted automatically at boot, add the following line to /etc/fstab:

nano /etc/fstab
cluster01:/vol1,cluster02:/vol1,cluster03:/vol1 /mnt/vpbx glusterfs defaults,_netdev 0 0

With this entry in /etc/fstab, the system mounts the GlusterFS volume automatically at startup, so it is available every time the server is rebooted. The _netdev option is essential: it ensures the volume is not mounted before the network is ready.
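As an alternative to the comma-separated server list, the native client also accepts the backup-volfile-servers mount option: cluster01 is contacted first to fetch the volume layout, and the backup servers are tried if it is unreachable at mount time. An equivalent setup with the same host names would be:

```shell
# Manual mount with explicit fallback servers
mount -t glusterfs -o backup-volfile-servers=cluster02:cluster03 cluster01:/vol1 /mnt/vpbx

# Equivalent /etc/fstab entry:
# cluster01:/vol1 /mnt/vpbx glusterfs defaults,_netdev,backup-volfile-servers=cluster02:cluster03 0 0
```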

Step 2:

Copy Files to the File Server
Now, we need to copy and create symbolic links to the following directories:
• Asterisk configuration files: /etc/asterisk/
• Recordings, voicemail, etc.: /var/spool/asterisk/
• AGI files: /var/lib/asterisk/agi-bin
• Music on hold files: /var/lib/asterisk/moh
• Caller ID configurations: /var/lib/asterisk/priv-callerintros
• Audio files: /var/lib/asterisk/sounds
• VitalPBX-specific files: /var/lib/vitalpbx/

Copy Files to the Shared Directory

rsync -aR /etc/asterisk /mnt/vpbx/
rsync -aR /var/spool/asterisk /mnt/vpbx/
rsync -aR /var/lib/asterisk/agi-bin /mnt/vpbx/
rsync -aR /var/lib/asterisk/moh /mnt/vpbx/
rsync -aR /var/lib/asterisk/priv-callerintros /mnt/vpbx/
rsync -aR /var/lib/asterisk/sounds /mnt/vpbx/
rsync -aR /var/lib/vitalpbx /mnt/vpbx/

Move the original directories aside as backups:

mv /etc/asterisk /etc/asterisk.backup
mv /var/spool/asterisk /var/spool/asterisk.backup
mv /var/lib/asterisk/agi-bin /var/lib/asterisk/agi-bin.backup
mv /var/lib/asterisk/moh /var/lib/asterisk/moh.backup
mv /var/lib/asterisk/priv-callerintros /var/lib/asterisk/priv-callerintros.backup
mv /var/lib/asterisk/sounds /var/lib/asterisk/sounds.backup
mv /var/lib/vitalpbx /var/lib/vitalpbx.backup

Create symbolic links to the shared folder:

ln -s /mnt/vpbx/etc/asterisk /etc/asterisk
ln -s /mnt/vpbx/var/spool/asterisk /var/spool/asterisk
ln -s /mnt/vpbx/var/lib/asterisk/agi-bin /var/lib/asterisk/agi-bin
ln -s /mnt/vpbx/var/lib/asterisk/moh /var/lib/asterisk/moh
ln -s /mnt/vpbx/var/lib/asterisk/priv-callerintros /var/lib/asterisk/priv-callerintros
ln -s /mnt/vpbx/var/lib/asterisk/sounds /var/lib/asterisk/sounds
ln -s /mnt/vpbx/var/lib/vitalpbx /var/lib/vitalpbx

Restart Asterisk
After copying the files and setting up the symbolic links, restart the Asterisk service to apply the changes.

systemctl restart asterisk

These steps ensure that the shared folder is correctly populated and accessible across the servers.
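For reference, the copy, backup, and link steps above repeat one pattern per directory, which can be condensed into a small helper function (a sketch; the paths shown are the same ones used in this guide):

```shell
#!/bin/sh
# relocate SRC MNT: copy SRC onto the shared volume mounted at MNT,
# keep the original as SRC.backup, and link SRC to the shared copy.
relocate() {
  src="$1"                                 # e.g. /etc/asterisk
  mnt="$2"                                 # e.g. /mnt/vpbx
  mkdir -p "$mnt$(dirname "$src")"         # recreate the parent path on the share
  cp -a "$src" "$mnt$(dirname "$src")/"    # copy the data (rsync -aR does the same)
  mv "$src" "$src.backup"                  # keep a backup of the original
  ln -s "$mnt$src" "$src"                  # point the original path at the share
}

# Usage: relocate /etc/asterisk /mnt/vpbx
```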

Conclusion

This setup provides a GlusterFS cluster with replication and high availability. If one server fails, the client automatically connects to the remaining servers.

Setting up a file server using GlusterFS on three Debian 12 servers provides a robust, scalable, and highly available solution for managing critical data in enterprise environments. Throughout this guide, we configured a distributed system that ensures data redundancy and service continuity, even in the event of a node failure.

With GlusterFS's replication across all three nodes and the client's built-in failover, users can consistently access their files without interruption. Moreover, integrating this storage with services like Asterisk and VitalPBX makes the system an efficient and reliable solution for growing businesses.

This approach not only enhances availability and performance but also simplifies the management of shared resources. You now have a file server ready to handle the demands of a modern enterprise environment, with the flexibility to adapt to future challenges. It’s time to make the most of this infrastructure!
