Shruti Naik
Apr 23 · 6 min read

We recently migrated a FinTech conglomerate from a VM-based on-premises environment to GCP, modernizing their application and infrastructure along the way and deploying on GKE. It was an exciting journey, and I thought I’d share one of the challenges we faced during the migration and how we solved it.

The problem is that of shared storage. Some of the services we deployed depended on NFS mounts before the migration, and we had to make sure they would continue to work with shared storage on GKE. We looked at GKE’s persistent volumes. Although the documentation lists ReadWriteMany as one of the access modes, a persistent volume backed by a Compute Engine persistent disk unfortunately cannot be attached to more than one node in read-write mode.

From GKE’s documentation

We needed a highly available shared storage platform, so we turned to GlusterFS and Heketi, a RESTful volume management framework for GlusterFS. Heketi provides a convenient way to dynamically provision GlusterFS volumes; it acts as the glue between GlusterFS and Kubernetes. Without it, you would have to manually create GlusterFS volumes and map them to Kubernetes persistent volumes. The rest of this post explains how to configure the whole setup.

Some of the Terminology used

  • Trusted Storage Pool: a group of servers that trust each other and form a storage cluster.
  • Node: a storage server that participates in the trusted storage pool.
  • Brick: an LVM-based XFS (512-byte inode) file system mounted on a folder or directory.
  • Volume: a file system that is presented or shared to clients over the network. A volume can be mounted using the GlusterFS, NFS, or SMB methods.

What this tutorial covers

  1. Configure GlusterFS on 3 virtual machines, each with an additional disk attached.
  2. Set up Heketi on one of the Gluster nodes.
  3. Create the Heketi topology for the Gluster nodes.
  4. Deploy the GlusterFS client as a DaemonSet in the GKE cluster.
  5. Create a StorageClass for the GlusterFS provisioner.
  6. Create a PVC for dynamic provisioning in the Gluster storage.
  7. Create a Pod/Deployment with a volume that references the PVC.

GlusterFS Configuration for CentOS

Add the following lines to the /etc/hosts file on all three servers:

192.168.43.10  server1.example.com server1
192.168.43.20 server2.example.com server2
192.168.43.30 server3.example.com server3

Install GlusterFS Server Packages On All Servers.

GlusterFS packages are not included in the default CentOS repositories. Run the following commands one after another on all 3 servers.

yum install wget
yum install centos-release-gluster -y
yum install epel-release -y
yum install glusterfs-server -y

Start and enable the GlusterFS Service on all the servers.

systemctl start glusterd
systemctl enable glusterd

Allow the required ports in the firewall so that the servers can communicate and form the GlusterFS storage cluster (trusted pool).

firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
firewall-cmd --zone=public --add-port=24009/tcp --permanent
firewall-cmd --zone=public --add-service=nfs --add-service=samba --add-service=samba-client --permanent
firewall-cmd --zone=public --add-port=111/tcp --add-port=139/tcp --add-port=445/tcp --add-port=965/tcp --add-port=2049/tcp --add-port=38465-38469/tcp --add-port=631/tcp --add-port=111/udp --add-port=963/udp --add-port=49152-49251/tcp --permanent
firewall-cmd --reload

Don’t forget to change the default CentOS configuration (permit root SSH login and disable SELinux):

root@server1:# vi /etc/ssh/sshd_config
PermitRootLogin yes
root@server1:# vi /etc/selinux/config
SELINUX=disabled
root@server1:# setenforce 0

Distributed Volume Setup

We will create a trusted storage pool consisting of server 1 and server 2, create bricks on it, and then create a distributed volume.

Run the command below from the server 1 console to form a trusted storage pool with server 2.

root@server1:#gluster peer probe server2.example.com
peer probe: success.

We can check the peer status using the command below:

root@server1:#gluster peer status
Number of Peers: 1
Hostname: server2.example.com
State: Peer in Cluster (Connected)

Repeat the same for server 3 and check the peer status again.

Heketi Setup

Install Heketi on one of the GlusterFS nodes.

root@server1:# wget https://github.com/heketi/heketi/releases/download/v8.0.0/heketi-v8.0.0.linux.amd64.tar.gz
root@server1:# tar xzvf heketi-v8.0.0.linux.amd64.tar.gz
root@server1:# cd heketi
root@server1:# cp heketi heketi-cli /usr/local/bin/
root@server1:# heketi -v

Create the heketi user and the directory structures for the configuration:

root@server1:# groupadd -r -g 515 heketi
root@server1:# useradd -r -c "Heketi user" -d /var/lib/heketi -s /bin/false -m -u 515 -g heketi heketi
root@server1:# mkdir -p /var/lib/heketi && chown -R heketi:heketi /var/lib/heketi
root@server1:# mkdir -p /var/log/heketi && chown -R heketi:heketi /var/log/heketi
root@server1:# mkdir -p /etc/heketi

Heketi supports several executors; here I will be using the SSH executor. We need to set up password-less SSH login to the Gluster nodes so Heketi can access them. Generate an RSA key pair:

root@server1:# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
root@server1:# chown heketi:heketi /etc/heketi/heketi_key*

Add the Heketi public key to root’s authorized_keys on all 3 nodes and set the correct permissions so Heketi can log in:

root@server1:# cd /root
root@server1:# mkdir .ssh
root@server1:# cd .ssh/
root@server1:# vi authorized_keys
(paste the contents of /etc/heketi/heketi_key.pub into this file)
root@server1:# chmod 600 /root/.ssh/authorized_keys
root@server1:# chmod 700 /root/.ssh
root@server1:# service sshd restart

Create the Heketi config file /etc/heketi/heketi.json:
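
A minimal sketch of the file, assuming the SSH executor and the admin key that is used later as HEKETI_CLI_KEY (the user key, port, and log level are placeholders to adapt to your environment):

{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "PASSWORD" },
    "user": { "key": "USER_PASSWORD" }
  },
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db",
    "loglevel": "info"
  }
}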

Create the following Heketi service file at /etc/systemd/system/heketi.service:

[Unit]
Description=Heketi Server
Requires=network-online.target
After=network-online.target

[Service]
Type=simple
User=heketi
Group=heketi
PermissionsStartOnly=true
PIDFile=/run/heketi/heketi.pid
Restart=on-failure
RestartSec=10
WorkingDirectory=/var/lib/heketi
RuntimeDirectory=heketi
RuntimeDirectoryMode=0755
ExecStartPre=[ -f "/run/heketi/heketi.pid" ] && /bin/rm -f /run/heketi/heketi.pid
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
ExecReload=/bin/kill -s HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5

[Install]
WantedBy=multi-user.target

Start the service and check with journalctl:

root@server1:# systemctl daemon-reload
root@server1:# systemctl start heketi.service
root@server1:# journalctl -xe -u heketi
-- Logs begin at Tue 2019-04-09 06:06:52 UTC, end at Tue 2019-04-09 07:20:00 UTC. --
Apr 09 07:19:30 server1 systemd[1]: [/etc/systemd/system/heketi.service:17] Executable path is not absolute, ignoring: [ -f "/run/heketi/heketi.pid" ] && /bin/rm -f /run
Apr 09 07:19:30 server1 systemd[1]: Started Heketi Server.
-- Subject: Unit heketi.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit heketi.service has finished starting up.
--
-- The start-up result is done.
Apr 09 07:19:31 server1 heketi[5009]: Heketi v8.0.0
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 Loaded ssh executor
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 Adv: Max bricks per volume set to 33
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 Adv: Max brick size 1024 GB
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 Adv: Min brick size 1 GB
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 GlusterFS Application Loaded
Apr 09 07:19:31 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:31 Started Node Health Cache Monitor
Apr 09 07:19:31 server1 heketi[5009]: Authorization loaded
Apr 09 07:19:31 server1 heketi[5009]: Listening on port 8080
Apr 09 07:19:41 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:41 Starting Node Health Status refresh
Apr 09 07:19:41 server1 heketi[5009]: [heketi] INFO 2019/04/09 07:19:41 Cleaned 0 nodes from health cache
Apr 09 07:19:53 server1 systemd[1]: [/etc/systemd/system/heketi.service:17] Executable path is not absolute, ignoring: [ -f "/run/heketi/heketi.pid" ] && /bin/rm -f /run

Now enable the service so it starts on boot, and check its status:

[root@server1 ~]# systemctl enable heketi
Created symlink from /etc/systemd/system/multi-user.target.wants/heketi.service to /etc/systemd/system/heketi.service.
[root@server1 ~]# systemctl status heketi
● heketi.service - Heketi Server
Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-09 07:19:30 UTC; 29s ago
Main PID: 5009 (heketi)
CGroup: /system.slice/heketi.service
└─5009 /usr/local/bin/heketi --config=/etc/heketi/heketi.json

Create the topology config file /etc/heketi/topology.json:
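
A sketch of the topology for the three servers above, assuming /dev/sdb as the raw device on each node and a single zone:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["server1.example.com"],
              "storage": ["192.168.43.10"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["server2.example.com"],
              "storage": ["192.168.43.20"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["server3.example.com"],
              "storage": ["192.168.43.30"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}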

where /dev/sdb is a 10GB raw block device attached to each Gluster node. Then we load the topology:

[root@server1 ~]# export HEKETI_CLI_SERVER=http://server1:8080
[root@server1 ~]# export HEKETI_CLI_USER=admin
[root@server1 ~]# export HEKETI_CLI_KEY=PASSWORD
root@ip-10-99-3-216:/opt/heketi# heketi-cli topology load --json=/opt/heketi/topology.json
Found node glustera.tftest.encompasshost.internal on cluster 37cc609c4ff862bfa69017747ea4aba4
Adding device /dev/xvdf ... OK
Found node glusterb.tftest.encompasshost.internal on cluster 37cc609c4ff862bfa69017747ea4aba4
Adding device /dev/xvdf ... OK
Found node glusterc.tftest.encompasshost.internal on cluster 37cc609c4ff862bfa69017747ea4aba4
Adding device /dev/xvdf ... OK
[root@server1 ~]# heketi-cli cluster list
Clusters:
Id:d1694da0ea9710c9ab44829db617094d [file][block]
[root@server1 ~]# heketi-cli node list
Id:2bcc7da8d6d556062cd0f72901f2ee5e Cluster:d1694da0ea9710c9ab44829db617094d
Id:95ec22225d398a9e3fb2fd304e2ab370 Cluster:d1694da0ea9710c9ab44829db617094d
Id:ff3aeb28dcb2a6c61be7672b40bbea62 Cluster:d1694da0ea9710c9ab44829db617094d

Kubernetes Dynamic Provisioner

The GlusterFS client needs to be installed on all Kubernetes nodes, otherwise mounting the GlusterFS volumes will fail. We install it with a DaemonSet (glusterfs-client.yaml), for example:
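
A rough sketch, assuming Ubuntu node images on GKE (the image name and the chroot-based install are illustrative, not the exact manifest we used):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: glusterfs-client
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: glusterfs-client
  template:
    metadata:
      labels:
        app: glusterfs-client
    spec:
      containers:
      - name: installer
        image: ubuntu:18.04          # assumption: Ubuntu node images
        securityContext:
          privileged: true           # needed to install packages on the host
        command:
        - /bin/bash
        - -c
        - |
          # Install the GlusterFS client on the underlying node via chroot,
          # then keep the pod alive so the DaemonSet stays healthy.
          chroot /host /bin/bash -c "apt-get update && apt-get install -y glusterfs-client"
          sleep infinity
        volumeMounts:
        - name: host-root
          mountPath: /host
      volumes:
      - name: host-root
        hostPath:
          path: /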

Create a Kubernetes Secret for the Heketi admin password in the following gluster-secret.yaml file:
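
A minimal sketch (the heketi-secret name matches the kubectl output below; the key is the base64 encoding of the Heketi admin password):

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  # base64 of the Heketi admin password, e.g. echo -n 'PASSWORD' | base64
  key: UEFTU1dPUkQ=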

Kubernetes has a built-in volume plugin for GlusterFS. We need to create a new GlusterFS storage class that will use our Heketi service. Create a YAML file gluster-storageclass.yaml like this:
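
A sketch of the StorageClass, using the gluster-heketi-external name from the output below (the resturl host and volumetype are assumptions; point resturl at your Heketi endpoint reachable from the GKE cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-external
provisioner: kubernetes.io/glusterfs
parameters:
  # assumption: Heketi API running on server1 and reachable from GKE
  resturl: "http://server1.example.com:8080"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  # assumption: 3-way replicated volumes across the Gluster nodes
  volumetype: "replicate:3"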

Now it’s time to create the resources:

$ kubectl create -f gluster-secret.yaml
secret/heketi-secret created
$ kubectl create -f gluster-storageclass.yaml
storageclass.storage.k8s.io/gluster-heketi-external created

To test it, we create a PVC (PersistentVolumeClaim) that should dynamically provision a 1GB volume for us in the Gluster storage. Create glusterfs-pvc.yaml like this:
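
A minimal sketch, using the gluster-dyn-pvc name from the create output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-dyn-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: gluster-heketi-external
  resources:
    requests:
      storage: 1Gi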

$ kubectl create --save-config -f glusterfs-pvc.yaml
persistentvolumeclaim/gluster-dyn-pvc created

If we check now:

$ kubectl get pv,pvc -n default
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-3e5e6e30-5aab-11e9-bf0f-4201ac140044 1Gi RWX Delete Bound default/gluster-pvc gluster-heketi-external 6s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/gluster-pvc Bound pvc-3e5e6e30-5aab-11e9-bf0f-4201ac140044 1Gi RWX gluster-heketi-external 12s

To use the volume, we reference the PVC in the YAML of any Pod or Deployment, for example:
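
A minimal sketch of test.yaml that mounts the claim (the pod name matches the output below; the nginx image and mount path are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
spec:
  containers:
  - name: web
    image: nginx                           # assumption: any image that writes to the mount works
    volumeMounts:
    - name: gluster-vol
      mountPath: /usr/share/nginx/html     # assumption: illustrative mount path
  volumes:
  - name: gluster-vol
    persistentVolumeClaim:
      claimName: gluster-dyn-pvc           # must match the PVC created above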

$ kubectl apply -f test.yaml 
pod/gluster-pod1 created

Hope you find this useful! Happy containerizing! :)

Thanks to Suganya G — The Linux Geek.😃
