Finding Gluster Volumes in Kubernetes

Raja Venkataraman
Star Systems Labs
Sep 20, 2019
Gluster Volume Info in K8S

As with most companies adopting Kubernetes, stateless applications are a breeze: after a few days or weeks of experimenting with Deployments, Services and Ingress, they mostly just work. The real problems start when you put stateful data into the cluster and need to find storage solutions that play well with Kubernetes.

We at Star Systems had the same problem and have been investigating a variety of storage solutions, ranging from hostPath to NFS to Longhorn to our latest find, Gluster. This post is one of several we will publish to share our findings on how to dig up Gluster information for traditional sysadmins/devs who are used to physically “seeing” their volumes rather than having them hidden behind many layers of storage abstraction.

What is Gluster?

To quote the official reference,

GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming

Simply put, Gluster stores data on bricks that are pre-configured in a storage pool. For people used to traditional file servers, a brick is like a directory on a server that can host volumes under which the data is stored.

Why did we use Gluster?

Gluster is one of the few Persistent Volume types supported by Kubernetes with ReadWriteMany capability. This means you can mount the same volume with read/write access across multiple nodes of your cluster. We needed this capability because our pods can run on any of our nodes and they all need read/write access.
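For illustration only, here is a minimal sketch of requesting a ReadWriteMany volume with the official Kubernetes Python client; the PVC name and namespace are hypothetical, and gluster-heketi is simply the StorageClass that shows up later in this post.

from kubernetes import client, config

# Sketch only: build a PVC that asks for ReadWriteMany storage from the
# gluster-heketi StorageClass (the class used later in this post).
config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),  # hypothetical PVC name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],                # any node can mount it read/write
        storage_class_name="gluster-heketi",
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

# hypothetical namespace; the post itself uses a namespace called v4
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)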

How do you know where the data is stored from k8s?

One of the biggest problems for us, coming as sys/dev operations folks used to traditional NFS-type storage, was figuring out where in the Gluster cluster our data is stored. We needed this for a variety of reasons, such as:

  • Just visibility into where my data is so I can check if everything is okay
  • Backup and Restore

Our Kubernetes test setup is the following:

  • Two Gluster nodes (e.g. 192.168.129.29 and 192.168.132.24)
  • A three-node Kubernetes cluster with one master
  • Heketi running on the Gluster node as well. Heketi is a REST-based volume management framework for Gluster; Kubernetes uses it to create volumes when we request them

Assume we have a PVC set up for one of the pods, like this:

emerald:helmized raja$ k get pvc -n v4
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
datadir-solrcloud-0   Bound    pvc-2dfceca2-fd44-4622-b855-6d7f2cbb6252   1Gi        RWO            gluster-heketi   8d

Now we have a PVC, in the case above a SolrCloud volume, sitting on my Gluster storage. I have no idea where it’s stored or which of my nodes has it (not that I strictly need to know, but as traditional sysadmins, we like to see our data).

If we do a describe on the above, there isn’t much info either

emerald:helmized raja$ k describe pvc datadir-solrcloud-0 -n v4
Name:          datadir-solrcloud-0
Namespace:     v4
StorageClass:  gluster-heketi
Status:        Bound
Volume:        pvc-2dfceca2-fd44-4622-b855-6d7f2cbb6252
Labels:        app=services-solrcloud
               chart=services-0.1.0
               heritage=Tiller
               release=ugly-maltese
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:        <none>

If we look up the Persistent Volume associated with this PVC, we get this:

emerald:helmized raja$ k describe pv pvc-2dfceca2-fd44-4622-b855-6d7f2cbb6252 -n v4
Name:            pvc-2dfceca2-fd44-4622-b855-6d7f2cbb6252
Labels:          <none>
<snipped>
Claim:           v4/datadir-solrcloud-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-dynamic-2dfceca2-fd44-4622-b855-6d7f2cbb6252
    Path:           vol_99e0dce9e4f81043d26c1b87cfc57dd3
    ReadOnly:       false

Finally, we have some information on what the Gluster volume is. The Path field above is the Gluster volume; it could be living anywhere in my Gluster storage and contains the actual data.

So far, so good. How do we find the actual files now? This is where Heketi helps us. Heketi exposes an API that we can use to query the volume directly and get information on where the physical volume is stored. To use Heketi, you need to set three environment variables:

  • Heketi Server (HEKETI_CLI_SERVER)
  • Heketi User (HEKETI_CLI_USER)
  • Heketi Secret (HEKETI_CLI_KEY)

Once the above variables are set, the Heketi URL can be called like this:

curl -XGET -H "Authorization: Bearer <token>" $HEKETI_CLI_SERVER/volumes/<volume_id>

The volume_id in the above example is the Path without the “vol_” prefix, so in our case it is 99e0dce9e4f81043d26c1b87cfc57dd3.
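For example, a trivial way to derive it in Python from the Path value shown in the PV above:

pv_path = "vol_99e0dce9e4f81043d26c1b87cfc57dd3"  # the Path field from `k describe pv`
volume_id = pv_path[len("vol_"):]                 # -> "99e0dce9e4f81043d26c1b87cfc57dd3"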

To get the Bearer token, you can use a script like the one below, which uses JWT to produce a token containing a hash of the URL being fed to Heketi. Please remember to set the HEKETI_CLI_USER and HEKETI_CLI_KEY environment variables before running it:

import datetime
import hashlib
import os

import jwt  # PyJWT

uri = '/volumes/<volume_id>'  # replace <volume_id> with the id from above (no "vol_" prefix)
method = 'GET'
secret = os.environ['HEKETI_CLI_KEY']

claims = {}
claims['iss'] = os.environ['HEKETI_CLI_USER']  # issuer: the Heketi user
claims['iat'] = datetime.datetime.utcnow()     # issued at: now
claims['exp'] = datetime.datetime.utcnow() + datetime.timedelta(minutes=10)  # expires in 10 minutes
claims['qsh'] = hashlib.sha256((method + '&' + uri).encode('utf-8')).hexdigest()  # hash of "GET&<uri>"

print(jwt.encode(claims, secret, algorithm='HS256'))  # PyJWT >= 2 prints a str, older versions bytes

The output of the above script is a token that will expire in 10 minutes (if you need a longer-lived token, change claims['exp'] above, as it sets the expiration to 10 minutes from now).

Once you get the token, you can call the URL to get the Volume info:

curl -XGET -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJhZG1pbiIsImlhdCI6MTU2ODk1NjMyMCwiZXhwIjoxNTY4OTU2OTI2LCJxc2giOiJkMzFkNmMxNmE1OGMyNGMwMWE0YjY5ZmVhYWI3ZWJlMWRkMWJhZjA1NGE0NDRhYTFhOWZjYTM2MTk1MTE4Nzc1In0.o-5DGem5NC37dcbonw2zJLL7duL5fhzc06lb4tVbGiE" $GLUSTER_SERVER/volumes/991ebaf20816f3395864acea60a138d9

and the output of the above is:

{
  "size": 2,
  "name": "vol_991ebaf20816f3395864acea60a138d9",
  "durability": {
    "type": "none",
    "replicate": {},
    "disperse": {}
  },
  "gid": 2008,
  "glustervolumeoptions": [
    "",
    "ctime off",
    ""
  ],
  "snapshot": {
    "enable": true,
    "factor": 1
  },
  "id": "991ebaf20816f3395864acea60a138d9",
  "cluster": "ae66678f5f57b6144417457dbcca9392",
  "mount": {
    "glusterfs": {
      "hosts": [
        "192.168.132.24",
        "192.168.129.29"
      ],
      "device": "192.168.132.24:vol_991ebaf20816f3395864acea60a138d9",
      "options": {
        "backup-volfile-servers": "192.168.129.29"
      }
    }
  },
  "blockinfo": {},
  "bricks": [
    {
      "id": "2c31fc3e45609588e99a00783ee523c7",
      "path": "/var/lib/heketi/mounts/vg_10ae3e4785f8e3a63f9fbd954f0841e5/brick_2c31fc3e45609588e99a00783ee523c7/brick",
      "device": "10ae3e4785f8e3a63f9fbd954f0841e5",
      "node": "94152a796eb5bfae27dbff9730431e3f",
      "volume": "991ebaf20816f3395864acea60a138d9",
      "size": 2097152
    }
  ]
}

There is a lot of information above, but the part that interests us most is the following:

  • bricks: This contains information about where the volume physically lives. In our case this is a non-replicated setup, so there is only one brick, located on node 94152a796eb5bfae27dbff9730431e3f at /var/lib/heketi/mounts/vg_10ae3e4785f8e3a63f9fbd954f0841e5/brick_2c31fc3e45609588e99a00783ee523c7/brick

If we log into that node and go to that location, we will see the data for the PVC.
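To tie the whole flow together, here is a rough end-to-end sketch. It assumes the same HEKETI_CLI_SERVER, HEKETI_CLI_USER and HEKETI_CLI_KEY environment variables as the token script above, and uses the requests library (which the original script does not) to query Heketi for a volume and print the node id and brick path of each brick:

import datetime
import hashlib
import os

import jwt       # PyJWT
import requests  # assumption: not used in the original script

server = os.environ['HEKETI_CLI_SERVER']
volume_id = '99e0dce9e4f81043d26c1b87cfc57dd3'  # the PV Path without the "vol_" prefix
uri = '/volumes/' + volume_id

# Build the short-lived token, exactly as in the script above.
claims = {
    'iss': os.environ['HEKETI_CLI_USER'],
    'iat': datetime.datetime.utcnow(),
    'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=10),
    'qsh': hashlib.sha256(('GET&' + uri).encode('utf-8')).hexdigest(),
}
token = jwt.encode(claims, os.environ['HEKETI_CLI_KEY'], algorithm='HS256')
if isinstance(token, bytes):  # older PyJWT versions return bytes
    token = token.decode('utf-8')

# Call the Heketi volume endpoint and pull out the bricks.
resp = requests.get(server + uri, headers={'Authorization': 'Bearer ' + token})
resp.raise_for_status()
for brick in resp.json().get('bricks', []):
    print(brick['node'], brick['path'])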

That’s it. This should help with identifying the Gluster volume information associated with a Kubernetes PVC.
