How do I create an OpenEBS storage pool on Google Persistent Disk

karthik satchitanand
Google Cloud - Community
Apr 13, 2018

This article belongs to the #HowDoI series on Kubernetes and OpenEBS.

The OpenEBS volume replicas, which are the actual backend storage units of the OpenEBS iSCSI target, currently store data in a hostPath on the Kubernetes nodes. By default, a folder with the volume (PV) name is created under a parent directory (/var/openebs) on the root filesystem and bind-mounted into the container when the replica pod is instantiated. This parent directory (created if not already present), which is essentially a persistent path holding the individual volumes, is referred to as a Storage Pool.

Note: The notion of the storage pool described above is specific to the current default storage engine, i.e., Jiva. Future releases may add storage engines that can consume block devices instead of a host directory to create storage pools.

For various reasons, it may be desirable to create this storage pool on an external disk (GPD, EBS, SAN) mounted at a specific location on the Kubernetes nodes. This is facilitated by the OpenEBS storage pool policy, which defines the storage pool as a Kubernetes custom resource with the persistent path as an attribute.

This blog will focus on the steps to be followed to create the OpenEBS PV on Google Persistent Disks (GPD).

PRE-REQUISITES
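
The steps below assume a GKE cluster with the OpenEBS operator already installed (so the StoragePool custom resource definition exists) and a GPD attached to each node. As a rough sketch of the latter (the disk name, instance name and zone are placeholders, not taken from the original setup), a GPD can be created and attached with gcloud:

# Create a 10 GiB GPD and attach it to a node (placeholder names/zone)
gcloud compute disks create openebs-disk-1 --size=10GB --zone=us-central1-a
gcloud compute instances attach-disk gke-oebs-staging-default-pool-7cc7e313-0xs4 \
    --disk openebs-disk-1 --zone us-central1-a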

STEP-1: Format the GPDs & mount them at the desired path

On each node, perform the following actions:

  • Switch to the root user: sudo su -
  • Identify the attached GPD: fdisk -l
root@gke-oebs-staging-default-pool-7cc7e313-0xs4:~# fdisk -l
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x635eaac1
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 209715166 209713119 100G 83 Linux
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  • Format the disk with, say, an ext4 filesystem: mkfs.ext4 /dev/sd<>
root@gke-oebs-staging-default-pool-7cc7e313-0xs4:~# mkfs.ext4 /dev/sdb 
mke2fs 1.42.13 (17-May-2015)
/dev/sdb contains a ext4 file system
last mounted on /openebs on Fri Apr 13 05:03:42 2018
Proceed anyway? (y,n) y
Discarding device blocks: done
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 87d36681-d5f3-4169-b7fc-1f2f95bd527e
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
  • Mount the disk at the desired mount point: mount -o sync /dev/sd<> /mnt/openebs
root@gke-oebs-staging-default-pool-7cc7e313-0xs4:~# mount -o sync /dev/sdb /mnt/openebs/
root@gke-oebs-staging-default-pool-7cc7e313-0xs4:~# mount | grep openebs
/dev/sdb on /mnt/openebs type ext4 (rw,relatime,sync,data=ordered)

STEP-2: Create a storage pool custom resource

  • Construct a storage pool resource specification as shown below & apply it (note that the custom resource definition for the storage pool is already applied as part of the operator install).
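
A sketch of what such a specification might look like, assuming the openebs.io/v1alpha1 StoragePool schema of the OpenEBS releases current at the time (the pool name is a placeholder; the path matches the mount point from STEP-1):

# storage-pool.yaml (hedged sketch, not the article's original manifest)
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: sp-mntdir
  type: hostdir
spec:
  path: "/mnt/openebs"

Apply it with kubectl apply -f storage-pool.yaml.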

STEP-3: Reference the storage pool in a custom storage class
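
A hedged sketch of such a storage class, assuming the Jiva iSCSI provisioner and parameter keys used by OpenEBS at the time (openebs.io/provisioner-iscsi, openebs.io/storage-pool); the class name, pool name and replica count are placeholders:

# storage-class.yaml (hedged sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-gpd
provisioner: openebs.io/provisioner-iscsi
parameters:
  openebs.io/storage-pool: "sp-mntdir"
  openebs.io/jiva-replica-count: "1"

Each replica consumes space on the referenced pool, so size the GPDs accordingly.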

STEP-4: Use the custom storage class in an application’s PVC spec
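
A minimal PVC sketch that consumes the class above (the claim name and size are placeholders):

# pvc.yaml (hedged sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-gpd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G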

STEP-5: Confirm the volume is created on the storage pool

  • Once the OpenEBS PV is created (kubectl get pv, kubectl get pods), list the contents of the custom persistent path specified in the storage pool custom resource. It should contain a folder with the PV name, containing the sparse files (disk image files), as in the sketch below.
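
For example (hedged; the path matches the StoragePool sketched in STEP-2, and <pv-name> is whatever kubectl reports):

kubectl get pv                               # note the generated PV name
kubectl get pods -o wide | grep <pv-name>    # find the node running the replica pod
ls /mnt/openebs/<pv-name>                    # run on that node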

GOTCHAS !!

Issue: GPDs are detached in the event of a) cluster resize (downscale/upscale), b) upgrades & c) VM halts

  • No options to add “additional disks” during cluster creation
  • Instance templates are “immutable”; disks have to be added to instances separately

Workaround: Perform a manual re-attach in the above situations, as sketched below (enlarged root disks are an option, but are generally not recommended).
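
A hedged sketch of the manual re-attach, reusing the placeholder names from the pre-requisites (the device name may differ after re-attach; confirm with fdisk -l before remounting on the node):

gcloud compute instances attach-disk <node-name> --disk openebs-disk-1 --zone us-central1-a
mount -o sync /dev/sdb /mnt/openebs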

Originally published at medium.com on April 13, 2018.
