Exploring GCP’s Multi-Writer Persistent Disks: A Guide to Building a Shared Filesystem

Ariel Filotti
Zencore Engineering
Sep 16, 2023
An array of hard drives for a storage system [Photo by storagereview.com]

Introduction

Have you explored Google Cloud's multi-writer persistent disks? In this walkthrough, we'll put the feature to work by building a shared filesystem on top of one.

Throughout this post, we’ll delve into:

  • Creating the VPC network, firewall rules, instances, and multi-writer disk with Terraform
  • Installing and configuring the OCFS2 clustered filesystem
  • Mounting the shared storage on both instances
  • Understanding the high availability offered by OCFS2
  • Exploring potential use cases for this architecture on GCP

By the time you finish reading, you’ll be equipped with the knowledge to launch an OCFS2 cluster on Google Cloud.

Multi-Writer Persistent Disks on GCP

GCP recently introduced a new capability for SSD persistent disks: the ability to attach a single disk in multi-writer mode to two VMs simultaneously.

This grants both VMs the ability to read and write to the disk concurrently, setting up a shared block storage layer.

However, there are some key caveats with multi-writer disks:

  • Supported only in specific regions and zones: mainly us-east1, us-central1, us-west1, europe-west1, and australia-southeast1.
  • Instances should utilize N2 machine types.
  • The minimum disk size is 10GB.
  • The maximum number of attachments per disk is 2.
  • IOPS and throughput limits are reduced compared to single-writer disks: up to 100,000 read IOPS and 1,200 MB/s of read throughput per VM. That is still very good performance.
  • Persistent disk metrics aren’t available.
  • Resizing the disk post-creation isn’t possible.
  • A clustered file system such as OCFS2 is essential, since single-node file systems like ext4 can't safely coordinate concurrent access from two hosts.

In essence, while multi-writer disks offer a straightforward path to a shared block device between two VMs, it's vital to be mindful of the constraints on performance, scalability, and file system support.
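For context, outside of Terraform a multi-writer disk is created by passing the --multi-writer flag (a beta gcloud feature at the time of writing). A minimal sketch, with an illustrative name, size, and zone; the Terraform in the next section handles this for you:

gcloud beta compute disks create shared-data \
  --size 100GB \
  --type pd-ssd \
  --multi-writer \
  --zone us-central1-a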

Deploying the Infrastructure with Terraform

Begin by deploying the networking, compute, and storage assets for our OCFS2 cluster using Terraform:

  • Clone the repository:
git clone https://github.com/zencore-dev/gcp-pd-multiwriter.git
  • Navigate to the directory containing the Terraform configurations:
cd gcp-pd-multiwriter/terraform
  • Run the following commands:
terraform init
terraform apply -var "project_id=$GOOGLE_CLOUD_PROJECT"

After these steps, the VPC network, firewalls, two VMs, and a multi-writer persistent disk will be set up. Once Terraform wraps up, we’ll be prepared to install and configure OCFS2.
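Before moving on, it's worth confirming that the disk really is in multi-writer mode and attached to both VMs. A quick check with gcloud, assuming the disk created by the Terraform config is named data-disk (substitute the actual disk name and zone from the repo):

gcloud beta compute disks describe data-disk \
  --zone us-central1-a \
  --format "value(multiWriter,users)"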

Configuring OCFS2 Across the Nodes

Having the VMs and multi-writer disk ready, the next step is to set up, configure, and initialize the OCFS2 clustered file system.

Accessing Each VM via GCP’s Web SSH Console:

  • Log into your Google Cloud Console.
  • Head to the Compute Engine section.
  • Identify the VM you wish to access from the list of virtual machine instances.
  • Click the SSH button next to that VM. A new browser window or tab will open with a web-based SSH session into the VM.
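Alternatively, you can open the same session from your own terminal with gcloud; assuming the instances are named nas-1 and nas-2 (matching the cluster configuration later in this guide) and substituting your zone:

gcloud compute ssh nas-1 --zone us-central1-a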

On each VM instance, run the following commands:

Switch to the root user:

sudo -i

Install the OCFS2 tools and the kernel modules for the running kernel:

GCP_KERNEL_VERSION=$(uname -r)
apt install -y ocfs2-tools linux-modules-extra-${GCP_KERNEL_VERSION}
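Before formatting anything, you can confirm that the OCFS2 kernel module actually loads on this kernel; a quick sanity check:

modprobe ocfs2
lsmod | grep ocfs2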

Run the following command on one instance only to format the filesystem (the -N 2 flag reserves node slots for two cluster members):

mkfs.ocfs2 -b 4k -C 32K -L "ocfs2" -N 2 /dev/sdb
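As a sanity check, tunefs.ocfs2 can report back the label and node-slot count that mkfs just wrote; a sketch using its printf-style query mode (specifier names assumed from tunefs.ocfs2's query syntax):

tunefs.ocfs2 -Q "Label: %V, node slots: %N\n" /dev/sdb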

Now on both instances, run:

mkdir /data
dpkg-reconfigure ocfs2-tools

Accept the default configurations and enable OCFS2 at startup.

Next, modify /etc/ocfs2/cluster.conf on both VMs:

nano /etc/ocfs2/cluster.conf

Append the following configuration (note that each parameter line must be indented under its stanza, conventionally with a tab):

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = nas-1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.3
	number = 2
	name = nas-2
	cluster = ocfs2

cluster:
	node_count = 2
	name = ocfs2

Finally, register the cluster with O2CB and mount the filesystem:

o2cb register-cluster ocfs2
mount /dev/sdb /data
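The mount above won't persist across reboots. To make it permanent, you could add an fstab entry on both nodes; a sketch, using _netdev so mounting waits until networking and the O2CB stack are up (mounting by the LABEL set earlier would be more robust than relying on /dev/sdb):

echo "/dev/sdb /data ocfs2 _netdev,defaults 0 0" >> /etc/fstab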

Testing Concurrent Access

To ensure that both VMs can simultaneously read and write to the OCFS2 shared volume, conduct a basic test:

On nas-1, write to a file once a second:

while true; do echo "Hello from nas-1" >> /data/test.txt; sleep 1; done

On nas-2, observe the file:

tail -f /data/test.txt

You should see "Hello from nas-1" appear every second on nas-2, demonstrating that OCFS2 synchronizes and coordinates simultaneous access across the nodes.
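To exercise writes from both sides at once, run a second loop on nas-2 while the nas-1 loop is still going:

while true; do echo "Hello from nas-2" >> /data/test.txt; sleep 1; done

Running tail -f /data/test.txt on either node should now show both messages interleaving cleanly, since OCFS2's distributed lock manager serializes the appends to the shared file.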

Conclusion

In this guide, we explored deploying a dual node OCFS2 cluster on Google Cloud utilizing multi-writer persistent disks. We highlighted:

  • How Terraform can be instrumental in infrastructure deployment
  • Setting up a multi-writer disk for shared block storage
  • Installing and setting up OCFS2 across the instances
  • The concurrent access validation from both VMs to a file

Multi-writer persistent disks offer an efficient way to establish a shared block device between Compute Engine instances. When paired with a clustered filesystem like OCFS2, they form the groundwork for highly available storage systems on GCP.

While the two-node limit rules out some use cases, this architecture works well for workloads that need a shared disk with redundancy, such as a pair of web servers serving content from a common volume.

The complete code for this demonstration is available on GitHub. Feel free to deploy it on your end and test OCFS2 on GCP!
