Deploy Stateful Microservices With Cellery

Madusha Gunasekara
Published in wso2-cellery
5 min read · Nov 2, 2019

In monolithic applications, data typically lives only as long as the application itself: after restarting a failed or crashed application, you may lose any data stored by or for the application in its filesystem. Therefore, the ability to maintain the application's state is considered an essential feature in modern deployments. Unlike in monoliths, the microservices architecture allows allocating the necessary resources only to the services that require them, so the state of each service can be maintained by using persistent volumes.

Cellery is a complete solution that enforces the cell-based architecture and can be used to deploy microservices on Kubernetes in seconds. In this article, let's have a look at how we can deploy stateful microservices using Cellery. If you are new to Cellery, it is recommended to get a basic understanding of cell-based architecture and Cellery before going forward.

Stateful Components

In Cellery, each microservice lives inside a cell or a composite and is treated as a ‘component’. A component that needs to manage its state has to have a persistent volume (PV), and such a component is identified as a stateful component.

Users are free to use any kind of persistent volume that suits their requirements (local volume, AWSElasticBlockStore, AzureDisk, etc.). We can easily create a stateful component by providing a persistent volume claim (PVC) to the intended component. Note that persistent volume creation and management are out of Cellery's scope and should be handled by the cluster admins.
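As an illustration, a stateful component is declared roughly as follows in a cell file: the claim goes under the component's volumes field. The component name, image, mount path, and size below are made up, and the volume record type and fields (K8sNonSharedPersistence, request, etc.) may differ between Cellery versions, so treat this as a sketch rather than a copy-paste snippet.

    import celleryio/cellery;

    // Sketch of a stateful component: a made-up "hr" service that mounts a
    // persistent volume claim named "hr-claim" (all names are placeholders).
    cellery:Component hrComponent = {
        name: "hr",
        src: {
            image: "docker.io/example/hr-service"
        },
        volumes: {
            hrClaim: {
                path: "/data",      // where the volume is mounted inside the container
                readOnly: false,
                volume: <cellery:K8sNonSharedPersistence>{
                    name: "hr-claim",   // the claim; the PV behind it is managed by the cluster admin
                    request: "1G"
                }
            }
        }
    };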

Non-Shared Volumes/ Persistency

Autoscale-enabled components in Cellery can scale up to multiple replicas when the load grows. Each replica of such a component can have a separate PVC, giving it an isolated storage location in the PV, as shown below.

Think of a scenario with three replicas, where one is a master node and the remaining two are slave nodes. Here, the master has to replicate its data to the slave nodes. This setup creates <instance-name>-<claim-name>-<0..n> PVCs in the cluster, e.g. foo-instance-hr-claim-0, foo-instance-hr-claim-1, and foo-instance-hr-claim-2 for replicas 0, 1, and 2 respectively. This replication has to be handled by the user in the application code.
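Once the instance has scaled out, the per-replica claims can be listed with a standard kubectl command; for the example above, you would expect the three foo-instance-hr-claim-* entries to appear in the output:

    $ kubectl get pvc | grep hr-claim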

Shared Volumes/ Persistency

There are mainly two use cases for shared volumes/persistency in Cellery.

1. Replicas of a component share the same PVC and storage location

The replicas of the component can use the same PVC, while different cell instances can have separate PVCs. So each cell instance may have its own PVC and isolated storage (see the sketch after the second case below).

The application code has to handle the locking mechanism, since concurrency issues can arise when multiple replicas access the same resource.

2. Different cell instances from the same cell image share the same PVC

Each replica of the component shares a single PV, and different cell instances deployed from the same cell image can also share that existing PV using the same PVC.

You might need this type of shared persistency when performing a canary deployment: you deploy a newer version of the cell image to a small subset of servers by changing the cell image you deployed previously. To test how the new deployment performs, it is essential to use the same resources as the previous deployment/cell instance. No worries, Cellery allows you to share the same PVC across different instances (sketched below).

This case differs from the previous one in its sharing strategy: the previous case uses a ‘share among replicas’ strategy, while this one uses ‘share among instances’.
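To make the two strategies concrete, here is a hedged sketch of a component whose replicas mount a single shared claim. The type name K8sSharedPersistence and its fields are assumptions inferred from the Shared/NonShared naming that Cellery's volume types follow (cellery:NonSharedSecret appears later in this article), and the component, image, and claim names are made up, so check the Cellery reference for your version before using it.

    import celleryio/cellery;

    // Case 1: all replicas of this component mount the same claim, so the
    // application code must handle locking for concurrent access.
    cellery:Component reportComponent = {
        name: "report",
        src: {
            image: "docker.io/example/report-service"
        },
        volumes: {
            sharedData: {
                path: "/shared-data",
                readOnly: false,
                volume: <cellery:K8sSharedPersistence>{
                    // Case 2: per the article, instances created from the same
                    // cell image can also share this claim, which is what a
                    // canary deployment needs.
                    name: "report-claim"
                }
            }
        }
    };

For the canary case, the newer image version would then be started as a second instance with the CLI, for example $ cellery run myorg/report-cell:2.0.0 -n report-canary (the image and instance names here are placeholders).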

Phew! I think this much theory is enough for us to get our hands dirty. So let’s try to be stateful with Cellery!

Let's deploy a simple API that accepts and responds to to-do requests. The application code can be found here in the Cellery samples repository. We can deploy our API as a single cell with two components: the todo-service component and the mysql-db component. The deployable cell file can be found here, along with a step-by-step guide to deploying it in various environments.
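If you want to follow along, the usual Cellery workflow is to build a cell image from the cell file and then run an instance of it. The file, organization, image, and instance names below are placeholders; the linked guide has the exact values used by the sample.

    # Build a cell image from the cell file (names are placeholders)
    $ cellery build todo-cell.bal myorg/todo-cell:latest

    # Run an instance of the image, giving it an instance name
    $ cellery run myorg/todo-cell:latest -n todo-inst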

Todos Component — Shared Secret

The todos service component has to have the MySQL credentials to connect to the MySQL server running as the mysql-db component. Let's use those credentials as a shared secret so that the todos service replicas can also use the same volume once autoscaling has happened.

If you don't need to share the secret among the replicas of the component, you can simply change the volume type to volume:<cellery:NonSharedSecret>.
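For illustration, the credentials could be mounted on the todos component roughly as follows. The image, secret name, mount path, and credential values are assumptions made for the sketch, not the exact contents of the sample's cell file.

    import celleryio/cellery;

    // Sketch of the todos component mounting the MySQL credentials as a
    // shared secret so that every replica sees the same credentials volume.
    cellery:Component todoComponent = {
        name: "todos",
        src: {
            image: "docker.io/example/todo-service"
        },
        volumes: {
            credentials: {
                path: "/credentials",    // assumed mount path read by the service
                readOnly: true,
                volume: <cellery:SharedSecret>{
                    name: "db-credentials",
                    data: {
                        username: "root",   // demo credentials only
                        password: "root"
                    }
                }
            }
        }
    };

Swapping <cellery:SharedSecret> for <cellery:NonSharedSecret>, as mentioned above, gives each replica its own copy of the secret instead.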

Mysql-db Component — Non-shared Persistency, Non-shared config-map

This component has two types of volumes: a config-map and a PVC. The config-map contains the database schema, and it does not need to be shared among the replicas since the DB initialization is a one-time operation.

However, our main requirement is to persist the data stored in the MySQL DB. This is achieved by using a persistent volume that is either created by the cluster admin or dynamically provisioned by a cluster with dynamic provisioning enabled.

Here we are creating a PVC for an already-created PV with the storage class named ‘local-storage’. You can change the volume access modes and size by modifying the accessModes and request fields respectively, as sketched below.
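Putting the two volumes together, the mysql-db component's volume section could look roughly like this. The image tag, volume names, schema placeholder, mount paths, and size are assumptions for the sketch; accessModes and request are the fields the article refers to for the access modes and size, and the record type names follow the Shared/NonShared convention noted earlier.

    import celleryio/cellery;

    // Sketch of the mysql-db component (names and values are placeholders).
    cellery:Component mysqlComponent = {
        name: "mysql-db",
        src: {
            image: "library/mysql:8.0"
        },
        volumes: {
            // One-time DB initialization script; not shared among replicas.
            sqlConfig: {
                path: "/docker-entrypoint-initdb.d",   // standard MySQL init directory
                readOnly: false,
                volume: <cellery:NonSharedConfiguration>{
                    name: "init-sql",
                    data: {
                        "init.sql": "-- database schema goes here (placeholder)"
                    }
                }
            },
            // Non-shared persistence for the MySQL data directory.
            dataVolume: {
                path: "/var/lib/mysql",
                readOnly: false,
                volume: <cellery:K8sNonSharedPersistence>{
                    name: "data-vol",
                    storageClass: "local-storage",   // binds to the pre-created PV
                    accessModes: ["ReadWriteOnce"],  // change the access modes here
                    request: "1G"                    // change the requested size here
                }
            }
        }
    };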

After you deploy the cell, try sending some POST/PUT requests to the todos-service to store some data in the DB. You can then check the persistence of the stored data by restarting the mysql-db component: simply run $ kubectl delete pod <mysql-db-component-pod> to restart it. Now, if you retrieve a record you stored previously using a GET request, you should receive exactly the information you stored with the POST/PUT request 🎉🎉.
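For example, assuming the todos service exposes a REST resource such as /todos through the cell gateway (the host, path, and payload here are placeholders rather than the sample's exact API):

    # Store a record (endpoint and payload are placeholders)
    $ curl -X POST -H "Content-Type: application/json" \
          -d '{"title": "Try stateful Cellery"}' \
          http://<cell-gateway-host>/todos

    # Restart the mysql-db component by deleting its pod
    $ kubectl delete pod <mysql-db-component-pod>

    # The record stored earlier should still be returned
    $ curl http://<cell-gateway-host>/todos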

Before wrapping up this article, I would like to invite you to try out more of Cellery. Visit the links below to get more familiar with Cellery.
