Supporting Advanced Deployment Patterns with Cell Architecture

Isuru Haththotuwa
Published in wso2-cellery
Aug 15, 2019 · 5 min read

TL;DR

Advanced deployment patterns are an essential part of today’s container-based runtimes and microservice architectures. This article provides a high-level introduction to why these patterns matter and how Cellery supports such deployment options.

Why Advanced Deployment Patterns?

In traditional physical/virtual machine based deployments of software monoliths, it is standard practice to maintain separate environments for Testing and Staging, which are supposed to be replicas of the actual Production setup. Such deployments are heavyweight, are not easy to update, and typically require a separate maintenance window to perform such changes. In addition, changes are usually pushed from one environment to the next, which is itself time consuming. The following is a typical set of high-level actions required to push an update to a live system, regardless of the complexity of the update.

  1. Decide on a maintenance window and communicate it to all stakeholders
  2. During the maintenance window, make the environment read only
  3. Spin up the required virtual/physical machines
  4. Perform the migration
  5. Run the integration tests against the migrated setup
  6. Switch the traffic and remove the read-only restriction

The entire process could take up a considerable amount of time.

In contrast to traditional runtimes, container-based runtimes, which are used to run applications designed with Microservice Architecture (MSA) patterns, are highly dynamic. There is no limit to the number of containers that can run at a given time, provided that resources are available in the runtime. Additionally, popular container runtimes such as Kubernetes have built-in mechanisms to dynamically spin up new containers based on resource consumption (CPU and memory usage) as well as demand (number of requests sent per second).
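The scaling decision behind this dynamic behaviour can be sketched with the proportional rule documented for Kubernetes’ Horizontal Pod Autoscaler. The following is a simplified Python illustration; the real controller adds tolerances, stabilization windows, and configured min/max replica bounds.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Core HPA scaling rule: scale the replica count proportionally to how
    far the observed metric (e.g. average CPU %) is from its target."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# 4 replicas averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))  # 6
# 4 replicas averaging 30% CPU against a 60% target -> scale in to 2.
print(desired_replicas(4, 30, 60))  # 2
```

The same rule applies whether the metric is CPU usage or a demand signal such as requests per second, as long as a per-replica target is defined.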

Since the container orchestration layer itself has a management overhead, it’s not practical to run and maintain several such environments for QA, Staging, Production, etc. The preferred approach is to use a single cluster with sufficient resources. For example, this could mean a multi-master Kubernetes cluster with a number of nodes on which the containers are scheduled. In practice, this means the same cluster can serve your production traffic as well as the testing environment. However, in many cases it is not advisable to expose updated containers to live traffic all at once.

Enter advanced deployment patterns.

Advanced deployment patterns such as Canary and Blue-Green enable gradually exposing new containers to traffic instead of switching in a big-bang manner. For simpler changes, it is also possible to perform a rolling update, which replaces the containers directly without disrupting the traffic flow.
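A rolling update can be pictured as replacing containers in small batches so that the service keeps serving traffic throughout. The following Python sketch is purely illustrative; the names and batch logic are hypothetical and do not correspond to Cellery or Kubernetes APIs.

```python
def rolling_update(pods: list, new_version: str, max_unavailable: int = 1) -> list:
    """Replace pods in batches of `max_unavailable`, recording the fleet's
    state after each batch; the remaining pods keep serving traffic."""
    states = [pods[:]]                      # initial state, all old version
    for i in range(0, len(pods), max_unavailable):
        for j in range(i, min(i + max_unavailable, len(pods))):
            pods[j] = new_version           # replace one batch
        states.append(pods[:])              # snapshot after the batch
    return states

for state in rolling_update(["v1.0", "v1.0", "v1.0"], "v1.1"):
    print(state)
```

With `max_unavailable=1`, at most one pod is out of rotation at any moment, which is why the traffic flow is never fully disrupted.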

What is Cellery and Cell Architecture?

Cellery is an approach to building complex, composite applications on Kubernetes in a code-first manner. Cellery defines an architectural pattern in which an application can consist of either a simple composite or an opinionated ‘Cell’ with a boundary and a single access point. For more details, refer to the Cellery documentation.

How Cellery Supports Advanced Deployment Patterns

Cellery supports advanced deployment patterns such as Canary and Blue-Green, as well as simple in-place updates of cell components. Each of these deployment patterns fits a specific use case.

In-place Updates

The in-place update mechanism is usually used to fix simple issues where the developer is confident that the changes will not break any current functionality. If such a failure does occur, however, the changes can be reverted with minimal effect and effort. This method of updating a running version maps directly to updating the components of a running Cell instance. The update happens in place in the running cell instance, and it is completely transparent to the user.

1. In-place update of cell components

Here, component A’s container image is updated in place to version 1.1.

Blue-Green and Canary Deployments

Blue-Green and Canary are advanced patterns that support gradual traffic switching. With the Blue-Green method, more than one version of the software runs at a given time; the versions are named ‘Blue’ and ‘Green’. At some point, the traffic is switched over to the newer version completely. If any issue arises, traffic can be reverted to the previous version.
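The Blue-Green flow can be sketched as a router holding a single “active” pointer that flips between the two deployed versions. This is a hypothetical Python illustration of the idea, not Cellery’s actual routing implementation.

```python
class BlueGreenRouter:
    """Both versions stay deployed; a single pointer decides which one
    receives 100% of the traffic, so cut-over and revert are instant."""
    def __init__(self, blue: str, green: str):
        self.blue, self.green = blue, green
        self.active = blue              # all traffic starts on the current version

    def switch(self) -> None:
        """Flip all traffic to the other version in one step."""
        self.active = self.green if self.active == self.blue else self.blue

    def route(self, request: str) -> str:
        return f"{request} -> {self.active}"

router = BlueGreenRouter("cell-v1", "cell-v2")
print(router.route("GET /orders"))      # served by cell-v1
router.switch()
print(router.route("GET /orders"))      # served by cell-v2
router.switch()                         # issue found: revert instantly
print(router.route("GET /orders"))      # back on cell-v1
```

The key property is that reverting is just another flip of the pointer; no redeployment is needed.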

The Canary deployment pattern is similar to Blue-Green, but less risky. Why? Once both versions are running, traffic is not fully switched to the newer version in one go; it is shifted in a more gradual manner. For example, switch 10% of the traffic to the newer version and keep the other 90% routed to the previous version. The percentage of traffic sent to the new version can then be increased phase by phase, and can be fully reverted at any stage.
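The weighted split at the heart of the Canary pattern can be illustrated with a small deterministic sketch in Python. The function and version names here are hypothetical; a real routing layer applies comparable percentage-based rules per request.

```python
import itertools

def canary_split(requests, new_weight: int, old: str = "v1", new: str = "v2") -> list:
    """Out of every 100 requests, send `new_weight` to the new version and
    the rest to the old one, using a rolling counter for determinism."""
    counter = itertools.cycle(range(100))
    return [new if next(counter) < new_weight else old for _ in requests]

routed = canary_split(range(1000), new_weight=10)
print(routed.count("v2"), routed.count("v1"))   # 100 900
```

Increasing the canary phase by phase is then just a matter of raising `new_weight` from 10 towards 100, and reverting means setting it back to 0.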

A Deeper look into Blue-Green and Canary Patterns with Cellery

Unlike an in-place component update, Blue-Green and Canary deployments use two separate Cell instances. In the case of backward-compatible changes, traffic originating from all cell instances that depend on the previous version of the dependency cell instance can be routed to the newer version, either fully or gradually. Once 100% of the traffic is switched, the previous cell instance can be terminated.

2. Advanced deployments with backward-compatible changes

In the above diagram, the Cell instances P and Q were previously depending on Cell instance Y (shown by the dashed arrows). With the introduction of Cell instance Y`, which is backward compatible with Cell instance Y, traffic originating from P and Q can be switched to Y` using either the Blue-Green or Canary method. Once 100% of the traffic is switched to Y`, the Cell instance Y can be terminated.

If the changes are not backward compatible, a grace period should be given to clients to migrate to the new APIs. Not all clients can therefore be updated to route to the new cell instance; only selected clients that have successfully migrated to the new version can be. Consequently, the previous and newer Cell instances should be kept running in parallel for some time.

3. Advanced deployments with non-backward-compatible changes

In diagram no. 3, instances P and Q previously depended on instance Y. A new instance, Z, has been created, which includes some breaking changes compared to Y. Instance P therefore still depends fully on the previous instance Y. Instance Q, however, has migrated to the changes introduced by instance Z and is using the Canary pattern to switch over gradually. At the moment, its traffic is routed with an 80:20 split between Cell instances Y and Z.
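This selective routing can be modelled as a per-source routing table. The names P, Q, Y, and Z follow the diagram above, while the table layout and the `route` helper are purely illustrative Python, not Cellery’s routing API.

```python
# Routing table for diagram no. 3: P has not migrated, so all of its calls
# stay on Y; Q has migrated, so 20% of its calls are canaried to Z.
ROUTES = {
    "P": [("Y", 100)],
    "Q": [("Y", 80), ("Z", 20)],
}

def route(source: str, tick: int) -> str:
    """Pick a target for one request from `source`, using `tick` (a rolling
    request counter) to apportion traffic according to the weights."""
    slot = tick % 100
    cumulative = 0
    for target, weight in ROUTES[source]:
        cumulative += weight
        if slot < cumulative:
            return target
    raise ValueError("weights must sum to 100")

q_targets = [route("Q", t) for t in range(1000)]
print(q_targets.count("Y"), q_targets.count("Z"))   # 800 200
print({route("P", t) for t in range(1000)})         # {'Y'}
```

Because the weights are held per source instance, Q’s canary can advance to 100% and Y can eventually be retired without ever affecting P’s traffic until P itself migrates.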

Conclusion

This post is a high-level overview of Cellery’s support for advanced deployment patterns. Head over to GitHub for the official documentation, quick starts for Cellery in your preferred environment, and the CLI commands for the traffic routing options.
