Canary releasing with Vamp on Kubernetes and Azure Container Service


Tim Nolet 👨🏻‍🚀
Published in vamp.io · 7 min read · Oct 3, 2017


Releasing containerised application workloads on Kubernetes is almost too easy, and Kubernetes comes with some powerful release patterns out of the box. There are already some great resources out there describing interesting blue/green and canary release deployment scenarios with Kubernetes: the official docs have a section proposing a canary release strategy, and this write-up covers a very similar scenario.

In essence, the proposed strategy in most cases is to spin up as many replicas as you need to reflect the user distribution you want. For example, if you want one in three of your users to hit the “canary” version of your app and two in three to hit the “stable” version, you spin up two stable replicas and one canary replica. You then map these replicas to the same (ingress) service and you’re done.
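As a rough sketch, that replica-based approach looks like this in Kubernetes manifests. All names, labels and image tags below are hypothetical; the point is the 2:1 replica ratio and the shared `app` label:

```yaml
# Two Deployments sharing the label "app: myapp" (names/images are examples).
# Stable runs 2 replicas, canary runs 1, so roughly 1 in 3 requests hit the canary.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp, track: stable }
  template:
    metadata:
      labels: { app: myapp, track: stable }
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp, track: canary }
  template:
    metadata:
      labels: { app: myapp, track: canary }
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.1.0
---
# One Service selecting only on "app", so it balances across all three pods.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: { app: myapp }
  ports:
    - port: 80
      targetPort: 8080
```

Because the Service ignores the `track` label, traffic is spread across all pods, and the stable/canary split follows the replica counts.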

This works, and it is perfectly valid in small-scale, stateless and very general situations. It has its weaknesses, however:

  1. Achieving a given distribution is directly tied to how many resources you are using in your cluster. Need to run 50 replicas just to serve stable traffic? Then you need 25 replicas of the canary version to achieve a 1:3 balance.
  2. You can only distribute traffic based on a percentage of the total traffic.
  3. There is no stickiness. Users can flip-flop between the stable and canary versions on each request. This might break your app.

So, if you want to take it up a notch and gain more flexibility in programmatic routing and workflows, you’ll probably want to check out Vamp. Luckily, getting up and running with Vamp and Kubernetes is incredibly quick and easy, especially when running on Azure Container Service, which enables some neat out-of-the-box load balancer and endpoint integration.

In this post we’ll walk you through all the initial steps to get up and running.

  1. Set up a Kubernetes cluster on ACS.
  2. Install Vamp and the CLI tools kubectl and vamp-cli.
  3. Run a first canary release using a combination of both tools.

Full disclosure: our dev team is still finalising the full Kubernetes integration. 95% of Vamp’s features work extremely well with Kubernetes, but we have some open bugs that we still need to squash.

What is Vamp?

Vamp is an open source, self-hosted platform for managing (micro)service oriented architectures that rely on container technology. Vamp provides a DSL to describe services, their dependencies and required runtime environments in blueprints.

Vamp takes care of route updates, metrics collection and service discovery, so you can easily orchestrate complex deployment patterns, such as A/B testing and canary releases.

Kubernetes & Vamp Installation

If you have your Azure account set up and credentials in place, use the following script to bootstrap a Kubernetes cluster and install Vamp. The steps in the script are described below.

1. Set up Azure Container Service with Kubernetes

Microsoft has done an excellent job of providing a very easy and quick Kubernetes setup with Azure Container Service. It takes just a handful of commands to get going. You need an active Azure subscription and the Azure command line interface installed to run the commands below.
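As a sketch, the cluster bootstrap with the Azure CLI of that era looked roughly like the commands below. The resource group and cluster names are hypothetical, and you need an authenticated `az` session for any of this to work:

```shell
# Create a resource group to hold the cluster (name and location are examples).
az group create --name vamp-demo --location westeurope

# Create an ACS cluster with Kubernetes as the orchestrator.
az acs create --orchestrator-type kubernetes \
  --resource-group vamp-demo --name vamp-k8s --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig.
az acs kubernetes get-credentials --resource-group vamp-demo --name vamp-k8s

# Verify kubectl can reach the cluster.
kubectl get nodes
```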

2. Install Vamp

Integrating Vamp into Kubernetes is made delightfully simple using our install script. It talks directly to kubectl and sets up Vamp and its dependencies. Read the full source of the install script here.

The script should finish with the following output and an SSH tunnel on port 8001, connecting you to your Kubernetes host on ACS.

A quick overview of Vamp on Kubernetes

Open a browser, navigate to http://localhost:8001/ui/ and go to the workloads tab. You will see all Vamp components installed and running.

  1. Daemon Sets: to facilitate smart routing logic, Vamp relies on having its routing component, the Vamp Gateway Agent (VGA), on every node.
  2. Deployments: Vamp’s components are described as Kubernetes deployments, which in turn describe the replica sets and pods. We can see one instance of the Vamp application, four instances of the Vamp Workflow Agent (used for background jobs) and an Elasticsearch & Kibana installation, used for collecting metrics.
  3. Pods: the pods show the actual runtime state of the described deployments. If all is well, they should all be in the “Running” state.
  4. Replica Sets: replica sets describe the scaling behaviour of a specific set of pods. In our case there is no scaling, so they should match the pods.

Vamp’s UI and API endpoints are available on the external IP and port defined in the Vamp service. Either get it using the following kubectl command or get it from the Kubernetes UI on the “Services” tab.

kubectl get services vamp
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)
vamp      LoadBalancer   10.0.215.28   13.93.81.196   8080:32132/TCP
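If you only need the external IP (for scripting, or to set VAMP_HOST later on), kubectl’s jsonpath output format can extract it directly. This assumes the service is named `vamp`, as in the output above:

```shell
# Print just the external IP of the vamp LoadBalancer service.
kubectl get service vamp -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```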

In our example this means:

Vamp UI: http://13.93.81.196:8080/#/vamp/
Vamp API: http://13.93.81.196:8080/api/v1/

Deploy and do a canary release

We’re going to perform a simple canary release using the Vamp CLI. First install it and set the VAMP_HOST environment variable to Vamp’s address.

npm install -g vamp-cli
export VAMP_HOST=http://13.93.81.196:8080

Use the following script to insert two Vamp blueprints, one for each version of the service.
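The blueprints themselves are not shown here, but as a rough, hypothetical sketch, a Vamp blueprint for simpleservice version 1.0.0 could look something like this. The field layout follows Vamp’s blueprint DSL, while the image name and scale values are assumptions; the 9050 gateway and the 3000/http web port match the values used later in this post:

```yaml
# Hypothetical Vamp blueprint for version 1.0.0; image and scale are assumptions.
name: simpleservice:1.0.0
gateways:
  9050: simpleservice/web          # stable external endpoint on port 9050
clusters:
  simpleservice:
    services:
      - breed:
          name: simpleservice:1.0.0
          deployable: example/simpleservice:1.0.0
          ports:
            web: 3000/http
        scale:
          cpu: 0.2
          memory: 256MB
          instances: 1
```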

Then deploy version 1.0.0 of our simple service…

vamp deploy simpleservice:1.0.0 simple_dep

…and check if our deployment is done.

$ vamp list deployments
NAME         CLUSTERS        PORTS                     STATUS
simple_dep   simpleservice   simpleservice.web:40001   Deployed

Vamp integrates directly with the Kubernetes LoadBalancer service type by setting up a service and external endpoint. We provide the selector io.vamp.gateway=simple_dep_9050 to filter for the right data, where simple_dep is the name we gave to our deployment and 9050 is the gateway we defined in the blueprint for simpleservice version 1.0.0. If the EXTERNAL-IP shows <pending>, be patient while the environment bootstraps the necessary infrastructure. Luckily this only needs to happen once, as port 9050 is our stable endpoint.

kubectl get services --selector=io.vamp.gateway=simple_dep_9050
NAME   TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)
51..   LoadBalancer   10.0.44.213   13.73.165.183   9050:31048/TCP

But it doesn’t end there. The Kubernetes LoadBalancer service is in turn integrated with Azure’s load balancer, creating a load balancing rule and health probe and exposing our service to the internet in a reliable fashion.

Having said all that, our service is now reachable on 13.73.165.183:9050, as reported by kubectl. Open a browser and you should see the service respond.

As you might have noticed, next to the LoadBalancer service this deployment also shows up as a Kubernetes Deployment, Pod and ReplicaSet. This demonstrates how Vamp uses Kubernetes’ native scheduling and resource management.

The deployment of 1.0.0 is done. We now merge version 1.1.0 into our existing deployment; as the gateway output below shows, the new version starts out receiving 0% of traffic.

vamp merge simpleservice:1.1.0 simple_dep

We can now have a look at the internal gateway Vamp has set up, which allows us to migrate traffic to our new version. This internal gateway is completely separate from the external one described above, neatly separating the stable ingress endpoints from the internal dynamic routing.

$ vamp describe gateway simple_dep/simpleservice/web
Name: simple_dep/simpleservice/web
Type: internal
Port: 40001/http
Service host: 10.0.13.125
Service port: 31071/http
Sticky: false

ROUTE                 WEIGHT   CONDITION   STRENGTH   TARGETS
simple.../1.1.0/web   0%       -           0%         10.244.0.18:3000
simple.../1.0.0/web   100%     -           0%         10.244.2.14:3000

Now let’s update the routing and assign 70% to version 1.0.0 and 30% to version 1.1.0.

vamp update-gateway simple_dep/simpleservice/web --weights simple_dep/simpleservice/simpleservice:1.0.0/web@70%,simple_dep/simpleservice/simpleservice:1.1.0/web@30%

Now, hitting our endpoint a couple of times should show responses from both versions, roughly in the configured 70/30 ratio.
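A quick way to eyeball the split from a terminal, assuming the demo service includes its version string in the response body (the IP is the example endpoint from above):

```shell
# Sample the endpoint 20 times and count responses per version.
# Assumes the response body contains a version string such as "1.0.0".
for i in $(seq 1 20); do
  curl -s http://13.73.165.183:9050/ | grep -oE '1\.[01]\.0'
done | sort | uniq -c
```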

You can of course take much smaller steps than 70/30, as long as the numbers add up to 100. Also, there is no hard limit on the number of services you split traffic across.

Wrap up and next steps

Setting weights on gateways is just one way of doing canary releases. Using Vamp’s conditions you can steer traffic based on HTTP headers such as cookies, User-Agents and so on. As this is not Kubernetes-specific, we won’t dive into it in this write-up.
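As a rough, hypothetical sketch of where that leads, a gateway with a condition-based route could look something like the YAML below. The route structure mirrors the `vamp describe gateway` output above (weight, condition, strength), but the exact condition syntax here is an assumption:

```yaml
# Hypothetical gateway definition: send Chrome users to 1.1.0, everyone else to 1.0.0.
name: simple_dep/simpleservice/web
routes:
  simple_dep/simpleservice/simpleservice:1.1.0/web:
    weight: 0%
    condition: user-agent == chrome      # syntax is an assumption
    condition_strength: 100%
  simple_dep/simpleservice/simpleservice:1.0.0/web:
    weight: 100%
```

With the condition strength at 100%, all traffic matching the condition would be routed to 1.1.0, while the weights continue to govern the remaining traffic.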
