Simplifying Microservices with Istio in Google Kubernetes Engine — Part III

Nithin Mallya
Google Cloud - Community
5 min read · Feb 15, 2018

What I write about Istio here is a small subset of the excellent documentation on the Istio site. Please read the official docs for more detail.

In Part I of this series, we saw how we could use Istio to simplify communication between our microservices.

In Part II of this series, we learned to use Istio egress rules to control access to services outside the Service Mesh.

In this part, we’ll see how we can do Canary deployments and ramp traffic with Istio.

Background: In past articles, I explained in detail how we can do Blue/Green deployments with Kubernetes. Blue/Green is a deployment technique where we run two identical production environments, one with the current version of our application and one with the new version. It enables Zero Downtime Deployments (ZDD), ensuring that our users are not impacted when we switch to the new version. Having both versions available also gives us the ability to roll back if the new version has any issues.

What we also need is the ability to ramp traffic up (or down) to new versions of our app and monitor it to ensure that there is no adverse impact. One way to achieve this is with Canary deployments or Canary Releases.

Not-so-fun fact: Miners took canaries with them when entering mines. Any toxic gases would kill the canaries first, acting as a warning to the miners to get out of the mine.

Likewise, in the application deployment world, with Canary deployments, we can deploy a new version of our application to Production and send only a small percentage of the traffic to this new deployment. This new version would run in parallel with the current version and alert us to any issues before we switch all traffic to the new version.

For example: v1 of our app could serve 90% of the traffic and v2 the other 10%. If everything looks good, we can ramp the v2 traffic up to 25%, 50%, and ultimately 100%. Another advantage of Istio Canary deployments is that we can ramp traffic based on custom headers in the requests; for example, send 10% of the traffic carrying a certain cookie header value to v2 of our app.
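
To make the header-based case concrete, here is a rough sketch using the same RouteRule API that appears later in this article. The rule name and the user=tester cookie value are made up for illustration; the rule sends 10% of the requests carrying that cookie to v2 and the rest to v1:

cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: petdetailsservice-cookie-canary
spec:
  destination:
    name: petdetailsservice
  precedence: 2                   # evaluated before the lower-precedence default rule
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=tester)(;.*)?$"
  route:
  - labels:
      version: v2
    weight: 10
  - labels:
      version: v1
    weight: 90
EOF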

Note: While Canary deployments “can” be used in conjunction with A/B testing to see how users react to a new version from a business-metrics perspective, the real motivation behind them is to ensure that the application performs satisfactorily from a functional standpoint. Also, business owners might want to run an A/B testing campaign for much longer (for example, many days or even weeks) than a Canary ramp would take. Hence, it is wise to keep them separate.

Let’s see it in action!

We know from Part I that our PetService talks to the PetDetailsService (v1) and the PetMedicalHistoryService (v1). The output from a call to the PetService looks like this:

$ curl http://108.59.82.93/pet/123
{
  "petDetails": {
    "petName": "Maximus",
    "petAge": 5,
    "petOwner": "Nithin Mallya",
    "petBreed": "Dog"
  },
  "petMedicalHistory": {
    "vaccinationList": [
      "Bordetella, Leptospirosis, Rabies, Lyme Disease"
    ]
  }
}

In the response above, you will notice that the petBreed says “Dog”. However, Maximus happens to be a German Shepherd Dog and we need to modify the PetDetailsService so that the breed is returned correctly.

So, we create v2 of the PetDetailsService, which now returns “German Shepherd Dog”. But we want to test v2 of this service with a small subset of users before driving all the traffic to it.

In Figure 1 below, we see that the traffic is configured such that 50% of the requests are directed to v1 and 50% to v2, our Canary deployment. (The split could be any ratio, depending on how significant the changes are and how much risk we are willing to take.)

Figure 1: We can deploy v2 of the PetDetailsService as a canary deployment and ramp traffic

Steps:

  1. Create version v2 of our PetDetailsService and deploy it as before (see petinfo.yaml under the petdetailservice/kube folder). The pod listing below shows both versions running, and a sketch of what the v2 Deployment might look like follows it.
$ kubectl get pods
NAME                                         READY     STATUS    RESTARTS   AGE
petdetailsservice-v1-2831216563-qnl10        2/2       Running   0          19h
petdetailsservice-v2-2943472296-nhdxt        2/2       Running   0          2h
petmedicalhistoryservice-v1-28468096-hd7ld   2/2       Running   0          19h
petservice-v1-1652684438-3l112               2/2       Running   0          19h
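
For reference, the v2 Deployment differs from v1 only in its name, its version label, and the image it runs. This is a minimal sketch rather than the exact contents of petinfo.yaml; the image name and port are placeholders:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: petdetailsservice-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: petdetailsservice
        version: v2                                    # the label the RouteRule selects on
    spec:
      containers:
      - name: petdetailsservice
        image: <your-registry>/petdetailsservice:v2    # placeholder image
        ports:
        - containerPort: 9080                          # placeholder port

Note that the pods show 2/2 containers ready because the Envoy sidecar runs alongside the application container, just as with v1.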

2. Create a RouteRule that splits the petdetailsservice traffic 50% (v1) / 50% (v2), as shown below:

cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: petdetailsservice-default
spec:
  destination:
    name: petdetailsservice
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50
EOF
$ istioctl get routerule
NAME                        KIND                                 NAMESPACE
petdetailsservice-default   RouteRule.v1alpha2.config.istio.io   default

3. Now, if you access the PetService, you should see roughly half the requests returning “Dog” (v1) and the other half returning “German Shepherd Dog” (v2), as below:

$ curl http://108.59.82.93/pet/123
{
  "petDetails": {
    "petName": "Maximus",
    "petAge": 5,
    "petOwner": "Nithin Mallya",
    "petBreed": "Dog"
  },
  "petMedicalHistory": {
    "vaccinationList": [
      "Bordetella, Leptospirosis, Rabies, Lyme Disease"
    ]
  }
}

$ curl http://108.59.82.93/pet/123
{
  "petDetails": {
    "petName": "Maximus",
    "petAge": 5,
    "petOwner": "Nithin Mallya",
    "petBreed": "German Shepherd Dog"
  },
  "petMedicalHistory": {
    "vaccinationList": [
      "Bordetella, Leptospirosis, Rabies, Lyme Disease"
    ]
  }
}

And it works!
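
Once v2 looks healthy, ramping up is just a matter of editing the weights in the same RouteRule. As a rough sketch, shifting all traffic to v2 could look like this (istioctl replace updates the existing rule in place):

cat <<EOF | istioctl replace -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: petdetailsservice-default
spec:
  destination:
    name: petdetailsservice
  route:
  - labels:
      version: v2
    weight: 100
EOF

Intermediate steps (25%, 50%, and so on) work the same way: adjust the two weights so that they sum to 100.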

This raises the question: couldn’t we do this with Kubernetes Canary Deployments alone? The short answer is yes.

However, the steps to do so are more involved and there are limitations:

  • You would still create two Deployments of the PetDetailsService (v1 and v2), but you would need to manually tune the number of v2 replicas during deployment to maintain the v1:v2 ratio. For example, you could deploy v1 with 10 replicas and v2 with 2 replicas for a 10:2 split, and so on (see the sketch after this list).
  • Since all pods, irrespective of version, are treated the same, the traffic split in a Kubernetes cluster depends entirely on how the Service load balances across pods, which is effectively random.
  • Autoscaling pods based on traffic volume is also problematic, because we would need to autoscale the two Deployments separately, and they can behave differently depending on how traffic is distributed across them.
  • If we want to allow/restrict traffic for only some users based on criteria such as request headers, Kubernetes Canary Deployments cannot achieve this on their own.
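
To make the first point concrete, here is a rough sketch of the Kubernetes-only approach: two Deployments behind one Service, with the canary ratio controlled purely by replica counts. The replica counts, image names, and port below are illustrative placeholders (9:1 echoes the 90/10 example from earlier):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: petdetailsservice-v1
spec:
  replicas: 9                     # ~90% of the pods
  template:
    metadata:
      labels:
        app: petdetailsservice
        version: v1
    spec:
      containers:
      - name: petdetailsservice
        image: <your-registry>/petdetailsservice:v1   # placeholder image
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: petdetailsservice-v2
spec:
  replicas: 1                     # ~10% of the pods, hence roughly 10% of the traffic
  template:
    metadata:
      labels:
        app: petdetailsservice
        version: v2
    spec:
      containers:
      - name: petdetailsservice
        image: <your-registry>/petdetailsservice:v2   # placeholder image
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: petdetailsservice
spec:
  selector:
    app: petdetailsservice        # selects both versions, so traffic is split by pod count
  ports:
  - port: 80
    targetPort: 9080

With Istio, by contrast, the RouteRule weights control the split independently of how many replicas each version runs.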

Conclusion: You just saw how easy it is to create a canary deployment and ramp traffic with Istio. And, Maximus is happy too!

Resources:

  1. Part I of this article series: https://medium.com/google-cloud/simplifying-microservices-with-istio-in-google-kubernetes-engine-part-i-849555f922b8
  2. Part II of this article series: https://medium.com/google-cloud/simplifying-microservices-with-istio-in-google-kubernetes-engine-part-ii-7461b1833089
  3. The Istio home page https://istio.io/
  4. DevOxx Istio presentation by Ray Tsang: https://www.youtube.com/watch?v=AGztKw580yQ&t=231s
  5. Github link to this example: https://github.com/nmallya/istiodemo
  6. All things Kubernetes: https://kubernetes.io/
