Istio — Part 2: Traffic Management, Consistent Hashing, Canary & Dark Releases

Raju Ahmed Shetu
intelligentmachines
8 min read · May 9, 2020

So yeah, in my blog Istio — Part 1: Discovering Services I talked about discovering your API via the Istio service mesh. I hope you have all read that blog; it will really help you build your understanding of Istio and related topics like Gateways, Virtual Services, etc. I tend to keep my cloud resources running for some time so that you can try my examples along with the code, which might be helpful. So I will not tear down the microservice architecture we built earlier.

Rather, we will write all the code in a different namespace and point different domains at it to achieve the same setup as in part 1, then move on to today’s discussion.

We will keep our code here. What I did is:

  • Copied the contents from k8s/1-discovering-services to the k8s/2-traffic-management folder.
  • Changed the namespace from default to traffic-management
  • Edited each .yaml file to include the namespace traffic-management Alternatively you can do it by running kubectl config set-context --current --namespace=traffic-management
  • Replaced the microservice part of the domain names with traffic-management So user.microservice.retailvoice.xyz gets another copy named user.traffic-management.retailvoice.xyz

At this point, we have our microservices deployed in the traffic-management namespace, identical to our part 1 setup.

Deployment snaps from traffic-management namespace

Goal

We will break down our discussion into the following use cases and change our .yaml config to address each one:

  • Traffic Management
  • Canary Release
  • Consistent Hashing
  • Dark Releases

Traffic Management

So what is traffic management? The following example is completely fictional, just to give a glimpse of the use case we will be handling. Let’s say we have our user microservice with a / route. For various reasons, we want a new response format for the / route, but we want this version to be available to only a certain portion of our users, say 10%. How could we handle that?

First of all, there is a nasty fix. We can build two versions of the docker image for the user microservice, named user:v1 and user:v2 Setting the replica count to 9 in the user:v1 deployment and to 1 in the user:v2 deployment is enough to achieve the split above: since Kubernetes by default distributes traffic roughly round-robin across pods, 9 pods of user:v1 against 1 pod of user:v2 gives the right ratio. But look, we now have 8 redundant pods of user:v1

But with Istio we can run just one pod each of user:v1 and user:v2 and still get our desired split. Let’s see how to do this.

  • First change k8s/2-traffic-management/2-user-microservice.yaml to add a new deployment for the user:v2 image and change the image of the existing deployment to user:v1
  • Add a version label to both deployments and add another env variable named VERSION
  • Set the replica count to 1 in each deployment. You can increase it if you need; I will use 1 just to demonstrate.
k8s/2-traffic-management/2-user-microservice.yaml
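The file itself is embedded as a gist in the original post; since it is not reproduced here, below is a minimal sketch of what the two deployments might look like. The container name, image tags, and env values are assumptions for illustration, not the actual repository contents.

```yaml
# Hypothetical sketch of k8s/2-traffic-management/2-user-microservice.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-v1
  namespace: traffic-management
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
      version: v1
  template:
    metadata:
      labels:
        app: user
        version: v1        # subset label used later in the DestinationRule
    spec:
      containers:
        - name: user
          image: user:v1   # assumed image tag
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-v2
  namespace: traffic-management
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
      version: v2
  template:
    metadata:
      labels:
        app: user
        version: v2
    spec:
      containers:
        - name: user
          image: user:v2   # assumed image tag
          env:
            - name: VERSION
              value: "v2"
```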

Now apply it using kubectl apply -f k8s/2-traffic-management/2-user-microservice.yaml

Pod List and curl output

See, now we have two pods for the user microservice, one running user:v1 and the other user:v2 In the second picture we see a 50–50 split between the two pods, which is what Kubernetes does by default. Let’s write a DestinationRule to route the traffic.

Destination Rule

A DestinationRule defines policies that apply to traffic destined for a service after routing has occurred. We will define subsets of our user pods using the version label we added, and then use those subsets to redirect traffic in the virtual service traffic-management/user-virtual-service. Let’s edit k8s/2-traffic-management/5-user-virtual-service.yaml

k8s/2-traffic-management/5-user-virtual-service.yaml
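As above, the real file lives in the embedded gist; here is a minimal sketch of the idea, assuming a Kubernetes Service named user-service and a gateway named microservice-gateway carried over from part 1 (both names are assumptions).

```yaml
# Hypothetical sketch of k8s/2-traffic-management/5-user-virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-destination-rule
  namespace: traffic-management
spec:
  host: user-service        # assumed Service name
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - user.traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway  # assumed gateway from part 1
  http:
    - route:
        - destination:
            host: user-service
            subset: v1
          weight: 90        # 90% of traffic goes to user:v1
        - destination:
            host: user-service
            subset: v2
          weight: 10        # 10% of traffic goes to user:v2
```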
left: curl output of user.traffic-management.retailvoice.xyz & right: curl output of traffic-management.retailvoice.xyz/user

Now you can see one of our domains is working perfectly but the other is not, because the traffic-management.retailvoice.xyz virtual service is still using the default routing. Let’s make the corresponding changes in k8s/2-traffic-management/7-microservice-virtual-service.yaml
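The difference from the previous virtual service is that this one matches on the /user URI prefix (as in part 1) before applying the same weighted split. A hedged sketch, reusing the assumed user-service and microservice-gateway names:

```yaml
# Hypothetical sketch of k8s/2-traffic-management/7-microservice-virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservice-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway
  http:
    - match:
        - uri:
            prefix: /user   # path-based routing from part 1
      rewrite:
        uri: /
      route:
        - destination:
            host: user-service
            subset: v1
          weight: 90
        - destination:
            host: user-service
            subset: v2
          weight: 10
```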

So after applying kubectl apply -f k8s/2-traffic-management/7-microservice-virtual-service.yaml we get this output.

fixed curl output http://traffic-management.retailvoice.xyz/user

So both http://user.traffic-management.retailvoice.xyz and http://traffic-management.retailvoice.xyz/user show the expected 90–10 split between the user:v1 and user:v2 pods.

Canary Release

I came across this quote:

Canary release is a technique that is used to reduce the risk of introducing a new software version in production by gradually rolling out the change to a small subgroup of users.

The example above is exactly a canary release scenario: we exposed only 10% of the traffic to user:v2 and the remaining 90% to user:v1 Simply defining a destination rule, adjusting the weight ratio, and wiring it into the virtual service gives us weighted routing, which is what a canary release is.

As you run this canary deployment, you will soon find yourself receiving complaints from users about inconsistent responses from the user microservice. That is because, although the routing is weighted, it does not track which client is served by which pod. A client might make 10 requests and end up with 8 served by the user:v1 pod and the other 2 by the user:v2 pod. Remember, so far Istio has taken care of weighting requests across pods, but it could not stick a client to a pod.

So, what’s the solution? Here comes consistent hashing.

Consistent Hashing

Consistent hashing is a mechanism that sticks a client to a pod by generating a hash internally. If a user’s request to the user microservice is served by the user:v2 pod, that user’s subsequent requests will also be forwarded to the same user:v2 pod.

But here’s a catch: consistent hashing doesn’t work with weighted routing. We define the hashing mechanism under trafficPolicy in the DestinationRule, but by the time the hash is computed, weighted routing has already decided which subset to hit. On top of that, Envoy, the proxy underlying Istio, does not yet support weighted routing combined with consistent hashing. You can find the details in this github issue.

So how do we achieve consistent hashing? We have to remove the weighted routing from the virtual services and then declare a consistentHash mechanism in the DestinationRule.

k8s/2-traffic-management/8-user-virtual-service-with-consistent-hash.yaml
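Again the actual file is in the gist; a minimal sketch of the idea follows, keeping the assumed user-service and microservice-gateway names. The weights are gone, and the hash is computed from the my-header request header.

```yaml
# Hypothetical sketch of k8s/2-traffic-management/8-user-virtual-service-with-consistent-hash.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-destination-rule
  namespace: traffic-management
spec:
  host: user-service
  subsets:
    - name: all
      labels:
        app: user           # one subset covering every user pod
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: my-header   # hash key taken from this header
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - user.traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway
  http:
    - route:
        - destination:
            host: user-service
            subset: all     # no weights; the hash decides which pod serves the client
```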

Here we removed the v1 and v2 subsets from the DestinationRule and added an all subset with the label app: user, since we are now targeting all user pods; requests are then routed based on httpHeaderName with the key my-header

Now let’s see the output.

left: my-header=668768768 right: my-header=1233456

See how the left one with my-header=668768768 is routed to user:v1 while my-header=1233456 is routed to user:v2 That is consistent hashing in action, and it gives your end users a smooth experience.

Now let’s make sure consistent hashing also works for http://traffic-management.retailvoice.xyz/user.

k8s/2-traffic-management/9-microservice-virtual-service-with-consistent-hash.yaml
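This is the same change applied to the path-based virtual service: the /user match and rewrite stay, the weights go away. A short sketch under the same assumed names, with the DestinationRule from the previous step left unchanged:

```yaml
# Hypothetical sketch of k8s/2-traffic-management/9-microservice-virtual-service-with-consistent-hash.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservice-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway
  http:
    - match:
        - uri:
            prefix: /user
      rewrite:
        uri: /
      route:
        - destination:
            host: user-service
            subset: all     # hashing on my-header picks the pod
```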

Let’s verify that it keeps sticking to a pod for http://traffic-management.retailvoice.xyz/user

As expected, the header my-header=668768768 routes to user:v1 and my-header=1233456 routes to user:v2

One more thing to remember: if consistent hashing is based on an httpHeaderName, you have to forward that header from one microservice to the next. If you call another service and do not pass along the header you received, you may lose the session stickiness.

So finally I will wrap this up with Dark Release

Dark Release

Keeping both a staging server and a production server with almost the same configuration is costly, both in effort and financially. What if we could release our dev code to production without end users knowing about it, so that only developers and engineers can access it? Then, when the feature is ready, we tweak some Istio config and the feature goes live right away. This mechanism is called a Dark Release. It might sound theoretically bad, but we all know we have done it quite a few times in our careers.

Remember, in part 1 we routed based on matching routes, using uri as the match criterion. Istio provides many other match criteria besides uri Today I will show how you can achieve a dark release using an HTTP header.

Let’s say we pass a header with key mode and value dev If a request contains the header mode=dev it will be forwarded to user:v2 otherwise by default it will be forwarded to user:v1

k8s/2-traffic-management/10-user-virtual-service-with-dark-release.yaml
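The gist holds the real file; a minimal sketch of the idea is below, assuming the DestinationRule once again defines v1 and v2 subsets (as in the weighted-routing example) and keeping the assumed names from earlier.

```yaml
# Hypothetical sketch of k8s/2-traffic-management/10-user-virtual-service-with-dark-release.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - user.traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway
  http:
    - match:
        - headers:
            mode:
              exact: dev    # only requests carrying mode: dev take this route
      route:
        - destination:
            host: user-service
            subset: v2      # dark-released version
    - route:                # default route for everyone else
        - destination:
            host: user-service
            subset: v1
```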

See, we match routes based on the header and at the end define a default route to version 1. Now devs can easily make requests using the HTTP header, while this pod is never exposed to end users, because the end user’s API calls will not include the header mode=dev

curl output of header and non-header request for dark release

Cool, huh?

Let’s do this for http://traffic-management.retailvoice.xyz/user

k8s/2-traffic-management/11-microservice-virtual-service-with-dark-release.yaml
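For the path-based domain, the header match is combined with the /user prefix match; conditions inside one match block must all hold. A hedged sketch under the same assumptions:

```yaml
# Hypothetical sketch of k8s/2-traffic-management/11-microservice-virtual-service-with-dark-release.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservice-virtual-service
  namespace: traffic-management
spec:
  hosts:
    - traffic-management.retailvoice.xyz
  gateways:
    - microservice-gateway
  http:
    - match:
        - uri:
            prefix: /user
          headers:
            mode:
              exact: dev    # /user AND mode: dev goes to v2
      rewrite:
        uri: /
      route:
        - destination:
            host: user-service
            subset: v2
    - match:
        - uri:
            prefix: /user   # plain /user requests fall back to v1
      rewrite:
        uri: /
      route:
        - destination:
            host: user-service
            subset: v1
```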

Apply the above yaml; the output is below.

curl output of header and non header requests

So, Yay !!!

Finally we can wrap this up. I know it was a tiring one. I will upload the code here and also add a README.md. If you try against my server you might not get the expected results, because only the dark release update is live at the end, so you won’t find the traffic-management and session-stickiness examples on the server I hosted. But if you follow the instructions, I can assure you that you will easily get up and running. If you face any problems, drop a comment here or mail me.

Good bye till the next one.

Happy Coding !!!
