Intro to various Deployment Strategies using Istio in GKE — Part 2

Vishal Patel
5 min read · Sep 26, 2022


In our previous blog, Setup Istio on GKE and visualize using Kiali, we set up a Kubernetes cluster in GKE, installed the Istio service mesh, and enabled various telemetry services such as Kiali, Grafana, Prometheus, and Jaeger. We then deployed a Node.js application into the cluster and configured it to be accessed via Istio. Finally, we visualized the whole setup through Kiali and Grafana.

In this blog post, we will discuss various deployment strategies we can implement using Istio. The following image illustrates the Canary Release and Traffic Splitting setup in Kubernetes that we will have accomplished by the end of this blog.

Architecture Diagram for Canary Release

Let's deploy the second version of the helloapi Node.js application into the Kubernetes cluster. The relevant source code is hosted on GitHub.

kubectl apply -f part-2/deployment-canary.yaml
Version v2 deployment
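For reference, a minimal sketch of what part-2/deployment-canary.yaml might contain. The key point is the version: v2 label, which Istio later uses to select this workload; the container image and port below are placeholders, and the actual values live in the GitHub repo.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapi-v2
  labels:
    app: helloapi
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapi
      version: v2
  template:
    metadata:
      labels:
        app: helloapi
        version: v2      # the label the destination rule will match on
    spec:
      containers:
      - name: helloapi
        image: helloapi:v2     # placeholder; use the image from the repo
        ports:
        - containerPort: 8080  # assumed port
```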

Next, we define a destination rule, which declares the named subsets that virtual services will use to route traffic. Apply the destination rule using the following command.

kubectl apply -f part-2/destination.yaml
Applying Destination Rule
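A sketch of what part-2/destination.yaml might look like, assuming the service is named helloapi and the two deployments carry the labels version: v1 and version: v2 as described above:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloapi
spec:
  host: helloapi
  subsets:
  - name: stable      # pods labeled version: v1
    labels:
      version: v1
  - name: canary      # pods labeled version: v2
    labels:
      version: v2
```

Virtual services refer to these subsets by name (stable, canary) rather than by pod labels directly.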

Canary Rollout

With two versions of the hello app deployed in a Kubernetes cluster (version v1 and v2), we can look at various deployment strategies.

Subset/Version (Blue-Green)

The following kubectl command applies the subset/version (blue-green) routing configuration:

kubectl apply -f part-2/virtualservice-bluegreen.yaml

In a virtual service, the route destination can specify a subset/version, which refers to a named service subset defined in the destination rule.

For example, if the subset of the route destination is “stable”, it resolves to the corresponding subset in the destination rule, and all HTTP traffic is routed to the pods of the helloapi service with the label “v1”.
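The virtual service for this might look like the following sketch (the gateway name helloapi-gateway is an assumption carried over from the Part 1 setup; the actual manifest is part-2/virtualservice-bluegreen.yaml):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloapi
spec:
  hosts:
  - "*"
  gateways:
  - helloapi-gateway      # assumed gateway name from Part 1
  http:
  - route:
    - destination:
        host: helloapi
        subset: stable    # switch to "canary" to send all traffic to v2
```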

HTTP traffic routed towards Stable version
Kiali visualization of stable HTTP traffic Routing

In the second example, we change the subset of the route destination to “canary”; in this case, all HTTP traffic is routed to the pods of the helloapi service with the label “v2”.

HTTP traffic routed towards Canary version
Kiali visualization of canary HTTP Traffic Routing

Traffic splitting using Weight (Beta Testing)

The following kubectl command applies the weight-based traffic splitting configuration:

kubectl apply -f part-2/virtualservice-weight.yaml

We can split the HTTP traffic based on weight, so that a certain percentage of the traffic is routed to the new release.

In the example below, based on the weight parameter, 90% of the traffic is routed to the stable release and 10% of the traffic is routed to the canary release.
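The http routes in the virtual service might look like the following sketch (again assuming the helloapi service and gateway from earlier; the actual manifest is part-2/virtualservice-weight.yaml):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloapi
spec:
  hosts:
  - "*"
  gateways:
  - helloapi-gateway      # assumed gateway name from Part 1
  http:
  - route:
    - destination:
        host: helloapi
        subset: stable
      weight: 90          # 90% of requests go to v1
    - destination:
        host: helloapi
        subset: canary
      weight: 10          # 10% of requests go to v2
```

The weights across all destinations in a route must add up to 100; adjusting them gradually (10 → 50 → 100) is the classic way to promote a canary.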

Weight-based traffic routing
Kiali visualization of weight-based traffic splitting
Grafana visualization of weight-based traffic splitting

Header-based traffic splitting (A/B testing)

The following kubectl command applies the header-based traffic splitting configuration:

kubectl apply -f part-2/virtualservice-header.yaml

We can split the HTTP traffic based on the header parameters defined in the request.

In the following example, the subset of the route destination is decided based on the header parameter “end-user”. If the request has the header “end-user” set to “phoenix-user”, the traffic is routed to the canary release; if not, it is routed to the stable release.
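The match rule might look like the sketch below (actual manifest: part-2/virtualservice-header.yaml). Match rules are evaluated in order, and the last route with no match acts as the default:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloapi
spec:
  hosts:
  - "*"
  gateways:
  - helloapi-gateway        # assumed gateway name from Part 1
  http:
  - match:
    - headers:
        end-user:
          exact: phoenix-user
    route:
    - destination:
        host: helloapi
        subset: canary      # matching requests go to v2
  - route:                  # default: no matching header
    - destination:
        host: helloapi
        subset: stable
```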

Split HTTP traffic based on the request header

Note: We have used a Chrome extension, ModHeader, to add request headers.

Header end-user set to phoenix-user
No header set
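If you prefer the command line to a browser extension, you can exercise the same routing with curl. This assumes the app is exposed through the istio-ingressgateway service in the istio-system namespace, as in the Part 1 setup:

```shell
# Look up the external IP of the Istio ingress gateway
GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# With the header set, the request should be served by the canary (v2) release
curl -H "end-user: phoenix-user" "http://$GATEWAY_IP/"

# Without the header, the request should be served by the stable (v1) release
curl "http://$GATEWAY_IP/"
```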

Another way to split traffic based on a header is to detect the browser from which the request was received, using the user-agent header.

The following kubectl command applies the browser-based (user-agent) traffic splitting configuration:

kubectl apply -f part-2/virtualservice-browser.yaml

If the user-agent is a Firefox browser, the request is routed to the stable version; if not, it is routed to the canary release.
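A user-agent match for this might use a regex, roughly as sketched below (the regex pattern is an assumption; the actual manifest is part-2/virtualservice-browser.yaml):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloapi
spec:
  hosts:
  - "*"
  gateways:
  - helloapi-gateway        # assumed gateway name from Part 1
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Firefox.*"   # Firefox includes its name in the user-agent
    route:
    - destination:
        host: helloapi
        subset: stable    # Firefox users stay on v1
  - route:                # all other browsers try the canary
    - destination:
        host: helloapi
        subset: canary
```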

Browser-based traffic splitting

Handling Failures

Istio helps improve the reliability and availability of the services in the mesh. These deployment strategies help handle service failures and take appropriate fallback actions.

For instance, suppose we deployed a new version of the hello app and it errored out for some reason. We can immediately roll back the new deployment by diverting all network traffic to the older version.

The following kubectl command rolls traffic back to the previous deployment:

kubectl apply -f part-2/virtualservice-rollback.yaml

The above command sets the subset/version of the route destination back to stable, assuming the current subset value is set to canary.

HTTP traffic routed towards Stable version
Kiali visualization of the traffic routed from error to success (Rollback)
Grafana visualization of service request

This increases the reliability of the service and helps keep it available at all times.

— Thank you for reading —

Please read our next article regarding Automated Canary Deployment (Flagger) using Istio in GKE.

