Harnessing Istio to Effortlessly Rewrite Your Microservices

Erik Möller
Published in Nordnet Tech
Jan 9, 2024

Istio is Greek for “sail”, and as the Istio team explains:

The idea of the sailboat is that it’s not just about who is in control. You can’t get anywhere without the boat. https://tetrate.io/blog/how-istio-got-its-name/

At Nordnet, we have been using microservices for a long time, but it was only after we moved to the cloud, using tools like Kubernetes and Istio, that we could go all-in and fully harness the potential of microservices patterns.

There are plenty of challenges you need to solve when writing microservices compared to a monolithic application, such as logging, scaling, debugging, monitoring, testing and service communication. The upside of investing in a microservices platform is, of course, tremendous, as it enables truly autonomous teams to have complete control over their application life cycles.

What Istio solves for us is:

  • Secure communication between our services using mTLS, authenticating both client and server
  • Automatic metrics, logs and traces for all traffic within the cluster
  • Fine-grained control of traffic behaviour with rich routing rules

This article will focus on traffic control. Istio handles this by injecting a “sidecar” proxy alongside every deployed application to intercept all of its network traffic. (https://istio.io/latest/about/service-mesh/)
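In practice, the sidecar is usually injected automatically for every pod in a labelled namespace; a minimal sketch (the namespace name `graphs` is a hypothetical example, not our actual setup):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: graphs
  labels:
    # Istio's webhook injects the sidecar proxy into every pod
    # created in this namespace
    istio-injection: enabled
```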

That is really where the strength lies: your application is freed from handling network setup and security, and can focus on the business logic instead.

Example

We have a neat little feature on our site showing what we call Spark graphs (a smaller, sampled graph derived from the original stock graph):

This feature was something we simply added as its own endpoint in our graph application, with an Istio Virtual Service routing to it:

Istio Virtual Service routes to graph application

An internal load test session uncovered a flaw in our design that we hadn’t thought about when traffic gets intense, so we decided to go truly micro and break the feature out as its own microservice application, using the graph application as the source of the data. That gave us the possibility to scale the application individually and to add a Redis cache layer, so we don’t hammer the original graph application as load increases:

Goal: Istio Virtual Service routes to spark application

We took some time to figure out the new design and then deployed the new spark application to our Kubernetes cluster, still without it receiving any traffic. Now it was time for Istio to shine: we wanted to gradually roll out the new spark application to catch any bugs from the rewrite, and also to test its performance to make sure we were on the right track.

The original Virtual Service route looked like this:

Route to graph application in the Istio Virtual Service
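A sketch of what such a route can look like, where the host name `www.example.com`, the `/spark` path and the service name `graph` are hypothetical placeholders, not our real configuration:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: graph
spec:
  hosts:
    - www.example.com
  http:
    - match:
        - uri:
            prefix: /spark
      route:
        # all spark-graph traffic goes to the graph application
        - destination:
            host: graph
            port:
              number: 8080
```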

To start sending some traffic to the new spark application, it’s as easy as adding another destination and using weights for the traffic split ratio:

Adding destination to spark application routing 10% traffic as a start
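A sketch of such a weighted split, again with hypothetical host and service names (`www.example.com`, `graph`, `spark`); Istio requires the weights to sum to 100:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: graph
spec:
  hosts:
    - www.example.com
  http:
    - match:
        - uri:
            prefix: /spark
      route:
        # original graph application keeps 90% of the traffic
        - destination:
            host: graph
            port:
              number: 8080
          weight: 90
        # new spark application receives 10% as a start
        - destination:
            host: spark
            port:
              number: 8080
          weight: 10
```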

This setup is excellent, as we can now monitor the traffic in real time and verify that the new application behaves as expected. If everything proceeds according to plan, we can gradually shift traffic over to the new application by adjusting the weights. The charts below illustrate the transition in traffic distribution from the original graph application to the new spark application as it was rolled out in production, starting with a traffic weight of 10% and incrementally increasing to 100%.

Number of calls to the new spark application
Number of calls to the graph application

Once we reach a traffic weight of 100% for the new Spark application, we can remove the original destination from the Virtual Service and perform any necessary cleanup in the original application.

Virtual Service now routes 100% traffic to the new spark application
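The final state can be sketched as a single destination again (same hypothetical names as before; with only one destination, no weight is needed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: graph
spec:
  hosts:
    - www.example.com
  http:
    - match:
        - uri:
            prefix: /spark
      route:
        # all spark-graph traffic now goes to the new spark application
        - destination:
            host: spark
            port:
              number: 8080
```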

Conclusion

By harnessing the power of having Istio in our platform, and by separating the network layer from the application itself, we get a lot of room for finding creative solutions with minimal customer impact. In this case we didn’t even have to collaborate with our frontend developers, since the endpoint didn’t change; we just re-routed the traffic behind the scenes.

As a developer, you always have to be ready to release changes to your applications; what works today may not work tomorrow, and new requirements are constantly coming in.

Having Istio in your toolbox takes you to the next level of building loosely coupled systems, with a never-ending evolution of your applications, minimizing downtime and ensuring smooth transitions between microservice versions.
