Iter8: Achieving Agility with Control

Fabio Oliveira
Aug 17, 2020


In our software-driven economy, developers must be agile to deliver code frequently so that their organizations can remain relevant in the market. Innovation is needed to retain existing customers and attract new ones. This paramount need for agility has fueled several cultural and technological advances we have witnessed over the past decade; however, a question that often arises is: can one actually have peace of mind when delivering code frequently to the cloud?

When we think about agility, we must think about agility with control. Unfortunately, when discussing canary releases and A/B testing, the cloud-native community tends to focus on the underlying mechanisms that enable them, such as traffic splitting for progressive rollouts. Well, that is only a small piece of the puzzle.

Make no mistake, agile practices such as canary releases and A/B testing are analytics problems at a fundamental level! Actually, comparative analytics problems! The underlying problem that needs to be solved is that of comparing competing versions, confidently deciding which version is the winner, and making traffic decisions accordingly throughout the process. Solely relying on ad hoc checks on metric values to gradually shift user traffic, even if fully automated, is too simplistic.
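To make the contrast concrete, here is a minimal sketch of what a statistically grounded comparison looks like, as opposed to an ad hoc threshold check. It uses a Bayesian Beta-Bernoulli model of each version's success rate; this is an illustration of the general idea, not iter8's actual algorithm, and the counts are made up:

```python
import random

def prob_candidate_beats_baseline(baseline, candidate, draws=10000):
    """Monte Carlo estimate of P(candidate's true success rate > baseline's).

    baseline and candidate are (successes, failures) counts; each version's
    unknown success rate gets a Beta(successes + 1, failures + 1) posterior
    (i.e., a uniform prior updated by the observed data).
    """
    wins = 0
    for _ in range(draws):
        b = random.betavariate(baseline[0] + 1, baseline[1] + 1)
        c = random.betavariate(candidate[0] + 1, candidate[1] + 1)
        wins += c > b
    return wins / draws

# 480/500 successes for the baseline vs. 495/500 for the candidate:
p = prob_candidate_beats_baseline((480, 20), (495, 5))
```

A `p` close to 1 supports shifting traffic toward the candidate; a `p` near 0.5 means the data cannot yet separate the two versions, which is exactly the kind of distinction a bare metric-value check cannot make.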

Enter iter8

Iter8 is an open source toolkit for continuous experimentation on Kubernetes. We use the term continuous experimentation to refer to practices such as canary releases and A/B testing. Why? The term conveys that agile practices are meant to be performed continuously, and that they let you learn about both your code and your users. Using iter8's machine-learning-driven experimentation capabilities, you can safely and rapidly orchestrate various types of live experiments, gain key insights into the behavior of your microservices, and roll out the best versions of your microservices in an automated, principled, and statistically robust manner.

The insights surfaced by iter8 during an experiment include the observed values for each metric of interest for each version, the range of values within which the metrics are most likely to be, how a candidate version compares against the baseline and other versions with respect to each metric, and how likely it is that a candidate version will beat all other versions when evaluated using a particular metric. Critically, iter8 can compare two or more versions. The practice of A/B/n testing, for instance, involves experimenting with “n” versions. Using its state-of-the-art analytics engine, iter8 provides the only solution that can incorporate SLOs (Service-level Objectives), such as tail latency guarantees and error rate guarantees, while simultaneously maximizing a reward metric like conversion rate, which is absolutely necessary for cloud-native A/B/n tests.
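As an illustration of combining SLOs with reward maximization, the sketch below first filters out versions that violate a tail-latency SLO, then estimates each surviving version's probability of having the highest conversion rate via Monte Carlo sampling from Beta posteriors. The version names, numbers, and logic are hypothetical, intended only to show the shape of the problem, not iter8's analytics engine:

```python
import random

# Hypothetical per-version data: observed p95 latency (ms) and
# conversion outcomes as (successes, trials).
versions = {
    "baseline":   {"p95_ms": 180, "conversions": (120, 1000)},
    "candidate1": {"p95_ms": 450, "conversions": (150, 1000)},  # violates SLO
    "candidate2": {"p95_ms": 200, "conversions": (140, 1000)},
}
SLO_P95_MS = 250  # tail-latency guarantee

def win_probabilities(eligible, n=20000):
    """For each SLO-satisfying version, estimate P(its conversion rate
    is the highest) by sampling from Beta(successes+1, failures+1)."""
    wins = {v: 0 for v in eligible}
    for _ in range(n):
        draws = {}
        for v, d in eligible.items():
            s, t = d["conversions"]
            draws[v] = random.betavariate(s + 1, t - s + 1)
        wins[max(draws, key=draws.get)] += 1
    return {v: w / n for v, w in wins.items()}

# Only versions meeting the SLO compete on the reward metric.
eligible = {v: d for v, d in versions.items() if d["p95_ms"] <= SLO_P95_MS}
probs = win_probabilities(eligible)
winner = max(probs, key=probs.get)
```

Here `candidate1` has the best conversion rate but is disqualified by the latency SLO, so traffic decisions are made among the remaining versions. This interplay between guardrail metrics and a reward metric is what makes cloud-native A/B/n testing more than a simple comparison of averages.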

We will be publishing a series of blog articles and educational videos to demonstrate how you can take advantage of iter8's key capabilities. Our planned articles will cover a lot of ground, including:

  • How to use iter8 to identify and roll out the best out of several versions of your microservice, automatically and gradually, taking into account business-oriented metrics while meeting SLOs on performance and correctness.
  • How to use iter8 to automatically assess the behavior of a canary version of your microservice and safely roll it out.
  • How to use iter8-trend to uncover behavioral trends of your microservice as it evolves after several versions have been rolled out.
  • How to use iter8’s KUI plugin to perform human-in-the-loop experimentation, where iter8’s insights are surfaced to the user and the user is in total control.
  • How to use iter8’s Kiali extension to create and observe experiments. Kiali is Istio’s de facto open source UI.

It is time to unleash the full power of cloud-native continuous experimentation.

Stay tuned!

Where to find iter8?