Dynamic Kubernetes Environments with Orca

Maor Friedman
Published in Nuvo Tech
Nov 29, 2018

Introduction

Orca is a tool which focuses on the world around Kubernetes, Helm and CI/CD, and it is also handy in daily work. Orca is a simplifier: it takes complex tasks and makes them easy to accomplish.

Orca was built as part of our CI/CD journey, and is now used for all our deployments. It is developed as an open-source project written in Go. The source code can be found in our GitHub account, along with some other really cool projects.

This post will walk you through using one of Orca’s most powerful features: dynamic environments.

Let’s get started!

Prerequisites

  1. Helm — Orca is a tool built around Helm, and is not intended to replace it, but rather to empower it and enable advanced usage with ease. Orca’s flags are very similar to Helm’s flags, to support quick adoption and conformance with Helm. Orca uses Helm under the hood, so this is a must. If you use Helm to manage your deployments, using Orca will be a walk in the park for you.
  2. Chart repository — Orca’s deployment commands use a chart repository to fetch charts from. If you are not yet using a chart repository, we encourage you to start right now. You can use Helm’s ChartMuseum, but any other repository implementation will work just as well (a minimal local setup is sketched below).
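
If you don’t have a chart repository yet, one quick way to experiment is to run ChartMuseum locally and register it with Helm. This is only a minimal sketch; the repository name, port and storage path are arbitrary:

docker run --rm -d -p 8080:8080 \
-v $(pwd)/charts:/charts \
chartmuseum/chartmuseum:latest \
--storage local --storage-local-rootdir /charts
helm repo add myRepo http://localhost:8080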

Use cases

Let’s look at some use cases. We’ll start with the most basic ones and work our way up. But first, some definitions:

  • Environment — An environment is a Kubernetes namespace, along with all Helm releases deployed in it.
  • Dynamic environment — A Kubernetes namespace that will be created on-demand by Orca. A dynamic environment is based on an existing environment — the Reference environment, the namespace which we want to copy.
  • Reference environment — This can be any environment you choose, depending on where in the process you want to use dynamic environments. The most common use case is probably Production, so we will need to get the current state of that environment (this will be the first step for all the following examples).

Assuming our reference environment is called apps, use the following command to get a YAML representation of the charts currently deployed in that namespace:

orca get env -n apps -o yaml
  • -n stands for --name (equivalent to Helm’s --namespace flag)
  • -o stands for --output

An example output:

charts:
- name: service-a
  version: 0.2.0
- name: service-b
  version: 0.1.1
- name: service-c
  version: 1.2.3

You should redirect the output to a file for the deployment command:

orca get env -n apps -o yaml > apps.yaml
  • For the sake of simplicity, we assume that all environments (reference and dynamic) are on the same Kubernetes cluster. If that is not the case, you can use the --kube-context flag (see the sketch after this list).
  • The file name isn’t important; name it however you like. We use apps.yaml throughout the rest of this guide, but feel free to replace it.
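
For example, if the reference environment lives on a different cluster, getting its state might look like this (prod-cluster is a hypothetical kubectl context name, and we assume --kube-context is accepted by the get env command as well):

orca get env -n apps --kube-context prod-cluster -o yaml > apps.yaml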

Part 0 — Deployment process overview

Orca will perform the following operations as part of a deployment:

  1. Create a new namespace.
  2. Add an orca.nuvocares.com/state: busy annotation to prevent parallel deployments to the same namespace (we will revisit this later).
  3. Fetch each chart specified in apps.yaml from the specified repository and deploy it to the new namespace with its respective version (including a dependency update), according to any additional flags that are passed.
  4. Replace the orca.nuvocares.com/state: busy annotation with an orca.nuvocares.com/state: free annotation.
  5. In case of a failed deployment, the orca.nuvocares.com/state: busy annotation will be replaced with an orca.nuvocares.com/state: failed annotation.

And you have a dynamic environment which is a copy of the reference environment.
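
If you ever want to check which state an environment is currently in, one quick way (plain kubectl, not an Orca command) is to read the annotation from the namespace. Assuming the dynamic environment is called orca-01:

kubectl get namespace orca-01 -o yaml | grep orca.nuvocares.com/state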

Now let’s look at some examples to help us understand how we can fully leverage Orca.

Part 1 — Basic usage

Use case 1 — Simple deployment

Deploy the same charts to a new environment, orca-01:

orca deploy env -n orca-01 -c apps.yaml \
--repo myRepo=http://myChartRepo.example.com
  • -c stands for --charts-file

A note about flags:

Orca has many flags for each command. They can be used in combination, and most can also be set using environment variables.

For example, the --repo flag can be replaced by using the $ORCA_REPO environment variable, thus simplifying the command:

export ORCA_REPO="myRepo=http://myChartRepo.example.com"
orca deploy env -n orca-01 -c apps.yaml

Use case 2 — Deployment with values files

If you have more than one namespace or cluster, in most cases you probably have different sets of values.yaml files for each of them. These values files are (in most cases) packaged along with the chart. For example, you might have dev-values.yaml and prod-values.yaml.

If the reference environment is Production and you want to create a copy which is closer to the development environment (for example, running fewer replicas of each service), you can define which values file to use:

orca deploy env -n orca-01 -c apps.yaml \
-f dev-values.yaml
  • -f stands for --values

Orca will deploy the charts from apps.yaml with their respective versions, using dev-values.yaml for each of them and ignoring the flag for any chart where the file doesn’t exist. That means that some charts may include dev-values.yaml and others may not, and everything will still work as expected.

You can pass the -f flag multiple times; each file will be handled by Helm as you would expect (see the example below).
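
For example, layering two values files (region-eu-values.yaml is a hypothetical second file used here only for illustration):

orca deploy env -n orca-01 -c apps.yaml \
-f dev-values.yaml \
-f region-eu-values.yaml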

Use case 3 — Deployment with additional parameters

As with the previous use case, you can define different values files to use. But sometimes that isn’t enough. A common example is Ingress resources. You can set additional parameters on the command line to solve this issue:

orca deploy env -n orca-01 -c apps.yaml \
--set ingress.host=orca-01.example.com

The --set flag works exactly as it does with a Helm command, and will override ingress.host in any values file.
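
The flags can also be combined. A short sketch, where ingress.tls.enabled is a hypothetical chart value used only for illustration:

orca deploy env -n orca-01 -c apps.yaml \
-f dev-values.yaml \
--set ingress.host=orca-01.example.com,ingress.tls.enabled=false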

Part 2 — Intermediate usage

Use case 4 — Parallel deployment

If speed is your thing, you can deploy all charts in parallel:

orca deploy env -n orca-01 -c apps.yaml \
-p <N>
  • -p stands for --parallel
  • N stands for “how many charts to deploy in parallel”. 0 stands for “all”, and the default is 1 (see the example after this list)
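
As noted above, 0 means “all”, so deploying every chart in apps.yaml at once looks like this:

orca deploy env -n orca-01 -c apps.yaml \
-p 0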

Use case 5 — Deployment with chart version override

Why would you even want to create a dynamic environment? Probably to test a new feature before it is deployed to Production. In practice, that means that you want to copy the reference environment, except for a single service which you want to deploy using a different version. To accomplish this:

orca deploy env -n orca-01 -c apps.yaml \
--override service-a=0.3.0
  • 0.3.0 is the version of service-a that will be deployed and will override the version specified in apps.yaml
  • service-a will be added to the deployment even if it isn’t present in the apps.yaml file
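
The second point also means you can add an entirely new chart to the environment this way. A sketch, where service-d and its version are hypothetical:

orca deploy env -n orca-01 -c apps.yaml \
--override service-d=0.1.0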

You can specify --override multiple times to override multiple chart versions:

orca deploy env -n orca-01 -c apps.yaml \
--override service-a=0.3.0 \
--override service-b=0.1.2

Or:

orca deploy env -n orca-01 -c apps.yaml \
--override service-a=0.3.0,service-b=0.1.2

Use case 6 — Deployment with timeout

If you have a chart that takes a long time to deploy (one that uses hooks, for example), you may want to allow a longer timeout for the deployment:

orca deploy env -n orca-01 -c apps.yaml \
--timeout 600

This allows a timeout of 600 seconds for each chart individually, not for all deployments together.

Part 3 — Advanced usage

Use case 7 — Deployment with additional annotations or labels

If, for any reason, you want additional annotations or labels on the created namespace:

orca deploy env -n orca-01 -c apps.yaml \
--annotations key1=value1
  • You can specify multiple annotations the same way we did with --override (see the example below)
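
For example (the annotation keys and values here are arbitrary):

orca deploy env -n orca-01 -c apps.yaml \
--annotations key1=value1,team=payments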

You can do the same with labels:

orca deploy env -n orca-01 -c apps.yaml \
--labels key1=value1

If you are using Istio, the following example will come in handy:

orca deploy env -n orca-01 -c apps.yaml \
--labels istio-injection=enabled

Use case 8 — Deployment of feature environments

If you want to test a new feature that is implemented across multiple services, you can deploy to the same environment using separate commands. This is useful if each service has its own CI process. The deployment command will be roughly the same for all services.

Service A:

orca deploy env -n orca-01 -c apps.yaml \
--override service-a=0.3.0 \
-x

Service B:

orca deploy env -n orca-01 -c apps.yaml \
--override service-b=0.1.2 \
-x

The logic behind this is: if the namespace already exists, it means that this is an environment used for a feature spanning multiple services. So instead of deploying a complete environment, Orca only deploys the charts that are passed as overrides.

The eventual outcome will be an environment with all stable components, except for (in this case) service-a and service-b, which are deployed with the specified versions (the versions to be tested).

This is where the orca.nuvocares.com/state annotations come into play. If you try to deploy to orca-01 from multiple locations simultaneously, there may be a race condition as to which versions end up deployed to the environment. To prevent the race condition from affecting the expected outcome, a deployment will wait until the state annotation is set to free.

The environment name should be the same across all commands that deploy to the same environment. This is usually managed using matching or similar branch names for the feature development, and a bit of string manipulation (see the sketch below).
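
As a rough sketch of that string manipulation (the BRANCH_NAME variable and the naming scheme are assumptions about your CI system, not something Orca provides):

# derive a namespace-friendly environment name from the feature branch name
ENV_NAME=$(echo "${BRANCH_NAME}" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9-' | cut -c1-40)
orca deploy env -n "${ENV_NAME}" -c apps.yaml \
--override service-a=0.3.0 \
-x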

Use case 9 — Deployment with environment refresh

If your reference environment changes rapidly and you want your dynamic environments to stay up to date with it, while still keeping the ability to test a feature spanning multiple services:

Service A:

orca deploy env -n orca-01 -c apps.yaml \
--override service-a=0.3.0 \
--protected-chart service-a

Service B:

orca deploy env -n orca-01 -c apps.yaml \
--override service-b=0.1.2 \
--protected-chart service-b

The logic behind this is: when deploying, mark the specified protected-chart as, well, protected. When an additional deployment attempts to deploy a refreshed configuration to the same environment, Orca will recognize that there is a protected chart and will not override it.

The eventual outcome will be an environment with all stable components, with updated versions from the reference environment, except for (in this case) service-a and service-b, which are deployed with the specified versions (the versions to be tested).

Use case 10 — Deployment with validation

After the deployment is complete, it might be wise to perform a basic validation of the environment:

orca deploy env -n orca-01 -c apps.yaml \
--validate

The validation (currently) includes the following checks:

  • All Pods are Running (except for ones that are controlled by a Job)
  • All containers are Ready
  • All Endpoints have addresses

Validation will be retried until the environment is validated, or until 30 attempts have been made without success (30 attempts is roughly 15 minutes). Note that --validate is a Boolean flag and the default is false.
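
If you want to poke at a freshly created environment yourself, a rough manual equivalent of some of these checks (plain kubectl, not part of Orca) would be:

# pods that are not Running (completed Job pods will also show up here)
kubectl get pods -n orca-01 --field-selector=status.phase!=Running
# every Service should have Endpoints with addresses
kubectl get endpoints -n orca-01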

Conclusion

In this post, we covered some of the use cases that Orca can help you handle with regard to dynamic environments. Orca has many more commands and use cases; take a look at the project’s GitHub repository. You can also use Orca as a Docker image directly from our Docker Hub account.

If you have any questions, feel free to raise issues. We are more than happy to help!

If you want to get involved, your PRs are more than welcome! Follow the contributing guides in our repositories.

If you are already using Orca, find us on Twitter (NuvoCares, Hagai Barel, Maor Friedman) or on GitHub and tell us about your experience!
