Volatile environments in the development workflow

Jose Manuel Cardona
Softonic Engineering
4 min read · Oct 4, 2019

This post is based on Docker, Jenkins and Kubernetes, so you will need to know the basics of those tools to get the most out of it.

Where do volatile environments come from?

We had the typical integration/staging/production environments, but that setup imposed some limitations:

  • All those environments work with the master branch, so we cannot deploy features that are still in development or not production-ready.
  • Those environments (except production) were closed to the internet.
  • Because those environments track master and the commit rate in our repository is very high, we cannot freeze them for long.

So to fix these points, we created volatile environments.

What is a volatile environment?

It is an environment that works with continuous integration and continuous deployment. When a developer creates a release branch and pushes it to the repository, an environment based on that code is created automatically, and everyone has access to it, including from the internet. Furthermore, the developer can remove that environment just by deleting the release branch, which is why we call them volatile environments.

How are we implementing volatile environments?

To build our volatile environments we use the following technologies:

  • Jenkins coordinates the whole deploy and destroy process.
  • Kubernetes + Helm + Docker allow us to create environments for any product, abstracting away the complexity and technology used in them. Every product is just a recipe.
  • Git stores our code. It holds not only the project source code but also the Dockerfile, Helm charts and Jenkins code, so all the information about a project is self-contained.

Git implementation

The Git requirements are simple: you just need to set up a webhook pointing at your Jenkins server. There are Jenkins plugins that help with this, for example the GitHub or Bitbucket plugins.
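
The webhook is usually configured in the repository settings page, but as an illustration it can also be registered through the provider's API. Here is a minimal sketch using the GitHub hooks API, where ORG/REPO, the token and the Jenkins address are placeholders (/github-webhook/ is the endpoint exposed by the Jenkins GitHub plugin):

# Hedged sketch: register a push webhook pointing at Jenkins.
# ORG/REPO, GITHUB_TOKEN and jenkins.example.com are placeholders.
curl -X POST -H "Authorization: token $GITHUB_TOKEN" \
    https://api.github.com/repos/ORG/REPO/hooks \
    -d '{"name": "web", "active": true, "events": ["push"], "config": {"url": "https://jenkins.example.com/github-webhook/", "content_type": "json"}}'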

Jenkins Implementation

The Jenkins job is configured with a Jenkinsfile instead of through the Jenkins interface. This allows us to keep all our jobs versioned, with all the benefits of code versioning.
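
For context, a minimal skeleton of such a Jenkinsfile could look like the sketch below; the stage names are illustrative, and it assumes a multibranch pipeline job so the webhook triggers a build on every push:

// Minimal sketch, not the actual pipeline: stage names and structure
// are illustrative. Assumes a multibranch pipeline job, so the Git
// webhook triggers a build on every push.
pipeline {
    agent any
    stages {
        stage('Deploy volatile environment') {
            when { branch 'release/*' }
            steps {
                // Extract the task id and deploy (detailed below).
                echo 'deploy'
            }
        }
        stage('Destroy orphaned volatile environments') {
            steps {
                // Remove environments whose release branch is gone
                // (detailed below).
                echo 'destroy'
            }
        }
    }
}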

The main parts of the Jenkinsfile are:

Volatile environment deployment

The release branch format we use is release/task-id-description, for example release/PRJ-123-important-feature. We generate environments based on the task identifier, which in this case is PRJ-123.

So we can get the task id with the following code:

// Find the remote release branch pointing at the current commit and
// strip the "release/" prefix, e.g. release/PRJ-123-important-feature
// becomes prj-123 after lowercasing.
env.TASK_ID = sh (returnStdout: true, script: """
git branch -r --points-at HEAD | grep -o "release/[A-Z]\\{2,5\\}-[0-9]*" | sed "s@release/@@g"
""").trim().toLowerCase()

Once we have the task id of the current release, we can configure and deploy the project using that variable. For example, a domain name for the current task identifier could be defined like this:

// Build the public URL for this environment from the task id.
env.VOLATILE_URL = sprintf("http://%s.project.com", env.TASK_ID)

So you will be able to customize your project to your needs based on the task id.
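
As an illustration, the deployment itself can then derive the Helm release name, namespace and domain from the task id. This is a minimal sketch, assuming a chart in ./helm and a NAMESPACE_PREFIX variable, both of which are hypothetical:

// Hedged sketch: the chart path (./helm) and NAMESPACE_PREFIX are
// assumptions, not the actual project layout. With Helm 2 the
// namespace is created automatically if it does not exist.
sh """
helm upgrade --install ${NAMESPACE_PREFIX}-${env.TASK_ID} ./helm \\
    --namespace ${NAMESPACE_PREFIX}-${env.TASK_ID} \\
    --set domain=${env.TASK_ID}.project.com
"""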

Volatile environment destruction

The job is executed on every push, so we can use those builds to check whether a release branch was deleted and destroy its volatile environment.

In this example, each volatile environment has its own Helm release and namespace, identified by the task id with a common prefix.

The first step is getting the release branches that currently exist. With this step, we know which volatile environments should remain.

task_ids = sh (returnStdout: true, script: """
# Task identifiers for current release branches
git branch -r | grep release | grep -o '[A-Z]\\{2,5\\}\\-[0-9]\\+' | tr '[:upper:]' '[:lower:]'
""").trim()

Now we can list the volatile environments that are currently deployed:

# List deployed Helm releases whose name starts with the volatile prefix.
releases_to_remove=\$(KUBECONFIG=/tmp/${BUILD_TAG} helm list | grep ${NAMESPACE_PREFIX}- | cut -f1 -d" ")

With this information, we can diff both lists and delete the non-matching entries, which are the volatile environments whose release branches no longer exist in the remote Git repository.

sh """#!/bin/bash -ex
# Get volatile environments without branch.
if [ -n "\$task_ids" ]
then
OFS=\$IFS
IFS=\$'\n'
for task_id in \$task_ids
do
releases_to_remove=\$(echo "\$releases_to_remove" | grep -v "\$task_id" | tee)
done
IFS=\$OFS
fi
# If there are volatiles to be removed, it proceeds to remove them.
if [ -n "\$releases_to_remove" ]
then
OFS=\$IFS
IFS=\$'\n'
for namespace in \$releases_to_remove
do
helm delete \$namespace --purge
kubectl delete namespace \$namespace
done
IFS=\$OFS
fi
"""

Now we have a system that cleans up every volatile environment whose release branch no longer exists.

Helm+Docker+Kubernetes implementation

To allow a project to run in volatile environments, we tweak its deployments. Basically, we let projects define their different configurations through environment variables.

The main configurations to be changed are:

  • The project domain, which is generated from the release branch, for example my-project-release-23.domain.com.
  • Specific URLs for external services, so the project can point to services in staging, production or another volatile environment.
  • Specific secrets for volatile environments. This is optional, but highly recommended so you don't mess with other environments' secrets.

With this, each volatile environment gets its own domain and uses exactly the external services we want.
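
As an example of the last point, each environment can get its own throwaway secret at deploy time. This is a minimal sketch, assuming the deploy already created the namespace; the secret name and key are hypothetical:

// Hedged sketch: the secret name (project-secrets) and key are
// illustrative. Each volatile environment gets its own secret so it
// never reuses staging or production credentials.
sh """
kubectl --namespace ${NAMESPACE_PREFIX}-${env.TASK_ID} \\
    create secret generic project-secrets \\
    --from-literal=API_TOKEN=volatile-dummy-token
"""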

Conclusion

Volatile environments don't cover all the features of the staging environment, but they are a new tool that covers our needs. With volatile environments, we have custom environments that are easy to deploy and destroy, run the specific code we want to test, and can be shared with third parties, product owners or anyone else we need, without blocking the main deployment flow.
