Platform as Code with Openshift & Terraform

Fábio José
Nov 24, 2017 · 10 min read

Red Hat Openshift is a Platform-as-a-Service (PaaS) that delivers features such as building and deploying Docker images on top of Kubernetes, along with many other resources, as I wrote in this article. HashiCorp Terraform is a tool for implementing Infrastructure-as-Code with many types of providers.

Notes about Openshift Build and Deployment Configuration

The platform is defined by code; this code is managed in Git, built, analyzed and versioned by the CI tool, and deployed by Terraform. If we use the build or deployment features offered by Openshift, their life cycle is managed outside of our platform-as-code pipeline: we may not be able to reproduce the right state, and we would never know what the right configuration is. These problems get worse when the team does things through the web console.

HashiCorp Terraform is a tool for implementing Infrastructure-as-Code and offers many providers for infrastructure deployment. See them all here: https://www.terraform.io/docs/providers/index.html.

There is no specific provider for Openshift, but there is one for Kubernetes, and that is the one we will use.

In fact, Openshift exposes the entire Kubernetes REST API.
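That API is what the Terraform Kubernetes provider talks to. A minimal sketch of the provider block for the local Origin cluster used in this article (the host, the token variable and the insecure flag are assumptions; in a real setup you would point the provider at your own cluster credentials):

provider "kubernetes" {
  # local Openshift Origin cluster started by oc cluster up
  host     = "https://127.0.0.1:8443"

  # illustrative: token of the logged-in user, e.g. the output of oc whoami -t
  token    = "${var.openshift_token}"

  # the local Origin cluster uses a self-signed certificate
  insecure = true
}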

Get the tools

  • Openshift Client Tools v3.6.0: download
  • Terraform v0.10.7: download
  • Docker v17.06.0-ce

Install Docker

To install Docker properly, follow the instructions for your operating system.

Bring Up Openshift

For this article we use Openshift Origin, which can be brought up with these few steps.

See more details about Origin here.

Configure the insecure registry 172.30.0.0/16

On Debian, edit /lib/systemd/system/docker.service and add this configuration at the end of the line that starts with ExecStart.

--insecure-registry 172.30.0.0/16

Afterwards, the ExecStart line will look like the following.
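The exact flags depend on your Docker installation; on a stock Debian package the line will be close to this (the -H fd:// host flag is an assumption taken from the default unit file):

ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry 172.30.0.0/16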

And execute these two commands:

systemctl daemon-reload
systemctl restart docker

Now you are ready to bring up the Openshift Origin cluster. Doing this is very simple; just type:

oc cluster up

If you get the output below, everything is OK. If you run into issues, read the official documentation.

Starting OpenShift using openshift/origin:v3.6.0 ...
OpenShift server started.
The server is accessible via web console at:
https://127.0.0.1:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin

The Code

To illustrate a real use case, I will deploy the platform for an app written in Node.js using the following resources:

  • namespace: aka Project in Openshift; it will accommodate the resources below.
  • configmap: key/value configuration.
  • secret: passwords, certificates, credentials.
  • replication controller: controls the replicas of Pods.
  • service: the load balancer.
  • route: Openshift-native resource that exposes the service through a URL.

To handle route creation we use an Openshift JSON definition and deploy it using the Openshift CLI.

It is possible to classify the resources created in Openshift by the longevity of their life cycle. I see three classes: long, short and ephemeral.

  • long: long-lived resources will be replaced at some point in the future, but with low frequency, because they cannot be moved or, once created, are only replaced after a long period of use. This is the case for namespace, resource quota and volume.

Volumes can’t be moved from one namespace to another.

  • short: resources with a short life cycle are replaced with some frequency, because new configurations or parameter changes must be applied. This is the case for limit range, route and service.
  • ephemeral: their life cycle is the shortest of all, with a high replacement frequency. This is the case for replication controller, configmap and secret.

Namespace

The highest level of resource segregation in the cluster, commonly known as a Project in the Openshift ecosystem.
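A minimal sketch of how this could look in HCL, using the Terraform 0.10-era syntax the article targets; the resource name, label and environment variable are illustrative:

resource "kubernetes_namespace" "app" {
  metadata {
    name = "my-app-${var.environment}"

    labels {
      app = "my-app"
    }
  }
}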

Persistent Volume

Volumes are persistent storage that can be mounted in containers and used to save data.
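A sketch of a persistent volume definition; the capacity, access mode and host path below are illustrative, and in a real cluster you would more likely use NFS or another networked backend:

resource "kubernetes_persistent_volume" "app_data" {
  metadata {
    name = "my-app-data"
  }

  spec {
    capacity {
      storage = "2Gi"
    }

    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      host_path {
        path = "/data/my-app"
      }
    }
  }
}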

Route

This is how Openshift exposes the app to the outside world.
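Routes are not covered by the Terraform Kubernetes provider, so this one stays as a plain Openshift definition. A minimal sketch of route.json, where the name, namespace and target service are illustrative:

{
  "apiVersion": "v1",
  "kind": "Route",
  "metadata": {
    "name": "my-app",
    "namespace": "my-app-dev"
  },
  "spec": {
    "to": {
      "kind": "Service",
      "name": "my-app"
    }
  }
}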

Service

Used to balance the load between Pods.

Pods are instances that can accommodate one or more Docker containers. As a best practice, use just one container per Pod.
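A sketch of the service in HCL; the selector and ports are illustrative and must match the labels and container port used by the replication controller:

resource "kubernetes_service" "app" {
  metadata {
    name      = "my-app"
    namespace = "${kubernetes_namespace.app.metadata.0.name}"
  }

  spec {
    selector {
      app = "my-app"
    }

    port {
      port        = 8080
      target_port = 8080
    }
  }
}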

Secret

Stores credentials, certificates or other sensitive data, consumed through volume mounts or environment variables.
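A sketch of a secret; the key and the variable holding the value are illustrative, and the provider takes care of the base64 encoding:

resource "kubernetes_secret" "app" {
  metadata {
    name      = "my-app-credentials"
    namespace = "${kubernetes_namespace.app.metadata.0.name}"
  }

  data {
    database_password = "${var.database_password}"
  }

  type = "Opaque"
}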

Configmap

Stands for Configuration Map; its entries can be consumed as environment variables or volume mounts.
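A sketch of a configmap; the keys and values are illustrative:

resource "kubernetes_config_map" "app" {
  metadata {
    name      = "my-app-config"
    namespace = "${kubernetes_namespace.app.metadata.0.name}"
  }

  data {
    log_level = "info"
    api_host  = "api.example.com:443"
  }
}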

Replication Controller

This is our main resource. Replication Controllers are responsible for maintaining the desired number of Pod replicas, creating the Pods from a template.

Never, really, really never launch Pods directly.
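A sketch of the replication controller; the image, labels and ports are illustrative. Note that in the provider versions from that era the template block is the pod specification itself, while newer provider versions nest it under template.spec:

resource "kubernetes_replication_controller" "app" {
  metadata {
    name      = "my-app"
    namespace = "${kubernetes_namespace.app.metadata.0.name}"

    labels {
      app = "my-app"
    }
  }

  spec {
    replicas = 2

    selector {
      app = "my-app"
    }

    template {
      container {
        name  = "my-app"
        image = "node:6.11-alpine"

        port {
          container_port = 8080
        }
      }
    }
  }
}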

The Pipeline

To manage the environments we use a pipeline (to go deeper, read this awesome article written by Kief Morris). In a few words, we have one definition to apply to development, UAT and production, parametrized through the pipeline stages.

The strategy used for environment segregation is a single cluster, separating the development, pre-production and production environments using namespaces. In other words: distinct namespaces (projects) within one cluster. Go deeper here.

CI Tool

To run the pipeline I chose Jenkins with the Pipeline plug-in.

Source Repository

All resources belong to the same repository, with the following layout.

When you have distinct sets of resources per environment, one for development, one for pre-production and another for production, put them in separate subdirectories of the repository.

platform-as-code-example/
  src/
    dev/
      namespace.tf
      configmap.tf
      secret.tf
    pre/
      namespace.tf
      configmap.tf
      secret.tf
    pro/
      namespace.tf
      configmap.tf
      secret.tf

For enterprise environments I suggest putting the namespace, resource quota and limit range files in another repository for governance purposes, and for secrets using a vault tool like HashiCorp Vault. But these are subjects for another article.

Terraform has variable placeholders; use them to parametrize your resources. I use them to fill metadata, annotations and label selectors, because they are very common fields in Kubernetes resources.

platform-as-code-example/
  src/
    variables.tf
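As a sketch, variables.tf could declare the placeholders the pipeline fills in later; the variable names below are illustrative:

variable "environment" {
  description = "Target environment: dev, pre or pro"
  default     = "dev"
}

variable "build_id" {
  description = "CI build identifier, injected by the pipeline"
  default     = ""
}

variable "git_commit" {
  description = "Git commit hash, injected by the pipeline"
  default     = ""
}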

By convention, I suggest adopting a manifest file to publish metadata like the project name and version. Name it package.tf and use the Terraform variable notation.

platform-as-code-example/
  src/
    package.tf
Example of package.tf as JSON syntax.
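A minimal sketch of what such a package.tf could contain, written in Terraform's JSON syntax; the variable names and default values are illustrative and are overwritten by the pipeline:

{
  "variable": {
    "package_name":    { "default": "platform-as-code-example" },
    "package_version": { "default": "0.0.0" }
  }
}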

The Terraform provider for Kubernetes does not have a resource to manage route creation, because routes are native to Openshift. But we need to expose the app to the world, we use routes to do this, and so we process this resource on our own in the pipeline.

platform-as-code-example/
  src/
    route.json

On every deploy, use oc create -f route.json to create the route, because it is not possible to manage the state of this resource unless we implement something similar to Terraform's tfstate.

The final repository layout may look like this.

platform-as-code-example/
  src/
    dev/
      namespace.tf
      configmap.tf
      secret.tf
    pre/
      namespace.tf
      configmap.tf
      secret.tf
    pro/
      namespace.tf
      configmap.tf
      secret.tf
    package.tf
    replication-controller.tf
    route.json
    service.tf
    variables.tf

All code is available here: https://github.com/fabiojose/platform-as-code-example

To implement the pipeline we use the Jenkins Declarative Pipeline, which is amazing and has a lot of tools and plug-ins. To perform the deployment we use Rundeck with some custom shell scripts.

Another important aspect is the workflow used for Git branches. I am a fan of Gitflow (A Successful Git Branching Model) by Vincent Driessen and, because it is a robust branching model, the pipeline is modeled to work side-by-side with Gitflow.

Continuous

In the case shown in this article I implement Continuous Delivery: basically, there is an approval stage waiting for user input before proceeding with the deployment to the production environment. If you are interested in the differences between continuous delivery and continuous deployment, read this post.

Build

In the Build stage we inject metadata into variables.tf (build date, build ID, build name, git commit, git branch, etc.) and create the versioned package to be persisted on the artifact server.

Stages in the Pipeline to Build the platform.
  • Setup: always start the pipeline with a setup stage that configures common values used by all stages: version, ID, build number, package name, etc.
  • Build: responsible for injecting the metadata into variables.tf and the generated version into package.tf, and for creating the tarball package.
  • Publish: publishes the package to the artifact server. Here I am using Sonatype Nexus.

The package is a gzipped tar file, created using the command below.

tar --exclude='./.git' \
    --exclude='./Jenkinsfile' \
    --exclude='*.tar.gz' \
    -czvf package-name.tar.gz \
    ./src

It is necessary to exclude the Git metadata, the Jenkinsfile and any pre-existing tar files. The package-name.tar.gz should be a variable holding the correct package name and version, which can be created in the Setup stage.

To publish the package I just use curl to call the upload API directly.

curl -u 'username:password' \
     --upload-file 'package-name.tar.gz' \
     http://nexus:8081/repository/package-name

The 'username:password' is the credential with access to the upload API; replace it with the real one.

Deploy

This stage is responsible for getting the versioned package, extracting its content, identifying the environment, injecting the deploy metadata into variables.tf, processing route.json and proceeding with the deployment by running terraform apply and oc create, as sketched below.
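A rough sketch of what that deployment step could run; the paths, environment variables and target namespace are illustrative:

# get and unpack the versioned package (URL comes from the pipeline)
curl -o package.tar.gz "$PACKAGE_URL"
tar -xzf package.tar.gz

# deploy the Terraform-managed resources
terraform init src/
terraform apply -var "environment=dev" -var "build_id=$BUILD_ID" src/

# deploy the Openshift-native route
oc create -f src/route.json -n my-app-dev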

At this stage a call is made to the Rundeck API, passing the versioned package, the build number and the job name as parameters, and the execution log is followed to get the success or failure status.

All previous stages plus the Deploy to Development environment.

To perform this operation I developed a shell script that triggers and follows the execution of Rundeck jobs. See it below:

Shell script to trigger and follow Rundeck job executions.

  • DEV Deploy: deploys the platform persisted in the versioned package. All Gitflow branches are allowed to deploy to the development environment: develop, feature-*, release-*, hotfix-*, master, etc.
  • PRE Deploy: only versioned packages built from the release-*, hotfix-* and master branches are deployed to the pre-production environment.
  • PRO Deploy: finally, only sources from the master branch are deployed to production.

The final stage: Deploy to Production.

Approval

As I said, we have an approval stage: at this point the pipeline stops and waits for user input to proceed with or abort the deployment to production. This stage only executes when the sources come from the master branch.

The Approval stage waiting for user input.

Test

Tests are very important to guarantee the integrity of the deployed platform, checking some key points and determining whether everything is as expected. For now I will just show you simple tests; in a future article I will go deeper into infrastructure testing.

There is one test stage: Acceptance Test.

  • Acceptance Test: performs acceptance tests to guarantee that the production deployment is safe. These tests are run in the pre-production environment.

Acceptance Testing

There are many testing tools, but nothing specific for testing a platform deployed on Openshift or Kubernetes. So I chose Cucumber, more precisely cucumber.js.

To work properly, a combination of tools is necessary; they are installed and configured on a Jenkins slave node.

  • cucumber.js 3.10
  • nodejs 6.11
  • Openshift CLI 3.6.0

Basically, I wrote steps that call the Openshift CLI, query the deployed platform and get JSON output, which is parsed and tested using the Cucumber engine.

First I wrote the features using the Gherkin syntax.

The scenario below tests whether all pods managed by a replication controller have the status Running.

A feature written in Gherkin.
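A sketch of what such a feature could look like; the wording of the steps is illustrative and must match the step definitions:

Feature: Replication Controller
  As a platform coder
  I want every pod managed by the replication controller to be running
  So that the deployed platform is healthy

  Scenario: All managed pods are running
    Given the namespace "my-app-pre"
    When I list the pods managed by the replication controller "my-app"
    Then every pod must have the status "Running"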

Second, I wrote the test code: a JavaScript file that implements the steps defined for each scenario.

The example below shows how I implement the tests: calling the Openshift CLI natively, getting the returned JSON, parsing it and making assertions to validate what is defined.

Steps implementation for cucumber.js.
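A sketch of the step definitions; the oc flags used here are standard, but the label selector and file layout are illustrative, and older cucumber.js 3.x releases register steps through defineSupportCode instead of the direct Given/When/Then exports shown below:

// test/features/step_definitions/replication-controller.js
const { Given, When, Then } = require('cucumber');
const { execSync } = require('child_process');
const assert = require('assert');

let namespace;
let pods;

Given('the namespace {string}', function (name) {
  namespace = name;
});

When('I list the pods managed by the replication controller {string}', function (rc) {
  // query the deployed platform through the Openshift CLI and parse the JSON output
  const output = execSync(`oc get pods -n ${namespace} -l app=${rc} -o json`);
  pods = JSON.parse(output).items;
});

Then('every pod must have the status {string}', function (status) {
  assert(pods.length > 0, 'no pods found for the replication controller');
  pods.forEach(pod => assert.equal(pod.status.phase, status));
});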

Third, run the tests! When this command runs and ends successfully, a JSON report is generated and archived in Jenkins.

./node_modules/.bin/cucumber.js --format json:cucumber-report.json

All the test files must belong to the Git repository, with the following layout.

platform-as-code-example/
  test/
    features/
      step_definitions/
        replication-controller.js
      replication-controller.feature

Now let me explain the basic flow and tool interaction used to implement the pipeline: from the platform coder's workstation to the platform running on Openshift.

Flow and tool interaction used in the Pipeline.
  1. The platform coder pushes his or her code and tests to the Git repository.
  2. The Git repository manager triggers the pipeline execution through a webhook. Jenkins then clones the entire repository and builds the versioned package.
  3. Jenkins publishes the versioned package to Sonatype Nexus.
  4. Jenkins triggers the deploy in Rundeck, sending the URL of the versioned package published in Sonatype Nexus.
  5. Rundeck downloads the versioned package from Sonatype Nexus, unpacks it, injects the metadata and proceeds with the deployment. At this moment the environments are managed by the pipeline, as I said at the beginning of “The Pipeline” section.
  6. Rundeck invokes Terraform with the code that must be deployed to the platform.
  7. Terraform initializes the Kubernetes provider and deploys the platform code to Openshift.
  8. Using the Openshift CLI, Rundeck deploys the native resources.
  9. Jenkins, with cucumber.js, runs tests against the deployed platform.
  10. Finally, Jenkins sends the end status of the pipeline execution to Rocket.Chat.

This is the high-level description of the main functionality implemented in the pipeline to deploy the platform-as-code, written in the Terraform Domain Specific Language (DSL).

The Jenkinsfile

Now, see the complete Jenkinsfile, written in the declarative Pipeline syntax, used to build, test and deploy the platform code. I have put comments on strategic lines in case you want to understand the details.

Full code of Jenkinsfile with Pipeline declarative syntax.
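As an outline only (stage names, credentials and script paths are illustrative, and the PRE Deploy, test and notification stages are omitted), the declarative structure looks roughly like this:

pipeline {
  agent any

  stages {
    stage('Setup') {
      steps {
        script {
          // illustrative: compute the versioned package name used by later stages
          env.PACKAGE = "platform-as-code-example-${env.BUILD_NUMBER}.tar.gz"
        }
      }
    }
    stage('Build') {
      steps {
        sh 'tar --exclude=./.git --exclude=./Jenkinsfile --exclude="*.tar.gz" -czvf "$PACKAGE" ./src'
      }
    }
    stage('Publish') {
      steps {
        sh 'curl -u "$NEXUS_CREDENTIALS" --upload-file "$PACKAGE" http://nexus:8081/repository/package-name'
      }
    }
    stage('DEV Deploy') {
      steps {
        // illustrative script name: triggers and follows the Rundeck job
        sh './rundeck-deploy.sh dev "$PACKAGE"'
      }
    }
    stage('Approval') {
      when { branch 'master' }
      steps {
        input 'Deploy to production?'
      }
    }
    stage('PRO Deploy') {
      when { branch 'master' }
      steps {
        sh './rundeck-deploy.sh pro "$PACKAGE"'
      }
    }
  }
}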

As you saw, there are some additional notification stages that send messages to Rocket.Chat, so the team gets details (status, version, branch, etc.) without accessing the Jenkins web console.

The Jenkinsfile is another file that belongs to the Git repository, as below.

platform-as-code-example/
  Jenkinsfile

Final Words

These are my observations and experience from field work with Openshift in corporations.

Now, if you are building something similar to this, or have any comments, questions or improvements, I'd love to hear from you.

Get the code here: https://github.com/fabiojose/platform-as-code-example
