Jenkins as a 12 factor app

http://12factor.net/ contains a great set of design patterns for running cloud-native applications, but the same approach can also be applied to more traditional applications with huge benefits.

In this post I’m going to investigate running the popular Jenkins CI server as a 12 factor app. I aim to show how, using Jenkins and Docker together, we can fulfil the 12 principles defined by http://12factor.net/.

Codebase

One codebase tracked in revision control, many deploys

We store our Jenkins installation in git. By that I mean we don’t store an instance of jenkins.war in version control, but rather a script that references a versioned Docker image containing our preinstalled instance of Jenkins.
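As a sketch, that versioned script might look like the following; the image tag and paths are illustrative, not our exact values:

```shell
#!/bin/sh
# start-jenkins.sh -- the only artefact stored in version control.
# The tag pins the exact preinstalled Jenkins build we deploy.
IMAGE="garethjevans/jenkins:1.642.1"   # illustrative tag
exec docker run -d \
  -p 8080:8080 \
  -p 50000:50000 \
  -v /path/to/jobs/folder:/var/jenkins_home/jobs \
  --name jenkins \
  "$IMAGE"
```

Bumping the tag in this script (and committing it) is what constitutes a new deploy.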

Rather than using the official LTS Docker image, we’ve created our own, as we encountered a few issues with the supported image.

In our custom image /var/jenkins_home/ is not declared as a volume; we found we would get file permission issues otherwise. The reason for this is that the Jenkins process runs as the “jenkins” user, but any volumes inside a Docker container are owned by root. This isn’t normally an issue, but it can cause problems with certain files such as SSH keys.

Config changes are not persisted between restarts. This sounds like something you would want to have, but in our use case for Jenkins we want to ensure that all configuration files are maintained within source control, so we can keep track of their history and revert to a known state if required.

The Jenkins image can be run locally by using:

docker run \
-p 8080:8080 \
-p 50000:50000 \
-d \
-v /path/to/jobs/folder:/var/jenkins_home/jobs \
--name jenkins \
garethjevans/jenkins:latest

You should be able to view the Jenkins console by navigating to http://localhost:8080/

Dependencies

Explicitly declare and isolate dependencies

The Docker build process helps us out here. Jenkins doesn’t have too many dependencies, but it does support a large number of plugins. Jenkins provides a helper script to preinstall a series of plugins as part of the Docker build; in the listing below you’ll notice two files:

  • plugin.sh — plugin installation script, slightly modified from the standard Jenkins one
  • plugin.txt — a list of plugins & versions

When the plugin.sh script is run, plugins are downloaded from “Jenkins Central” and added to the $JENKINS_HOME/plugins directory. This is a really handy way of downloading a set of plugins for Jenkins. One advantage/disadvantage of this (depending on your viewpoint) is that plugin dependencies also need to be specified at this point, or the plugins may fail to start.
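For illustration, plugin.txt is just a plain list of plugin IDs and versions, one per line. These entries are examples, not our exact set; note that dependencies such as credentials must be listed explicitly:

```
credentials:1.24
git:2.4.2
swarm:2.0
workflow-aggregator:1.14
```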

Config

Store config in the environment

Jenkins contains a $JENKINS_HOME/config directory where it stores all of its configuration files. The important ones (those we need to persist) are stored in source control and copied into this directory as part of the Docker build. This works for most types of configuration, but it has a few issues with any configuration that relies on secrets. For those we need a different approach.
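As a sketch of the build step that does the copying (the base image and paths are assumptions, not our exact Dockerfile):

```dockerfile
# Illustrative: bake the version-controlled config into the image so
# that every container starts from a known state.
FROM jenkins:latest

COPY config/ /var/jenkins_home/config/
COPY init.groovy.d/ /var/jenkins_home/init.groovy.d/
```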

Groovy to the rescue!

Jenkins supports the use of Groovy initialisation scripts: any Groovy script in the directory $JENKINS_HOME/init.groovy.d/ will be run when Jenkins starts up. Inside these scripts you have full access to the Jenkins API to programmatically configure the environment.

The following shows an example groovy script to configure an LDAP server for authentication:

import hudson.security.LDAPSecurityRealm
import hudson.util.Secret
import jenkins.model.Jenkins
import jenkins.security.plugins.ldap.FromUserRecordLDAPGroupMembershipStrategy

def jenkins = Jenkins.getInstance()
jenkins.securityRealm = new LDAPSecurityRealm(
    "ldap://ldap1.example.com:389 ldap://ldap2.example.com:389",
    "DC=users,DC=example,DC=com", "", "(userPrincipalName={0})", "",
    "", new FromUserRecordLDAPGroupMembershipStrategy(),
    "CN=ldap-bind-user,DC=users,DC=example,DC=com",
    Secret.fromString("my-secret-password"),
    false, false, null, null, "cn", "mail", null, null)
jenkins.save()

The important bit in this script is Secret.fromString("my-secret-password"). Whilst this script contains the password in plain text, it could easily be loaded from an environment variable passed to the Docker container.
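For example, that line could read the password from the container’s environment instead; the variable name here is illustrative, and would be supplied via docker run -e:

```groovy
// Read the bind password from the environment rather than hard-coding
// it; LDAP_BIND_PASSWORD is an illustrative variable name.
Secret.fromString(System.getenv("LDAP_BIND_PASSWORD"))
```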

Backing Services

Treat backing services as attached resources

Jenkins is a fairly simple application: it doesn’t require a database, and for most use cases its only real requirement is storage. Docker makes it easy to attach the required storage by mounting a volume from outside the container, e.g.

-v /path/to/jobs/folder:/var/jenkins_home/jobs

Job data can then be backed up using an external process, or stored on a SAN; whatever supports your use case.
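As a sketch of such an external backup process (the paths are illustrative), a cron-driven script could simply archive the mounted jobs directory:

```shell
# Archive the jobs directory that is mounted into the container.
# JOBS_DIR and the demo data below are illustrative; point JOBS_DIR at
# the real mount in production.
JOBS_DIR="${JOBS_DIR:-./jobs-demo}"
mkdir -p "$JOBS_DIR/example-job"   # makes this demo self-contained
BACKUP="jenkins-jobs-$(date +%Y%m%d).tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$JOBS_DIR")" "$(basename "$JOBS_DIR")"
echo "wrote $BACKUP"
```

Because the data lives outside the container, this runs without touching Jenkins itself.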

Build, Release, Run

Strictly separate build and run stages

The Docker build process helps us separate these into distinct stages, and using versioned Docker images also assists with the release process.

Processes

Execute the app as one or more stateless processes

By mounting all of the state stored by Jenkins outside of the Docker container, it can be allowed to fail and be restarted whenever needed. Upgrades become painless: simply start an updated image, mounting the same data directories.

Port Binding

Export services via port binding

Docker allows us to control which ports are exposed by the application. In this instance we expose two: 8080 for the web interface and 50000 for slave nodes to connect to.

Concurrency

Scale out via the process model

The heavy lifting in Jenkins is meant to be done by the slaves, not the master, and we can use the same patterns to build out slave instances. An example of a Docker-based Jenkins slave configured to connect using the Swarm plugin can be found here. Using this pattern, as many slave nodes as required can be started and automatically attached to the master. They can be tagged using startup parameters to control which jobs they are allowed to run.
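A slave started this way might look like the following; the image name, master URL, credentials and label are all illustrative:

```shell
# Start a swarm slave that self-registers with the master and is
# restricted to jobs tagged "docker-build".
docker run -d \
  --name jenkins-slave-1 \
  csanchez/jenkins-swarm-slave \
  -master http://jenkins.example.com:8080 \
  -username swarm -password "$SWARM_PASSWORD" \
  -labels docker-build \
  -executors 2
```

Scaling out is then just a matter of running more containers with the appropriate labels.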

Disposability

Maximize robustness with fast startup and graceful shutdown

By paying close attention to the startup times for both the master and the slaves, we can keep startup down to around 30 seconds. Although Jenkins hasn’t loaded all of its config by this time, it’s able to serve requests and presents the user with a “Waiting for Jenkins to load” screen.

It’s possible to use a blue-green deployment process to start a new master, re-attach the slaves and allow it to process jobs, but I’m going to save that for a future post.

Dev/Prod Parity

Keep development, staging, and production as similar as possible

Using the same Docker image, we can mount different config files and initialisation scripts into the container, giving us dev & prod instances that run from identical bits.
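As a sketch, the only difference between environments is what gets mounted; the paths and names below are illustrative:

```shell
# Same image, different environment-specific init scripts mounted in.
docker run -d --name jenkins-dev \
  -v /etc/jenkins/dev/init.groovy.d:/var/jenkins_home/init.groovy.d \
  garethjevans/jenkins:latest

docker run -d --name jenkins-prod \
  -v /etc/jenkins/prod/init.groovy.d:/var/jenkins_home/init.groovy.d \
  garethjevans/jenkins:latest
```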

Logs

Treat logs as event streams

Using Docker’s log-driver support we can forward all container logs to a centralised logging system, for example by modifying the startup script to use either the syslog or splunk log driver, e.g.

docker run --log-driver=splunk ...

Admin Processes

Run admin/management tasks as one-off processes

Jenkins supports a rich CLI and REST API that allow administrative tasks to be built into scripts and run as individual processes, either on the host or in short-lived Docker containers.
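For example, a one-off admin task can be as simple as a curl call to the REST API; the host, job name and credentials here are illustrative:

```shell
# Trigger a parameterless build as a one-off process; this could
# equally be wrapped in a short-lived container.
curl -X POST \
  --user admin:API_TOKEN \
  http://jenkins.example.com:8080/job/cleanup-workspaces/build
```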

Conclusion

It’s not just Jenkins that runs this way: we’re successfully running a full dev stack (Gerrit, Nexus, SonarQube and others) using the processes & methods described here. This may not suit all use cases, but we’re certainly having a lot of success with it, running around 10K jobs per day and supporting different build systems and developers from 7 countries.

This process allows us to focus on MTTR (mean time to recovery) in case of failure, rather than running active/passive deployments for disaster recovery.