CloudBees + Docker

Published in Levvel · Jan 6, 2017

by Jay Johnson, Senior Consultant at Levvel

At Levvel, we are always looking to improve how we build, test and deploy. It’s why we love DevOps. At its core, DevOps is a balancing act between cost and effort. Today, we’ll share an approach that is easy and cost-effective.

A Jenkins Groovy Docker Container Pipeline

Today’s post is about setting up your own DevOps artifact pipeline for Docker containers running on CloudBees Jenkins Enterprise (CJE). For those new to the CloudBees Pipeline-as-code, it lets you define your own custom build lifecycle and deployment actions. With just a few lines of Groovy code, you can be building and pushing your own tested Docker containers to your container registry.

Keeping Things Under Your Control

As a developer, I want a Docker container pipeline for iterating on my Django + nginx + Slack + Sphinx docs stack that deploys using Docker Compose. This repository holds two containers (Django and nginx) and I want to have a build tool that can: build both containers, run some container tests, and then push them to a container registry using source code that I can manage. Eventually, I’ll want to validate that the Docker Compose deployment passes integration tests before pushing to my registry, but that is out of scope for today’s post.

While there are many tools available for building containers, the advantage of CJE is that we control where the build environment runs and can scale it with CloudBees Jenkins Platform (CJP). This is a win for organizations looking to keep their builds on-premises or within their own infrastructure. The trade-offs between the free Jenkins and CloudBees Jenkins Enterprise are discussed below.

Getting Started

This section provides a detailed tutorial on how to set up a Docker Container Pipeline with CJE. If you’re just interested in the benefits of CJE and how it differs from the community version, skip ahead.

Helpful Links

If you are new to Jenkins Pipeline or Groovy, these are some helpful primers:

Building the Docker Container Pipeline with CloudBees Jenkins Enterprise

We will be using the CloudBees Jenkins Enterprise Docker Repository for this post and following the CloudBees Docker Workflow.

  1. Start the CloudBees Jenkins Enterprise Docker container.
$ docker run -p 8080:8080 -p 50000:50000 -d --name cje -v /var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin/docker:/usr/local/bin/docker cloudbees/jenkins-enterprise

2. Open the Jenkins Credentials page.

Log in to the CJE instance by browsing to http://localhost:8080/ and then register for a free trial. Once logged in, click on Credentials.

3. Setup your Docker Hub (or private registry) credentials.

Here you can add your Docker credentials to CJE. I am currently housing my containers in Docker Hub (the Django image and the nginx image), so I added my login credentials and named the entry jayjohnson-DockerHub. This becomes the logical name that build jobs and items will use to reference my user credentials in the future.

4. Click OK, then click the Manage Jenkins option to install Jenkins plugins. Once it is open, click on Manage Plugins.

5. Update the CloudBees Docker Pipeline plugin.

Update the CloudBees Docker Pipeline plugin; restart Jenkins after it finishes installing.

6. Install the Pipeline Utility Steps plugin from the Available tab and restart Jenkins again.

7. Create a New Pipeline Item.

Once Jenkins restarts, click on New Item, enter a name, and select Pipeline as the item type.

Click the OK button once you are done.

8. Paste the Jenkinsfile Contents into the Pipeline Groovy Section.

Per CloudBees best practices, the Jenkinsfile is stored in the repository. In the future, we can include this Jenkinsfile automatically with the Pipeline script from SCM option. For now, just copy the highlighted lines in this link into the Pipeline Definition text box.

9. Start the Build.

Now that our new Groovy Pipeline is ready to build and push the containers, we can kick off the Pipeline. Click Build Now to initiate the Pipeline job. This starts the Django and nginx container builds that auto-push to the Docker Hub registry. (Note: The build will fail since I am not distributing my Docker Hub credentials for this demonstration.)

It may take a few minutes to pull the base images and build the containers from scratch.

10. Verify the images and tags were pushed to the registry.

Open the container registry and verify the testing tag was pushed. For this demo, I pushed these containers to Docker Hub.

How the Groovy Pipeline Works

The repository uses this Jenkinsfile in the root of the repository. For now, it is using a single node to run the Groovy code, which means it will only run on one node/slave. Below is a breakdown of what the Groovy code is doing. Each major section is broken up by a stage declaration that makes it easier to debug when looking at the Stage View screen. In this sample code, each of the green boxes seen in the image from Step 9 above is part of a defined Pipeline stage declaration.
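As a high-level sketch (abbreviated here for illustration, with the build, test, and push details elided), the Jenkinsfile follows this shape:

```groovy
// Abbreviated sketch of the Jenkinsfile structure (not the full file)
node {
    stage 'Checking out GitHub Repo'
    git url: 'https://github.com/jay-johnson/docker-django-nginx-slack-sphinx.git'

    stage 'Building Django Container for Docker Hub'
    docker.withRegistry("${registry_url}", "${docker_creds_id}") {
        // build the Django image, run the container tests, then push
    }

    stage 'Building nginx Container for Docker Hub'
    docker.withRegistry("${registry_url}", "${docker_creds_id}") {
        // build the nginx image, then push
    }
}
```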

  1. Set up auth credentials

This section allows you to configure the registry, login, and default build tag.

registry_url = "https://index.docker.io/v1/" // Docker Hub
docker_creds_id = "jayjohnson-DockerHub" // name of the Jenkins Credentials ID
build_tag = "testing" // default tag to push to the registry

2. Define the build repo

This section defines the GitHub repo to target during the build.

stage 'Checking out GitHub Repo'
git url: 'https://github.com/jay-johnson/docker-django-nginx-slack-sphinx.git'

3. Target the Docker Registry for the Django container

This section uses the CloudBees Docker Pipeline plugin to target a container registry (Docker Hub) and the ID for my Jenkins Credentials for the registry (jayjohnson-DockerHub). Learn more about Injecting Secrets with Jenkins Builds.

stage 'Building Django Container for Docker Hub'
docker.withRegistry("${registry_url}", "${docker_creds_id}") {

4. Build the Django Docker container

Assign the container maintainer and name, read in the testing Docker env file, and kick off the build. A nice feature of the CloudBees Docker Pipeline is that it allows the Dockerfile to live in a repository subdirectory, which is why the django directory is passed as an argument to the docker.build() method. View the Django subdirectory, which holds the Django on CentOS 7 Dockerfile.

// Set up the container to build 
maintainer_name = "jayjohnson"
container_name = "django-slack-sphinx"
docker_env_file = "testing.env"

// Read testing environment file:
docker_env_values = readProperties file: "./${docker_env_file}"

// Assign variables based off the env file
default_root_volume = "${docker_env_values.ENV_DEFAULT_ROOT_VOLUME}"
doc_source_dir = "${docker_env_values.ENV_DOC_SOURCE_DIR}"
doc_output_dir = "${docker_env_values.ENV_DOC_OUTPUT_DIR}"
static_output_dir = "${docker_env_values.ENV_STATIC_OUTPUT_DIR}"
media_dir = "${docker_env_values.ENV_MEDIA_DIR}"

stage "Building"
echo "Building Django with docker.build(${maintainer_name}/${container_name}:${build_tag})"
container = docker.build("${maintainer_name}/${container_name}:${build_tag}", 'django')

5. Start the Django container

The code below starts the Django server using the testing Docker env file, the same file that testing-docker-compose.yml targets for integration testing with the testing tag. Using the environment file keeps our Groovy code a little cleaner than the original version, and we can run this Django server in DEV mode for curl testing later in our container validation steps. Being able to pass environment variables into containers via the withRun method is great for validating that the container works as expected with Docker Compose.

Previously, we talked about how much easier Docker development is when using Docker Compose to drive the container’s configuration. In this repository, I am gluing a python Django server together with an nginx server through the use of a shared volume exposed from the host. This is handy for serving static assets (css, javascript, images, etc.) with a proven load balancer like nginx. It also allows for defining environment specific resources, such as where to post exceptions into Slack, without changing any code or the container. The values are also defined in the docker-compose.yml and testing-docker-compose.yml (which uses testing.env) files.
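As a rough illustration of that wiring (the volume paths here are hypothetical placeholders, not copied from the repository), a testing composition looks something like:

```yaml
# Hypothetical sketch: Django and nginx glued together through a
# shared static-assets volume and a testing env file
django:
  image: jayjohnson/django-slack-sphinx:testing
  env_file: testing.env
  volumes:
    - /opt/web/static:/opt/web/static

nginx:
  image: jayjohnson/django-nginx:testing
  links:
    - django
  volumes:
    - /opt/web/static:/opt/web/static
  ports:
    - "80:80"
    - "443:443"
```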

// Run the container with the env file, mounted volumes and the ports:
docker.image("${maintainer_name}/${container_name}:${build_tag}").withRun("--name=${container_name} --env-file ${docker_env_file} -e ENV_SERVER_MODE=DEV -v ${default_root_volume}:${default_root_volume} -v ${doc_source_dir}:${doc_source_dir} -v ${doc_output_dir}:${doc_output_dir} -v ${static_output_dir}:${static_output_dir} -v ${media_dir}:${media_dir} -p 82:80 -p 444:443") { c ->

6. Wait for the Django container to start

Once the container starts running, we need to wait for it to initialize the Django server process. This can take a few seconds, which is why the waitUntil method is very helpful. This code block will continue retrying itself using an exponential backoff retry timer until it returns true. By using this block, we can ensure that the container is running and that the internal Django server is ready to respond to HTTP requests.

// wait for the django server to be ready for testing
// the 'waitUntil' block needs to return true to stop waiting
// in the future this will be handy to specify waiting for a max interval:
// https://issues.jenkins-ci.org/browse/JENKINS-29037
waitUntil {
sh "docker exec -t ${container_name} netstat -apn | grep 80 | grep LISTEN | wc -l | tr -d '\\n' > /tmp/wait_results"
wait_results = readFile '/tmp/wait_results'

echo "Wait Results(${wait_results})"
if ("${wait_results}" == "1")
{
echo "Django is listening on port 80"
sh "rm -f /tmp/wait_results"
return true
}
else
{
echo "Django is not listening on port 80 yet"
return false
}
} // end of waitUntil
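Until JENKINS-29037 lands, one way to cap the wait is to wrap the block in the standard timeout step (a sketch, not part of the original Jenkinsfile):

```groovy
// Sketch: abort the build if the container never becomes ready,
// instead of polling forever
timeout(time: 5, unit: 'MINUTES') {
    waitUntil {
        sh "docker exec -t ${container_name} netstat -apn | grep 80 | grep LISTEN | wc -l | tr -d '\\n' > /tmp/wait_results"
        wait_results = readFile '/tmp/wait_results'
        return wait_results == "1"
    }
}
```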

7. Begin Django container tests

For simplicity, I limited this example to three container tests. By default each test expects a 0 to be returned, but each test can control its own expected value.

// this pipeline is using 3 tests 
// by setting it to more than 3 you can test the error handling and see the pipeline Stage View error message
MAX_TESTS = 3
for (test_num = 0; test_num < MAX_TESTS; test_num++) {

echo "Running Test(${test_num})"

8. Test the Django container shows the home page

From outside the container, use docker exec to issue a curl command. This returns the contents of the home page; the chain then counts the number of “Welcome” occurrences and trims any newline characters before writing the output to a temporary file.

if (test_num == 0 ) 
{
// Test we can download the home page from the running django docker container
sh "docker exec -t ${container_name} curl -s http://localhost/home/ | grep Welcome | wc -l | tr -d '\\n' > /tmp/test_results"
expected_results = 1
}
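To see what that shell chain produces without a running container, here is the same grep | wc | tr pattern fed a canned page body instead of the live curl output (purely illustrative):

```shell
# Simulate the home-page check against canned output instead of `docker exec`
page_body='<html><body><h1>Welcome</h1></body></html>'
result=$(printf '%s' "$page_body" | grep Welcome | wc -l | tr -d '\n')
echo "$result"
```

A count of 1 means the page rendered; anything else would fail the comparison in the validation stage.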

9. Test the Django container is configured to listen on Port 80

This is a redundant test of the curl command above, but I wanted to show how to utilize docker inspect from a Groovy script. This is helpful when you need to verify that Docker Compose deployed the composition correctly before promoting the container to production. Like the test above, this counts the occurrences of port 80 being open externally and trims the results to a temporary file.

else if (test_num == 1)
{
// Test that port 80 is exposed
echo "Exposed Docker Ports:"
sh "docker inspect --format '{{ (.NetworkSettings.Ports) }}' ${container_name}"
sh "docker inspect --format '{{ (.NetworkSettings.Ports) }}' ${container_name} | grep map | grep '80/tcp:' | wc -l | tr -d '\\n' > /tmp/test_results"
expected_results = 1
}

10. Test Django does not have an ESTABLISHED connection on Port 80

This is another demonstration of how to log into the container and verify the internal Django process is running correctly. Since nginx is not deployed at this point and there are no incoming connections during this test, Django should be in a LISTEN state without any ESTABLISHED connections on port 80. In the future, an integration test could verify that the deployed composition successfully established connectivity from nginx to the Django server during normal operation. As above, this counts occurrences of ESTABLISHED connections and writes the trimmed output to a temporary file.

else if (test_num == 2)
{
// Test there's nothing established on the port since nginx is not running:
sh "docker exec -t ${container_name} netstat -apn | grep 80 | grep ESTABLISHED | wc -l | tr -d '\\n' > /tmp/test_results"
expected_results = 0
}

11. Exit and log an error for any unsupported tests

This block stops the Pipeline from running tests immediately when it executes. If you increase MAX_TESTS to something more than 3, the code auto-exits with an error message.

else
{
err_msg = "Missing Test(${test_num})"
echo "ERROR: ${err_msg}"
currentBuild.result = 'FAILURE'
error "Failed to finish container testing with Message(${err_msg})"
}

12. Check that the test results match the expected results

Each test runs this block and fails testing if the test results do not match the expected results. Make sure to remove the temporary file from the host afterwards to prevent test results from overlapping by accident.

// Now validate the results match the expected results
stage "Test(${test_num}) - Validate Results"
test_results = readFile '/tmp/test_results'
echo "Test(${test_num}) Results($test_results) == Expected(${expected_results})"
sh "if [ \"${test_results}\" != \"${expected_results}\" ]; then echo \" --------------------- Test(${test_num}) Failed--------------------\"; echo \" - Test(${test_num}) Failed\"; echo \" - Test(${test_num}) Failed\";exit 1; else echo \" - Test(${test_num}) Passed\"; exit 0; fi"
echo "Done Running Test(${test_num})"

// cleanup after the test run
sh "rm -f /tmp/test_results"
currentBuild.result = 'SUCCESS'
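The same comparison can be done in Groovy without shelling out, which avoids the quoting gymnastics (a sketch, assuming the test_results and expected_results variables defined above):

```groovy
// Sketch: validate the results in Groovy instead of a shell if/else
test_results = readFile('/tmp/test_results').trim()
if (test_results != "${expected_results}") {
    currentBuild.result = 'FAILURE'
    error "Test(${test_num}) Failed: Results(${test_results}) != Expected(${expected_results})"
}
echo "Test(${test_num}) Passed"
```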

13. Stop testing for an exception

This code allows the Jenkins slave/executor to stop running tests if there is an exception (which is helpful when debugging complex Pipeline tasks).

} catch (Exception err) {
err_msg = "Test had Exception(${err})"
currentBuild.result = 'FAILURE'
error "FAILED - Stopping build for Error(${err_msg})"
}

14. Push the Django container to Docker Hub

If testing passes, this code will push the container image to the registry (Docker Hub) under the testing tag.

stage "Pushing"
container.push()

currentBuild.result = 'SUCCESS'
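The push step also accepts a tag name, so once tests pass the same image could be promoted under an additional tag (a sketch; the latest promotion is not part of the original Jenkinsfile):

```groovy
// Sketch: push the tested image, then promote it under a second tag
container.push()          // pushes the build_tag ("testing")
container.push('latest')  // promotes the same image as latest
```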

15. Test and push the nginx container to Docker Hub

This section builds the nginx container from the nginx subdirectory and then pushes the built image to the registry (Docker Hub) under the testing tag. It is a lightweight example for building a simple docker container pipeline.

stage 'Building nginx Container for Docker Hub'
docker.withRegistry("${registry_url}", "${docker_creds_id}") {

// Set up the container to build
maintainer_name = "jayjohnson"
container_name = "django-nginx"

stage "Building Container"
echo "Building nginx with docker.build(${maintainer_name}/${container_name}:${build_tag})"
container = docker.build("${maintainer_name}/${container_name}:${build_tag}", 'nginx')

// add more tests

stage "Pushing"
container.push()

currentBuild.result = 'SUCCESS'
}

Enhancing the Pipeline

I want to parallelize this build to run faster. To do this, I will perform the Django build step and the nginx build step using the native parallel support to put the build tasks onto two slaves or multiple executors. This speeds up the builds and at the same time increases the build’s complexity for the upcoming Docker Compose integration tests I want to run. The build currently uses the slave host’s docker engine to perform the build which, if parallelized, could end up running on two separate hosts in an environment running more than one Jenkins slave/executor.
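A rough sketch of that parallel layout (the branch names and repeated checkout are illustrative):

```groovy
// Sketch: build the two containers on separate executors
parallel(
    "django" : {
        node {
            // each node needs its own checkout before building
            git url: 'https://github.com/jay-johnson/docker-django-nginx-slack-sphinx.git'
            docker.withRegistry("${registry_url}", "${docker_creds_id}") {
                django = docker.build("jayjohnson/django-slack-sphinx:${build_tag}", 'django')
                // run the container tests here before pushing
                django.push()
            }
        }
    },
    "nginx" : {
        node {
            git url: 'https://github.com/jay-johnson/docker-django-nginx-slack-sphinx.git'
            docker.withRegistry("${registry_url}", "${docker_creds_id}") {
                nginx = docker.build("jayjohnson/django-nginx:${build_tag}", 'nginx')
                nginx.push()
            }
        }
    }
)
```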

Taking it to the Next Levvel

This fun little demo demonstrates Jenkins Pipeline’s power. However, while it works in the simple one-developer, one-branch scenario, it breaks down when a team runs multiple concurrent builds under a git-flow, multiple-pull-request development model. There are numerous potential bottlenecks around building and running containers on one Jenkins host. When I want to scale this to multiple slaves, I first have to look at where my containers are housed before trying to deploy with Docker Compose to run my integration tests. Making Docker container builds work seamlessly across large organizations is a much larger task, and some organizations will need help preventing the bottlenecks that cause teams to lose time waiting on builds and deployments.

A Solution that Scales

Levvel has partnered with CloudBees to help our clients adopt DevOps practices that save them time and money. They are a great option to consider when looking to prevent Jenkins development and deployment bottlenecks.

Why Should I Pay for Something that is Free?

A fair question, so let’s look at the DevOps landscape and how we got here.

Jenkins has massive adoption spanning multiple verticals and solves numerous DevOps use cases. Before Pipelines, Jenkins was already a successful continuous integration tool and a friend to many, many developers. With Pipelines, Jenkins is positioned to tackle continuous deployments at scale. When set up correctly, Pipelines are a powerful tool that will bring even more users to Jenkins. Being able to handle artifact deployments and post-build actions empowers organizations with the tools to get features in front of customers faster.

Inevitably, it is this simple and powerful combination that can lead to issues. There are inherent complexities when running scaled-out, multiple-Jenkins-master environments. To prevent these kinds of issues and overhead, CloudBees launched the CloudBees Jenkins Platform (CJP), which offers two variants: Enterprise Edition and Private SaaS Edition (private cloud deployments on AWS or OpenStack with Jenkins running on Mesos for high availability). By starting with CJP, your organization gets all of this, plus access to the best-in-the-Jenkins-business gurus for supporting your organization’s continuous integration and continuous deployment capabilities with proven tools such as the Jenkins Pipeline.

Where CloudBees provides the enterprise with a comprehensive DevOps management platform, Levvel helps organizations:

  • Migrate, setup and install CloudBees Jenkins Platform (Enterprise Edition or Private SaaS);
  • Deploy and scale large multiple Jenkins master environments that are managed with CloudBees Jenkins Operation Center;
  • Customize their Kibana Analytics dashboard;
  • Lock down their Jenkins environments.

What Comes Next

I hope to hear your thoughts about this DevOps artifact pipeline for docker containers running on CloudBees Jenkins Enterprise. I have been looking for a simple way to control where my Docker builds were running in-house. The code within this blog post is already building my containers, and I hope it helps you get started with your Jenkins Pipeline.

Going forward, I plan to add Docker Compose integration tests to validate that my deployed composition is ready for primetime. Eventually, I could see the CloudBees Docker Pipeline supporting this natively out of the box. Docker Compose is already an established part of the Docker ecosystem and is super helpful when organizing and deploying complex topologies. It just makes good sense to verify that the Compose deployment works prior to opening it up to production traffic.
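As a sketch of where that could go (the compose file name comes from the repository; the smoke test itself is illustrative):

```groovy
stage 'Compose Integration Tests'
// Sketch: stand up the full composition, smoke test it, then tear it down
sh "docker-compose -f testing-docker-compose.yml up -d"
try {
    // hit nginx from the host and confirm the home page renders
    sh "curl -s http://localhost/home/ | grep Welcome | wc -l | tr -d '\\n' > /tmp/compose_results"
    if (readFile('/tmp/compose_results').trim() != "1") {
        error "Compose deployment failed the home page smoke test"
    }
} finally {
    sh "docker-compose -f testing-docker-compose.yml down"
}
```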

If your organization would like assistance building your DevOps artifact pipeline using Jenkins, adopting CJP and CJE, Docker development or production strategy, please reach out to us.

Extra Reading

How to stop, clean up, and troubleshoot your environment.

Stopping the CloudBees Jenkins Enterprise Container

Stop the container with:

$ docker stop cje
cje
$

Remove the container (which will delete your Pipeline) with:

$ docker rm cje

Troubleshooting

From the command line, I had to set the permissions for the Jenkins container to access the docker socket. The quick (but overly permissive) fix below opens the socket to everyone; adding the Jenkins user to the host’s docker group is a safer long-term option:

$ sudo chmod 777 /var/run/docker.sock

If you want to build the CloudBees Jenkins Enterprise Docker container:

$ docker build -t jenkins-enterprise .

If you need to modify the docker engine’s systemd configuration so it uses the correct services, see:

https://docs.docker.com/engine/admin/systemd/

I run a local docker swarm and had removed the docker socket at /var/run/docker.sock from the systemd service file, so I had to re-enable it there. Then restart the docker engine:

sudo service docker restart

Looking at the previous JenkinsCI work for help with Groovy I found this example helpful for getting the Pipeline working: https://github.com/jenkinsci/docker-workflow-plugin/blob/master/demo/repo/flow.groovy

For more content from Levvel, visit levvel.io/blog.
