CI/CD With Jenkins, Docker and Fastlane — Part 2 — Jenkins and Docker

Osein · Published in inventiv · 12 min read · Jul 1, 2019

This is part of a multi-part series about CI/CD flow. This post focuses on what Jenkins is and how to install Jenkins with Docker. An introduction to Docker and a Docker hello world are included before the setup process.

2) Jenkins

Jenkins is an extensible, open source continuous integration server. It builds and tests your software continuously and monitors the execution and status of remote jobs, making it easier for team members and users to regularly obtain the latest stable code and deploy to pre-prod or prod.

2.1) Jenkins Pipeline

Jenkins Pipeline is a set of plugins for Jenkins to define CI/CD flows. Jenkins Pipeline definition can be written into a file named Jenkinsfile or put into Jenkins configurations.

There is a Snippet Generator (http://jenkins-url:8080/pipeline-syntax) and a Declarative Directive Generator (http://jenkins-url:8080/directive-generator) for pipelines.

There are two types of Pipeline definitions: declarative and scripted.

2.1.1) Scripted Pipeline

Scripted pipelines are specified in a language that is almost the same as Groovy. Before Jenkins 2.0 you had to install pipeline plugins and enable scripted pipelines, but this feature ships out of the box with version 2.0.

Scripted pipelines usually start with the node keyword. A node is a worker in Jenkins, and everything inside a node block runs on the selected worker. This keyword is not mandatory, but it is good practice to include it.

The second keyword is stage. Each stage groups a piece of functionality such as building, testing, or deploying.

Basic scripted pipeline structure looks like this:
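A minimal sketch of that structure (the stage names and echo steps are illustrative):

```groovy
// Scripted pipeline sketch — everything runs on the selected worker node.
node {
    stage('Build') {
        echo 'Building..'
    }
    stage('Test') {
        echo 'Testing..'
    }
    stage('Deploy') {
        echo 'Deploying..'
    }
}
```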

2.1.2) Declarative Pipeline

Declarative pipelines simplify pipeline declaration with a friendlier syntax and actions. In our opinion, declarative pipelines are meant to be stored in a source control system and parsed on each run. We use declarative pipelines and store them in our source control system. With a Jenkinsfile we can configure almost everything in Jenkins.

Declarative pipelines start with the pipeline keyword, which, as the name suggests, defines a workflow for Jenkins. You can specify which agent to run on inside the pipeline block.

The second keyword is stages, and it defines which stages to run. It contains stage blocks like those in the scripted pipeline.

Basic declarative pipeline looks like this:
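A minimal sketch (the agent selection and stage names are illustrative):

```groovy
// Declarative pipeline sketch — the pipeline block wraps the whole workflow.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
    }
}
```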

2.2) Installing Jenkins with Docker

To install Jenkins, we used Docker containers. Our master Jenkins server sits inside the container and connects to our build machine over SSH. The reason we chose this structure is portability.

When we run the Docker container, we give Jenkins a local volume. Jenkins writes all of its data into this folder, which is synced with a folder on our build machine. This ensures that when we shut the container down, we keep all the Jenkins data: settings, project configurations, logs, and so on. A new Jenkins container can run from that data. Jenkins validates its data at startup and notifies us about changed or expired configuration entries. That is why we store the data on our build machine.

We can update Jenkins with just a few commands: stop the container, pull the new image, and run a new container from the new image. We don't have to install Java or set Jenkins environment variables. Our build server stays clean, with only Docker installed.

This easy setup makes it possible, in an emergency, to create a build server on any machine, including one running macOS.

We will first talk about Docker and then do a quick hello world. Then we will install Jenkins and continue.

2.2.1 ) What is Docker?

Docker is a containerization platform that packages your app and all its dependencies together in the form of a Docker container, ensuring that your application works seamlessly in any environment, whether a production or staging server. Docker pulls the dependencies your application needs from a registry and configures them automatically; you don't need to do any extra work.

Putting or creating applications into containers has several advantages:

  1. Docker containers are portable. You can create a container, extend it, build a new image from it, and deploy new containers from the newly created image. Containers can be deployed to any Docker installation, whether on a Mac or a Windows machine.
  2. Containers are lightweight because they share the host's kernel.
  3. Containers are interchangeable. You can change containers or switch versions on the fly.
  4. Containers are stackable. You can stack services vertically on the fly, and scale them too. A few commands and you have distributed services.

We are using Docker to create our testing environment. This way, if there is a power outage or anything else impacts our build pipeline, we can fire up a Mac and deploy our testing container to continue operations.

2.2.2) Concept

Before we start to use Docker in practice, we should first clarify some of the important concepts.

2.2.3) Images

An image is an immutable snapshot of a container. We can think of images as class definitions: when we build an image, we define a class. Containers are instantiated from an image; they are the working instances of images. Images consist of:

  • Code
  • Runtime
  • Libraries
  • Configurations

Images have layers. For example, extending an Ubuntu base image and putting a file in it creates a new layer. You can run any number of containers from the resulting image; they will all have the file, and they will share the Ubuntu layers.

2.2.4) Container

A container is a running unit of an image. From one image we can create one or more containers. Each of them runs in its own isolation zone, and they share the host system's resources if they extend the same operating system or the same layers.

For example, a VM based system looks like this:

For every VM that runs, a guest OS runs with it, whereas Docker containers share a single OS kernel. It can be shown like this:

Consider running a load-balanced web application in a VM ecosystem: three web servers, one HAProxy, cache servers, and so on. Let's say one Ubuntu system takes 200 megabytes of RAM and 5 percent of CPU time just to stay up and running. With just 4 servers sharing one OS, you save 600 megabytes of RAM and 15 percent of CPU time. Now imagine this with hundreds of servers.

2.2.5) First steps with Docker

When you install Docker, you can query its version as a basic test:
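For example:

```shell
# Print the installed Docker version and build number.
docker --version
```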

This prints the version and build number, proving that our installation completed successfully.

The second step in validation is running the hello-world image.
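```shell
# Run the hello-world image; Docker pulls it from the registry
# if it has not been downloaded before.
docker run hello-world
```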

Docker will look for the hello-world image in the local repository. If it has not been pulled before, Docker queries the registry for it. Now we know the installation is complete, and we can start the Jenkins installation process.

If you want to learn more about Docker, there is Docker Compose for creating multi-container applications and Docker Swarm for creating multi-container, cluster-based applications.

2.2.6) Installation of Jenkins

Pulling Jenkins image

The latest Jenkins version was 2.180 at the time of writing. There will probably be newer versions by the time this post is published, and we can't guarantee that everything will work after those changes, so we suggest you pull the 2.178 image.

To pull the Jenkins image run the following command:
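Something along these lines, using the official jenkins/jenkins image (adjust the tag to the version you want):

```shell
# Pull a pinned Jenkins version from Docker Hub.
docker pull jenkins/jenkins:2.178
```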

It will produce output showing each layer being pulled, ending with the image digest.

Now we are ready to run our Jenkins container.

2.3 ) Running Jenkins container
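A command along these lines starts the container (the host volume path is illustrative; pick any folder on your build machine):

```shell
# Run Jenkins detached (-d), sync /var/jenkins_home with a host folder,
# and publish the web UI (8080) and agent (50000) ports.
docker run -d \
  -v ~/jenkins_home:/var/jenkins_home \
  -p 8080:8080 \
  -p 50000:50000 \
  jenkins/jenkins:2.178
```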

With this command we run a container in daemon (detached) mode. We assign a volume to the "/var/jenkins_home" folder to sync Jenkins files with our host machine, and we map ports 8080 and 50000 of the host machine to the container; Docker listens on these ports and forwards them to the container. Lastly, we give the image tag. On success, this command prints the new container's ID hash. It will look like this.

If you have applications listening on these ports, you have to change the ports or close those applications. The left side of the port assignment is the host machine. For example, 80:8080 forwards port 80 of the host machine to port 8080 of the container.

Now when you go to http://localhost:8080 you will see the initial Jenkins page.

Getting admin password

There are two ways to do this. The first is to look into the initialAdminPassword file. The second is to look through the container logs; the Jenkins image logs the password for us.

Looking into initialAdminPassword file

For this method we need the container ID. If you can't remember it, just enter "docker ps" into the console. It will look like this:

My container ID starts with 8b9… We connect to the container with:

docker exec -it {containerID} bash

  • “i” means interactive shell. If you don’t specify it, Docker sends your command into the container and exits when it finishes.
  • “t” tells Docker to create a pseudo-terminal. Without it, the session won’t behave like a normal terminal connection.
  • Lastly, bash is the program to run. You can run other programs without -i; we will try this soon.

When you cat the password file, it prints its contents. We will provide this password to the Jenkins web UI and continue the setup process.

Lastly, you have one more option for obtaining the password from the file. We connected just to run one command; this can be done without an interactive shell.

We can run one command this way, or chain more with && like this:
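For example (the {containerID} placeholder stands for your container's ID; /var/jenkins_home/secrets/initialAdminPassword is where Jenkins stores the initial password):

```shell
# Single non-interactive command — no -i or -t needed:
docker exec {containerID} cat /var/jenkins_home/secrets/initialAdminPassword

# Chaining multiple commands with && via bash -c:
docker exec {containerID} bash -c "cd /var/jenkins_home/secrets && cat initialAdminPassword"
```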

Looking through container logs

If you enter “docker container logs {containerID}” into terminal, it will print all the logs.

You can browse through logs to find out Jenkins admin password.

2.4) Completing the setup

After the admin password screen, Jenkins will ask which plugins to install; the suggested ones are fine. We added the AnsiColor plugin because Fastlane prints ANSI color codes to the terminal, and the logs would be hard to read without it. Next come the user creation screen and the Jenkins URL screen. This URL is how you access your Jenkins instance; localhost is fine for Docker. After the setup you should see something like this:

This means our Jenkins instance is up and running. Congrats!

2.5) Adding host as worker to Jenkins inside Docker

We created Jenkins inside Docker, which means Jenkins is running on a Linux OS. To make our Mac a worker, we first need to enable Remote Login: go to System Preferences, then Sharing.

You can now log in to your computer from any computer on your network. Navigate to the Jenkins home page, then Manage Jenkins, then Manage Nodes, and click New Node.

Give the new node a name; Permanent Agent is the only type you can select.

The remote directory is where Jenkins will put its files. For the node host there is a special hostname, host.docker.internal, which containers on Docker Desktop use to reach the host machine.

You have successfully added the host machine as a worker to Jenkins.

2.5.1) Known hosts error when starting worker agent

When you start the newly created node, it will print a known-hosts error like this:

We will open a shell in the Docker container and test the SSH connection.

First find the container id:

Then we will SSH to the host machine once; this lets the container know our host.

The commands are run in this order:
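A sketch of the sequence (the container ID and username placeholders are illustrative):

```shell
# 1) Find the Jenkins container ID.
docker ps

# 2) Open an interactive shell inside the container.
docker exec -it {containerID} bash

# 3) From inside the container, SSH to the host once so its key is added
#    to known_hosts — answer "yes" at the prompt, then exit.
#    host.docker.internal resolves to the host machine on Docker Desktop.
ssh {your-mac-username}@host.docker.internal
```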

When you relaunch agent, it will start successfully.

After this, we prevent the Docker master from running builds itself. Select the master node, go to Configure, and set the number of executors to 0.

2.6) Jenkins Project Setup

We created Multibranch Pipelines for our projects because we want Jenkins to automatically pick up new branches, with our logic kept in Jenkinsfiles. Multibranch pipelines are suitable for pull requests too.

FYI, there are more options to choose from when creating Jenkins project:

  • Freestyle Project: This is the basic building block for Jenkins projects. You specify build steps inside the configuration web UI. You have limited options for steps, such as running shell commands, changing GitHub commit status, or invoking Maven or Gradle scripts.
  • Pipeline: With this item, you tell Jenkins that you will run a pipeline flow. It can be specified in the web UI or taken from a source control branch.
  • Multi-configuration Project: This item can handle testing in multiple environments or testing with different environment variables. It is based on the freestyle project and doesn’t support pipelines.
  • Folder: This item is just there to group other items.
  • GitHub Organization: This item is used to track an organization’s repositories and discover their branches automatically. Branches need a Jenkinsfile to be discovered.
  • Multibranch Pipeline: This item is a simpler version of the GitHub Organization item. It takes a single source control project and automatically discovers branches and pull requests that contain a Jenkinsfile.

To navigate to the job creation page, click “New Item” on the main page.

After that we select Multibranch Pipeline.

Next, you will be redirected to item configuration page.

In here, we will add our project as branch source.

There is one extra behaviour and one extra build strategy. “Check out to matching local branch” is required to make commits in the Jenkins temporary project folder. Jenkins clones the source control project and then checks out a detached HEAD. If you don’t specify this behaviour, git will throw an error saying you are detached from origin.

The second extra is the build strategy. We tell Jenkins not to trigger builds for regular branches when it scans them. Without this, let’s say you have configured Jenkins to scan repositories every 5 minutes and you have a daily build on the beta branch: Jenkins would trigger a build on every scan and render the poll timer meaningless. This build strategy stops Jenkins from doing that. The downside is that you have to trigger one build manually for each new branch, so that Jenkins reads the source control polling schedule from the Jenkinsfile.
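That polling schedule lives in the Jenkinsfile as a triggers directive; a sketch (the cron string and stage are illustrative):

```groovy
// Declarative fragment: poll source control roughly every 5 minutes.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
}
```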

Next up, there are two more settings that we configured.

“Scan Multibranch Pipeline Triggers” is set to a 15-minute interval. This means Jenkins will periodically trigger a special event that searches your project for new branches. When I push the C branch to GitHub, it will be picked up within 15 minutes.

The orphaned item strategy is set to keep the last 50 builds. This means Jenkins keeps the data for the latest 50 builds; when we start a new build after the 50th, it deletes the oldest build’s data, so you still see 50 builds.

Setting up build configurations in Jenkins is this easy when you use a Jenkinsfile. It also simplifies migration: when you switch Jenkins versions, you can always start from scratch and have your build server running again in 5 minutes.

Now when you enter the multibranch item you created, you will see a screen like this. We will create our Jenkinsfile and push it to origin, then scan the repository to tell Jenkins that we have a new branch with a Jenkinsfile.

That’s It

We have successfully installed Docker and created a Jenkins container. We connected our host machine to Jenkins as a worker node and created our first GitHub project. It is now time to delve into the Jenkinsfile and Fastlane.
