Continuous Integration DIY series 4/4

In the previous posts we started ramping up to a fully automated local DIY CI environment.

So far we’ve achieved:

  • A (production-like) ASP.NET Core project with tests
  • A Dockerized image of our project
  • A local Jenkins CI build server based on configuration files

In this story we will start wrapping things up: the Jenkinsfile for our project will be addressed in detail, so that you can tinker with it and adapt it to your needs.

So, what is a Jenkinsfile?

We've seen a Dockerfile before: it is the file that contains the definition of a container, the way of describing to Docker how to set up that container.

I'd say that a Jenkinsfile is to Jenkins what a Dockerfile is to Docker.

A Jenkinsfile describes a build pipeline to Jenkins and how Jenkins should act upon it.

Let’s look at a very basic example:
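The original post embedded the example as a gist; the sketch below is a reconstruction of its shape, not the exact script. The comments mark the line numbers that the walkthrough below refers to; the rest (variable name, version handling) is an assumption.

BASE_VERSION = '1.0'              // line 02: a global variable (no 'def', so it is visible everywhere)

node {                            // line 04: the pipeline
    ws('myapp') {                 // line 05: workspace folder under the Jenkins home directory
        try {                     // line 06
            stage('scm pull') {   // line 07: pull sources, determine and set the build's version
                checkout scm
                currentBuild.displayName = BASE_VERSION + '.' + env.BUILD_NUMBER
            }
            stage('build') {      // line 13: delegate to the 'build' function below
                build()
            }
            // ... further stages: test, publish, deploy ...
        } catch (InterruptedException e) {
            // the build was canceled by the user
            throw e
        }
    }
}

def build() {
    dir('app') {                  // line 45: change the current directory to 'app'
        sh 'dotnet build'         // line 46: 'bat' instead of 'sh' on Windows
    }
}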

Basically, this is a script containing all the stages that a Jenkins pipeline build should execute in order to build the project. It has a ton of potential: you could create a complete pipeline that goes from the initial source code commit all the way to stopping/deploying/starting apps and websites in test and production environments. In the past, I've used a Jenkins pipeline script to leverage nodejs, grunt, xunit, Octopus Deploy, nuget, msbuild, Slack, PowerShell, Git, …

Basically, if a tool provides a CLI, you can interact with it in a Jenkinsfile. Some are even supported as first-class citizens via plugins that provide an API for use in a Jenkinsfile, like Slack.

So, the possibilities seem endless, but keep this best practice in mind: a Jenkinsfile is also part of your code base. You will typically commit this file to source control and keep it around while branching into feature, release and support branches, as it reflects the way the product was built at that point in time.

With that said, let’s look at our example.

At line 02, we provide the script with a global variable; it can be set and accessed from anywhere inside the script. This is where you'd store things like API keys or the base version of your product (1.1, 1.2, 1.3, 2.0, …).

The pipeline is defined at line 04 with the node keyword. It typically contains stages; a stage is an isolated piece of logic that performs one part of the pipeline, such as building, testing, deploying or asking for input. The more your pipeline is split up into stages, the easier it is to troubleshoot when things go wrong.

For example, if everything succeeds up to the deployment stage, you can deduce that the deployment server is probably down. When a pipeline breaks at the testing stage, that indicates your tests are probably broken since the last commit.

Line 05 uses the 'ws' function, which stands for workspace; it will use a folder called 'myapp' located in the Jenkins home directory as the base directory for the build.

Line 06 opens a try-catch block (yes, you can do that, too!). If anything goes wrong in our script, we catch the interrupted exception to determine whether the build was canceled by the user.

Line 07 defines a stage called 'scm pull'; this is the display name for the stage in the Jenkins pipeline stage view. The stage gets the source code from the SCM provider, determines a version for the product and sets the name of the current build to that version number.

The next stage is located at line 13 and executes the build of the product. It calls a function called 'build' that is defined below the pipeline; this function changes its current directory to 'app' at line 45 and executes a shell command at line 46. For Jenkins installed on Windows, this would be the 'bat' command instead of the 'sh' command used on Linux. You can also use PowerShell on Windows if you like.
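If you want a single Jenkinsfile to run on both operating systems, Jenkins provides the isUnix() step to branch on the agent's OS (a small sketch, not part of the original script):

if (isUnix()) {
    sh 'dotnet build'    // Linux/macOS agents
} else {
    bat 'dotnet build'   // Windows agents
}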

You can see why this is powerful, basically anything you can do in a shell, you can do in a Jenkinsfile.

All other steps seem to be pretty self-explanatory. Keep in mind that this is not a functioning script, it is an example.

Let’s look at the working Jenkinsfile for the netCoreBuild project:
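The full script lives with the project; its overall shape is roughly the sketch below (the stage names and version handling are assumptions, the functions are the real ones discussed in the rest of this post):

import groovy.json.JsonSlurper

VERSION_NUMBER = '1.0'   // global (no 'def'), readable from the functions below

node {
    try {
        stage('scm pull') {
            checkout scm
            // derive VERSION_NUMBER and set currentBuild.displayName here
        }
        stage('build')        { dotnet_build() }
        stage('test')         { dotnet_test() }
        stage('publish')      { dotnet_publish() }
        stage('docker build') { docker_build() }
        stage('docker run')   { docker_run() }
    } catch (e) {
        // inspect/flag a user cancellation here, as in the basic example
        throw e
    }
}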

You can see that it looks pretty similar to our example from before.

Please note that you can include other plugins (or modules) via a simple import statement:

 import groovy.json.JsonSlurper

For brevity, I will address the functions that our pipeline script calls one by one, as the stages are quite transparent.

Building the project

def dotnet_build(){
    dir('Merken.NetCoreBuild.App') {
        sh(script: 'dotnet build Merken.NetCoreBuild.App.csproj', returnStdout: true);
    }
}

As we saw in the first post, to build a project you must leverage the dotnet CLI with 'dotnet build'. That is what happens here: we are basically copying our shell commands into the Jenkinsfile to achieve the same result we would get by running them manually in a shell.

Testing the project

def dotnet_test(){
    dir('Merken.NetCoreBuild.Test') {
        sh(script: 'dotnet restore', returnStdout: true);
        sh(script: 'dotnet xunit -xml xunit-results.xml', returnStdout: true);
    }
    dir('Merken.NetCoreBuild.Transform') {
        sh(script: 'dotnet run ../Merken.NetCoreBuild.Test/xunit-results.xml xunitdotnet-2.0-to-junit-2.xsl junit-results.xml',
            returnStdout: true);
        step([$class: 'XUnitBuilder',
            thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
            tools: [[$class: 'JUnitType', pattern: '*.*']]])
    }
}

For testing a dotnet core application, you must restore all dependencies for the test project via 'dotnet restore'; this includes the xunit CLI dependencies as well.

Executing the tests is as simple as running the ‘dotnet xunit’ command.

This will produce a results file called 'xunit-results.xml'. The XUnit plugin cannot interpret this file as-is, so we must transform it using an XSL transformation tool.

I've included a simple dotnet core based tool in the 'Merken.NetCoreBuild.Transform' directory; it leverages the XSL transformation file 'xunitdotnet-2.0-to-junit-2.xsl' to produce a compatible result for the XUnit report builder.

sh(script: 'dotnet run ../Merken.NetCoreBuild.Test/xunit-results.xml xunitdotnet-2.0-to-junit-2.xsl junit-results.xml',
    returnStdout: true);

We can invoke a plugin via the generic 'step' command.

step([$class: 'XUnitBuilder',
    thresholds: [[$class: 'FailedThreshold', unstableThreshold: '1']],
    tools: [[$class: 'JUnitType', pattern: '*.*']]])

This command uses the XUnitBuilder class, exposed by the installed XUnit plugin, to read all compatible result files in the workspace and produce a test report. More info can be found in the XUnit plugin documentation.

Shipping the project (via Docker)

This is done in the publish function:

def dotnet_publish(){
    dir('Merken.NetCoreBuild.App') {
        sh(script: 'dotnet publish Merken.NetCoreBuild.App.csproj -o ./obj/Docker/publish', returnStdout: true);
        sh(script: 'cp Dockerfile ./obj/Docker/publish', returnStdout: true);
        sh(script: 'tar zcf netcoreapp.tar.gz -C ./obj/Docker/publish .', returnStdout: true);
    }
}

The first shell command should be familiar: as described in the first post, it publishes the app via 'dotnet publish' to a folder called 'obj/Docker/publish'.

The second shell command copies the Dockerfile from the app directory into the publish directory, because Docker expects a Dockerfile at the root of the build context by default.
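For context, a minimal Dockerfile for a published .NET Core app of that era might look like the sketch below; the base image tag and assembly name are assumptions, not necessarily the exact file from the earlier post:

# assumed base image for ASP.NET Core 2.0
FROM microsoft/aspnetcore:2.0
WORKDIR /app
# the published output (and this Dockerfile) form the build context
COPY . .
# the container configuration later binds host port 5000 to this port
EXPOSE 5000
# assumed assembly name, derived from the csproj name
ENTRYPOINT ["dotnet", "Merken.NetCoreBuild.App.dll"]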

The last command creates a tar archive of the publish directory, containing the published netcore application and the Dockerfile.

Now we are ready to ship this to a Docker host to build our image based on our tar archive.

Since we’ve installed Docker in a previous post of this series, we can use our local machine as a Docker host.

Because the next group of functions utilizes the Docker remote API, let's first look at the common function used to communicate with our Docker host: 'dockerApiRequest'.

def dockerApiRequest(request, method,
        contenttype = 'json', accept = '', data = '',
        isDataBinary = false){
    def requestBuilder = 'curl -v -X ' + method +
        ' --unix-socket /var/run/docker.sock' +
        ' "http://0.0.0.0:2375/' + request + '"';
    if(contenttype == 'json'){
        requestBuilder += ' -H "Content-Type:application/json"';
    }
    if(contenttype == 'tar'){
        requestBuilder += ' -H "Content-Type:application/x-tar"';
    }
    if(accept == 'json'){
        requestBuilder += ' -H "Accept: application/json"';
    }
    if(data.trim()){
        if(isDataBinary){
            requestBuilder += ' --data-binary ' + data + ' --dump-header - --no-buffer';
        }else{
            requestBuilder += ' -d ' + data;
        }
    }
    def response = sh(script: requestBuilder, returnStdout: true);
    if(accept == 'json'){
        def jsonSlurper = new JsonSlurper();
        def json = jsonSlurper.parseText(response);
        return json;
    }
    return null;
}

This function eventually generates a 'curl' HTTP request to an endpoint at 'http://0.0.0.0:2375/' via the Unix socket located at '/var/run/docker.sock'. Because we are running Jenkins inside a Docker container, we need this setup in order to communicate with the Docker host. As you may recall from post #3, we are running the image with a volume mounted into the container:

docker run -p 8080:8080 -d -v /var/run/docker.sock:/var/run/docker.sock --name netcorebuild netcorebuild:latest

Mounting the socket is what allows us to communicate from within the container with the Docker daemon on the host; the '0.0.0.0:2375' host in the URL is little more than a placeholder, since curl routes the request over the mounted socket.

In the function signature, you can see that we accept a set of parameters with default values. This allows us to call the function with minimal effort in most cases.
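For example, a minimal call such as:

dockerApiRequest('containers/netcoreapp/stop', 'POST');

expands (with the default 'json' content type) to roughly this shell command:

curl -v -X POST --unix-socket /var/run/docker.sock "http://0.0.0.0:2375/containers/netcoreapp/stop" -H "Content-Type:application/json"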

Let’s see how we are utilizing this function in the next step of the pipeline.

def docker_build(){
    dir('Merken.NetCoreBuild.App') {
        dockerApiRequest('containers/netcoreapp/stop', 'POST');
        dockerApiRequest('containers/prune', 'POST');
        dockerApiRequest('images/netcoreapp', 'DELETE');
        dockerApiRequest('build?t=netcoreapp:' + VERSION_NUMBER + '&nocache=1&rm=1',
            'POST', 'tar', '', '@netcoreapp.tar.gz', true);
    }
}

You can see that we're calling the common function with a pretty limited set of parameters; the only required ones are the API endpoint and the HTTP method.

These commands are similar to what we would’ve done if we were to create the container manually in a shell. To keep this pipeline idempotent, we will first clean up any existing containers of this image and start afresh.

The first API call stops the netcoreapp container (if it is running).

The second removes all stopped containers via the containers prune command. Be careful, as it also removes other stopped containers on the Docker host. You might want to skip this step in a shared environment.
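If that is a concern, a more targeted alternative (a sketch, not part of the original pipeline) is to delete only this one container by name; the remote API supports forced removal of a single container:

// removes only the 'netcoreapp' container, whether it is running or not
dockerApiRequest('containers/netcoreapp?force=1', 'DELETE');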

The third request deletes the netcoreapp image from the host.

And lastly, it sends the tar archive of our netcoreapp project to the Docker host to be built.
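Roughly, these four API calls correspond to the following commands on the Docker host itself (a sketch; the version tag stands in for VERSION_NUMBER):

docker stop netcoreapp
docker container prune
docker rmi netcoreapp
docker build -t netcoreapp:1.0 --no-cache --rm - < netcoreapp.tar.gz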

Deploying the project

For the final step in the pipeline, we will deploy our project as a Docker container.

def docker_run(){
    dir('Merken.NetCoreBuild.App') {
        def containerId = createContainer();
        renameContainer(containerId);
        startContainer();
    }
}

def createContainer(){
    sh('echo \'{ "Image": "netcoreapp:' + VERSION_NUMBER + '", "ExposedPorts": { "5000/tcp" : {} }, "HostConfig": { "PortBindings": { "5000/tcp": [{ "HostPort": "5000" }] } } }\' > imageconf');
    def createResponse = dockerApiRequest('containers/create', 'POST', 'json', 'json', '@imageconf');
    def containerId = createResponse.Id;
    return containerId;
}

def renameContainer(containerId){
    def request = 'containers/' + containerId + '/rename?name=netcoreapp';
    dockerApiRequest(request, 'POST');
}

def startContainer(){
    dockerApiRequest('containers/netcoreapp/start', 'POST');
}

The 'docker_run' function has only three steps to execute. There is no single remote API endpoint that matches the 'docker run' shell command, so we mimic it by executing a set of API calls that together create, rename and start the container.
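Combined, the three calls approximate what a single command on the host would do (a sketch; the version tag stands in for VERSION_NUMBER):

docker run -d --name netcoreapp -p 5000:5000 netcoreapp:1.0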

In the createContainer function, we create a container based on the netcoreapp image. The configuration is stored in a file called 'imageconf', which contains the necessary plumbing to get the container wired up.

This could also be kept as a separate file in source control, but it is embedded into the pipeline for transparency.
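Pretty-printed, the payload written to 'imageconf' looks like this (with '1.0' standing in for the actual VERSION_NUMBER):

{
  "Image": "netcoreapp:1.0",
  "ExposedPorts": { "5000/tcp": {} },
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [{ "HostPort": "5000" }]
    }
  }
}

It tells Docker which image to run, which port the app exposes inside the container and how that port is bound to the host.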

The next part of the function calls the Docker remote API with the imageconf file as the payload of the request body. The response contains the container id, which we need in order to rename the container.

def createResponse = dockerApiRequest('containers/create', 'POST', 'json', 'json', '@imageconf');
def containerId = createResponse.Id;
return containerId;

def renameContainer(containerId){
    def request = 'containers/' + containerId + '/rename?name=netcoreapp';
    dockerApiRequest(request, 'POST');
}

Because we want to address the container later (to stop the running container), we will rename the container to something we can manage: 'netcoreapp'.

After renaming, we can start the container with the new name.

def startContainer(){
    dockerApiRequest('containers/netcoreapp/start', 'POST');
}

That’s it

We've now run through most of the Jenkinsfile: building, testing, publishing and deploying.

Notes

Hopefully you've found this interesting. I created this series because I was curious how far I could stretch the integration of Jenkins, Docker and netcore. The truth is, you can replace netcore with any development technology. I've set this up for an Angular 2 app in the following repo:

In that repository, I've set up multiple Jenkins pipeline jobs to provide a continuous delivery setup for a back end and a front end. In theory, the same approach works for any microservice architecture: a pipeline per microservice instead of a lock-step (monolithic) deployment.

Any remarks are welcome; I'm always happy to learn about ways to improve this setup. Is this something you will try out? Would anyone use this in a production environment? For me, it has been a learning experience, and I'd love to keep working with Jenkins and Docker in the future.