Containerizing your First NetCore MicroService with Docker and creating CI/CD Pipelines with Jenkins — Second Part

Elsavies
Published in Proscai X
8 min read · Aug 22, 2020

In this new tutorial we are going to learn how to containerize a .NET Core 3.1 microservice using Docker, and how to create Continuous Integration and Continuous Deployment (CI/CD) pipelines with Jenkins. I recommend you read and use the project built in “Our First Microservice with .NET Core 3.1 – First Part”.

Unit Tests

An important step of a good CI/CD pipeline in Jenkins is testing your code. Visual Studio and .NET Core offer different options for unit testing; in this case we will use xUnit. Add a new project to the solution and name it Auth-Test, then add a new .json file named appsettings.test.json with the following content:

appsettings.test.json
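The gist embed is not reproduced here. A minimal sketch of what appsettings.test.json could contain, assuming the Auth-API reads a connection string and JWT settings (the keys and values below are illustrative, not from the original project):

```json
{
  "ConnectionStrings": {
    "AuthDb": "Server=localhost;Database=AuthTestDb;Trusted_Connection=True;"
  },
  "Jwt": {
    "Issuer": "auth-api-tests",
    "Key": "a-test-only-signing-key"
  }
}
```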

Now create a new C# class and name it Configuration.cs; it will load the appsettings.test.json file when the tests run:

Configuration.cs
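The original class is embedded as a gist. A common pattern for this helper, sketched here under the assumption that it simply builds an IConfiguration from the test settings file (class and method names are illustrative):

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

namespace Auth_Test
{
    // Builds an IConfiguration from appsettings.test.json so the tests
    // can read the same kind of settings the API uses at runtime.
    public static class Configuration
    {
        public static IConfiguration Get()
        {
            return new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.test.json", optional: false)
                .Build();
        }
    }
}
```

This only works if appsettings.test.json is copied to the test output directory, which is why the “Copy to Output Directory” note matters.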

Note: Remember to set the “Copy to Output Directory” property of appsettings.test.json to “Copy If Newer”.

Add a new C# class and name it AccountControllerSignupTests.cs. As the name suggests, this class tests different cases of the Signup method of the AccountController.cs class. Add the following code to your class:

AccountControllerSignupTests.cs
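The test class is embedded as a gist in the original post. A sketch of what one xUnit case could look like; the AccountController constructor signature and the SignupRequest type are assumptions based on Part One, not the author's exact code:

```csharp
using Xunit;

namespace Auth_Test
{
    public class AccountControllerSignupTests
    {
        // Hypothetical happy-path case: a well-formed signup request succeeds.
        // AccountController and SignupRequest come from the Auth-API project;
        // their shapes here are assumed for illustration.
        [Fact]
        public void Signup_WithValidData_ReturnsResult()
        {
            var config = Configuration.Get();
            var controller = new AccountController(config); // signature assumed

            var result = controller.Signup(new SignupRequest
            {
                Email = "user@example.com",
                Password = "P@ssw0rd!"
            });

            Assert.NotNull(result);
        }
    }
}
```

In practice you would add sibling `[Fact]` methods for the failure cases (duplicate email, weak password, and so on).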

Your solution should look like this now:

Solution File Structure

To run your Tests just click Test > Run All Tests:

Test Menu

This should be the output:

All Tests Runs Output

Docker Container

Another important step in a Jenkins CI/CD pipeline is building the project, which we will do with Docker. For this we need to create a Dockerfile for our Auth-API project inside the solution. A Dockerfile specifies how your container image will be created. It uses a layered strategy, where every new layer builds on top of the previous one, and you normally start from a base image hosted in a public or private container registry such as DockerHub. This will be easier to understand by reading our Dockerfile:

Auth-API-Dockerfile
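The Dockerfile itself is embedded as a gist. A typical multi-stage Dockerfile for a .NET Core 3.1 API, wired to the ASPNETCORE_ENVIRONMENT_ARG build argument used later in this article, might look roughly like this (paths and project names are assumptions):

```dockerfile
# Build stage: restore and publish on the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ARG ASPNETCORE_ENVIRONMENT_ARG=Production
WORKDIR /src
COPY ["Auth-API/Auth-API.csproj", "Auth-API/"]
RUN dotnet restore "Auth-API/Auth-API.csproj"
COPY . .
RUN dotnet publish "Auth-API/Auth-API.csproj" -c Release -o /app/publish

# Runtime stage: only the published output on the slim ASP.NET image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS final
ARG ASPNETCORE_ENVIRONMENT_ARG=Production
ENV ASPNETCORE_ENVIRONMENT=$ASPNETCORE_ENVIRONMENT_ARG
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Auth-API.dll"]
```

Each instruction produces a layer, and the second FROM discards the SDK layers so the final image ships only the runtime and the published app.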

The best way to picture a container image is like this:

Image from: https://collabnix.com/understanding-docker-container-image/

As you can see, it starts with the Linux kernel, then a base image, and each subsequent layer builds on what the previous layers created.

After creating our Dockerfile, let's create a Docker Compose file. What is the difference? While a Dockerfile establishes how a single container image is created, a Docker Compose file uses that Dockerfile (and others) and establishes how the resulting containers work together as individual components on a local Docker machine. In Visual Studio 2019 it is really easy to create a Docker Compose file: just right-click your Auth-API project > Container Orchestrator Support > Docker Compose > Linux, and choose “No” when asked to override the Dockerfile created in the last step. Your Solution Explorer should now look like this:

Solution Explorer Structure

Open your docker-compose.yaml file, delete everything and add this code:

docker-compose.yaml
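The compose file is embedded as a gist. A docker-compose.yaml consistent with the description that follows could look like this; the service name, context paths and variable defaults are assumptions:

```yaml
version: "3.4"

services:
  auth-api:
    # The built image's name ends with the "auth-api" suffix
    image: ${DOCKER_REGISTRY-}auth-api
    build:
      context: .
      dockerfile: Auth-API/Dockerfile
      args:
        # Passed through to the ARG of the same name in the Dockerfile
        ASPNETCORE_ENVIRONMENT_ARG: ${ASPNETCORE_ENVIRONMENT_ARG-Production}
    ports:
      - "8081:80"    # Docker machine 8081 -> container 80
      - "8082:443"   # Docker machine 8082 -> container 443
```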

In this new docker-compose.yaml file we established the following:

  • The name of the built image will have the suffix “auth-api”.
  • Ports 8081 and 8082 of the Docker machine are mapped to ports 80 and 443 of the container, respectively.

Creating and Running Containers

Open your console (CMD or Bash), navigate to the solution folder where the docker-compose.yaml file is, and type the following command:

docker-compose build [--build-arg ASPNETCORE_ENVIRONMENT_ARG=Debug|Stage|Production]

This command builds all the images present in your docker-compose.yaml file, along with anything else you configure in it. You can optionally specify build arguments with the --build-arg parameter; the ASPNETCORE_ENVIRONMENT_ARG value is passed to docker-compose, which in turn passes it to the Dockerfile. If everything goes well, you should see a step-by-step log of your image build and, at the end, a message confirming the image was successfully built and tagged, something like this:

Built Process Output

Now check if your image was created:

> docker images

An output like this will be shown:

Your image is created and has the “latest” tag because we did not specify one. Some other images were created too; those are the base images used by our Dockerfile. Run your built image with the following command:

> docker run -d authapi:latest

Docker runs your image detached from the console and assigns a random name to your container. To check the name assigned to the container, type the following command:

> docker ps

An output like this will be shown:

Our container’s name is “bold_murdock”, and it has ports 80 and 443 open; however, we did not map any Docker machine port to the container. Now let’s enter our container’s Bash console and look at the file system created:

> docker exec -it bold_murdock /bin/bash
root@59f641d729d1:/app# ls

A file system like this will be shown:

Container File System

There is a problem with this file system: no matter what ASPNETCORE_ENVIRONMENT_ARG value is passed, there is an appsettings.json file for every compile mode. Let’s fix it. We are going to make every build of the project use a specific appsettings.override.json file based on the compile mode. Modify your Program.cs class to look like this:

Program.cs
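The modified Program.cs is embedded as a gist. One way to layer a single appsettings.override.json on top of the default configuration, sketched here as an assumption about the author's approach:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

namespace Auth_API
{
    public class Program
    {
        public static void Main(string[] args) =>
            CreateHostBuilder(args).Build().Run();

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration((context, config) =>
                {
                    // Layer a single override file on top of appsettings.json;
                    // the build copies the right one per compile mode, so the
                    // image no longer ships settings for every environment.
                    config.AddJsonFile("appsettings.override.json",
                        optional: true, reloadOnChange: true);
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}
```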

Now modify your Auth-API.csproj, replacing the whole file with this one:

Auth-API.csproj
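The project file is embedded as a gist. A sketch of the relevant pieces, assuming the idea is to copy the settings file matching the compile mode to a single appsettings.override.json in the output (file naming and targets are illustrative, not the author's exact markup):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <!-- "Stage" is added alongside the standard Debug and Release modes -->
    <Configurations>Debug;Stage;Release</Configurations>
  </PropertyGroup>

  <ItemGroup>
    <!-- Keep the per-environment settings files out of the output -->
    <Content Update="appsettings.*.json" CopyToOutputDirectory="Never" />
    <Content Update="appsettings.json" CopyToOutputDirectory="PreserveNewest" />
  </ItemGroup>

  <!-- After each build, copy the settings file that matches the compile
       mode to a single appsettings.override.json in the output folder -->
  <Target Name="CopyOverrideSettings" AfterTargets="Build">
    <Copy SourceFiles="appsettings.$(Configuration).json"
          DestinationFiles="$(OutDir)appsettings.override.json" />
  </Target>

</Project>
```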

To avoid build problems with this new .csproj configuration, add a new compile mode to the project in Visual Studio. Name it “Stage” and copy the settings from the “Release” compile mode when creating it. This way we will have the three standard stages of software distribution:

To run your project on the Docker machine the easy way, right-click in Solution Explorer > Set Startup Projects… > docker-compose, then hit F5 or just click Play in Visual Studio. A newly created image will be running, tagged with the “dev” tag:

Now let’s create an Azure Container Registry to upload our images. You need an Azure subscription (free, with USD $200 of credit). Download the latest Azure CLI, open PowerShell with administrator privileges, and type the following commands:

#A login window will be prompted; enter your Azure credentials
> az login
#Create a variable with the resource group name
> $rg="RGMyFirstMicroService"
#Create the resource group in the Central US region
> az group create --name $rg --location centralus
#Create the Azure Container Registry inside the resource group
> az acr create --resource-group $rg --name acrmyfirstmicroservice --sku Basic --location centralus --tags "acr=myfirstmicroservice-central-us"
#Log in to your Azure Container Registry and save the credentials
> az acr login --name acrmyfirstmicroservice

Now let’s tag your image built and upload it to your Azure Container Registry:

> docker tag authapi:latest acrmyfirstmicroservice.azurecr.io/auth-api:1.0.0
> docker push acrmyfirstmicroservice.azurecr.io/auth-api:1.0.0

After pushing your image, go to Azure Portal > Resource Groups > RGMyFirstMicroService > acrmyfirstmicroservice > Repositories, and you should see your auth-api image there.

Jenkins Server

A Continuous Integration (CI) pipeline is a triggered, step-by-step process that builds, tests and deploys your code to different hosts: web servers, cloud service plans, Kubernetes infrastructure, package managers, etc. In this case we want to build, test and deploy our previously created Docker image whenever a new commit is made to the Development branch of our GitHub repository.

Let’s create an Ubuntu Virtual Machine in Azure to use it as our Jenkins Server:

> az vm create -n jenkins-server-central-us -g RGMyFirstMicroService --public-ip-address pip-jenkins-server-central-us --image ubuntults --data-disk-sizes-gb 20 20 --size Standard_DS2_v2 --admin-username {YourUsername} --admin-password {YourPassword} --vnet-name vnet-myfirstmicroservice-central-us --subnet ServiceSubnet --nsg nsg-service
Command Output Example

SSH into your newly created Ubuntu virtual machine:

> ssh -i c:/Users/{YourUser}/.ssh/id_rsa  YourAdminUserName@YourVirtualMachinePublicIP

Install Docker, Java and Jenkins, then open /var/lib/jenkins/secrets/initialAdminPassword and copy its value. Then type the following commands to avoid future permission problems when Jenkins tries to access the Docker daemon:
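The installation steps themselves are not shown in the post. As of the time of writing (2020), one common way to install the three on Ubuntu looked roughly like this; package names and repository URLs may have changed since:

```bash
# Docker, via the official convenience script
curl -fsSL https://get.docker.com | sudo sh

# Java runtime required by Jenkins
sudo apt-get update
sudo apt-get install -y openjdk-11-jre

# Jenkins LTS from the official apt repository
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install -y jenkins

# Initial admin password for the setup wizard
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```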

> sudo usermod -a -G docker jenkins
> sudo usermod -a -G docker {yourusername}
> sudo systemctl restart jenkins

Browse to {VirtualMachinePublicIP}:8080 and start configuring your Jenkins server:

Jenkins Initial Config

Install the suggested plugins on the next screen, create your admin user, and finally leave the default URL configuration of the server instance.

Jenkins Suggested Plugins Installation

Finally, on the next screen, create your admin user credentials for future logins.

Jenkins Plugins

Go to Manage Jenkins > Manage Plugins and install the next plugins:

  • Docker
  • Docker Pipeline

Jenkins Configuration

Go to Manage Jenkins > Global Tool Configuration and specify your Git installation:

Jenkins Credentials

Go to Manage Jenkins > Credentials > System > Global credentials and add the following credentials:

  • SSH Username with private key: create a new SSH key pair on your machine, register the generated public key with your GitHub user, then register the generated private key (with its passphrase) in Jenkins and name it GitHubCredentials.
  • Username with password: go to Azure Portal > RGMyFirstMicroService > acrmyfirstmicroservice > Access Keys and copy the username and password from there, paste them into your Jenkins credentials, and fill the ID field with ACRCredentials.

Once the server is fully configured, let’s create our first pipeline. Click New Item > Pipeline and put this declarative script in the Pipeline script section:

Jenkins Pipeline
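The pipeline script is embedded as a gist in the original post. A declarative pipeline consistent with the stages described in this article might look roughly like this; the repository URL, branch, tag and stage names are assumptions, not the author’s exact script:

```groovy
pipeline {
    agent any

    environment {
        REGISTRY = 'acrmyfirstmicroservice.azurecr.io'
        IMAGE    = 'auth-api'
    }

    stages {
        stage('Checkout') {
            steps {
                // GitHubCredentials is the SSH credential created above;
                // the repository URL here is illustrative
                git branch: 'Development',
                    credentialsId: 'GitHubCredentials',
                    url: 'git@github.com:YourUser/YourRepo.git'
            }
        }
        stage('Test') {
            steps {
                sh 'dotnet test Auth-Test/Auth-Test.csproj'
            }
        }
        stage('Build Image') {
            steps {
                sh 'docker-compose build --build-arg ASPNETCORE_ENVIRONMENT_ARG=Stage'
            }
        }
        stage('Push Image to ACR') {
            steps {
                // ACRCredentials is the username/password credential created above
                withDockerRegistry([credentialsId: 'ACRCredentials', url: "https://${REGISTRY}"]) {
                    sh "docker tag auth-api:latest ${REGISTRY}/${IMAGE}:1.0.0"
                    sh "docker push ${REGISTRY}/${IMAGE}:1.0.0"
                }
            }
        }
        stage('Deploy Image into K8s') {
            steps {
                echo 'Covered in the next tutorial'
            }
        }
    }
}
```

withDockerRegistry comes from the Docker Pipeline plugin installed earlier.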

If everything goes well with your Pipeline Build, you should see the next screen:

Pipeline Build Result

To confirm that your image was built and published correctly, go to Azure Portal > RGMyFirstMicroService > acrmyfirstmicroservice > Repositories; you should see auth-api:1.0.0 inside:

As you may have noticed, there was a final stage in our Jenkins pipeline where we did nothing: “Deploy Image into K8s”. That stage would deploy (CD, Continuous Deployment) our new container to a Kubernetes infrastructure in the cloud.

To keep this tutorial from getting too long, I will explain what is left in the next tutorial:

“Creating a Kubernetes Infrastructure in Azure with Terraform.”

If you like my tutorials Follow Me!

Finally, you can download the code for this tutorial from my GitHub account: Elsavies

Follow me as well in Facebook and Instagram :)
