Building a simple microservices platform on Azure including CI/CD pipeline

Paul Nieuwenhuis
Aug 25, 2017 · 11 min read

In my previous blog from a while ago, I wrote about building a microservice in ASP.NET Core and Akka.NET. A lot has happened between then and now: I’ve built a lot of microservices similar to the one in that blog, and they are all running in a production environment at a client right now. As the container orchestration platform I’m using Amazon EC2 Container Service, the managed container solution from Amazon Web Services. And I must say I’m very positive about how it works and how stable the solution AWS offers is. It has been running with 30+ services over 2 environments (staging and production) without any downtime for months.

Because I’ve seen how great the solution at Amazon is, I wanted to spend some free time creating a similar platform on Azure. I wanted to see if it is easy to set up and if I can create a comparable solution. Ultimately I should be able to migrate microservices from the AWS solution to this new solution, but that is way out of scope for this blog.

I’m only building a small part of the features that my platform has on AWS, because covering everything would be too large for this blog. But one of the main features that every microservices platform should have is an automated CI/CD pipeline for all services. Without it, don’t even try to manage those services, because it would be hell! Just like in the project at the client, I will be using GitLab. GitLab is a great alternative to GitHub with one notably great feature built in: a build pipeline running in Docker! It is easy to set up and is flexible enough for my needs.

Container orchestration: Kubernetes

When you are creating a container orchestration platform in Azure, there are many options to choose from. Each option has its pros and cons, which are widely described on the internet. For this project I’m choosing Kubernetes, simply because I don’t have any experience with it yet. Looking through the feature set, it offers some more options than Amazon EC2 Container Service does (such as service discovery), and it certainly lets me build almost the same setup as I have with AWS:

  • Give every container an endpoint on a publicly accessible load balancer, so it can be reached at, for example, this endpoint for the basket service: https://public-api.westeurope.cloudapp.azure.com/service/basket
  • Deploy using a custom settings file for configuring the services, such as limiting the CPU and memory (per environment) and setting environment variables.
  • Recover microservices when they crash
  • Include health checks
  • Upgrade microservices to newer versions without downtime

OK, now let’s create a Kubernetes cluster! Please note I’m using macOS and I’ve installed and configured the `azure-cli` on my local environment. If you don’t want to set it up locally, you can also use the Cloud Shell that is built into the Azure Portal. It has the CLI installed! First you have to create a resource group, which is an Azure-specific logical group for keeping resources together. I’m creating a group named ‘newhouse-platform’ in the region ‘west europe’:

az group create -l westeurope -n newhouse-platform

If everything goes well, it will return with ‘provisioningState’ : ‘Succeeded’ and the Kubernetes cluster can be created in this resource group. Besides a resource group, a service principal is also needed for Kubernetes to communicate with the Azure Resource Manager:

az ad sp create-for-rbac --scopes /subscriptions/xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx --name newhouse-platform-principal --role Contributor --password myPassw0rd

After running the command, some information is returned. The values of ‘appId’, ‘tenant’ and ‘password’ should be written down, because they are needed in future steps.
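For reference, the returned JSON looks something like the snippet below (values shortened); the exact fields can differ per CLI version, so treat this as an illustration rather than the literal output:

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "newhouse-platform-principal",
  "name": "http://newhouse-platform-principal",
  "password": "myPassw0rd",
  "tenant": "yyyyyyyy-yyyy-yyyy-yyyyyyyyyyyy"
}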

Creating the cluster needs only one command with Azure. It uses templates (which advanced users can customize) to create a Virtual Network, Storage Accounts, Virtual Machines and a Load Balancer, all set up for Kubernetes. See https://docs.microsoft.com/en-us/azure/container-service/kubernetes/container-service-intro-kubernetes for an architectural diagram of the setup that is created.

az acs create --orchestrator-type=kubernetes \
--resource-group newhouse-platform \
--name=staging \
--agent-count=2 \
--master-count=1 \
--agent-vm-size=Standard_D1_v2 \
--ssh-key-value @~/id_rsa_kubernetes.pub \
--service-principal xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxxx \
--client-secret myPassw0rd

OK, I’m creating an Azure Container Service resource with orchestrator type ‘kubernetes’ in the resource group ‘newhouse-platform’. I gave it the name ‘staging’ and I want 2 agents and 1 master. The agents are Standard D1 v2 machines, which are cheap single-core machines with 3.5 GB of memory. These machines are good enough to run many .NET Core containers with decent performance.

The SSH key I’m providing is needed for authentication with the Kubernetes cluster, so I use a separate SSH key pair for that. There is a lot of information on the internet on creating these SSH pairs for each platform, so I won’t go into detail here about how to create them (hint: ssh-keygen -t rsa). The build server also needs this pair for deployment of the services.
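If you want to generate a dedicated key pair for the cluster, something like the following will do; the file name matches the paths used in the commands in this post, but that is just my choice:

ssh-keygen -t rsa -b 4096 -f ~/id_rsa_kubernetes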

The service principal is the application id (‘appId’) of the principal that was created earlier, and the client secret is the password of that principal.

Running this command takes some time, but after that you have a running Kubernetes cluster.

While you are at it, install the Kubernetes CLI (kubectl) for managing the cluster from the command line:

az acs kubernetes install-cli

To use it, you must configure it once by running this command:

az acs kubernetes get-credentials --resource-group newhouse-platform --name staging --ssh-key-file ~/id_rsa_kubernetes

After that, you can run `kubectl get pods --all-namespaces` to see that you have a functioning cluster.

There is also a Web UI, which you can access by using:

az acs kubernetes browse --resource-group newhouse-platform --name staging --ssh-key-file ~/id_rsa_kubernetes

It can take some time for the UI to show, because the UI itself is also a container that must be started and initialised. But after that, you’ll see that there is a brand new cluster ready for deploying containers!

Creating the public API endpoint using Traefik

Now that the cluster is running, the next step is to create a public endpoint where you can access all your containers. In this example, I’m creating a load balancer with a public DNS name, where you can access each container by using paths like these:

  • /service/basket/items
  • /service/product/items/1

In these paths, the first segment ‘service’ is static, and the second segment is the name of the service. In these examples the services are basket-service and product-service. All segments after the second (the first 2 segments are stripped) are sent to the service and will be resolved there. For example, for the first path above, the basket-service will receive a request for the path ‘/items’.

Kubernetes doesn’t support this by itself, so we need some kind of gateway that forwards incoming requests to the appropriate service based on rules (such as the path). The solution we are going to use is Traefik (https://traefik.io/), which calls itself an HTTP reverse proxy for microservices. It will fulfill our task of forwarding the requests to the microservices. It has many options, which I won’t discuss here, but I’ll set it up for ‘path based routing’. You can install Traefik on the cluster using a package management tool for Kubernetes called ‘Helm’. Check the site (https://github.com/kubernetes/helm) for installation instructions. For macOS users, use Brew:

brew install kubernetes-helm

After being installed, you have to initialise Helm for usage with the cluster:

helm init

Helm also needs a server component called ‘tiller’, which it will try to install when initialising, but it is already installed while setting up the cluster. You can find ‘tiller’ under the ‘kube-system’ namespace.

With Helm you can install the Traefik package, by running the command:

helm install stable/traefik --name traefik-proxy --namespace kube-public --set replicas=2

This will install two replicas of the Traefik container and will also automatically add an Azure Load Balancer which forwards requests to these containers! Although the containers will be started fairly fast, it will take some time for the Load Balancer to initialise. You can check the progress of the load balancer initialisation by running this command:

kubectl get svc traefik-proxy-traefik --namespace kube-public -w

Configure DNS name for Load Balancer

Now that the Load Balancer is created and available, it is only reachable on an IP address. Therefore we should give it a proper DNS name, which we’ll need later for our ‘path based routing’.

Open the Azure Portal, go to Resource Groups and then to our created resource group ‘newhouse-platform’. The public load balancer has an IP Address resource with a similar name.

Open this resource and go to the ‘Configuration’ tab. There you can give your load balancer a DNS name.

Creating a private Docker repository for your containers

Before we can deploy our service, we must have a Docker repository to store the created Docker images, so Kubernetes can pull them during deployment. One option is to use Docker Hub, which allows you to create public repositories or one private repository for free. Most of the time you want your Docker images to be private, and a microservices platform consists of multiple services, so you would need a paid plan for that. Alternatively, Azure also allows the creation of private repositories, which is what I’m going to use here. Use the Azure CLI to create an Azure Container Registry:

az acr create --name newhousedockerregistry --resource-group newhouse-platform --location westeurope --sku Managed_Basic

We can use the principal that was created at the start of this post in the build pipeline to push containers to the registry, because the principal covers the scope of all resources that are created in the ‘newhouse-platform’ resource group. For the same reason, Kubernetes can access the Azure Container Registry, so no additional authorization is required.
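To illustrate, the publish stage could log in to the registry with the principal’s appId and password and push the image; the registry login server follows the <registry-name>.azurecr.io pattern, and the image name and tag below are just examples, not the exact commands from the repository:

# log in to the Azure Container Registry using the service principal credentials
docker login newhousedockerregistry.azurecr.io -u <appId> -p <password>
# build and push the image for the basket service (illustrative name and tag)
docker build -t newhousedockerregistry.azurecr.io/basket-service:latest .
docker push newhousedockerregistry.azurecr.io/basket-service:latest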

Deploying your microservice!

OK, now we have a load balancer that responds with a 404 for every request… It is time to deploy a microservice on the platform. For this we will use the .NET Core microservice that I created in another blog. One of the requirements of a good microservices platform is to have automated builds and deployments of the microservices to the platform, so we need a build server and some scripting. GitLab is a great solution for having a code repository and a build server based on Docker containers. They also have a free plan for small projects.

In the repository is a file called .gitlab-ci.yml, which is the configuration file for the build server. In this file, I’ve defined three stages:

  • Build stage: compile the code and run the unit tests
  • Publish stage: create a Docker image and publish it to the private Docker repository that was created earlier
  • Deploy stage: instruct Kubernetes to deploy the container on the platform

Each stage is started by using a Docker container containing the appropriate tools for performing the task. For example, the build stage runs on the SDK version of the .NET Core Docker image, which contains the needed build tools. The last stage, which deploys the container, also needs some extra tools that are installed each time this stage runs (because each time the predefined Docker container is the starting point). Of course, I could optimise build times by creating a new Docker image where these extra tools are already installed and using that as the base image. For the sake of this example I haven’t done this optimisation.
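To give an idea of the structure, a minimal .gitlab-ci.yml with these three stages could look like the sketch below. The image names, tags and script lines are illustrative, not the exact contents of the file in the repository:

stages:
  - build
  - publish
  - deploy

build:
  stage: build
  image: microsoft/dotnet:2.0-sdk            # SDK image contains the build tools
  script:
    - dotnet restore
    - dotnet test
    - dotnet publish -c Release -o ./out

publish:
  stage: publish
  image: docker:latest
  services:
    - docker:dind                            # Docker-in-Docker to build images
  script:
    - docker login newhousedockerregistry.azurecr.io -u $AZURE_SERVICE_PRINCIPAL_USER -p $AZURE_SERVICE_PRINCIPAL_PASSWORD
    - docker build -t newhousedockerregistry.azurecr.io/basket-service:$CI_COMMIT_SHA .
    - docker push newhousedockerregistry.azurecr.io/basket-service:$CI_COMMIT_SHA

deploy:
  stage: deploy
  image: azuresdk/azure-cli-python           # extra tools are installed by the script itself
  script:
    - ./deployment/deploy.sh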

Also, in the deploy stage, a script called ‘deploy.sh’ is called. In this script, Kubernetes YAML deployment files are generated to create a service, an ingress and a deployment of the pods. These are all resources of a Kubernetes cluster. The pods are the actual Docker instances of the microservice running in the cluster. The service is an abstraction over these pods, used to access them. And an ingress resource allows external traffic to reach the service from outside the cluster; in our case, it configures Traefik to forward traffic to the correct microservice. These configuration files are generated from templates by running a custom NodeJS script, which can be found in the ‘deployment’ subdirectory.
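As an illustration of the ingress part, a generated manifest for the basket service could look roughly like this. I’m assuming Traefik’s ‘PathPrefixStrip’ rule type here to get the “first two segments are stripped” behaviour described above; the host, service name and port are placeholders rather than the exact generated output:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basket-service
  annotations:
    kubernetes.io/ingress.class: traefik
    # strip ‘/service/basket’ before forwarding, so the service receives ‘/items’
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
    - host: newhouse-platform.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: /service/basket
            backend:
              serviceName: basket-service
              servicePort: 80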

Of course, when there are multiple microservices, each with its own repository, you will eventually want to centralise the deployment scripts, for example in an Azure Storage Account, and download them at each deployment.

For flexibility, the deployment scripts use a JSON configuration file called ‘platformsettings.json’ in the root of the repository for service-specific settings such as the number of replicas to be started, the CPU and memory limits and environment variables to be set at runtime. These can of course differ between the staging and production environment. One setting that should be noted here is ‘LoadbalancerHostName’, which has to correspond with the DNS name of the load balancer that was created earlier. Otherwise the container will deploy fine, but Traefik won’t route traffic to it, because the host name won’t match.
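A ‘platformsettings.json’ could look something like the snippet below; apart from ‘LoadbalancerHostName’, the property names and values are illustrative and based on the settings described above, not the exact schema used in the repository:

{
  "serviceName": "basket-service",
  "staging": {
    "LoadbalancerHostName": "newhouse-platform.westeurope.cloudapp.azure.com",
    "replicas": 2,
    "cpuLimit": "250m",
    "memoryLimit": "256Mi",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Staging"
    }
  }
}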

The NodeJS deployment script reads this configuration file and uses it to generate the Kubernetes YAML configuration files.

The last step is providing the authentication details that the build server needs to execute the build. Two types of authentication are needed: credentials to log in to the Azure Resource Manager for pushing the container image, and the SSH private key used to trigger the deployment at the Kubernetes cluster. Storing these sensitive values in the build script (which is in a source repository) is not a good idea, so GitLab allows us to store them elsewhere. They are injected as environment variables into the build Docker container. Our script needs the following variables:

  • AZURE_SERVICE_PRINCIPAL_USER: the application id of the principal that was created before the creation of the Kubernetes cluster
  • AZURE_SERVICE_PRINCIPAL_PASSWORD: the password of the principal
  • AZURE_TENANT: the Azure Active Directory tenant id. This was in the JSON output when creating the principal, but it can also be seen in the portal by going to the Azure Active Directory blade and then ‘Properties’ in the left menu
  • KUBE_SSH_PRIVATE_KEY: the SSH private key used to authenticate with the Kubernetes cluster (~/id_rsa_kubernetes)

You can add these variables in GitLab by navigating in the repository to: Settings -> Pipelines (or CI/CD in the new navigation) and then scrolling down to ‘Secret variables’. These variables are not shown in the build output, so this is safe for public builds.
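In the deploy stage these variables can then be used to authenticate before running kubectl; a rough sketch (the local file name for the key is arbitrary) could be:

# write the injected private key to a file and restrict its permissions
echo "$KUBE_SSH_PRIVATE_KEY" > ./id_rsa_kubernetes
chmod 600 ./id_rsa_kubernetes
# log in to Azure with the service principal
az login --service-principal -u "$AZURE_SERVICE_PRINCIPAL_USER" -p "$AZURE_SERVICE_PRINCIPAL_PASSWORD" --tenant "$AZURE_TENANT"
# fetch the Kubernetes credentials so kubectl can talk to the cluster
az acs kubernetes get-credentials --resource-group newhouse-platform --name staging --ssh-key-file ./id_rsa_kubernetes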

Conclusion

There you have it! A .NET Core microservice that is automatically deployed on a Kubernetes cluster running on Azure! The microservice can be publicly accessed at: https://<loadbalancer-dns-url>/service/basket/products/. Of course this is just the beginning of a good implementation. Things that could be added to a good microservices platform are, for example, encrypted settings, auto-scaling, service discovery and better cluster security with VPN. But this is a good start and it helped me a lot in understanding Azure and Kubernetes.

The code repository for the basket microservice can be found at: https://gitlab.com/pnieuwenhuis/newhouse-basket-service/. More detail about the implementation of the microservice can be found in my previous blog:

https://medium.com/@FurryMogwai/building-a-basket-micro-service-using-asp-net-core-and-akka-net-ea2a32ca59d5


