Implementing DevOps with Terraform and Azure Pipelines

Abdelhalim Addad
OCP Digital Factory
7 min read · Aug 27, 2019


Whether you are starting a new project or maintaining old legacy code, and no matter how small or big your team is, delivering value to your end users is always the number one priority.

With the rise of DevOps culture and tools, continuously shipping new features to your live application has become a requirement, especially in the context of an MVP, where instant feedback is needed to plan and shape the rest of the product backlog. Deploying changes to your staging environment should therefore happen at least once a week, if not every day.

I’m a DevOps engineer at the OCP Digital Factory, and during the last few weeks we worked on a product and explored a bunch of cloud technologies. In the following lines, I’ll share my journey with Microsoft Azure: how we implemented DevOps using Infrastructure as Code to bootstrap our environments, and how we set up the CI/CD pipelines to build, test, and deploy our applications.

Application architecture

The architecture of the application is very straightforward: Spring Boot for the backend with a MySQL database, Azure Blob Storage for media storage, and ReactJS for the frontend.

[Figure: Application architecture]

The above schema shows the application components and here is an overview of each:

Kubernetes: used as a platform for automating the deployment, scaling, and management of containerized applications; we are using Docker as the containerization technology. There are several ways to set up a Kubernetes cluster, but for simplicity we are using the managed service offered by Azure, known as AKS.

Frontend Application: built using ReactJS and served with the Nginx web server; the application is packaged in a single container image, which you can check in the following sections.

Backend Application: we’re using Spring Boot for our backend REST API. Spring Boot is an open-source Java-based framework used to create scalable microservices with minimal configuration and easy deployment. We are also using Gradle to build the project and run other tasks such as unit tests, and SonarQube for code analysis.

Azure Container Registry: allows us to securely store the Docker images for our applications, which are built and pushed during our CI/CD process. The registry can be replicated across multiple regions, which helps manage global deployments as one entity and simplifies operations.

Azure Blob Storage: scalable storage for the application’s media files and any kind of unstructured data our application might use.

Azure Database for MySQL: a fully managed MySQL server and database with high availability and dynamic scaling.

Traefik: an HTTP reverse proxy and load balancer used as the ingress controller for our Kubernetes cluster. Traefik sits behind an internet-facing Azure Load Balancer, provisioned separately, that receives incoming requests; the ingress controller pod then dispatches each one to its destination based on a defined ingress resource that dictates the host and path for the end service, as the sample below illustrates.
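For illustration, here is roughly what such an ingress resource could look like; the host and service names are assumptions, and the API version reflects the Kubernetes releases current at the time of writing:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # tell Traefik (rather than another controller) to handle this ingress
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: staging.myapp.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend     # hypothetical frontend service
              servicePort: 80
```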

Infrastructure as Code

There’s no doubt that cloud computing has made our lives easier than ever: navigate around the console, click a few buttons, and you have an up-and-running server; go to another page, fill in some entries for your database credentials, and you have a database server.

The ability to click a few buttons to provision servers, databases, and other infrastructure components has increased development productivity. But while it’s easy to spin up simple cloud architectures, mistakes are easily made when provisioning complex ones: we repeat steps over and over, the console may change over time, and human error will always be present.

One way to avoid these kinds of errors is to automate the process of provisioning infrastructure, or what we call Infrastructure as Code. There are many tools that help us achieve this goal, and Terraform is my favorite.

Terraform is a very nice declarative infrastructure management tool from HashiCorp that uses the HCL language; it’s easy to learn, simple to use, and yet very powerful. So let’s see what our architecture looks like in Terraform code.

The actual infrastructure code is organized into modules, with variables as inputs and outputs. This structure makes the code cleaner and reusable, allowing different combinations of the infrastructure components to satisfy each environment’s needs, as the example below shows.
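For example, an environment definition might compose the modules roughly like this; the module paths, variable names, and outputs are illustrative, not the actual layout of our repository:

```hcl
# staging/main.tf: compose reusable modules into one environment
module "network" {
  source        = "../modules/network"
  name_prefix   = "myapp-staging"
  address_space = "10.1.0.0/16"
}

module "kubernetes" {
  source     = "../modules/kubernetes"
  subnet_id  = module.network.subnet_id # output exposed by the network module
  node_count = 3
}
```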

For the sake of simplicity, I’ll be presenting each resource with hardcoded values that would normally be handled with variables and stored in a terraform.tfvars file.

The virtual network
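Here’s a minimal sketch of this part, with hardcoded illustrative names and the 1.x azurerm provider syntax that was current at the time (newer provider versions rename some attributes):

```hcl
# Resource group that will hold all the staging resources
resource "azurerm_resource_group" "main" {
  name     = "myapp-staging-rg"
  location = "West Europe"
}

# Virtual network and the subnet used by the Kubernetes nodes
resource "azurerm_virtual_network" "main" {
  name                = "myapp-vnet"
  address_space       = ["10.1.0.0/16"]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

resource "azurerm_subnet" "aks" {
  name                 = "aks-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefix       = "10.1.0.0/24"
}
```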

This will first create a resource group, which is a way to organize resources on Azure, then create a virtual network and a subnet (10.1.0.0/24) from which the Kubernetes nodes will pick up their IP addresses. Note that there are more options available for implementing networking on Azure with Terraform; if you are curious, you can check the docs.

Kubernetes
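A sketch of the cluster and its route table might look like this; the VM size, service principal, and names are illustrative, and the agent_pool_profile block matches the 1.x provider:

```hcl
resource "azurerm_kubernetes_cluster" "main" {
  name                = "myapp-aks"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "myapp"

  agent_pool_profile {
    name           = "default"
    count          = 3                 # three worker nodes
    vm_size        = "Standard_DS2_v2" # illustrative size
    os_type        = "Linux"
    vnet_subnet_id = azurerm_subnet.aks.id
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000" # placeholder
    client_secret = "REPLACE_ME"                           # placeholder
  }

  network_profile {
    network_plugin = "azure"
  }
}

# Route table so the nodes can reach the internet, attached to the AKS subnet
resource "azurerm_route_table" "aks" {
  name                = "aks-route-table"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  route {
    name           = "internet"
    address_prefix = "0.0.0.0/0"
    next_hop_type  = "Internet"
  }
}

resource "azurerm_subnet_route_table_association" "aks" {
  subnet_id      = azurerm_subnet.aks.id
  route_table_id = azurerm_route_table.aks.id
}
```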

This will create a 3-node Kubernetes cluster in the subnet we provisioned before. We are also creating a route table to allow external communication from our cluster to the internet.

Database Server
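A sketch under the same assumptions; the SKU, credentials, and names are illustrative, and the virtual network rule requires at least the General Purpose tier:

```hcl
resource "azurerm_mysql_server" "main" {
  name                = "myapp-mysql"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  sku {
    name     = "GP_Gen5_2"
    capacity = 2
    tier     = "GeneralPurpose" # required for virtual network rules
    family   = "Gen5"
  }

  storage_profile {
    storage_mb            = 51200
    backup_retention_days = 7
    geo_redundant_backup  = "Disabled"
  }

  administrator_login          = "mysqladmin" # placeholder credentials
  administrator_login_password = "REPLACE_ME"
  version                      = "5.7"
  ssl_enforcement              = "Enabled"
}

resource "azurerm_mysql_database" "main" {
  name                = "myappdb"
  resource_group_name = azurerm_resource_group.main.name
  server_name         = azurerm_mysql_server.main.name
  charset             = "utf8"
  collation           = "utf8_unicode_ci"
}

# Restrict access to the AKS subnet only
resource "azurerm_mysql_virtual_network_rule" "main" {
  name                = "mysql-vnet-rule"
  resource_group_name = azurerm_resource_group.main.name
  server_name         = azurerm_mysql_server.main.name
  subnet_id           = azurerm_subnet.aks.id
}
```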

This will create a MySQL server and a database. We used azurerm_mysql_virtual_network_rule for the network configuration, so the database server is available internally in our subnet, along with a firewall rule that allows access only from that subnet.

Container Registry
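This one is short; a sketch with an illustrative name (note that replicating the registry across regions, as mentioned above, requires the Premium SKU):

```hcl
resource "azurerm_container_registry" "main" {
  name                = "myappregistry" # must be globally unique
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  sku                 = "Standard"      # "Premium" enables geo-replication
  admin_enabled       = false
}
```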

Blob Storage
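A sketch along the same lines, with an illustrative account name and a private container for the media files:

```hcl
resource "azurerm_storage_account" "media" {
  name                     = "myappmediastorage" # must be globally unique
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS" # locally redundant storage
}

resource "azurerm_storage_container" "media" {
  name                  = "media"
  storage_account_name  = azurerm_storage_account.media.name
  container_access_type = "private"
}
```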

This will create blob storage for our media and files, using LRS redundancy, which is enough for our staging environment.

CI/CD Pipelines

This section assumes you already have a Microsoft Azure account and an active Azure DevOps organization with the frontend and backend repositories that contain the React and Spring Boot application code, respectively.

Also, make sure you’re an administrator of the Azure DevOps projects that you want to use.

For each application, we’ll set up a build pipeline to build the project, run the unit/integration tests, build and push the Docker images, and deploy to the staging environment we created in the previous section.

Spring Boot application

I’m not going to dig into the details of the app; rather, I’ll focus on the Dockerfile and the YAML code of the build pipeline.

We are using a multi-stage build in our Dockerfile. Multi-stage builds are a feature supported since Docker 17.05 that lets us produce optimized application images by using one base image to build the code artifacts and another for the runtime environment.

In our case, openjdk:8-jdk-alpine contains everything needed to build and test the application, and in the next stage we only copy the jar file into the openjdk:8-jre-alpine base image, which is our Java runtime environment.
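A minimal sketch of such a Dockerfile; the Gradle invocation and jar path are assumptions about the project layout:

```dockerfile
# Stage 1: build and test with the full JDK
FROM openjdk:8-jdk-alpine AS build
WORKDIR /app
COPY . .
RUN ./gradlew build --no-daemon    # assumes the Gradle wrapper is committed

# Stage 2: copy only the jar into a slim JRE image
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=build /app/build/libs/*.jar app.jar  # assumes a single jar artifact
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```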

Azure Pipelines

Azure Pipelines enables you to continuously build, test, and deploy to any platform or cloud, either manually or on each code push. In my case, I used different workflows depending on the branch the code is checked out on. Basically, we have two types of pipelines: one triggered when changes happen on a feature branch, and another when merging into master.

Here are the steps of each:

Feature branch:

  • Only build the application jar and execute the unit/integration tests using Gradle tasks.

Master branch:

  • Execute code quality analysis with SonarQube.
  • Build the application Docker image, tag it with the build ID, then push it to the container registry.
  • Deploy to the staging environment using a Helm chart.

To implement these workflows we have two options. One of them is via the GUI, which might be helpful if you don’t want to bother writing code for your pipelines.

But to be more consistent, I prefer setting up those pipelines with YAML, by adding a configuration file called azure-pipelines.yaml to the app’s source code; this lets us track build changes more easily and offers more customization options.

Also, YAML configuration is nothing new for setting up CI/CD; it’s used in almost every tool I’ve tried before, such as GitLab CI and CodeBuild.

Here’s the content of our YAML file:
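Here is a condensed sketch of what it boils down to; the task names are standard Azure Pipelines tasks, while the service connections, chart path, and release name are assumptions:

```yaml
trigger:
  branches:
    include:
      - master
      - feature/*

pool:
  vmImage: 'ubuntu-latest'

steps:
  # All branches: build the jar and run unit/integration tests
  - task: Gradle@2
    inputs:
      tasks: 'build'

  # Master only: code quality analysis with SonarQube (server config assumed)
  - task: Gradle@2
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    inputs:
      tasks: 'sonarqube'

  # Master only: build the image, tag it with the build ID, push it to ACR
  - task: Docker@2
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    inputs:
      containerRegistry: 'acr-service-connection' # hypothetical service connection
      repository: 'backend'
      command: 'buildAndPush'
      tags: '$(Build.BuildId)'

  # Master only: deploy to staging with the Helm chart
  - task: HelmDeploy@0
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    inputs:
      connectionType: 'Azure Resource Manager'
      azureSubscription: 'my-azure-subscription'  # hypothetical service connection
      azureResourceGroup: 'myapp-staging-rg'
      kubernetesCluster: 'myapp-aks'
      command: 'upgrade'
      chartType: 'FilePath'
      chartPath: 'charts/backend'
      releaseName: 'backend-staging'
      overrideValues: 'image.tag=$(Build.BuildId)'
```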

Note that we are using the following variables to customize the pipeline and to override the default Helm values for deployment to our staging:
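The actual names are specific to the project; an illustrative set, with hypothetical names, could look like this:

```yaml
variables:
  containerRegistry: 'myappregistry.azurecr.io' # hypothetical registry URL
  imageRepository: 'backend'
  helmReleaseName: 'backend-staging'
  k8sNamespace: 'staging'
```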

React application

I applied the same process to the frontend application. In the multi-stage Docker build, I used a Node-based image to build the application dependencies and static files, and the Nginx web server as the runtime environment to serve HTTP requests. Here are the Dockerfile and the Nginx config I used:
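A sketch of both, assuming a standard create-react-app layout (build output in build/) and Yarn:

```dockerfile
# Stage 1: install dependencies and build the static bundle
FROM node:10-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

# Stage 2: serve the static files with Nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```

And the Nginx side, falling back to index.html so client-side routing keeps working:

```nginx
server {
  listen 80;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    # serve the requested file if it exists, otherwise index.html
    try_files $uri $uri/ /index.html;
  }
}
```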

Azure Pipelines

The structure of the pipeline is similar to the Spring Boot application’s; the only thing that changed is the build steps, where I used the NodeTool task provided by Azure together with Yarn to build the project and its dependencies.
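Those build steps could be sketched like this; the Node version and script names are assumptions:

```yaml
steps:
  # Install the Node.js version the project targets
  - task: NodeTool@0
    inputs:
      versionSpec: '10.x'

  # Install dependencies and produce the production bundle with Yarn
  - script: |
      yarn install --frozen-lockfile
      yarn build
    displayName: 'Yarn install and build'
```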

In this blog post, I tried to cover how we used Terraform and Azure Pipelines to set up the infrastructure and implement the CI/CD needed to deploy to our staging environment.

I hope you enjoyed the read. Feel free to share your thoughts in the comments.
