Deploying a Docker Application to Azure App Service with Terraform (Part 1)

Hildred Adebayo
6 min read · Feb 6, 2023


Docker + Terraform + a fully managed web hosting service = magic! In this two-part article, we will see how to manage dependencies seamlessly, speed up resource provisioning, and let the cloud provider do the heavy lifting of running our application. Shall we?

Docker has become a popular tool for developers in recent years due to its ability to simplify the deployment and management of applications. It provides a consistent environment for running an application by bundling the dependencies the application requires, taking away the complexity and extra effort of installing and managing those dependencies by hand, which in times past could be hellish for developers.

When running applications with Docker, an image is first built and then run in a Docker container, exposing the desired port. How does this image get created?
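In practice, those two steps look like this (the image name, tag, and ports below are placeholders, not values from this project):

```
# Build an image from the Dockerfile in the current directory
docker build -t <username>/<image>:<tag> .

# Run it as a container, publishing container port 80 on host port 8000
docker run -p 8000:80 <username>/<image>:<tag>
```

We will come back to the build command, with this project's actual naming convention, later in the article.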

The essence of this article is to walk you through building a Docker image with a Dockerfile and running the application on a fully managed hosting service (the power of cloud computing, right?).

In hosting this application, we will leverage Infrastructure as Code (IaC) with Terraform. Terraform is a choice IaC tool because it works with various cloud providers, preventing vendor lock-in. It is an open source project comprising numerous modules and cloud-specific providers that allow for seamless integration with any desired cloud platform, such as Azure, AWS, or GCP.

Using Terraform also doubles as documentation of your infrastructure and reduces the human error that comes with manual configuration. The best part is, it is very fast 🚀🚀 at provisioning resources.

Now that we have a basic introduction to the service and tools we will be working with, let’s begin our Docker-Terraform-Azure journey!

First, the prerequisites for creating a Docker image are:

  • Docker installed on your machine
  • The application code and any necessary dependencies

In this tutorial, we will be dockerizing a Django application with this directory structure:

Let’s take a look at the dependencies required to run this portfolio application in the requirements.txt file:

asgiref==3.4.1
certifi==2021.5.30
charset-normalizer==2.0.4
Django==3.2.6
django-crispy-forms==1.12.0
django-environ==0.7.0
gunicorn==20.1.0
idna==3.2
pytz==2021.1
requests==2.26.0
sqlparse==0.4.1
urllib3==1.26.6
whitenoise==5.3.0
pandas

To run this application locally, one might create a virtual environment, install all of these dependencies into it, and then repeat the same process whenever the application needs to run on another machine. Our goal is to save ourselves that stress and repetition, so we'll write a Dockerfile in the root folder of the Django application to bundle these dependencies.
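For contrast, here is a sketch of the manual route we are avoiding (the /tmp/portfolio-venv path is just an example location):

```shell
# Create a virtual environment and run Python from inside it.
# In a real setup you would follow this with
# `pip install -r requirements.txt`, repeated on every machine
# that needs to run the app.
python3 -m venv /tmp/portfolio-venv
/tmp/portfolio-venv/bin/python -c "import sys; print(sys.prefix)"
```

With Docker, this setup happens once, inside the image, and travels with it.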

Take a look at this Dockerfile:

FROM python:3.8

ENV PYTHONUNBUFFERED=1

RUN mkdir /app

WORKDIR /app

COPY . /app

RUN pip3 install -r requirements.txt

EXPOSE 80

ENTRYPOINT ["sh", "-c", "echo \"SECRET_KEY=$SECRET_KEY\" > portfolio/.env && exec python3 manage.py runserver 0.0.0.0:80"]

Explaining this file line by line:

  • Line 1: This is a Django application, which means it is written in Python (in this case version 3.8), hence our base image being python:3.8.
  • Line 2: When we run our application, it is very helpful to see the container's output as it runs. Setting PYTHONUNBUFFERED=1 ensures the output (stdout and stderr) streams are sent straight to the terminal instead of being buffered.
  • Line 3: The RUN instruction tells Docker to execute a shell command during the image build. In this case, it creates the /app directory the application code will be copied into.
  • Line 4: WORKDIR gives itself away. It changes the current directory to the /app directory created in step 3, meaning every instruction that follows runs from /app.
  • Line 5: This line tells Docker to copy all the files in the build context (the current working directory) into /app (there is a connected .dockerignore file we will come to shortly).
  • Line 6: Now to the dependencies. This line installs all of the dependencies listed in requirements.txt into the image. Once this succeeds, the application is bundled with everything it needs to run.
  • Line 7: This application will run on port 80 (the port Azure App Service expects by default), hence we expose it.
  • Line 8: Now, this is an interesting part. For a Django application to run, it requires a SECRET_KEY declared in the settings file of the project. This SECRET_KEY is a sensitive credential and must not be baked into the image. Instead, the value is stored as an application setting in Azure App Service, which injects it into the running container as the SECRET_KEY environment variable. Because that variable only exists at runtime, the .env file is written by the entrypoint when the container starts, not during the image build: the command echoes the key into portfolio/.env (where the settings file reads it from) and then starts the application on 0.0.0.0:80. The exec ensures the Django process replaces the shell as the container's main process.
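For clarity, here is a minimal stdlib-only sketch of what a .env reader such as django-environ roughly does with that portfolio/.env file: parse KEY=VALUE lines, falling back to the process environment (the file path and key value below are made up for illustration):

```python
import os

def read_env(path):
    """Parse simple KEY=VALUE lines from a .env file into a dict."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values

# Hypothetical .env content, standing in for the file created in the container
with open("/tmp/example.env", "w") as f:
    f.write("SECRET_KEY=s3cr3t\n")

env_values = read_env("/tmp/example.env")
SECRET_KEY = env_values.get("SECRET_KEY") or os.environ.get("SECRET_KEY", "")
print(SECRET_KEY)  # s3cr3t
```

Either way, the key lives in the hosting environment, not in the image.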

To be able to deploy this Docker image to Azure App Service, we have to build the image and push it to a Docker Hub repository. There are other container registries, like Azure Container Registry and Amazon Elastic Container Registry; however, the tools chosen in this article are all geared towards avoiding vendor lock-in, which is why we're using Docker Hub.

Before building the image, remember there was a .dockerignore file mentioned earlier. It works much like a .gitignore file does: it tells Docker to exclude the files listed in it from the image during the build process. In this case, it is used to exclude the .env file present in the Django project directory. That .env file was required during development to start the application locally, but since it is not meant to be stored in the Docker image, it is excluded from the build this way.

This is the content of the .dockerignore file:

portfolio/.env

After adding the Dockerfile and .dockerignore file, here’s what the directory structure looks like:

Next, head over to Docker Hub to create a repository, naming it whatever resonates with you. The repository name will usually follow this convention:

[dockerusername]/[repositoryname]

Now, to build the image, open a terminal and, in the project root directory where the Dockerfile is stored, run this command:

docker build -t [dockerusername]/[repositoryname]:[tagvalue] .

If you do not give a tag value, Docker will default to the latest tag. (Note: when building the image, make sure to include the period (.) at the end of the command; it tells Docker to use the current working directory as the build context, i.e. the set of files available to COPY.) As the build runs, the output shows the layers of the image as they are created.

A step that ideally comes between the build and the push steps is testing. This can be done on a local machine; however, because we are reading a runtime variable from Azure App Service, this test will be done in the hosting environment (a deployment slot), since the image is built to fit the App Service environment.
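That said, if you want a quick local smoke test before pushing, the runtime variable can be supplied by hand, assuming the container picks SECRET_KEY up from its runtime environment (the host port and key value here are placeholders):

```
docker run -p 8000:80 -e SECRET_KEY=some-test-value [dockerusername]/[repositoryname]:[tagvalue]
```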

The final step is to push the image to Docker Hub. Note that before you push, you must be logged in from your command line using the docker login command, which will prompt you for the credentials to connect to your Docker Hub account.

To push the image, run the following command:

docker push [dockerusername]/[repositoryname]:[tagvalue]

The image is now available on Docker Hub, ready to be deployed to App Service. In the second part of this article, we will dive into deploying this image and running it on App Service, provisioning the required resources with Terraform.

I hope you’ve learnt a thing or two about Docker, hope to see you at Part 2!

Feel free to connect on LinkedIn and please leave a clap if you found this article helpful ❤️
