Building Microservices with Python, Part 2
In the previous article about Microservices with Python, I discussed how to create a basic API in a few simple steps. You can run that directly with Python, but let’s say we have to integrate it with other systems, such as a database, Elasticsearch, or RabbitMQ.
As I mentioned in the previous article, you can find all the code I am writing for these articles in this GitHub repo: https://github.com/ssola/python-flask-microservice
The third part is available in this link.
What is Docker?
Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density.
Docker allows us to have a separate container for each of our dependencies. In this example we are going to include two dependencies in our project: Elasticsearch and RabbitMQ.
But before starting, I need to explain some concepts:
A Docker image is a read-only template with instructions for creating a Docker container. For example, an image might contain an Ubuntu operating system with Apache web server and your web application installed.
A Docker container is a runnable instance of a Docker image. You can run, start, stop, move, or delete a container using Docker API or CLI commands. When you run a container, you can provide configuration metadata such as networking information or environment variables.
A Docker registry is a library of images. A registry can be public or private, and can be on the same server as the Docker daemon or Docker client, or on a totally separate server.
Creating your docker-compose
It is a common practice to put the Dockerfile in the root of your project. With this approach, you can share your development environment with anyone cloning the project.
Docker Compose allows you to define all the containers your service needs. For instance, in the Compose file we can declare that we need a MySQL instance and an Elasticsearch node up and running. We can define both in the same file and then, with a single command, bring those services up or down.
A Dockerfile allows you to write the recipe for a new container. In this case, let’s imagine we need to create a new container to run our application. With a Dockerfile we can tell Docker to:
- Install Python 3.6
- Clone my project
- Make it run
Understanding the Dockerfile
The Dockerfile lets you define a recipe to build your image. Starting from a given base image, like Alpine, you can list the commands to be executed to reach some state, for instance, running a Python app.
This is the definition of our recipe:
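A minimal sketch of such a recipe, assuming a `python:3.6-alpine` base image, a `requirements.txt` file for dependencies, and `app.py` as the entry point (the exact tags, paths, and start command in the repo may differ):

```dockerfile
# Start from a small Alpine image that already ships with Python
FROM python:3.6-alpine

# Create a directory for our application code
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install the application dependencies first, so this layer is cached
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project into the image
COPY . /usr/src/app

# Expose the Flask default port and start the app
EXPOSE 5000
CMD ["python", "app.py"]
```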
It basically takes an Alpine image with Python already installed. This is nice because it saves us some time: we only need to create a directory for our files, install the dependencies of our application, and that is it.
This is a simple example, but for a production ready image we need to think about:
- Setting environment variables depending on production/staging environment.
- Tune the image with production ready settings on Flask.
- Store the image in some private registry to be able to do immutable deployments.
Defining my dependencies
It is time to fill in our docker-compose.yml file. Now we are going to define the dependencies I stated above, Elasticsearch and RabbitMQ.
Most of the time you do not need to create your own images. Fortunately, many of them are publicly available on Docker Hub.
Let’s start with the Elasticsearch one. Be careful when looking for an image on Docker Hub: check the version of the application you want to install. I found plenty of Elasticsearch 1.7 images when the current version is 5.2.0.
In this case, I chose the official image from Elastic. Probably your first question is: what is that Alpine? Alpine is a base image based on Alpine Linux, a super minimal distribution that allows us to create very small containers. It is a good idea to search for images built on top of that base image.
In our docker-compose.yml we are going to add these lines:
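A sketch of what that service definition could look like, using the official Elastic image for 5.2.0; the port mapping, JVM options, and volume name are assumptions for illustration:

```yaml
version: "2.1"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.2.0
    environment:
      # Keep the JVM heap small for a local development setup
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    volumes:
      # Persist the index data outside the container
      - esdata:/usr/share/elasticsearch/data

volumes:
  esdata:
```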
And the RabbitMQ dependency too, after the Elasticsearch one:
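A possible service entry for it, assuming the official `rabbitmq` image with the management plugin and default `guest` credentials (another stanza under the same `services:` key):

```yaml
  rabbitmq:
    image: rabbitmq:3-management-alpine
    environment:
      # Credentials used by our application to connect
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # management UI
```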
With these few lines we have a composition of two containers. We can bring both services up or down at the same time. But before building and running our services, we should understand what we just defined.
- image: defines which image should be used to build this service
- environment: defines environment variables that will be passed to the container. For instance, we can set the username and password for RabbitMQ
- ports: defines the port forwarding from the container to your machine
- command: if needed, you can execute a command after starting the container
- volumes: defines a mapping between the container filesystem and your host filesystem. This is useful if you want to share the Elasticsearch data between containers
Now we can execute the command docker-compose up -d. This will bring up two containers, one with Elasticsearch and the other one with RabbitMQ. The -d flag detaches the process from your session.
If you want to stop your containers, you can run docker-compose down, and docker ps will show you the status of each container.
Finally, we are going to include our own image in the docker-compose.yml. With this final step we will be able to run our application across three different containers.
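A sketch of that third service, assuming the Dockerfile sits in the project root and the Flask app listens on port 5000; the service name `web` is arbitrary:

```yaml
  web:
    # Build the image from the Dockerfile in the project root
    build: .
    ports:
      - "5000:5000"
    # Start the dependencies before our application
    depends_on:
      - elasticsearch
      - rabbitmq
```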
Useful Docker commands
Now we know how to build our image and how to orchestrate the creation of all the containers we need, but Docker has a lot more magic in it. These are some useful commands:
- List all the running containers: docker ps
- Access one specific container (useful for debugging issues): docker exec -i -t container_name /bin/bash
- Check the container logs: docker logs container_name
- Run a specific command on a container: docker exec container_name echo 'Hello World!'
Since version 2.1 of the Compose file format, we have an interesting feature for wiring services together: we can start a service under a condition, for instance, only once another service’s health check passes. We can do something like:
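A sketch of that idea, assuming the health check probes RabbitMQ with rabbitmqctl status and that our application service only starts once RabbitMQ reports healthy (the `condition` syntax requires file format 2.1):

```yaml
version: "2.1"
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    healthcheck:
      # The container is "healthy" once the broker answers
      test: ["CMD", "rabbitmqctl", "status"]
      interval: 10s
      timeout: 5s
      retries: 5

  web:
    build: .
    depends_on:
      rabbitmq:
        # Only start our app after the health check passes
        condition: service_healthy
```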
With Docker, we can easily create a container for each dependency our application has. In this example, we saw how to include Elasticsearch and RabbitMQ, but we can quickly add MySQL, MongoDB, Grafana or any other tool we could need.
This helps us to be more efficient coding our service and to have an environment quite similar to the production one.
In a future chapter of Building Microservices with Python, I will explain how to do immutable deployments with Docker, using the image we created in the first step.
Remember that you can find all the code I am using for this series of articles here: https://github.com/ssola/python-flask-microservice/
By the way, at HelloFresh we are hiring for Data Engineers and Backend Engineers.