Understanding Docker as if it were a Game Boy

James Audretsch
36 min read · Feb 25, 2018


The definitive guide to Docker

Welcome to the Docker tutorial that will change the way you develop software.

When I finally understood Docker (which took several months), it felt like I had superpowers. Though this tutorial is rather long, you’ll feel this way too by the end :)

We’ll start from the ground up with zero knowledge. You’ll learn all of the important fundamentals of Docker while working through real examples. At the end you’ll understand how to deploy a Dockerized web application stack from scratch.

I’ve written this tutorial as the ultimate resource of how I interact with Docker on a daily basis. It is focused on teaching a practical understanding of Docker.

Most importantly, the goal is to learn and have fun :)

  • This tutorial is designed for Mac and Linux but will work with Windows. If using Windows, replace basic shell commands with Windows equivalents.

What will I learn?

We’ll cover the following topics:

  • How does Docker work?
  • What is an Image?
  • What is a Container?
  • What is a Dockerfile?
  • Mounting in a Container
  • Understanding Container Ports
  • Data Layering in Images
  • What is Docker Compose?
  • Writing Microservices
  • Networking between Containers
  • Managing Data in Containers
  • Composability
  • What is DockerHub?
  • Deploying Containers to Production
  • Container Best Practices

What is Docker?

A little bit of historical background:

Docker is software that started as an internal project at the platform-as-a-service company dotCloud. Docker eventually grew to be so popular that it overshadowed dotCloud, so dotCloud formed a new company called Docker Incorporated. Docker Inc. is a company dedicated to further developing Docker and the ecosystem surrounding it.

Docker says it “Packages software into standardized units for development, shipment and deployment”.

What does this actually mean?

Let’s digress for a moment and take a look at the Game Boy Color:

The nostalgia is overwhelming :)

If you remember, when you buy a Game Boy game it comes as a cartridge:

A physical game cartridge? So retro.

I believe what makes video game consoles so successful is their simplicity.

When you want to play a game, you put it in your Game Boy and it just works.

You can share this game with your friend; she can put it in her Game Boy and it will also work.

Docker aims to make running software as easy as pressing the ON button on a Game Boy!

This is the essence of what makes Docker useful — anyone who has Docker can run any software written for Docker.

What kind of software can you actually run with Docker? From a technical standpoint, Docker is reminiscent of Virtual Machines —

Docker is an extremely lightweight engine for running virtual operating systems.

It lets us run Linux operating systems in isolated environments very quickly.

Why Use Docker?

Software Installation Nightmares: Have you ever tried installing software on your computer and it just won’t work? You get some weird error, and it’s impossible to figure out what’s wrong? After hours of searching you get to the 10th Google results page… and on some forum you’ve never heard of, you finally find a random comment that fixes your issue.

Similarly, what makes writing PC games more difficult than writing Game Boy games is that you have to design for a wide range of devices and specs. Different computer systems may have different operating systems, drivers, graphics cards, etc.

Docker Saves The Day:

Docker is like a Game Boy.

Docker takes a standardized piece of software and runs it as a Game Boy would run a game.

You don’t need to worry about what host system the user is running — as long as they have Docker, your code will run.

As a developer, you don’t need to worry about what machine is running your software. As a user, you don’t need to worry about software not working — it’s plug and play.

Time to Download Docker

Luckily Docker gives us their engine for free on Mac, Linux, and Windows.

Download it here:

Docker provides a Community Edition (the free one that we are using) and an Enterprise Edition. The Enterprise Edition provides a few bells and whistles that we don't really care about or need. The core functionality is the same between the two editions.

On Mac, you should see this icon in your status bar.

Verify Docker is running. Open up a new Terminal and type:

docker

And verify docker-compose is installed (you’ll get a similar output):

docker-compose

*Linux Only — you’ll need to download “docker-compose” separately: https://docs.docker.com/compose/install/#install-compose

What is a Docker Image?

A Docker Image is like a Game Boy Game Cartridge — it is software. It is standardized to run on any Game Boy. You can give a game to your friend, and she can put it into her Game Boy and play it.

Let’s look at an example by downloading our first image. The recipe for this is:

docker pull <IMAGE_NAME>

In Terminal run:

docker pull ubuntu:14.04

This tells Docker to pull an Ubuntu image from Dockerhub.com, the central repository for Docker images.

It’s like driving to Gamestop to buy a game, but a lot faster! Type:

docker images
Your output will only have ‘ubuntu’.

This is like the catalog of Game Boy games that you own: the Docker images you currently have.

I kept mine on a shelf like this. Pretend you have a shelf rack of Docker images, each one its own piece of software.

What is a Docker Container?

We’ve upgraded our system from a Game Boy to a GameCube. GameCube games are held on a disc which is read only.

A Docker Image exists as a GameCube game does — it is immutable.

A Docker Container is an instance of the image running — similar to when you actually put the game into your GameCube, turn it on, and the TV screen lights up.

Running a Docker container equates to actually playing your GameCube game. Docker runs your image as a container, just like a GameCube runs a game.

Let’s run our first container. The simplest recipe for running a container is:

docker run <image> <optional shell command to run inside container>

Now type:

docker run ubuntu:14.04 echo 'hello world'

This runs the command in an instance of the Ubuntu environment. In other words, the command echo 'hello world' is run in an Ubuntu container. Now type:

docker ps    # List running docker containers

There’s nothing there… that’s because docker ps only lists containers that are currently running.

docker ps -a  # Lists running and stopped docker containers
The command we passed to the container is listed under the COMMAND column.

When a process is done running, the Docker container exits. It’s like a power saving feature on new video game consoles — if you aren’t playing anymore, the system turns off automatically.

Every time you do the docker run command, it creates a new container.

Let’s make things more interesting — we can actually connect to the virtual operating system that is running in a container:

docker run -it ubuntu:14.04 /bin/bash  

The -it flag, combined with the /bin/bash command, attaches a terminal to the container. We are now inside the container, which you can think of as a mini virtual machine.

It’s not the most apparent command, but it’s a very important one to remember. You’ll know you are in the container when the user is root@<container_id> in your terminal. Now type:

ls

We are actually in a complete Ubuntu OS system. Play around and run any command you want.

A container is completely independent from the host system… like an isolated virtual machine. You can change any data you want without repercussions.

Just like when you play Mario Kart on your GameCube, no matter what you do in the game, you can't affect the GameCube itself. You won't affect the game disc either.

A container is completely independent from the image and isolated from the host machine.

Open up a new terminal window and type:

docker ps 

You can see that the container is indeed running.

Jump back to the first terminal window (the one inside the docker container) and type:

mkdir /TEST    # Make a directory
exit # Exits the docker container

exit will tell Docker that you want to exit the container back to the host system.

The container is now stopped (check with docker ps to confirm). Now we are going to rerun the stopped container.

docker ps -a

You can use the CONTAINER_ID to start a container again. Make sure you use your own container ID, found in the leftmost column of docker ps -a.

docker start aa1463167766    # Your container id instead
docker ps
docker exec -it aa1463167766 /bin/bash # Your container id instead
Type ls again, and you can see the “TEST” directory still exists!

exec lets us execute a shell command inside a container. In this case, we execute /bin/bash, which allows us to connect to a terminal inside the container.

To leave the container, type:

exit

Now stop and remove the docker container:

docker ps
docker stop aa1463167766 # Your container ID instead
docker ps
docker rm aa1463167766 # Your container ID instead

What is a Dockerfile?

Docker allows you to share the environment your code runs in, just like GitHub lets you share your code.

A Dockerfile allows you to document the steps to set up the environment for your application. The Docker Engine will parse your Dockerfile, and create a Docker Image from it.

With a Dockerfile, you explicitly document how you got to a certain state. To make this clearer…

Imagine you are playing Pokemon:

But you are struggling to get past the first gym. As your friend, I want to help you. I have two options:

  1. I could give you a save file starting right after the first gym. You just need to load the save file.
  2. I can write a document that states each step I took to beat the first Gym. It’s like a recipe that you need to follow carefully. It might look something like this:

INSTRUCTIONS TO BEATING 1ST GYM
- Choose Squirtle as your starter Pokemon
- Head to forest north of Pallet Town
- Train until Squirtle is level 10
- Head to Pewter City
- Stop at Pokecenter and heal Pokemon
- Fight gym leader Brock using the move “Watergun”

Which is more useful? The second way, obviously — because it explicitly teaches you how to get to the state that you desire. It’s not just a black box starting after the first gym.

With Docker, you have the same two ways of making images.

  1. You can commit (save) a new image directly from a container.
    This is like sharing your save file with someone.
  2. Write a Dockerfile, explicitly documenting how you reached that machine state.

We prefer the second way because it is explicit, maintainable, and editable (you can rewrite a Dockerfile but can’t “rewind” an image).

Let’s practice this with a real example. Clone the repository:

git clone https://github.com/jamesaud/Docker-Medium-Tutorial
cd Docker-Medium-Tutorial/1-pyramid

There are two files…

pyramid.sh: A shell script which prints out a pyramid (which I borrowed from here)

Dockerfile: Recipe for running our app
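
Since the original screenshot isn't reproduced here, here is a minimal sketch of what that Dockerfile looks like, reconstructed from the commands explained below (the RUN line is an assumption; the exact file is in the 1-pyramid folder of the repository):

# Start from the Ubuntu image we pulled earlier
FROM ubuntu:14.04

# Copy the script from the host into the image
COPY pyramid.sh /pyramid.sh

# (assumed) run a shell command at build time, e.g. make the script executable
RUN chmod +x /pyramid.sh

# Run the script every time a container starts
CMD bash /pyramid.sh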

These commands are part of a simple Dockerfile language. Let's go over what they mean:

FROM: This is like choosing the game engine you’ll use for your game (Unity, Unreal, CryEngine). Though you could start from scratch, it makes more sense to use an existing engine.

In our case, we are using the Ubuntu:14.04 image we pulled earlier. Our code will run starting in the Ubuntu OS environment.

COPY: Copies a file from the host machine into the container.

RUN: Runs a shell command as if you were in the container’s terminal.

CMD: Executes this command every time the container runs.

For details of more commands, check out the documentation.

When writing a Dockerfile, start from the most relevant existing image and add on to it to suit your application’s needs.

To build the image, run:

docker build . --tag pyramid

The path argument (here . , meaning the current folder) tells Docker where to find the Dockerfile.
--tag is the name for the image.

The commands we wrote in our Dockerfile are executed. Confirm the image is built with:

docker images

Run a container from the Image:

docker run pyramid

That’s cool — our shell code was executed! It ran the CMD statement of the Dockerfile when the container started.

We copied the pyramid.sh file into the pyramid Docker image when it was built. You can verify this by running docker run pyramid ls, which executes the ls command in the container.

However, our container is not very flexible. It would be nice if the user could specify how many lines they want the pyramid to be.

Edit the pyramid.sh file to accept a single command line argument. The last line of pyramid.sh should be changed to:

...# makePyramid 5 - the old line
makePyramid $1

Let’s rebuild the image:

docker build . --tag pyramid

Now run a container:

docker run pyramid bash /pyramid.sh 7

Why did this work?

When running a container, you can overrule any CMD in a Dockerfile by passing your own command to the container.

Our original CMD, bash /pyramid.sh, is overridden. However, wouldn't it be nicer to just pass 7 to the container, instead of bash /pyramid.sh 7?

Adjust the Dockerfile:
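
A sketch of the adjusted Dockerfile, based on the ENTRYPOINT/CMD behavior described below (the finished version is in the 2-pyramid folder):

FROM ubuntu:14.04
COPY pyramid.sh /pyramid.sh

# ENTRYPOINT always runs; CMD provides a default argument that can be overridden
ENTRYPOINT ["bash", "/pyramid.sh"]
CMD ["5"]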

CMD is appended onto whatever you specify in ENTRYPOINT.

["bash", "/pyramid.sh"] is actually run as bash /pyramid.sh. The syntax of putting command line arguments in a list is preferred in Dockerfiles.

After CMD is appended to ENTRYPOINT, the final command will be bash /pyramid.sh 5. The user can overwrite the 5 by providing their own argument at runtime.

We have to rebuild the image now, and run a new container. The final version of this is in part 2:

cd Docker-Medium-Tutorial/2-pyramid
docker build . --tag pyramid # Rebuild the image
docker run pyramid 3 # Try changing 3

Hooray!

What is mounting in a Docker Container?

Mounting in a Docker container is when you mount some directory of the host machine into the container.

When a game reads save file data, the GameCube's file system is mounted into the instance of the game (well, one could imagine it that way). A game can change the save data file, and the change is reflected in the GameCube's file system.

Mounting the host system into a container allows the container to read and write to the host system — your changes now have repercussions.

The recipe for mounting is:

docker run -v <HOST_DIRECTORY>:<CONTAINER_DIRECTORY>

The HOST_DIRECTORY is a path mounted in the CONTAINER_DIRECTORY. Run the following:

cd Docker-Medium-Tutorial
docker run -it -v $(pwd):/mounted ubuntu:14.04 /bin/bash
ls
ls mounted
touch mounted/testfile

$(pwd) is the path to our current directory
touch mounted/testfile creates a file called testfile

With the command docker run -it -v $(pwd):/mounted ubuntu:14.04 /bin/bash , the current directory mounts to a folder called /mounted in the container.

You can see that files from Docker-Medium-Tutorial on our host machine are mounted into the container when you run ls /mounted .

On your computer, navigate to Docker-Medium-Tutorial with a file explorer.

Look! There’s a file called testfile on our computer. We created this in the container, but it is on our host machine as well!

Mounting allows you to modify files on your host machine while running them in the environment of a Docker container.

This lets us use tools like code editors on the host while running the code in containers. We'll use this strategy later in the tutorial.

What are Docker Volumes?

Docker Volumes are like memory cards for the GameCube. Your memory card contains the save data for a game. Memory cards are portable, and their data persists when the GameCube turns off. You can plug in different memory cards which hold different data.

You can put memory cards in your GameCube. A Docker volume can similarly be attached to a container.

With Docker volumes, you get a container that holds data and is persistent. You can attach a data volume to any other running containers so that they can access the data.

Instead of writing to your host system when mounting, you are writing to a data container which is mounted.

I personally don’t use them very much, as there are other methods for managing data. However, they can be useful for containers that need to persist data or share data with one-another.
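
For reference, a named volume can be created and attached from the command line; here is a quick sketch (the volume name and mount path are just examples):

docker volume create game-saves                                # Create a named volume (the "memory card")
docker run -it -v game-saves:/data ubuntu:14.04 /bin/bash      # Attach it to a container at /data
docker volume ls                                               # List volumes
docker volume rm game-saves                                    # Remove the volume when you're done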

Container Ports

In part 3, there’s a Python web app built on the web server Flask that we’re going to run:

cd Docker-Medium-Tutorial/3-server 

For reference, the server.py file looks like:
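
A minimal sketch of a Flask server like the one in 3-server (the exact contents are in the repository):

# server.py (sketch)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return '<h1>Hello from Flask!</h1>'    # Return a simple HTML page

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)     # Flask serves on port 5000 by default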

This is a basic web server that returns an HTML page.

The Dockerfile in part 3 shows a basic recipe that can be used for Python web apps:
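
Reconstructed from the commands explained below, the Dockerfile looks roughly like this (the base image tag and CMD line are assumptions; check the repository for the exact file):

FROM python:3.6
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python server.py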

First let's go over a few new commands:

FROM: instead of starting from the Ubuntu image, start from a Python image. This is an image with Python already installed.

WORKDIR: creates the directory if it doesn't exist and cds into it, similar to running mkdir /app && cd /app.

EXPOSE: Flask, our Python web server, runs on port 5000 by default (we'll see this in a second), so we expose the container's port 5000.

There are 2 steps to networking in Docker:

1. Expose the container port
2. Map the container port to the host

This is a bit like hooking up your PS4 to the correct HDMI input on your TV. You explicitly state, by connecting the cable, which TV HDMI channel is able to display your video.

In this case, our host machine is the TV and the container is the game console; we need to explicitly map the ports from the container to the host machine.

EXPOSE in the Dockerfile allows connections to port 5000 on the container. This is like allowing HDMI connections to your PS4.

The EXPOSE command takes care of step one. Let’s build the docker image:

docker build . --tag python-webapp

Try running it:

docker run python-webapp

Try accessing the app on: http://localhost:5000

It won’t work because we haven’t addressed step 2.

Press CTRL-C to exit the running Flask app.

*If you're having trouble stopping the container: in a new terminal, type docker ps to find the running container ID, then type docker stop a68198bd7e5c (your container ID instead).

As the output indicates, the container will broadcast our app on the container’s port 5000; now we need to tell our computer to listen. The recipe to do this is:

docker run -p <HOST_PORT>:<CONTAINER_PORT>

We can map the container’s port 5000 to port 5000 on our host. Type:

docker run -p 5000:5000 python-webapp

Go to http://localhost:5000

Almost as easy as plugging in an HDMI cord! Try running it on a different port to get comfortable with how mappings work:

docker run -p 8080:5000 python-webapp

Go to http://localhost:8080

Let’s clean up a bit; stop and remove containers with the recipe:

docker stop <CONTAINER_ID> ...
docker rm <CONTAINER_ID> ...

Here’s a nice shortcut to remove all containers; run these commands:

docker stop $(docker ps -a -q)   # Stops all containers
docker rm $(docker ps -a -q) # Removes all stopped containers

Notice, anyone who runs this image doesn’t need to have Python installed on their computer — they just need Docker!

Docker Image: Data Layering

Docker is smarter than you might think :)

Every time you build an image, it caches existing layers. Because images are immutable, the data can never change; a caching system is used to speed up build times.

Building a Docker image is like layers of a cake:

Docker Images are like layers of a cake, but a lot less tasty.

Each command in your Dockerfile is saved as a layer of an image.

cd Docker-Medium-Tutorial/4-layers

Our Dockerfile for the Python web app is only slightly different now; the Flask app moved into a folder called src.
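
A sketch of the 4-layers Dockerfile, based on the commands discussed below (the base image tag and CMD line are assumptions):

FROM python:3.6
COPY ./src /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python server.py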

When you write a Dockerfile, you are adding layers onto an existing Image to make your new Image.

FROM tells the Docker engine to start from an existing image. New commands will layer on top of this Image.

COPY copies the contents of the ./src directory on the host into a folder /app in the Image

WORKDIR sets our current path in the Image to /app

A Layer of a Docker Image is like a checkpoint in a Super Mario game. If you want to change things that happened before the checkpoint, you’ll have to restart the level. If you want to move forward, you can continue where you left off.

Docker will cache from where you left off when building a Dockerfile — however, if you add a new command it will only cache the image state before the command.

To illustrate this point, add a new line at the end of 4-layers/Dockerfile:
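
Any simple command works; for example (the exact line used in the original screenshot may differ):

RUN echo "this is a new layer"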

And rebuild the image with:

docker build . --tag python-webapp

You can see from the output that the lines before the new RUN command are cached; the build runs very quickly because we don’t need to re-execute RUN pip install -r requirements.txt.

Now move the new line right before RUN pip install -r requirements.txt :

and rebuild:

docker build . --tag python-webapp

You’ll notice that the image takes a long time to build. Whenever you write a new command in a Dockerfile, it invalidates the cache for commands after it. However, you can see that the previous commands COPY and WORKDIR were still cached.

Now open 4-layers/src/requirements.txt with your favorite text editor. We are going to add a new Python package called requests:
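
After the edit, requirements.txt should look something like this (assuming flask was the only package listed before):

flask
requests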

Rebuild:

docker build . --tag python-webapp

The web app rebuilds and DOESN’T use the cache for RUN pip install -r requirements.txt.

Docker checks for differences in files when building. If a file has changed, the cache is invalidated for all subsequent layers.

That’s a really great feature actually! Docker keeps track of changed files, and uses the cache when appropriate. Your change could affect future commands, so future layers need to be invalidated.

This is an important concept to understand, and is well demonstrated by editing the 4-layers/src/server.py file.

Let’s give our web app a purpose; it will return JSON data that represents a list of books.

Open 4-layers/src/server.py with your favorite editor and change it to:
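
A sketch of the updated server.py, with the book data hard-coded as a list of dictionaries (the exact titles in the original differ; "Animal Farm" appears later in the tutorial, the second entry is just an example):

# server.py (sketch)
from flask import Flask, jsonify

app = Flask(__name__)

BOOKS = [
    {"title": "Animal Farm", "author": "George Orwell"},
    {"title": "The Great Gatsby", "author": "F. Scott Fitzgerald"},   # Example entry
]

@app.route('/')
def get_books():
    return jsonify(BOOKS)    # Return the list of books as JSON

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)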

Now, let’s rebuild the image:

docker build . --tag python-webapp

Uh-oh, do you see a problem? Every time we make a change, the packages in requirements.txt are reinstalled.

Docker sees that server.py changed since the last time it built the image, at the line COPY ./src /app. When a copied file changes, it invalidates the cache for all future commands. Being clever programmers, we can rewrite our Dockerfile to take advantage of caching:
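
A sketch of the cache-friendly ordering, assuming requirements.txt lives in ./src:

FROM python:3.6
WORKDIR /app
COPY ./src/requirements.txt /app/requirements.txt   # Copy only the requirements first
RUN pip install -r requirements.txt                 # This layer stays cached until requirements.txt changes
COPY ./src /app                                     # App code is copied last, so edits don't bust the pip cache
EXPOSE 5000
CMD python server.py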

The COPY command now comes after the pip install requirements.txt command. Rebuild the app with

docker build . --tag python-webapp

Make some changes to server.py (just add a comment or delete a whitespace line). Rebuild with:

docker build . --tag python-webapp

Awesome! The cache is used properly this time and the image is built super quickly. For fun, and because it will be used later in the tutorial, let’s view the new app that we wrote:

docker run -p 5000:5000 python-webapp

Visit http://localhost:5000 in your browser. The output should look something like:

The new version of Firefox is a nice browser for developers.

We’ll continue extending this web app in the next section.

When writing a Dockerfile…

1. Commands that are unlikely to change should be placed earlier.

2. Commands copying data that’s likely to change should be placed later.

3. Commands that are time intensive should be placed earlier.

In general, it's good to reduce the number of layers that an image has. In a Dockerfile you might have two RUN commands:

RUN apt-get install pacman
RUN pip install scrapy

This would create two layers in the Image. Instead, prefer to write:

RUN apt-get install pacman && pip install scrapy

If the command gets too long you can put the commands in a shell script, and run the shell script with a single RUN command.

*Technically only ADD, COPY, and RUN commands add new layers to your Docker image; the other commands are cached in a different way.

What is Docker-Compose?

Docker compose is like being a conductor of an orchestra, where your orchestra is a bunch of containers. Each container has a different job, like an instrument plays a different part in a song.

Docker Compose coordinates containers to run together, running your software in composable units.

This is known as container orchestration.

Docker-compose coordinates containers to run together as instruments collectively play a piece of music.

Each instrument has one job in a band or orchestra (I was in jazz band in high school). The trumpets may carry the melody; piano, guitar, and bass provide foundational support; the drums lay down the beat; horns and saxophones might harmonize.

Just as instruments have a specific job, so should our containers.

Docker-compose is written in YAML format, which is a data format similar to JSON or XML. YAML is designed to be more human readable. YAML is a bit like Python in the sense that whitespace matters, and items are separated with colons.

First let’s cd into the next activity:

cd Docker-Medium-Tutorial/5-compose/webapp

The files are now in a folder called webapp. Let’s look at the docker-compose.yml file:
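
A sketch of what docker-compose.yml contains, based on the keys described below (version '3' assumed):

version: '3'
services:
  server:
    build:
      context: .
    ports:
      - "5000:5000"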

version: which version of docker-compose to use (we are using the latest version)
services: containers we want to run
server: an arbitrary (but ideally should be descriptive) name we call this container service
build: steps describing the build process
context: where the Dockerfile is located at to build the image
ports: map ports from the host machine to the container

We can use this file to build and run our Flask web server.

docker-compose build

Docker parses the docker-compose file and builds our services according to the build instructions.

The context says to look in the current directory . for the Dockerfile to build the image for this service.

Now we are going to run the services as containers. The command is:

docker-compose up

The server should be running on http://localhost:5000.

Press CTRL-C or CMD-C to exit.

When the container named server runs, docker-compose maps the ports that we specified under ports. Basically, it runs the container with -p 5000:5000, saving us the pain of having to specify command line arguments manually.

With Docker-compose.yml, we take the command line arguments passed to containers and arrange them in a YAML file instead.

In this example, we’ve put the BUILD and RUN steps for our service into docker-compose.yml. In order to run all of these steps, you just need to memorize two commands: docker-compose build and docker-compose up .

Let’s make local development a little bit easier. Instead of rebuilding the image every time we make a change, we can mount our work directory into the container.

Remove COPY ./src /app from the Dockerfile — the files will be mounted from our host computer instead:

Normally to mount our host system into the container, we would run a container with the argument:-v <HOST_DIRECTORY>:<CONTAINER_DIRECTORY>

With Docker-compose, we can add it directly to docker-compose.yml :
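
The volumes entry sits under the service, alongside ports (a sketch):

version: '3'
services:
  server:
    build:
      context: .
    ports:
      - "5000:5000"
    volumes:
      - ./src:/app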

This mounts the ./src directory on our host computer to the /app directory in the container. Type:

docker-compose stop
docker-compose rm

Enter y when prompted and press enter. This will stop and remove all the containers specified in our docker-compose file, the same as running docker stop <CONTAINER_ID> and docker rm <CONTAINER_ID>.

Now, rebuild the services because we changed the Dockerfile:

docker-compose build

And rerun the container with:

docker-compose up

In your browser go to http://localhost:5000

Instead of copying over our app files, we just mounted the folder containing them on our host.

Now, make a change to the server.py file: change the title or author of one of the books:

# 5-compose/src/server.py
...
{
    "title": "Lord of the Rings",   # This line changed
    "author": "JR Tolkien"          # This line changed
},
{
    "title": "Animal Farm",
    "author": "George Orwell"
},
...

Reload the browser — you’ll see that the change in JSON is reflected! Mounting your host system as a Docker volume is a common method of doing local development in a docker container.

Right now we are only deploying one container; we’ll add more in the next lesson.

How Do I Write MicroServices With Docker?

Docker is often used for writing Microservices, an architecture design pattern with a philosophy of “separation of concerns”.

Here's a quick rundown of microservices:

A single microservice accomplishes one specific task. It is decoupled from other microservices. Together, microservices symphonize to form the application.

A microservice should perform its task in isolation, managing its own data as much as possible. It is common for each microservice to have its own database (if it needs one).

This allows for a robust application that is highly scalable, with no single point of failure.

Separation of concerns means that developer teams can work in parallel, focused on different components.

Microservices often communicate to one another through REST API endpoints with a web data format like JSON.

It’s a bit vague how small or large of a task a microservice should encompass; it’s up to you as the developer to decide.

Using microservices allows you scale specific services under heavy load, without having to scale up the entire application.

Let’s add another service to our docker-compose.yml file:

First move on to part 6:

cd Docker-Medium-Tutorial/6-microservices/webapp

In the webapp folder you’ll find a book-api folder and a front-end folder. These are two separate Flask applications which will run as independent microservices.

Let’s tangibly describe what each microservice does. We’ll focus on what it accomplishes, not its implementation:

front-end microservice: displays html to the user
book-api microservice: provides an API for CRUD* operations on books

*CRUD stands for Create, Read, Update, Delete

Open the book-api/src/server.py file. It is a slightly modified version of the Flask server we already wrote:

The GET endpoint /books returns a JSON containing a list of book details.

Let’s take a look at the new Flask app in front-end/src/server.py:
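
A sketch of what front-end/src/server.py contains at this stage, with the books hard-coded (the exact contents are in the repository):

# front-end/src/server.py (sketch)
from flask import Flask, render_template

app = Flask(__name__)

Books = [
    {"title": "Animal Farm", "author": "George Orwell"},   # Example entries
]

@app.route('/')
def show_books():
    # Pass the Python variable Books to the Jinja2 template under the name 'books'
    return render_template('show_books.html', books=Books)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)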

The show_books function renders an HTML page that displays the books. The HTML template can be found in front-end/src/templates/show_books.html.

books=Books means the Python variable Books is passed to the HTML template and is accessible with the name books. This is used with a rendering language called Jinja2. We are able to render books in HTML with a special syntax.

...
<table class='table'>
  <thead>
    <tr>
      <th>Title</th>
      <th>Author</th>
    </tr>
  </thead>
  <tbody>
    {% for book in books %}
    <tr>
      <td>{{book.title}}</td>
      <td>{{book.author}}</td>
    </tr>
    {% endfor %}
  </tbody>
</table>
...

(view full file here)

Let’s look at 6-microservices/webapp/docker-compose.yml:
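
A sketch of the two-service docker-compose.yml, following the folder structure and port mappings described in this section:

version: '3'
services:
  book-api:
    build:
      context: ./book-api
    ports:
      - "5001:5000"
  front-end:
    build:
      context: ./front-end
    ports:
      - "5000:5000"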

The original docker-compose.yml file was slightly modified to follow the new folder structure.

A new service called front-end is added. The configuration is extremely similar to the first service that we wrote.

Notice the ports specification for each service. Recollect, the mapping for ports is <HOST_PORT>:<CONTAINER_PORT>

  • book-api maps host port 5001 to container port 5000
  • front-end maps host port 5000 to container port 5000

Though Flask runs on port 5000 in each of the containers, we map to different ports on the host — like plugging in to two different HDMI ports on the TV.

If they map to the same host port, you'll receive an error when trying to start the containers (like trying to plug 2 HDMI cords into the same port on the TV). Now run:

docker-compose build
docker-compose up

Both the front-end and book-api containers are launched simultaneously!

Visit http://localhost:5000 to view the front-end web app. You’ll get a screen like:

Bootstrap4 is included in the HTML for nice table styling

Visit http://localhost:5001/books to view the book-api server.

We’ve hardcoded the book values in front-end/src/server.py. Our next step will be to dynamically link the books from the book-api server to the front-end web app. This takes us to our next section…

*Try changing both ports configurations to ‘5000:5000’. Rerun ‘docker-compose up’, and you’ll see there is a port collision.

How Does Networking Work With Docker?

How do we know what IP address Docker gives a container? Ensure the containers are running with:

cd Docker-Medium-Tutorial/6-microservices/webapp
docker-compose up

In this section we are going to use the container name instead of the container ID. Open up a new terminal and type:

docker ps

Names are unique to containers just like IDs. The docker-compose.yml file automatically names containers for us based on the service name. For reference, you can name a container manually with the command:

docker run --name <CONTAINER_NAME> <IMAGE>

Anyhow, there’s a nice command to give details about a container. In a new terminal run:

docker inspect webapp_book-api_1

This provides information about our book-api container. Scroll down to find the “Networks” section, and locate the “IPAddress”.

You can see that Docker has assigned this container an IP address. Your container’s IP address may be different than mine.

When using Docker-compose, Docker automatically sets up an internal network for your containers. Containers have permission to contact each other by default.

*There’s also a way to manually configure networks from the command line (but I find it cumbersome).

Let’s verify this claim with a little experiment. Open a new terminal and jump into the front-end container with the command:

docker exec -it webapp_front-end_1 /bin/bash

The book-api container should be accessible from within the front-end container. Try pinging the ip address. Remember to use the container IP address that was listed with docker inspect:

ping 172.21.0.3    # Your container's IP address instead

The book-api container is responding! Technically we could use the container’s ip address directly in our front-end/src/server.py. We’ll use the requests library that we installed earlier to get the JSON data from our book-api server:
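
A sketch of how the front-end can fetch the books over the network with requests (the IP address is only an example; use the one from docker inspect):

# front-end/src/server.py (sketch)
import requests
from flask import Flask, render_template

app = Flask(__name__)

BOOK_API_URL = 'http://172.21.0.3:5000/books'    # Example container IP; yours will differ

@app.route('/')
def show_books():
    books = requests.get(BOOK_API_URL).json()    # Fetch the JSON list of books from book-api
    return render_template('show_books.html', books=books)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)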

Remember to change the container IP to your container IP address.

Verify this actually works on http://localhost:5000 . After changing the IP address, reload the webpage. You should see the books are dynamically loaded from the book-api server:

Awesome! Although this works, the IP address given to a container is not guaranteed to stay the same. That means that sometime in the future your app will stop working.

Docker provides a solution to dynamically network containers.

Docker automatically injects aliases for the IP addresses of linked containers, assigned as their service names.

This is known as automatic service discovery.

Let's see an example of what this really means. Jump back into the front-end container if you aren't already there:

docker exec -it webapp_front-end_1 /bin/bash

Instead of pinging the ip address of the book-api container, we can use the service name as an alias:

ping book-api

Still a response! And we didn’t need to find the IP address manually.

Change the connection to the service name in front-end/src/server.py:
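
The only change needed is the URL; the service name replaces the hard-coded IP (a sketch):

BOOK_API_URL = 'http://book-api:5000/books'   # 'book-api' resolves to the container via automatic service discovery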

Verify that it works on http://localhost:5000. There's one more step we can take to make our application more robust to changes. Instead of hardcoding the name of our book-api server, we can pass it in via an environment variable.

There are 3 ways to define environment variables in Docker:

1. Pass them at runtime with the --env (or -e) argument

2. Specify in a Dockerfile with the ENV command

3. With docker-compose via the environment tag

We will use the third way. Edit the docker-compose.yml file:
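
Something along these lines (the variable name BOOK_API_URL is an illustration; the repository may use a different name):

  front-end:
    build:
      context: ./front-end
    ports:
      - "5000:5000"
    environment:
      - BOOK_API_URL=http://book-api:5000/books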

and modify front-end/src/server.py to use the environment variable:
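
In server.py, the variable can be read with os.environ (again, the name is just an example):

import os

BOOK_API_URL = os.environ['BOOK_API_URL']   # Injected by docker-compose at runtime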

Rerun docker-compose up and verify that the app is running on http://localhost:5000

You can also verify this by opening a new terminal window and printing out the environment variables in the container:

docker exec webapp_front-end_1 printenv

This lets us change the backend server just by editing the environment variable in docker-compose.yml.

Our app configuration is now always declared in docker-compose.yml:

Docker-compose.yml provides a ‘single source of truth’ for our application configuration.

How Do I Manage Data In Docker Containers?

Let’s add a database to our web app; by doing so we’ll learn the common pitfalls of running a database in a Docker container.

Enter the directory for this activity:

cd Docker-Medium-Tutorial/7-data/webapp

Notice that we've hard-coded values for the books in book-api/src/server.py. It would be nice if these were dynamically pulled from a database. We'll also add a method for adding books to the database.

The beauty of Docker is shown by adding a database in docker-compose.yml:
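
Adding the database only takes a few lines; a sketch of the new service (the service name book-database is used later in this section):

  book-database:
    image: mongo
    ports:
      - "27017:27017"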

Using a pre-configured MongoDB image, we have a database all ready to go. Reading the MongoDB Docker documentation lets us know to map port 27017, among other optional settings. Let's test it out:

docker-compose build
docker-compose up

The database should be running, but we haven’t hooked it up to our book-api server yet. Let’s do that now. Stop the containers with:

docker-compose stop

In book-api/src/requirements.txt “pymongo” was added, a package that lets us connect to MongoDB.

First, let’s verify that the connection works. Edit book-api/src/server.py and add the following code:
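
A sketch of that code (the database name and seed book are illustrative; the connection URL uses the compose service name):

# book-api/src/server.py (sketch)
from pymongo import MongoClient

CLIENT = MongoClient('mongodb://book-database:27017')   # 'book-database' is the service name in docker-compose.yml
DB = CLIENT.bookdb                                      # Example database name

def initialize_db(db):
    # Seed the database with one book if it's empty
    if db.books.count_documents({}) == 0:
        db.books.insert_one({"title": "Animal Farm", "author": "George Orwell"})
        print("Initialized the database with a book")

initialize_db(DB)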

The initialize_db function initializes the database with one book if it’s empty. Hopefully when we start the server we will see the print statement.

Notice the CLIENT url is the service name from docker-compose.yml. It will be available to us through automatic service discovery. Type:

docker-compose up
It can be a bit difficult to find — I suggest using a terminal that supports searching for keywords

It works! Let’s update the get_books function to interact with the database:
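
A sketch of the updated endpoint (jsonify_mongo is the helper described below):

@app.route('/books', methods=['GET'])
def get_books():
    books = DB.books.find()       # Retrieve every book document from MongoDB
    return jsonify_mongo(books)   # Convert the cursor into a JSON response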

jsonify_mongo is a simple helper function to return a JSON response from what PyMongo gives us from the database.

In get_books, we retrieve all of the books from the database with DB.books.find(). Then we return them as JSON.

Go to http://localhost:5001/books to see the JSON response.
Check out http://localhost:5000 to view it on the front-end.

That’s awesome! Finally, we can add the functionality for creating books. Here are the file changes:

These files are already modified for you in part 8:

cd Docker-Medium-Tutorial/8-data/webapp

Spin up the containers:

docker-compose up

Check out the full web app running on http://localhost:5000. Try adding some books at http://localhost:5000/add_books!

Now let’s see what happens when we remove the containers:

docker-compose stop
docker-compose rm

Start the containers again:

docker-compose up

Visit http://localhost:5000.

Do you see what happened? The entire database got deleted! All of the books that were added are gone. The data lives in the container; when the container is destroyed so is the data.

Containers are ephemeral. They should be able to be destroyed and spun up again at any time.

There are ways to structure our application so that data persists even when a container is destroyed.

Several popular options for data management with Docker are:

- Mounting the host machine as a volume

- Mounting a named docker volume

- Using a cloud data storage provider such as AWS

In production you’ll likely prefer to use a cloud data storage provider for permanent data. When developing locally, it is easier to mount the host as a volume to persist data from containers.

Our solution is to mount the data folder from the MongoDB container onto our host machine. Edit docker-compose.yml:
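
MongoDB keeps its data in /data/db inside the container, so we mount that path onto the host (a sketch):

  book-database:
    image: mongo
    volumes:
      - ./db:/data/db    # Database files now live in a 'db' folder on the host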

Restart the containers with docker-compose up. Add some books on http://localhost:5000/add_book. Notice, on your host machine in the 8-data/webapp directory there is now a folder called db. This contains the mounted data from the container.

Now, when you delete the container with docker-compose stop && docker-compose rm and restart the containers with docker-compose up, the books that you added will still be in the database.

Fantastic work!

Although the app starts up alright, technically if the database starts after the book-api there will be a connection error. We're going to add a safety measure in docker-compose.yml, specifying the order in which the apps should be started:

depends_on says to wait for the specified services to start before starting the current service. front-end waits for book-api, which waits for book-database. Therefore, the order that the containers start in is:

  1. book-database
  2. book-api
  3. front-end
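
In docker-compose.yml, the depends_on entries look roughly like this (a sketch showing only the relevant keys):

services:
  book-database:
    image: mongo
  book-api:
    build:
      context: ./book-api
    depends_on:
      - book-database
  front-end:
    build:
      context: ./front-end
    depends_on:
      - book-api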

Finally, pass in the database configuration through an environment variable. Edit book-api/src/server.py and docker-compose.yml:
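
A sketch of that change (the variable name DATABASE_HOST is illustrative):

# docker-compose.yml
  book-api:
    environment:
      - DATABASE_HOST=book-database

# book-api/src/server.py
import os
CLIENT = MongoClient('mongodb://{}:27017'.format(os.environ['DATABASE_HOST']))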

What is “Composability” With Microservices?

Being able to swap out components is an important aspect of composability in a highly decoupled system.

For any service listed in docker-compose.yml, we can replace with a completely separate application.

For example, take the front-end web app. It is written in Flask. We could easily replace it with a Javascript front end web app instead.

On the other hand, we could completely replace the book-api server with a Node-js app. As long as it still responds to the same API endpoints, everything will work!

With microservices in Docker, the implementation of a service is completely independent of the other services; all that matters is that you abide by the contractual API you establish.

What is Dockerhub?

Dockerhub is a public (and private) repository for your Docker images.

Dockerhub is like Github for Docker Images

First, sign up for an account at:

https://hub.docker.com/

Once you are registered and log in, you’ll see a screen like this:

You won’t have any repositories yet

Let’s push the webapp Docker images to your repository so that they are accessible on the cloud. Move into the next tutorial directory:

cd Docker-Medium-Tutorial/9-production/webapp

Writing a Production Dockerfile:

Here’s what the new Dockerfiles look like. Instead of mounting app files from our host machine, files can be copied directly into the container. We don’t care about Docker caching any of the build steps for the image, because we will only build it once. We can write a more succinct Dockerfile:
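
A sketch of one of the production Dockerfiles (the base image tag and CMD line are assumptions); note that the COPY line is back, since we no longer mount the source from the host:

FROM python:3.6
COPY ./src /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python server.py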

First build the new images:

docker-compose build

Login to hub.docker.com through the terminal, providing your username and password:

docker login

For this section whenever I use jamesaudretsch, replace it with your username instead.

Now we have to tag the images correctly before pushing them. Your username replaces mine in the two commands below:

docker tag webapp_front-end jamesaudretsch/webapp_front-end
docker tag webapp_book-api jamesaudretsch/webapp_book-api

Now we can push the images to our repository:

docker push jamesaudretsch/webapp_book-api
docker push jamesaudretsch/webapp_front-end

You should be able to see your images at https://hub.docker.com/.

You can verify in the command line by running:

docker pull jamesaudretsch/webapp_book-api

Now anyone can pull your images and run containers! Just like giving them a Game Boy game :)

How Do I Deploy Containers to Production?

There are many options for running containers in production. You could manually run a single instance of your application by installing Docker on a single server.

There are cloud options for running containers too. AWS, Azure, and Google Cloud all support running containers.

I’m actually a big fan of a platform called Docker Cloud, made by Docker Inc. for deploying containers. We’ll use Docker Cloud and Digital Ocean to host our servers.

Sign up for an account on Digital Ocean:

https://www.digitalocean.com/

Digital Ocean requires credit card details to sign up, but we’ll use a promotion code for $20 credit. You can then cancel your account after this tutorial.

Tip: Use a virtual credit card from a site like https://privacy.com, so you’ll never forget to cancel a subscription.

Log in to Docker Cloud using your Docker username and password at:

https://cloud.docker.com

You’ll reach this screen:

If your screen looks different, make sure that SWARM MODE is off:

At the top of the page, make sure this is set to OFF

At the bottom of the side navigation, go to settings:

Click on the $20 code on the Digital Ocean tab and copy the code.

Copy the coupon code

Click on Connect provider on the Digital Ocean tab. It’s the little electric power cord symbol.

You’ll be taken to DigitalOcean, where you may be prompted to log in.

Now go to your billing section, under Digital Ocean settings, and enter in the coupon code:

https://cloud.digitalocean.com/settings/billing?i=6f132c

If you scroll down, there is a tab to enter in a promo code

Perfect, you should be all set to go now. Go back to the Docker Cloud page.

Click on the Node Clusters navigation tab.

Click the Create button on the top right. You’ll enter the creation wizard that looks like this:

Enter the same values that I did. We’ll only have 1 node as part of this cluster. We choose the smallest sized node because it’s the cheapest at $5/month on Digital Ocean — fully under our $20 budget. The region doesn’t matter.

*There was a bug when I tried to create a node cluster. Just try a few times if it fails, and it should eventually work :)

After around a minute your Node Cluster will be created.

Now go to the Stacks tab and click Create:

We need to supply a Stackfile that specifies which containers we want to run. If you think this sounds a bit like docker-compose.yml, you are exactly right! The Stackfile looks eerily similar to the docker-compose.yml file we wrote:

Copy this code and name the stack Book-stack:
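
For reference, here is a rough sketch of what the Stackfile might contain. It reuses the images we pushed to Docker Hub (replace jamesaudretsch with your username), and the environment variable names follow the examples from earlier sections, so the original article's file may differ:

book-database:
  image: mongo
  volumes:
    - /db:/data/db
book-api:
  image: jamesaudretsch/webapp_book-api
  ports:
    - "5001:5000"
  environment:
    - DATABASE_HOST=book-database
front-end:
  image: jamesaudretsch/webapp_front-end
  ports:
    - "5000:5000"
  environment:
    - BOOK_API_URL=http://book-api:5000/books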

Click Create & Deploy and watch the magic happen. Wait a minute, and all of the services should be started up.

You might see an error with the book-api service, depending on the order your containers start in. If you look at the logs, it's because the database starts after the book-api server and it fails to connect. You can just re-deploy the book-api app if this happens:

Redeploy if the app doesn’t start correctly

Now everything should be up and running! Take a look at the Endpoints section:

Copy the endpoint for the front-end service and view it in your browser!

Make sure not to copy the tcp:// at the beginning of the URL.

The app should be working! You should be able to add books, which means all the containers are talking to each other properly.

You can also copy the book-api endpoint and view it in your browser (remember to add on /books to the end of the url).

In the stack file there is a volume entry mapping the database to the host machine. If you re-deploy the book-database service (which deletes the container) the data still persists. Our containers are ephemeral!

Let’s talk about the difference between a Service and a Container:

A Service is the strategy for running containers based off the same image. A Service has a single endpoint. There could be 1, 10, or 100 containers running behind a Service, but as a user you would never know. The Service routes each request to one of its containers. This lets you have one endpoint (the service endpoint) while scaling up with many containers (each with its own container endpoint).

Security

It’s not great practice to leave our unsecured database exposed to the world. Click the Edit button on your stack page and change the Stackfile to:

The ports configuration was removed for the database and book-api. Now no one from the outside can access these apps.

The front-end is now mapped to port 80, so you don’t need to specify a port number in the url.

Redeploy the app in the Actions tab

Re-deploy the stack when you are finished making changes. Copy the new endpoint and visit the web app in your browser:

Go to the services tab on Docker Cloud and click on book-api:

At the top, there is a slider for scaling. Move the slider to 5 and then click the scale button.

Scroll down a bit and you’ll see there are 5 containers running for this service now. Our service can now handle more traffic. It’s just that easy!

Good work :) Now cancel your DigitalOcean subscription before you forget!

In production, you would have a web server like NGINX to serve your Flask app. Security settings would also be enabled in your apps, the database would have a username and password, etc.

This tutorial simply serves as an example of how to deploy and connect containers :)

What is Docker Swarm?

There are several software options for container orchestration. Popular ones include Kubernetes, Mesos, and Docker Swarm.

Docker Swarm is a native container orchestration tool made by Docker Inc. It lets us coordinate how our containers run, similar to Docker Compose, but targeted for production. This lets us run many container instances of our application in parallel — meaning our application can sustain high levels of traffic. It can autoscale to changes in traffic.

We won’t cover Swarm in this tutorial — perhaps next tutorial.

What Are Container Best Practices?

I believe the most important design principles for Docker containers are derived from the twelve-factor app:

https://12factor.net/

Seriously. This will give you a great starting point.

Here are some key points to remember:

  • Containers should always log to STDOUT — this helps standardize logging so that it is easy to monitor with monitoring tools
  • Containers should not hold permanent data
  • Store data outside of the container
  • Containers should be ephemeral (disposable)
  • Containers should do one thing and do it well
  • Runtime configuration should be passed with environment variables
  • Containers should communicate internally whenever possible. Only expose ports if necessary
  • Minimize Image layers if possible when writing Dockerfiles

Documentation of Commands:

Images
docker images - list all images
docker image rm <image_name> - remove an image
docker build <path> --tag <image_name> - build an image from a Dockerfile

Containers
docker ps - list running containers
docker ps -a - list running and stopped containers
docker run <image_name> - run a container from an image
docker run -p <host_port>:<container_port> - map a host port to a container port
docker run -v <host_directory>:<container_directory> - mount a host directory into the container
docker run --env <key>=<value> - pass an environment variable to the container
docker inspect <container_id> - give details of a container

Docker-compose
docker-compose build - builds images
docker-compose up - starts containers
docker-compose stop - stops running containers
docker-compose rm - removes stopped containers

Docker-compose.yml
version: which version of docker-compose to use
services: names of the containers we want to run
build: steps describing the build process
context: where the Dockerfile is located, used to build the image
ports: map ports from the host machine to the container
volumes: map the host machine or a docker volume to the container
environment: pass environment variables to the container
depends_on: start the listed services before starting the current service

Dockerfile
FROM image_name - starts the build by layering onto an existing image
COPY host_path container_path - copies a file or directory from the host into the image
RUN shell_command - runs a shell command in the image
WORKDIR path - sets the current path in the image
ENV variable value - sets the environment variable to the given value
EXPOSE port - exposes a container port
ENTRYPOINT ['shell', 'command'] - prefixed onto CMD
CMD ['shell', 'command'] - executes a shell command at runtime

I hope you enjoyed this tutorial!
