Introduction to Software Deployment with Docker — Part 1

John Olafenwa
Published in Dev Planet · 9 min read · Aug 16, 2019

In deploying modern applications and services, developers are often faced with the challenges of building apps that run on a wide range of different hardware and software platforms.

From the edge to the cloud, devices and servers run different operating systems, including Windows and a wide range of Linux distros. Even on the same OS, applications run on a wide variety of software stacks such as Java, .NET, Go, Ruby, Python etc. Hence, to deploy any application, you need servers that meet the following requirements:

  • Specific OS
  • Specific Runtime Libraries
  • Specific Software Stacks
  • Specific Versions of all of the above

Meeting the above requirements creates significant friction in the software deployment process, making it hard to move between stacks and software versions without having to reconfigure or tear down existing servers. In the cloud computing era, we sometimes need to move apps from on-premises infrastructure to the cloud and between different cloud environments.

Even if you ensure your server meets all of the above, you are still faced with the fact that your development environment on your laptop is often very different from the server environment. You probably use a Windows or Mac machine to develop applications that will eventually be deployed to a Linux server. This discrepancy, a violation of the twelve-factor guide, often results in applications that work well in development but fail inexplicably in production.

Image Credit: https://itviconsultants.com/is-the-internet-of-things-a-developers-dream-or-a-million-new-headaches/

To fix these issues, it has become essential to always package apps with their own environment and dependencies, in such a way that they remain completely independent of the host environment and one another, enabling us to deploy and run them on any target system.

Containers were built to provide the type of isolation required to build scalable modern applications. Containers package your application, the specific OS it requires, runtime libraries and any app dependencies as a single deployable unit called an image. This allows you to deploy your app image on any server without ever asking what operating system, software stack or runtimes exist on the target system. Containers enable your app to run in complete isolation from the rest of the operating system, ensuring that your app's image runs on your Linux server exactly the same way it runs and behaves on your development Windows PC. This frees you completely from deployment pains and allows you to change stacks, software versions and even OS choices at will without any server reconfiguration.

Image Credit: https://developer.ibm.com/dwblog/2016/what-is-docker-containers/

In this series, we shall learn how to build, deploy and scale cloud applications using Docker, the open source container platform and runtime that took the world by storm a few years ago and revolutionized the way both startups and Fortune 500 companies build and run cloud applications.

In this first part, we will learn how to install Docker and containerize a basic web application with it.

Installing Docker

On Linux

Run

sudo apt-get update
sudo apt-get install curl
curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
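If you want to confirm that the installation worked, you can check the installed version and run Docker's built-in test image (both are standard Docker CLI commands):

sudo docker --version
sudo docker run hello-world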

On Windows

Docker was originally created for Linux; however, it is now supported on all versions of Windows 10.

On Windows 10 Pro or Enterprise, download and install Docker Desktop from the Docker website.

On Windows 10 Home Edition, download and install Docker Toolbox from the Docker website.

On macOS

Download and install Docker Desktop for Mac from the Docker website.

Containerizing (Dockerizing) Your First App

Now that we have Docker installed, let's explore the basics of how it works by building a simple Flask web app in Python and containerizing it with Docker. Note that you don't have to be a Python programmer to follow this example.

Our app is made up of three files:

  • app.py
  • requirements.txt
  • Dockerfile

Clone the devplanet repo:

git clone https://github.com/johnolafenwa/devplanet

You will find the files in samples/docker/intro

app.py

Our web app runs on port 5000.
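A minimal sketch of what app.py contains (the exact code in the repo may differ slightly; the "Hello From Flask" response matches the output we will see later):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Assumed response text, matching the output described later in this article
    return "Hello From Flask"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host="0.0.0.0", port=5000)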

requirements.txt

flask

Dockerfile

The Dockerfile, which sits in the same directory as our source files, contains all the instructions needed to package our app. We shall go over these instructions line by line.
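Putting the instructions explained below together, the complete Dockerfile looks roughly like this:

FROM python:3.6-slim
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["python3", "app.py"]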

FROM python:3.6-slim

Every Dockerfile begins with the FROM keyword, which is followed by the name of our base image. Here, we are using the python:3.6-slim base image, which contains the dependencies needed to run Python 3.6. In later chapters, we shall learn how to create our own base images.

WORKDIR /app

This command creates the directory /app within our container and sets it as the current working directory.

COPY . /app

The COPY command copies everything in the current directory denoted as “.” to the /app directory within the container. This essentially copies our app.py, requirements.txt and the Dockerfile into the /app directory of the container.

RUN pip3 install -r requirements.txt

The RUN command precedes any instruction you need to execute inside the container at build time. It acts like a bash shell or PowerShell through which you can run commands within the container. Here we simply install all the pip packages listed in the requirements.txt file that was copied into the /app directory; the -r flag tells pip to read the package names from that file.

EXPOSE 5000

Since our web app listens on port 5000 inside the container, we use the EXPOSE command to declare this port so that it can be made available outside the container (the actual mapping to a host port happens later with docker run).

CMD ["python3", "app.py"]

The execution entry point of the Docker container is the CMD command. There can only be one CMD instruction in your Dockerfile. Here we execute app.py, starting up our web app.

With that explained, we shall now build our docker image.

sudo docker build -t myapp .

On Windows, please omit the sudo command as it is Linux specific.

The docker build command is responsible for creating your app image by following the instructions you specified in the Dockerfile. The -t myapp flag indicates that the generated image should be tagged myapp; you can use any tag you wish. The trailing "." tells Docker to look for the Dockerfile in the current directory.

Ensure this command is run in the same directory as your app.py and the other files.

Output of the docker build

Now that your build is done, you can view the Docker images you have created with the command

sudo docker image ls

This will show all the docker images you have created or downloaded.
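The output is a simple table with one row per image, showing the repository name, tag, image ID, creation time and size. It looks roughly like this (the IDs, times and sizes below are made up for illustration):

REPOSITORY    TAG        IMAGE ID       CREATED          SIZE
myapp         latest     3f2c1a9b8d7e   2 minutes ago    148MB
python        3.6-slim   a1b2c3d4e5f6   3 weeks ago      138MB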

Running Your Docker App

The end goal of everything we have done so far is to run our app as a self-contained package, isolated from the rest of the system. You can do that with the command below.

sudo docker run -p 80:5000 myapp

To break this down: the run command starts a container from your image, and -p 80:5000 maps port 80 on your host system to port 5000 in the container. Note that our Flask app listens on port 5000, which we exposed in our Dockerfile. At run time, we can map any other host port to it using the same host:container pattern, as shown below.
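For example, to serve the same app on host port 8080 instead (8080 is just an arbitrary choice for illustration):

sudo docker run -p 8080:5000 myapp

The container still listens on port 5000 internally; only the host-side port changes, and the app becomes reachable at localhost:8080.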

Again, this fits perfectly with the port binding principle of the twelve-factor guide: https://12factor.net/port-binding

Now, visit localhost:80 in your browser and see the output.
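If you prefer the terminal, you can also check the response with curl from another terminal window; assuming the app returns the "Hello From Flask" message described below, you would see something like:

curl http://localhost:80
Hello From Flask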

And that’s it!

Now you may be thinking that nothing special has happened here, that this is just an ordinary Flask app running on your machine.

But not so fast. Your app displaying "Hello From Flask" is running in complete isolation; even if you uninstall Python from your system, the container is unaffected. Again, it is a self-sustaining, self-sufficient package that is ready to be deployed and run on any server, on premises or in the cloud, irrespective of what is installed on the target machine. You can build and package even the most sophisticated applications this way.

Docker Image vs Docker Container

At some point you might have gotten confused by the words image and container; that is perfectly normal and happens to everyone. When you ran "docker build", what was created and named myapp is called an image. When you ran "docker run", a container was created from that image. A container is simply a running instance of an image. The image is analogous to a class in object-oriented programming, while the container is analogous to an object.

You can have multiple containers running at once, just like you can have multiple objects of a single class.
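You can see this for yourself by starting two containers from the same image on different host ports and then listing them (the host ports 8001 and 8002 are arbitrary choices for this illustration):

sudo docker run -d -p 8001:5000 myapp
sudo docker run -d -p 8002:5000 myapp
sudo docker ps

The -d flag runs each container in the background (detached mode), and docker ps lists all running containers, each with its own container ID even though both were created from the same myapp image.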

Deploying your Docker Image

Docker images are distributed and deployed via container registries, in much the same way code is shared via Git. Docker Hub is like the GitHub of the Docker world: you can push your images to private and public repositories and pull images created by others. The base python:3.6-slim image we used earlier was pulled from Docker Hub the first time we ran the build.

First, head over to Docker Hub at hub.docker.com.

Create your account and create a new repository for your app.

In this case, you can just name your repo myapp. Hence, just like on GitHub, your repo would be username/myapp, with username replaced by your actual username.

Next, on your system, log in to Docker via the command line.

Run

sudo docker login

Enter the username and password you registered with in the prompts that follow.

Your login should succeed and you can now start pushing to your repositories.

Tag Your Image

During the build, we tagged our image as myapp. To push the image to Docker Hub, you need to tag it as username/myapp. You can do that like this:

sudo docker tag myapp username/myapp

Note that to avoid re-tagging after the build, next time you can build directly with

sudo docker build -t username/myapp .

Push Your Image

Finally, run docker push

sudo docker push username/myapp

Once your push completes, your app has been successfully published to Docker Hub.

Running Your Docker App on Servers

Now that your app is published, you can run it on any server.

You can log in to any server on any cloud platform, or just another system you have access to; essentially, any machine that will serve as your deployment server. Then do the following.

Step 1:

Install Docker as shown in the previous steps. On a Linux server, this means running:

sudo apt-get update
sudo apt-get install curl
curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh

Step 2:

Log in to Docker

sudo docker login

Step 3:

Pull your Docker image (push and pull, just like Git).

sudo docker pull username/myapp

Finally:

Run your app

sudo docker run -p 80:5000 username/myapp

Your app will behave exactly as it did on your development system.

Conclusion

This is just Part 1 of the Docker series and yes, it was long. As we can see, Docker enables us to securely run our apps in complete isolation, with all runtime requirements prepackaged with the app itself. This approach to deploying software frees us from worrying about what actually runs on the target servers. Our apps essentially become independent systems that we can move at will from development to staging and to production without any discrepancies in the environments our code runs in. This allows developers to focus on building without worrying about IT infrastructure. This is just one of the benefits offered by the Docker way. As we proceed in this series, and throughout all series in this publication, we shall explore a universe of tools and systems built around the container ecosystem.

If you enjoyed this article, you can give it some claps and share it on Twitter.

You can always reach me on @johnolafenwa

Read Part 2 of the Series Here
