Getting started with Docker and publishing your first image

App Academy
App Academy Engineering
6 min read · Mar 5, 2019
(Photo: https://unsplash.com/photos/SmIM3m8f3Pw)

Containers are an incredibly important part of building any modern application. They offer an innovative approach to creating well-defined, constrained environments inside an operating system using a shared kernel. Not long ago, containers as a technology were only available at tech giants such as Google, but today they are widely available and used at companies large and small. Major tech corporations such as eBay, Lyft, and GE use containers to simplify the development, testing, deployment, and operation of their software and tools, a trend credited to the use of containers through Docker and its ecosystem.

Whether you are starting a new frontend project, a backend service, or a development environment for your team, you can benefit significantly from Docker's approach to containers and container management. In this article we will explore the core parts of Docker and the lifecycle of a Docker image, from creating the definition in a Dockerfile to publishing it to Docker Hub and using the image in a Docker Compose file. Together, this set of tools brings better shareability, productivity, and consistency across development platforms.

If you are new to containers as a concept, get a quick introduction below.

Docker Engine

Docker Engine is the technology that is often thought of as simply "Docker." It consists of the application that runs containers and an HTTP API that exposes an interface for other tools, such as the Docker CLI, to interact with the Engine. While this is the core of Docker, the value of the entire ecosystem is delivered by the sum of its parts. You can install a packaged version of the Docker applications as Docker Desktop here. Once you have Docker installed on your machine, you can spin up your first container (in this case we use Ubuntu as our base image) in the terminal of your choice:

  • Pull a Docker image from Docker Hub. Images are the building blocks behind containers

docker pull ubuntu:latest

  • Run the bash binary, or any binary/application of your choosing, in our container. This works as long as the binary ships with Ubuntu, since the contents of the container, including bash itself, are created from the given image, i.e. vanilla Ubuntu. (-it refers to interactive mode with a terminal; see the CLI options for more info)

docker run -it ubuntu:latest /bin/bash

Note that our container dies after we exit the bash process. You can start containers in detached mode (-d) and execute commands or run processes on them at a later time. We can also use non-interactive processes such as the date binary with this container to get something that is mildly useful out of it:

Be sure to exit bash (type exit or press Ctrl+D) before running this command on your host machine. Note that the -it flags are unnecessary here, since date is non-interactive:

echo "Time is: $(docker run ubuntu:latest /bin/date)"
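The detached workflow mentioned above can be sketched as follows (the container name my-ubuntu is arbitrary, and sleep infinity simply keeps the container alive):

```shell
# Start a long-running container in the background (-d for detached)
docker run -d --name my-ubuntu ubuntu:latest sleep infinity

# Execute a command inside the already-running container
docker exec my-ubuntu date

# Stop and remove the container when done
docker stop my-ubuntu
docker rm my-ubuntu
```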

Dockerfile (Build System)

Another basic building block of Docker is the definition that a container and its image are built from: the instructions to create the sharable environment that we want. This is expressed in what we call a Dockerfile, which can be built into a Docker image.

Here is the popular “in-memory data structure store [often used as a cache]” Redis, installed on top of the base Ubuntu image.

Note that this image is unoptimized as it creates too many unnecessary “layers”. We’ll cover optimization of Dockerfiles in an upcoming article.

https://github.com/appacademy/docker-redis-ubuntu
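The full Dockerfile lives in the repository linked above; a minimal sketch of a Redis-on-Ubuntu definition (assuming the Ubuntu package name redis-server), deliberately kept as separate steps, might look like:

```dockerfile
# Start from the vanilla Ubuntu base image
FROM ubuntu:latest

# Each RUN instruction creates a separate layer (unoptimized on purpose)
RUN apt-get update
RUN apt-get install -y redis-server

# Redis listens on port 6379 by default
EXPOSE 6379

# Start the Redis server when the container runs
CMD ["redis-server", "--protected-mode", "no"]
```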

Images

So far we’ve pulled/used a Docker image and created a Dockerfile representing the contents of an image.

Let’s build our earlier redis + ubuntu image and tag it with the arbitrary name redis-ubuntu. To do this, we’ll need to run the docker build command where we have saved our Dockerfile with the content defined above:

docker build -t redis-ubuntu .

After building and tagging the image, we should be able to see it by running docker images in our terminal.

docker images

Images are Docker’s method of distributing containers. They consist of one or more layers that combine the applications/binaries and their dependencies into a root image (filesystem). Conceptually, this is not too different from a series of zipped files (layers) expanded on top of each other. This layering system allows Docker to skip steps whose build commands haven’t changed, making future builds significantly faster.

Images are often stored on a centralized provider. Docker Inc. provides the main platform for publishing, sharing, and retrieving such images, known as Docker Hub. Take a look at the Postgres and Mongo images available on Docker Hub. Anyone can publish a Docker image to Docker Hub, and while it does require a few steps, it is rather straightforward.

Docker Hub

Now that we have defined, built, and tagged our image we can go ahead with publishing it on Docker Hub. This would make the image available outside of the single host/computer that we just built it on.

For our redis-ubuntu image, we’ll be using Docker Hub to host and pull the image we just built. The process is straightforward. We need to do the following:

  • Sign up for an account at https://hub.docker.com/signup
  • Create a new repository on Docker Hub by clicking the “Create Repository” button. Call this repository redis-ubuntu for ease of reference
  • Log in from the terminal with the credentials we set up during signup

docker login

  • Tag our previously built redis-ubuntu image (now available on our machine) against the repository

docker tag redis-ubuntu <docker-hub-username>/redis-ubuntu

  • and finally push the newly tagged image to Docker Hub

docker push <docker-hub-username>/redis-ubuntu

Docker and Docker Hub’s relationship is, in many ways, similar to that of git and GitHub, but instead of repositories of code, we push, pull, and maintain repositories of Docker images. Docker Hub is only one of many registries you can use to host images; cloud providers such as AWS offer their own private registries, such as the Elastic Container Registry (ECR).

Check out the final image on our public Docker Hub organization.

Docker Compose

Now that we have published our image to Docker Hub, let’s put it to good use. Docker provides an intuitive CLI for running containers from our now-published image, but it is often easier to manage systems through committable configuration, which in the case of Docker Compose is referred to as a compose file. docker-compose is a Python-based tool provided by Docker that reads the definition of a service, including one or more containers, networking, and configuration, from a docker-compose.yml file.

As an example, if we wanted to use our image running Redis as an external dependency for a new Node project, we have two options: build a single image that includes Redis, the Node runtime, and our app, or run multiple containers, each with a single purpose, one as our database and one as our app. With Docker Compose we can easily start multiple containers that together make up our application and its dependencies. These containers can then connect to each other using networking.

We can define this in a docker-compose file using readily available images on Docker Hub or the image we just published:
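The original embedded compose file isn't reproduced here; a minimal sketch, assuming the <docker-hub-username> placeholder from earlier and a Node app defined by a Dockerfile in the current directory, might look like:

```yaml
version: "3"
services:
  redis:
    # The image we published to Docker Hub earlier
    image: <docker-hub-username>/redis-ubuntu
  app:
    # Built from the Dockerfile in the current directory
    build: .
    depends_on:
      - redis
    environment:
      # Compose networking lets the app reach Redis by service name
      - REDIS_HOST=redis
```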

You can then spin up such containers and their dependencies using:

docker-compose up

Or we can define the build process of a Node app in a Dockerfile:
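The embedded Dockerfile isn't shown here either; a minimal sketch for a Node app (assuming a package.json in the project root and the node:10 base image) might be:

```dockerfile
FROM node:10

WORKDIR /usr/src/app

# Copy and install dependencies first so this layer caches between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .
```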

And run said app:

docker-compose run app npm start

It’s up to you to write and run an app using Docker Compose, but we’ve already prepared an example here that also demonstrates the basics of the networking set up by Docker Compose. Note that Docker Compose ships by default with Docker for Mac.

In this article we’ve covered a small part of the growing Docker ecosystem: creating a build definition in a Dockerfile, building and publishing a Docker image, and using Docker Compose to run containers and their dependencies for a future app. You can expect much more around Docker and other technologies as part of our engineering publication. Try out containers and publishing to Docker Hub if you haven’t already, and subscribe for the rest of the series!

Look for more in-depth articles and topics in this publication soon, so be sure to follow us and clap 👏 if you enjoyed this one.
