Getting To Know Docker | Part 1: Your First Image and Container

J.P.
7 min read · Dec 6, 2017


This is a simple workflow to help developers get to know Docker.

Part 1 focuses on building an image and creating a container with it. For this example, I will use a Node image, but the image can be replaced in order to create a sandbox in any environment you want. I will assume that you have some minor command line experience.

First, install Docker.

Docker Image

Open up a terminal and use the command docker image ls. If you have not yet created an image in Docker, you should just see some headers. If you have an image and don’t know why, ignore it for now.

A fresh copy of Docker with no images yet.
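On a fresh install, the listing is just the header row — something like this (column names may vary slightly by Docker version):

```
$ docker image ls
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
```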

Let’s build an image.

To build an image, we are going to use a Dockerfile. The purpose of the Dockerfile is to list our image build instructions. These instructions can be passed as command line arguments, but it is much easier to understand with a Dockerfile. Make a Dockerfile inside of a new directory called ‘app’.

We will not talk about making efficient images here, but you can (and you should) watch this talk by Abby Fuller or check out the Docker Docs in the future. For now, let’s keep it simple.

My lone Dockerfile inside of my app directory.

As you can see, I am in my ‘app’ directory and I only have a Dockerfile. Fill your Dockerfile with these simple instructions.

A simple example of a Dockerfile.
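Based on the three instructions explained below, the Dockerfile looks something like this (the exact base tag, node:8 here, is an assumption — pin whichever Node 8 tag you want):

```dockerfile
# Pull the official Node 8 image from Docker Hub
FROM node:8

# Set the working directory inside the container
WORKDIR /app

# Copy everything from the build context into /app
COPY . /app
```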

FROM pulls a Docker image from Docker Hub. In this case, we want the Node 8 image.

WORKDIR sets the working directory inside of our container. We want it to be just like our local directory.

COPY takes all of our files in our local directory and copies them into our /app container directory. Again, Docker has great documentation if you want a deeper explanation.

To run the instructions inside of the Dockerfile use…

docker build -t my_image .

The -t flag allows us to name, or tag, our image for easy reference. We named our image “my_image”. The . (dot) tells Docker that our current directory is the build context. For now, not too important.

After the build, running docker image ls should show you two images.

The “node” image was pulled from Docker Hub because of our FROM line in our Dockerfile. It is our base image.

The image named “my_image” depends on the base image, so trying to delete the Node image will result in an error.

Attempting to remove an image that has children fails. You can usually remove an image with docker rmi followed by its image ID. Notice that passing the first 3 characters of the image ID is good enough to reference the image.
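A sketch of that attempt (the image ID here is made up, and the exact error wording can differ between Docker versions):

```
$ docker rmi 6c7
Error response from daemon: conflict: unable to delete 6c7... (cannot be forced) - image has dependent child images
```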

Docker Container

A container is just a running instance of an image. Let’s run the “my_image” image.

docker run -it my_image bash

Running a Docker image and accessing the shell. We are in the shell because of the “bash” command at the end. The container stops when that command terminates. That is, when the exit command is used.

COMMAND BREAKDOWN

docker run — the Docker command that creates and starts a container from an image

-i — Keep STDIN open even if not attached

-t — Allocate a pseudo-TTY

For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process. -i -t is often written -it

- Docker

bash — Program to run inside the container. This is how we get into the shell of the container.

Great! We are now inside a running instance of our image — a Docker container. We used a Node 8 image to build our image, so let’s check what version of Node is running inside this container.

Latest Node LTS version at the time of writing this article.
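Checking from inside the container takes one command — whatever it prints is the version baked into the node:8 image, regardless of what is (or isn’t) installed on your host machine:

```
node -v   # version bundled with the node:8 image
npm -v    # npm ships with the image too
```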

Our Node image gives us NPM out of the box, so let’s install Cowsay in our Docker container.

npm install -g cowsay

You should now be able to do this…

Cowsay running inside of a Docker container.

We can exit the container with the exit command. Since I do not have Cowsay installed on my local system, I will no longer be able to access the package. Notice the terminal prompt change when exiting a container.

Exiting a Docker container and finding that I don’t have access to packages installed inside of the container anymore.

Let’s jump back into our Docker container with a familiar command.

docker run -it my_image bash

and now let’s Cowsay something again…

Proof that our Docker container does not remember our last session where we installed Cowsay.

Whoa! Wha?! …. uh?

Ok, what is going on here is that docker run does not resume anything — it creates a brand-new container from the image every time. Our Cowsay install lives only inside the first container, which stopped when we left the shell with the exit command. The new container starts from the clean image. We can prove this with the following command.

docker container ls -a

or

docker ps -a

An “Exited” container.

The two commands are equivalent. They show us all of our container instances. Without the -a flag, Docker will show us only our running containers. Soon, we will see a running container using this command. Right now, our once running container has been stopped.
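Worth knowing: a stopped container still exists, filesystem and all — a fresh docker run just creates a new container from the image. You can resume a stopped container with docker start (the name festive_morse below is a stand-in for whatever random name docker ps -a shows you):

```
docker start -ai festive_morse
```

The -a flag attaches to the container’s output and -i keeps STDIN open, dropping you back into the same container — Cowsay included.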

A bit of a recap

What we can currently do with Docker is spin up a container that we can dabble in with no consequence. This can be cool if we want to try a technology like Node without installing it locally.

However, there is no way that I am reinstalling Cowsay every time I run the container. Why doesn’t this container keep our changes?

The answer is that changes belong to a specific container, and every docker run spins up a fresh container from the unchanged image. It is unrealistic to keep one container running (or keep restarting it) forever, so we need a different option to make our changes persist. Let’s go through a workflow to do that.

Persistent Docker with Bind Mounts

Bind mounting is one way Docker handles persistence. A bind mount is a mapping of a local file or directory to a container file or directory. Put simply, if I make a change inside my container, it is reflected locally, and vice versa.

One thing to note about bind mounts is that they must be specified on the docker command line; they cannot be declared in the Dockerfile. Why not is not very important at the moment, and it will make sense once we are introduced to volumes. More on that in the future. For now, let’s bind mount.

docker run -it -v $(pwd):/app my_image bash

BAM! Bind Mounted.

We see the bind mount here as …

-v $(pwd):/app

The -v flag takes a mapping: on the left side of the colon is the host directory — here, our current working directory via $(pwd) — and on the right side of the colon is the path inside the container. These are the two directories we are binding. You can find more on -v here.
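Note that $(pwd) is ordinary shell command substitution, not Docker syntax — your shell expands it before docker ever runs, so -v receives a plain host_path:container_path string. A quick sketch:

```shell
# The shell expands $(pwd) to the current working directory first,
# so docker just sees a literal "host_path:container_path" argument.
mount_arg="$(pwd):/app"
echo "$mount_arg"
```

Running this from the ‘app’ directory prints something like /home/you/app:/app — exactly what docker receives.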

For proof, we can open up another terminal tab, navigate to the ‘app’ directory on our local machine, and run

docker ps

Showing our running container. Notice that the name given is random.

Now, in the local directory run npm init

npm init logs

If you have never used NPM, just press enter on all the default options. If you have used NPM, you know to just press enter on all the default options. More importantly, we should now have a new package.json file that was created locally.

Proof of local package.json file

If we go back to our terminal with our running Docker container and check the contents in our directory, we see that we also have that package.json file. Now, from inside your Docker container, make an index.js file.

Proving the existence of our package.json file and creating an index.js file inside of our Docker container.

Navigate back to your local terminal and inside your directory you should have that index.js file.

Proof that our index.js file (made in our Docker container) is created locally as well.

RECAP

Now, with what we know, we can have a completely isolated development environment. We can install dependencies here and not have to worry about installing them on our local machine. Docker Hub offers a huge number of images that we can use. All of the techniques here work great for small projects. When we get to using Docker on a larger scale, more optimization is needed in order to really make Docker whistle.

Personally, this workflow gave me some real clarity into the reason I hear so much praise around Docker.

Here are some helpful commands to help you navigate and explore Docker.
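Every command from this part, collected in one place:

```
docker image ls                            # list images
docker build -t my_image .                 # build an image from the Dockerfile
docker run -it my_image bash               # start a container and open a shell
docker ps -a                               # list all containers, running or stopped
docker container ls -a                     # same as docker ps -a
docker run -it -v $(pwd):/app my_image bash   # start a container with a bind mount
```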

You just finished Part 1 | Part 2 | Part 3 | Part 4
