The Ultimate Docker Command List

A curated list of Docker commands that’ll save you hours in debugging and scouting Stack Overflow

Timothy Mugayi
Mar 17 · 10 min read
Photo by Grzegorz Walczak on Unsplash

Table of Contents

#1. Docker Build
#2. Running Docker Containers
#3. Debugging Docker Containers
○ Docker on AWS ECS
#4. Cleaning Up Docker Images
#5. Pulling Docker Images from a Remote Registry
#6. Exporting and Importing Physical Docker Images
#7. Final Thoughts

Introduction

How many of you have spent a day or two trying to get a Docker cluster set up, or hunting for the piece of code that keeps making a Docker container fail to boot? For most developers, a lot of time is spent wrestling with configuration; finding bugs becomes an endeavor that seems to outweigh the time spent actually shipping new features, especially when the environment you work in is still relatively new or immature.

Some of us aren’t fortunate enough to work in stable environments with polished CI/CD processes. If you fall into that camp, this piece is for you. It is a by-product of experience: just like you, I’ve spent days in the debugging trenches. It’s meant to complement the official Docker documentation while focusing on the common commands you’re most likely to interact with on a daily basis while using Docker.

For a comprehensive list of optional flags and arguments, please refer to the Docker manual. Take note that, depending on your Docker configuration, you may be required to prefix each docker command with sudo.

Tip: Every Docker command has built-in documentation. Learn to use it. Typing docker run --help, for example, will print the help documentation for that command.

I hope this guide helps you navigate the trenches of debugging and working with Docker. As you read, take note of the explanation that accompanies each command.


Docker Build

$ docker build \
--build-arg ARTIFACTORY_USERNAME=timothy.mugayi \
--build-arg ARTIFACTORY_SECRET_TOKEN=AP284233QnYX9Ckrdr7pUEY1F \
--build-arg LICENSE_URL='https://source.com/license.txt' \
--no-cache -t helloworld:latest .

This will build a Docker image with optional build arguments. By default, Docker caches the result of each layer added to the image by the instructions in your Dockerfile, so the first build populates the cache and subsequent builds that reuse those layers are faster.

If you don’t require this, you can append the --no-cache flag, like we’ve done in the example above. If you’d like to know how to use ARG values together with Docker environment variables during your builds, you can read more in my other article.
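For context, here’s a minimal Dockerfile sketch that consumes the build args from the command above. The base image, registry URL, and pip usage are assumptions for illustration, not taken from the original build.

# Hypothetical Dockerfile consuming the --build-arg values above
FROM python:3.8-slim

ARG ARTIFACTORY_USERNAME
ARG ARTIFACTORY_SECRET_TOKEN
ARG LICENSE_URL

# Build args behave like environment variables inside RUN instructions
RUN echo "License will be fetched from ${LICENSE_URL}" && \
    pip config set global.index-url \
    "https://${ARTIFACTORY_USERNAME}:${ARTIFACTORY_SECRET_TOKEN}@artifactory.example.com/artifactory/api/pypi/pypi-remote/simple"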

Note: Docker commands can be executed by name or by the Docker container ID. <CONTAINER> will be interchangeable with either the container ID or the container name.


Running Docker Containers

$ docker start <CONTAINER>

Start an existing container. We assume the image has already been downloaded and the container created (e.g., by a previous docker run).

$ docker stop <CONTAINER>

Stop an existing running Docker container.

$ docker stop $(docker container ls -aq)

If you have multiple Docker containers running and you wish to stop all of them, pass docker stop the full list of container IDs, as the subshell in the example above does.

$ docker exec -ti <CONTAINER> [COMMAND]

Run a shell command inside a particular container.
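For example, to open an interactive Bash shell inside a container, or to run a one-off command in it (the container name and path here are just illustrative):

$ docker exec -ti my_container /bin/bash
$ docker exec -ti my_container ls -la /app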

$ docker run -ti --name <CONTAINER> <IMAGE> [COMMAND]

There’s a clear distinction between docker run and docker start. docker run, in essence, does two things: (1) it creates a new container from an image, and (2) it starts that container. If you ever wish to rerun a failed or exited container, use the docker start command instead.

$ docker run -ti --rm --name <CONTAINER> <IMAGE> [COMMAND]

This is an interesting one: it creates and starts a container in one go, runs a command inside it, and then removes the container once that command has finished.

$ docker run -d <IMAGE>:<IMAGE_TAG>

Usage:

$ docker run -d helloworld:latest

If you wish to fire the docker run command in a detached state — e.g., as a Linux daemon in the background — you can append -d to your run command(s).
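As a small sketch (the container name is an assumption), giving the detached container a name makes it easier to reference in later commands, such as when tailing its logs:

$ docker run -d --name helloworld_app helloworld:latest
$ docker logs -f helloworld_app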

$ docker pause <CONTAINER>

Pause all processes running within a particular container.

$ docker ps -a

The above command lists all containers, including ones that have exited, along with the images they were created from. Once you have identified the image you want to rerun, execute the command below. Ensure you change the image and environment variables to reflect the results shown by your initial docker ps -a command.

$ sudo docker run \
-e AWS_DEFAULT_REGION=us-east-1 \
-e INPUT_QUEUE_URL="https://sqs.us-east-1.amazonaws.com/my_input_sqs_queue.fifo" \
-e REDIS_ENDPOINT="redis.dfasdf.0001.cache.amazonaws.com:8000" \
-e ENV=dev \
-e DJANGO_SETTINGS_MODULE=engine.settings \
-e REDIS_HOST="cmgadsfv7avlq.us-east-1.redis.amazonaws.com" \
-e REDIS_PORT=5439 \
-e REDIS_USER=hello \
-e REDIS_PASSWORD="trasdf**#0ynpXkzg" \
{IMAGE ID}

The above command illustrates how to run a Docker image with multiple environment variables passed in as arguments, where \ is a line continuation that keeps the command readable.
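If the list of environment variables gets long, the --env-file flag lets you load them from a file instead. A sketch with a hypothetical env.list file (one KEY=VALUE pair per line):

$ cat env.list
AWS_DEFAULT_REGION=us-east-1
ENV=dev
DJANGO_SETTINGS_MODULE=engine.settings

$ docker run -d --env-file ./env.list helloworld:latest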

$ docker run -p <host_public_port>:<container_port> <IMAGE>

If you ever find yourself having to expose Docker ports, the run command takes the -p argument for port forwarding, where host_public_port is the port on your machine that you want Docker to forward container_port to. For multiple ports, append another -p argument, as shown below.

$ docker run -p <host_public_port1>:<container_port1> -p <host_public_port2>:<container_port2> <IMAGE>
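For instance, to forward a web server listening on port 80 inside the container to port 8080 on the host (the nginx image is just an illustration):

$ docker run -d -p 8080:80 nginx:latest

The server is then reachable on http://localhost:8080.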

Debugging Docker Containers

$ docker history <IMAGE>

Example usage:

$ docker history my_image_name

Display the history of a particular image. This information is useful when you want a detailed view of how a Docker image came to be. Let’s digress for a moment here, as it’s necessary to truly understand what this command does; the literature around it is sparse.

When we talk about Docker, images are built up from layers, which are the building blocks of a Docker image. Each image consists of a stack of read-only layers, and when a container is started, Docker adds a thin readable/writable layer on top of them; think of that top layer as the container’s persisted state. The read-only layers (also called intermediate images) are generated when the instructions in the Dockerfile are executed during the image build.

If you have FROM, RUN, and/or COPY instructions in a Dockerfile and then build that image, each instruction results in one layer being created with its own image ID. That image/layer will then show up under docker history with its image ID and the date it was built. Each subsequent instruction results in another entry, and so on. The CREATED BY column roughly corresponds to a line in the Dockerfile, as shown in the image below.

Illustration of the ‘docker history’ command
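As a rough sketch, a toy Dockerfile like the one below (contents are hypothetical) produces one docker history entry per instruction; metadata-only instructions such as CMD show up with a size of 0 B.

# Hypothetical Dockerfile: each instruction becomes a history entry
FROM python:3.8-slim
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
CMD ["python", "/app/main.py"]
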
$ docker images

List all images that are currently stored on the machine.

$ docker inspect <IMAGE|CONTAINER ID>

docker inspect displays low-level information about a particular Docker object. The data stored in this output can be quite helpful in debugging situations, e.g., cross-checking Docker mount points.

Take note: this command returns two primary kinds of responses, details at the image level and details at the container level. Some of the insights you can derive from it (see the --format example after this list) are:

  • Container ID and timestamp of when it was created
  • Current status (useful when trying to identify if the container is stopped — and why it was stopped)
  • Docker image info, filesystem binds and volume info, and mounts
  • Environment variables — e.g., command-line parameters passed into the container
  • Network configuration: IP address and gateway and secondary addresses for IPv4 and IPv6
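
docker inspect also accepts a Go-template --format flag, which is handy for pulling out a single field instead of scrolling through the full JSON. For example (the container name is illustrative):

$ docker inspect --format '{{ .State.Status }}' my_container
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' my_container
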
$ docker version

Displays the version of Docker, including both the client and the server versions currently installed on the machine.

Yes, you pretty much read that correctly. Docker is a client-server application. The daemon (a long-running Linux background service) is the server, and the CLI is one of many clients. The daemon exposes a REST API, and a number of different tools talk to the daemon through that API.

Docker version output
Illustration of Docker’s client-server architecture

Docker on AWS ECS

If you ever find yourself trying to work out why a Docker container failed to run, for example on an AWS ECS cluster where all you get is an obscure error message such as the one below, use the command docker exec -it <container ID> /bin/bash to get shell access to the running container.

Honestly, this can be caused by a lot of things, such as: (1) your code has issues, e.g., an uncaught exception was thrown and your Docker container died on startup; (2) you’re out of disk space, which can happen if you’re running your ECS cluster on EC2 instances rather than the Fargate launch type; or (3) your existing Docker containers maxed out the EC2 instance’s available memory.

Essential container in task exited

Execute the command below to identify the most recent Docker container that failed to run. Omit the sudo if your user already has permission to talk to the Docker daemon. With the output it gives you, rerun the container to see why it’s failing.

$ sudo docker ps -a --filter status=dead --filter status=exited --last 1
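From there, one possible follow-up (a sketch, not the only approach) is to look up the dead container’s image and rerun it in the foreground so the failure is visible on your terminal:

$ sudo docker inspect --format '{{ .Config.Image }}' <CONTAINER_ID_FROM_ABOVE>
$ sudo docker run -ti <IMAGE_FROM_ABOVE>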

When in doubt, restart the Docker service

$ sudo service docker stop
$ sudo service docker start

# On a Mac, you can use the Docker Desktop utility, or alternatively run:
$ killall Docker && open /Applications/Docker.app

This needs no further explanation.


Cleaning Up Docker Images

$ docker system prune

Docker takes a conservative approach to cleaning up unused objects, such as images, containers, volumes, and networks.

These objects are generally not removed unless you explicitly ask Docker to do so, and they can quickly take up a lot of disk space. It’s therefore important to periodically run the command above to clean up unused objects.
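Out of the box, prune only removes stopped containers, dangling images, unused networks, and the build cache. If you also want to reclaim space from unused (not merely dangling) images and, provided you’re sure you no longer need them, unused volumes, the -a and --volumes flags can be added. docker system df is also handy for checking how much space Docker is using in the first place:

$ docker system df
$ docker system prune -a --volumes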

$ docker kill <CONTAINER>

Kill an existing running container.

$ docker kill $(docker ps -q)

Kill all containers that are currently running.

$ docker rm <CONTAINER>

Delete a particular container that’s not currently running. If the image exists in a remote registry, the image won’t be affected.

$ docker rm $(docker ps -a -q)

Delete all containers that aren’t currently running.

$ docker logs my_container

Get access to the container logs (useful for debugging).
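A couple of commonly used variations (the container name is illustrative): -f streams the logs as they’re written, and --tail limits how far back the output goes.

$ docker logs -f --tail 100 my_container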


Pulling Docker Images from a Remote Registry

Docker Hub

If you wish to pull images from Docker Hub to your local machine, it’s as simple as running the docker run command followed by the image path. The command below illustrates pulling and running the version-stable R Rocker image.

$ docker run --rm -p 8787:8787 rocker/verse

Docker will initially check whether this image is available on your local machine. If it isn’t, Docker will proceed to download the image from the Docker Hub repository; this works out of the box.

$ docker pull rocker/verse

If you just want to pull the image without having to run the container, then docker pull will suffice.

$ docker login --username={DOCKERHUB_USERNAME}

To log in to Docker Hub, you can run the above command, which will prompt you to enter the password.

Custom Docker registry

$ docker login your.docker.host.com
Username: foo
Password: ********
Email: user@myemail.com

If you’re pulling from a generic custom Docker registry that requires authentication, the docker login command allows you to pull from any Docker registry, as illustrated above. Take note that executing the above creates an entry in your ~/.docker/config.json file. You can cat ~/.docker/config.json to view the stored authentication details, or edit the file to modify them.
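For reference, the stored entry looks roughly like the sketch below; exact contents vary with your Docker version, and the auth value is simply base64-encoded username:password.

$ cat ~/.docker/config.json
{
  "auths": {
    "your.docker.host.com": {
      "auth": "<base64 of username:password>",
      "email": "user@myemail.com"
    }
  }
}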

Amazon Elastic Container Registry

You need to have the AWS CLI configured, with an IAM user that has an access key and secret key.

Amazon ECR requires that the IAM user’s access keys be granted permission (ecr:GetAuthorizationToken) through an IAM policy before you can authenticate to a registry and pull any images. Alternatively, you can leverage the Amazon ECR Docker Credential Helper utility. The approach below assumes you’re using the AWS CLI and have all your permissions configured.

$ aws ecr list-images --repository-name=twitter-data-engine-core
$ aws ecr describe-images --repository-name=twitter-data-engine-core

The get-login command generates a long Docker login command. Copy that and execute it. Authentication is required before you can attempt to perform a Docker image pull from the AWS ECR.

$ aws ecr get-login --region us-east-1 --no-include-email --profile {AWS_NAMED_PROFILE_NAME}

# The --profile argument is optional, provided you're using the default AWS CLI
# profile. If you have multiple named profiles in your ~/.aws/credentials file,
# you need to explicitly set the named profile you wish to use.

$ docker login -u AWS -p {YOUR_TEMPORARY_TOKEN}
$ docker pull 723123836077.dkr.ecr.us-east-1.amazonaws.com/twitter-data-engine-core:build-9
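On newer versions of the AWS CLI (v2, and recent v1 releases), aws ecr get-login has been replaced by get-login-password, which you pipe straight into docker login. A sketch, reusing the account ID and region from the example above:

$ aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 723123836077.dkr.ecr.us-east-1.amazonaws.com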

Exporting and Importing Physical Docker Images

$ docker save your_docker_image:latest > /usr/local/your_docker_image.tar
$ docker load < /usr/local/your_docker_image.tar

If you ever have a need and want to export to disk and load back Docker images, then the above will do the trick.

Exporting to a file is useful when you want to transfer Docker images from one machine to another via an alternative medium (other than a Docker registry). There are certain environments where access is restricted for security reasons, which can make registry-to-registry migrations impossible; hence, this is a useful command that’s easily underrated and forgotten.
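Related, and easy to confuse with the above: docker save/load operate on images (preserving layers and history), while docker export/import operate on a container’s filesystem and flatten it into a single layer. A quick sketch with hypothetical names:

$ docker export my_container > /usr/local/my_container_fs.tar
$ docker import /usr/local/my_container_fs.tar my_imported_image:latest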


Final Thoughts

Cheers and happy coding!


Thanks to Zack Shapiro

Written by Timothy Mugayi

Tech Evangelist, Instructor, Polyglot Developer with a passion for innovative technology, Father & Health Activist
