Docker provides a single command that will clean up any resources that are dangling (not associated with a container):
docker system prune
To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:
docker system prune -a
Remove all images:
docker rmi $(docker images -a -q)
Remove all exited containers:
docker rm $(docker ps -a -f status=exited -q)
Stop all running containers:
docker stop $(docker ps -a -q)
Remove all containers:
docker rm $(docker ps -a -q)
— Melissa Anderson, How To Remove Docker Images, Containers, and Volumes (2017), digitalocean.com
In this section, we will learn about Docker Compose, its file, and its commands, using a sample application developed by Docker called the Voting App.
The Voting App is a Flask application, written in Python, that lets users vote between Cats and Dogs.
The vote is passed to Redis, which acts as an in-memory database here. The worker application, written in .NET, then processes the vote and inserts it into the persistent database, a Postgres container.
Finally, the result of the vote is displayed via a web application written in Node.js. …
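The wiring described above can be sketched as a docker-compose.yml. The service names, images, and port mappings below are assumptions based on the description, not necessarily the exact file shipped with the Voting App:

```yaml
version: "3"
services:
  vote:        # Python/Flask front end where users cast votes
    image: dockersamples/examplevotingapp_vote
    ports:
      - "5000:80"
  redis:       # in-memory store that queues the votes
    image: redis
  worker:      # .NET worker that moves votes from Redis to Postgres
    image: dockersamples/examplevotingapp_worker
  db:          # persistent Postgres database
    image: postgres:9.4
  result:      # Node.js app that displays the results
    image: dockersamples/examplevotingapp_result
    ports:
      - "5001:80"
```

With a file like this in place, the whole five-container stack starts with a single docker compose up.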
In this section, we will discuss how Docker stores data on the local file system, understand which layers are writable, and deepen our knowledge of persistent storage for containers.
On a Linux system, Docker stores data pertaining to images, containers, volumes, etc. under /var/lib/docker.
When we run the docker build command, Docker builds one layer for each instruction in the Dockerfile. These image layers are read-only. …
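One way to see these per-instruction layers is docker history; the image tag below is a placeholder for whatever you build:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:latest .

# List the image's layers, newest first; each line corresponds to
# one instruction from the Dockerfile
docker history myapp:latest
```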
When you install Docker, it automatically creates three networks: bridge, host, and none. Bridge is the default network a container is attached to when it is run. To attach a container to any other network, use the --network flag of the run command.
The bridge network assigns IPs in the 172.17.x.x range to the containers within it. To access these containers from outside, you need to map their ports to ports on the host. Another automatically created network is host. Selecting the host network removes any network isolation between the Docker host and the containers. For instance, if you run a container on port 5000, it will be accessible on the same port on the Docker host without any explicit port mapping. The only downside of this approach is that no two containers can use the same port. Finally, the none network keeps the container in complete isolation, i.e. …
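The three networks can be listed and selected as follows; the alpine image and the sleep command are used purely as stand-ins for a real workload:

```shell
# List the automatically created networks (bridge, host, none)
docker network ls

# Default: the container attaches to the bridge network (172.17.x.x)
docker run -d alpine sleep 300

# Share the host's network stack: no port mapping needed
docker run -d --network host alpine sleep 300

# Complete isolation: only a loopback interface inside the container
docker run -d --network none alpine sleep 300
```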
In this section, we will learn how to map a port of a Docker container to a port on the Docker host. More importantly, we will see how to set up persistent storage.
To map a host port to a container port, we need to use the -p flag of the docker run command.
docker run -p <port_number_on_host>:<port_number_on_container> <image>
The following command will run a container for MLflow and map port 7000 of the container to port 7000 of the Docker host. …
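As a generic illustration of the -p flag (and of the -v flag for persistent storage), using the public nginx image as a stand-in; the host directory in the -v example is hypothetical:

```shell
# Map port 8080 on the host to port 80 inside the container
docker run -d -p 8080:80 nginx

# The site is now reachable via the host's port
curl http://localhost:8080

# For persistent storage, additionally mount a host directory into
# the container; data written there survives container removal
docker run -d -p 8081:80 -v /opt/nginx/html:/usr/share/nginx/html nginx
```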
You can download and install Docker Community Edition for free. You can follow the instructions on Docker’s website; they are straightforward and kept up to date.
Apart from the commands described below, I would like to refer you to a cheat sheet released by Docker. …
Two very important aspects of becoming a Docker super user are understanding the Dockerfile and knowing your Docker commands. The idea behind putting together this page is not to memorize everything but to familiarize yourself with the available options. You can always revisit the page once you know what you are looking for.
A Dockerfile is a text file written in a special format that Docker can understand. Each line is in INSTRUCTION and argument format. …
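A minimal example of this INSTRUCTION-argument format; the base image, file names, and port are illustrative assumptions, not a prescribed layout:

```dockerfile
# INSTRUCTION  argument(s)
FROM python:3.8-slim          # base image to build on
WORKDIR /app                  # working directory inside the image
COPY requirements.txt .       # copy the dependency list in
RUN pip install -r requirements.txt
COPY . .                      # copy the application code
EXPOSE 5000                   # document the port the app listens on
CMD ["python", "app.py"]      # default command when a container starts
```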
There are several courses available on this topic. Some of them are very short and serve no purpose beyond “getting started”, while others are very long and require you to spend several days studying to understand everything. My aim with this tutorial is to strike a balance between conciseness and exhaustiveness.
The following section is more of an appreciation section for Docker. If you are already aware of what Docker is and how it is useful, you can save some time by skipping ahead to the next section.
This is the sixth article and the final article in my MLflow tutorial series:
conda create -n production_env
conda activate production_env
conda install python
pip install mlflow
pip install scikit-learn
Run a sample machine learning model from the internet
mlflow run git@github.com:databricks/mlflow-example.git -P alpha=0.5
Check if it ran successfully
ls -al ~/mlruns/0
Get the uuid of the model we just ran from the above command and serve the model. …
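Serving might then look like the following sketch; <run_id> is a placeholder for the id found in the ls output above, and the port is an arbitrary choice:

```shell
# Serve the model logged by the run; substitute the actual run id
# found under ~/mlruns/0
mlflow models serve -m ~/mlruns/0/<run_id>/artifacts/model -p 1234

# The model then accepts scoring requests over HTTP on port 1234
# (the request payload format depends on the model's input schema)
```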
This is the fourth article in my MLflow tutorial series:
If you create a new project or clone an existing one, you can make it an MLflow project by simply adding two YAML files, namely an MLproject file and a conda environment file, to the root directory of the project.
This step is not obligatory but is highly recommended, as it not only enhances the reproducibility of your models but also links each run to a specific version of the code (its git hash). This is very useful, as a user can simply git checkout a particular commit if future changes to the code have affected its functionality and/or results. …
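The two files might look like this minimal sketch; the project name, entry-point parameters, and script name are illustrative, not part of any particular project:

```yaml
# MLproject
name: my_project
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    command: "python train.py --alpha {alpha}"
```

```yaml
# conda.yaml
name: my_project_env
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
  - pip:
      - mlflow
      - scikit-learn
```

With these two files in the project root, mlflow run . executes the main entry point inside the declared conda environment.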