After half a year of using Docker regularly, it is easy to miss things. Like any tool or language, Docker has its own tips and tricks that make you more effective.
I have noticed that some tricks and practices keep coming up, so I set myself a goal to learn more and share them. That way, you can come back to a specific item whenever you need it.
List all existing containers (not only running)
docker ps - shows only running containers
docker ps -a - shows all containers
Remove all containers with status=exited
When running a lot of containers, you can end up with a long list of containers in the exited status. The command below removes them all at once:
docker rm $(docker ps -q -f status=exited)
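On newer Docker versions (1.13+), the built-in prune command does the same cleanup in one step. A small sketch with both variants wrapped in shell functions (the function names are my own):

```shell
# Classic approach: remove every container whose status is "exited"
clean_exited() {
  docker rm $(docker ps -q -f status=exited)
}

# Modern equivalent: prune all stopped containers; -f skips the confirmation prompt
prune_stopped() {
  docker container prune -f
}
```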
Stop all containers
docker stop $(docker ps -q) - stops only running containers
docker stop $(docker ps -aq) - runs stop for all containers
Run and attach to a container ('--rm' deletes the container after exit)
Pay attention! Every docker run creates a new container from the specified image. If you use docker run often, stopping and removing old containers quickly gets tedious. Use the --rm option so the container is removed automatically after it exits; it can make your life easier.
docker run -it --rm <image_name> /bin/ash
Pass environment variables to docker
If you have a few environment variables, you can use the -e/--env option as in the example below:
docker run -it -e TEST=1234 --env TEST1=3456 --rm alpine /bin/ash
Then, inside the container, write in the terminal:
echo $TEST - you should get '1234'
When you need to pass a whole list of variables, the first approach becomes inconvenient. It's better to create an env file, e.g. env.list, and use the --env-file option instead of -e.
#Content of env.list
TEST=1234
TEST1=3456
Example of the docker command:
docker run -it --env-file ./env.list alpine /bin/ash
Remove all docker images
docker rmi $(docker images -q) - don't forget that an image must not be referenced by any container
Execute command in container
docker exec YOUR_CONTAINER echo "Hello from container!"
Bind a local folder to a container folder on docker run
What if you need to share a local folder with a docker container and use it from inside the container? You can use the -v flag:
docker run -it -v /LOCAL_PATH:/CONTAINER_PATH <container_image>
docker run -ti --rm -v /local_path:/var ubuntu
Build image (from folder with Dockerfile)
If you want to create your own image, use the command below:
docker build -t <image_name> .
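As a minimal sketch of what such a folder might contain (the alpine base image and the copied app folder below are illustrative assumptions, not from the original article):

```dockerfile
# Minimal illustrative Dockerfile
FROM alpine:3.18
# Install whatever your app needs (curl here is just an example)
RUN apk add --no-cache curl
COPY . /app
WORKDIR /app
CMD ["/bin/ash"]
```

Running docker build -t my_image . in that folder produces an image you can then start with docker run.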
See container logs
You can check a container's logs with the following:
docker logs -f <container_name>
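If the log is long, the standard --tail flag of docker logs limits output to the most recent lines. A small helper sketch (the function name is my own):

```shell
# Follow a container's log, starting from its last 100 lines
taillogs() {
  docker logs --tail 100 -f "$1"
}
```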
Believe bash is your best friend
Many developers and admins who work a lot with the command line create their own aliases for common commands. You can do it too to make your workflow easier. Just add these to your ~/.bashrc (or your shell's equivalent config file):
alias dr='docker rm $(docker ps -aq)'
alias ds='docker stop $(docker ps -aq)'
alias di='docker images'
alias dri='docker rmi $(docker images -q)'
alias dsr='ds && dr'
alias dps='docker ps -a'
alias dcup='docker-compose up'
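Alongside aliases, shell functions work well for commands that take an argument. A hypothetical helper (the name and the /bin/sh fallback are my own assumptions) for jumping into a running container:

```shell
# Open an interactive shell inside a running container by name or ID;
# /bin/sh exists in most images, including alpine
dex() {
  docker exec -it "$1" /bin/sh
}
```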
Everything starts with creating a Dockerfile in the root folder of your project. For best practices on how to create and use it, follow the links.
Debug container and see Docker internal files.
It's no secret that all images and containers are stored locally on your computer. If you are using Linux, execute the following in a terminal:
docker ps -a
Copy one of the CONTAINER_IDs (it's the 12-character part of the hash name) and let's continue.
cd /var/lib/docker
ls -list // will show all folders
cd containers/<CONTAINER_ID>+<OTHER_HASH_NAME>
ls -list // will show all files
Just play around a little and check what you have in those files.
Backup of container
You can make a backup of your container and move it to another host. Here is how:
docker ps -a // choose one of your containers
docker commit -p <CONTAINER_ID> <YOUR_BACKUP_NAME>
docker images // an image with <YOUR_BACKUP_NAME> will appear
Save the image to an archive:
docker save -o <CONTAINER_FILE>.tar <YOUR_BACKUP_NAME>
Afterwards you can copy this *.tar file to another machine and restore it.
Restore docker container
It's easy, just execute one command:
docker load -i <CONTAINER_FILE>.tar
Afterwards, check the list of docker images; you will find a new one that was restored from the archive.
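The whole backup-and-restore flow above can be wrapped in two small functions (the function names are my own):

```shell
# Commit a container to an image and save that image to a tar archive
backup_container() {
  docker commit -p "$1" "$2"   # $1 = container ID, $2 = backup image name
  docker save -o "$2.tar" "$2"
}

# On another machine, load the archived image back into docker
restore_container() {
  docker load -i "$1"          # $1 = path to the .tar file
}
```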
Use a minimal base image where possible
You don't need to build your private docker image on top of ubuntu. Check out debian, and if debian is still too big, check out alpine.
Use docker-compose when running multiple containers, with a docker-compose.yml file
NOTE: for how to create your own docker-compose file, follow the link.
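As a minimal sketch of what such a file might look like (the service names and images below are illustrative assumptions):

```yaml
# Minimal illustrative docker-compose.yml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: example
```

Running docker-compose up in the same folder starts both containers together.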
Remove a container when 'docker-compose up' doesn't work
If docker-compose up fails with an error like
liquibase-container | Unexpected error running Liquibase: liquibase.xml does not exist
you can remove the container with
docker-compose rm liquibase
Run only one container from docker-compose
docker-compose up liquibase
Stop and remove the containers, images, and networks that were created by "docker-compose up"
docker-compose -f docker-compose.yml down --rmi all
I was really happy to learn something new and share it with you. I hope you will enjoy using these in your normal working day :). I'd be happy to hear what kinds of commands you are using - write them in the comments.