Steps to get your Docker host compromised

Adam Borczyk · Published in CDeX · Aug 10, 2020 · 10 min read

Widespread adoption of Docker in both desktop and server environments has significantly widened the attack surface of companies’ IT infrastructures. While the concept of containerization has been around for much longer thanks to e.g. LXC or FreeBSD’s Jails, it is Docker that gained the most popularity, mainly due to its lower entry threshold. However, without knowing the underlying technology, it is easy to become a victim of an insecure Docker configuration. Here I will describe some of the most common pitfalls of this environment.

Note: the OS references and examples below assume Linux-based hosts.

1. Mounting sensitive parts of host’s filesystem into a container

An often needed piece of functionality is file exchange between the host and a container. The feature designed for that is called “bind mounts” and allows you to map a chosen host directory into the guest. For example, you can mount a new, empty host directory into the container’s /var/log/nginx/, so the container’s HTTP logs are stored closer to you, which may speed up your debugging.

It is also possible to bind mount the host’s entire filesystem into the container. If that is the case, and someone gains access to a shell inside such a container, they are able to perform any operation on the host machine they want, such as changing a user’s password, enabling remote access for themselves or installing some fishy software. All of this is possible because the bind mount lets them edit the host’s configuration files. The simplest possible example is shown below:

docker run -it --rm -v /:/hostfiles busybox
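
To illustrate the risk, here is a minimal sketch of what an attacker with a shell in such a container could do; the key and paths are placeholders:

/ # echo 'ssh-rsa AAAA... attacker' >> /hostfiles/root/.ssh/authorized_keys
/ # chroot /hostfiles passwd root

The first line plants an SSH persistence backdoor; the second simply changes the host’s root password.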

Binding the whole filesystem is not very common; doing so for the Docker socket file, however, is. This file is the entry point for every interaction with the Docker daemon, such as container listing, creation, deletion, etc. Several popular software vendors, such as Traefik or Netdata, ask for a bind mount of the socket into their containers. This is a legitimate request, because this software has container monitoring capabilities, and the Docker API is the only way to get information about containers running on the host from within another container. This socket binding issue has been around for years now, and there are several solutions that help secure it, such as Seccomp/AppArmor security profiles, Linux capabilities and more.
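
To sketch why the socket is so sensitive: with /var/run/docker.sock mounted, any process inside the container can talk to the full Docker Engine API, even without the docker CLI installed (assuming curl is present in the image):

$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

This lists every container on the host, and the same API allows creating a new container with / bind mounted, which is exactly the scenario described above.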

[Image: GitHub.com code samples mounting the Docker socket into the container]

Keep in mind, though, that it is not only about the socket; there is a variety of system files that should not be exposed to a container. An example of a brainless use case is bind mounting your local .ssh directory into the container, so you can use the same private/public key pairs from within the container to access some remote hosts (like GitHub). This way you not only widen the attack surface for stealing the keys, but also enable an attacker to paste their own public key into authorized_keys as a persistence method, opening passwordless SSH login to your server.

Double check what system resources you share with a container, especially when the containers run on a publicly available server. The methods described here assume the attacker has already gained an initial foothold, but landed inside a container. This is not always the case, as the next section will show.

2. Remotely accessing Docker Engine API

The Docker daemon, which is the main engine running on the host and managing containers, can be configured to listen on a few different “sockets”. These include the already mentioned unix socket, by default configured as unix:///var/run/docker.sock (the only active socket after a fresh installation); tcp, which is helpful for remote communication over (yes) TCP; and fd, available on Systemd-enabled systems, where it is Systemd that manages the listening sockets for Docker.
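
For reference, these sockets can be enabled either with the -H flag (as shown further below) or in /etc/docker/daemon.json; a sketch of a daemon.json enabling both local and remote access could look like this (note that on Systemd-managed installations this setting may conflict with an -H flag already present in the service unit):

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}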

It is easy to see why the unix socket became the default: as a local file it is only accessible from inside the host (not remotely), and it is configured to allow access only for superusers and “docker” group members. You can see the latter on a fresh Docker installation when you access the Docker daemon without the necessary privileges:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

Sometimes we may want to manage our containers remotely. The tcp socket comes in handy here, but things may start to get insecure at this point. If we start the Docker daemon with a tcp socket this way:

$ sudo dockerd -H tcp://0.0.0.0

we will see this warning among the first things Docker outputs:

[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]

Well, that’s just what we did, and it’s evident below that our host is listening on the default, unsecured port 2375:

vagrant@buster:~$ sudo ss -tlpn | grep docker
LISTEN 0 128 *:2375 *:* users:(("dockerd",pid=13395,fd=5))

With the above warning, the Docker daemon is telling us to set the --tlsverify flag. Indeed, the proper way to secure an open Docker Engine API is TLS certificate verification: the client connecting to the remote daemon has to provide a valid certificate issued by a CA that the daemon trusts. The Docker website has comprehensive documentation on that.
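
Following that documentation, the daemon and client invocations look roughly like this; the certificate file names below are the ones used in Docker’s guide:

$ sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376
$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://your-host:2376 ps

Only clients presenting a certificate signed by the CA behind ca.pem will be able to talk to the daemon.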

The consequences of the above configuration, where the Docker daemon is exposed on the network, are rather easy to imagine. Anybody who can reach this interface and port over the network can issue docker commands just as if they were present on our host system; they just have to add the -H flag with the respective address, for example docker -H tcp://some_address ps. We have already covered the issue of mounting sensitive parts of the host’s filesystem into a container, so it needs no further explanation. More threats related to dependencies are waiting in the next section, but for now let me bring to your attention that with an open API, the attacker can run any image on the system. And to stress it once again: without certificate authentication set up, the API will happily accept requests from anybody, without any kind of password or other constraint.
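
A hypothetical attack on such a host is then a one-liner, combining the -H flag with the bind mount trick from the first section (the address is a placeholder):

$ docker -H tcp://some_address:2375 run -it --rm -v /:/hostfiles busybox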

At the time of writing, there are almost 6,000 Docker Engine APIs exposed on the web (with as few as 100 using port 2376, by convention secured with TLS), as reported by the following Shodan query, and this obviously does not include Docker APIs exposed within private corporate networks:

[Image: Shodan.io search results on publicly available Docker APIs]

A potential attacker could simply iterate over the IPs listed, verify access to the API and run their own containers on these hosts. And that is exactly what happened with the Doki, XORDDoS and Kaiji DDoS malware campaigns, when attackers started to mine cryptocurrency and expand their botnets at the expense of vulnerable hosts.

Setting up and managing a CA “just” for some Docker daemons may look like overkill, especially when the daemon is to be placed within an organization’s private network, or access to it is limited to a whitelist of IP addresses used by the company’s VPN service. However, the Zero Trust concept of a corporate network advises performing identity checks on both outside and inside clients. This greatly reduces the lateral movement possibilities of a hypothetical attacker who has gained remote access to one of the less important hosts in the company’s network and would like to pursue further actions.

3. Unexpected dependencies

Due to the complexity of modern software, comprehensive awareness of your dependencies is very often difficult to achieve. And as if maintaining your application’s libraries were not enough, Docker adds even more for you to take care of.

When you pull a Docker image, it is just another piece of code that you are about to run on your laptop or a server. But what is inside? What does it do? Docker images are built with a lot of inheritance from other existing sources. Specifically, the FROM keyword in a Dockerfile tells Docker which of the available images is used as the base for the current one. Theoretically, there can be dozens of images preceding your chosen one, each maintained by totally different projects or individuals. Most often this number is between one and three or similarly low, as creators tend to minimize the final size of the package, but there is still no built-in Docker feature that extracts the full ancestry of an image. You have to find it out yourself, and it is not always possible.

Why is this so important? Docker’s Official Images are rather safe, but you do not have a team of reviewers on every lesser-known or obscure image. Researchers from Palo Alto Networks’ Unit 42 have discovered a Docker Hub user hosting several malicious images, with a total of 2 million pulls. The username “azurenql”, conveniently resembling Microsoft’s Azure platform name, earned around $36,000 in Monero cryptocurrency before the account was taken down. It is unknown how exactly the images were distributed, whether through compromised workstations/servers with Docker installed or through a malicious base image for a legitimate service, but both scenarios are equally probable.
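
What you can do is inspect the layer history of a pulled image: it will not reveal the full chain of base images, but it at least shows the commands that produced each layer (nginx is used here just as an example):

$ docker history --no-trunc nginx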

If an image hosted on Docker Hub has a Dockerfile attached, as below, it is easy. When there is no Dockerfile, such as when a pre-built image was uploaded directly (docker push) instead of being built from e.g. a public GitHub repository, you can only speculate based on the layer commands the image exposes, or try to find its sources somewhere deep in the web.

[Image: Docker Hub repository with a Dockerfile attached, referring to “tomcat” as its base image]

And just as you (should) think twice before using suspicious, undocumented libraries in your project, you should analyze Docker images the same way. One last note: malicious images do not have to contain specific hackerish or crypto packages. What if they simply ship a library version with a known vulnerability, which you have just installed on your server?

4. Everyone jump into the “docker” group

This is a short comment on another long battle between security and usability, but as it is strongly connected to the security aspects described earlier, I felt I needed to place it here. Docker’s post-installation steps for Linux include a few tips on additional Docker configuration, one of which is entitled “Manage Docker as a non-root user”. The tip, while explicitly warning about the potential outcomes, suggests adding a given user to the “docker” group, so the user neither needs superuser permissions nor the sudo prefix when interacting with the Docker daemon. This is a reasonable and convenient shortcut for everyday Docker usage, mostly helpful for developers who use Docker on their local machines and for a subset of servers running dockerized applications. An important consequence of adding someone to the “docker” group is that they will be able to fully interact with the Docker daemon, and thus do all the evil things related to mounting the host’s filesystem described earlier. It is an indirect way of granting someone root privileges.
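
For reference, the step in question boils down to a single command, followed by logging out and back in:

$ sudo usermod -aG docker $USER

From then on, docker commands work without sudo, and so do all the dangerous invocations shown earlier.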

There are different approaches to managing server infrastructure. The modern way, typical for cloud environments such as AWS or GCP, is fast server provisioning with cloud images, with an already set up non-root user account, most often with passwordless sudo granted. These servers are destined to host a single service (or a group of services), often including Docker. They are also somewhat ephemeral: the number of servers running the application increases or decreases over time depending on the load, with fresh server instances being created and destroyed automatically. Such an environment is very narrow in the jobs it performs, and a potential compromise of the Docker daemon does not introduce new security risks, since the (only) user on the server is already a passwordless sudoer.

On the other hand, we have the more traditional approach: servers hosting multiple different applications and services, with several different users holding different permissions. Here, putting a non-root user in the “docker” group allows them to become root, so it may be dangerous to perform these post-installation steps, as it undermines the security policies already in place on the server. Unless, of course, the user was already a sudoer, because then nothing would effectively change.
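
A quick way to demonstrate the problem: as a non-root member of the “docker” group, reading a root-only file takes one command (a sketch, any sensitive host path will do):

$ docker run --rm -v /etc/shadow:/shadow busybox cat /shadow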

Recent reports from global threat intelligence teams show that it is still common to find insecure Docker instances facing the public Internet, and that attackers do not hesitate to exploit them. With a tool as popular as Docker, which interferes strongly with the host system, and thousands of both secure and insecure tutorials and pieces of advice around the web, it is easy to make a fatal mistake. There are many things to remember when spinning up a Docker container. Do you want to expose one of the container’s ports on the host? Well, Docker has probably just made a hole in your system firewall, so the container can be accessed from the outside. How do you map user IDs within the container, and do you use root in there? The CIS Benchmark for Docker may shed some light on how to do this properly.
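
As a closing hint on the port exposure issue: when publishing a port, it is worth binding it explicitly to the loopback interface unless you really want it reachable from the network; the image and ports here are arbitrary:

$ docker run -d -p 127.0.0.1:8080:80 nginx
$ docker run -d -p 8080:80 nginx

The first container is reachable only from the host itself; the second is reachable from outside, typically bypassing host-level firewall frontends such as ufw.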

Admittedly, many of the cases described in this article can be perfectly fine from a business perspective and are not some security abomination. A simple “what do I risk by doing this?” question, or better, comprehensive risk modeling, will resolve the doubts. You probably do not have to build a fortress on a cheap 5-minute VPS where you test your software, but you also probably do not want to be one of the “hacked” companies in the news.
