Hardening Docker: Settings for Running Untrusted & Trusted Containers at the Same Time

Fabien Soulis
9 min read · Dec 22, 2023


Docker’s role in modern software development is undeniable, but managing security, especially for untrusted containers, is a paramount concern. This article introduces practical best practices and code examples to effectively secure your Docker environments when you want to run untrusted and trusted containers at the same time. (If you are also interested in container image building and hardening, check out this other article I wrote.)

1. Utilizing a Dedicated Host for Untrusted Containers

Best Practice: When dealing with untrusted containers, one effective strategy to enhance security is to run these containers on a dedicated host. This host can be a separate physical machine or a virtual machine, specifically allocated for this purpose. The key idea is to isolate untrusted containers from your primary production environment to mitigate the risks they pose.

Benefits of Using a Dedicated Host:

  • Isolation from Critical Systems: By running untrusted containers on a separate host, you ensure that any potential security breach has limited impact, confined to the dedicated host, thus protecting your primary systems.
  • Custom Security Policies: A dedicated host allows you to implement strict security policies tailored for untrusted containers without affecting your main environment.
  • Easier Monitoring and Management: With untrusted containers in a segregated environment, monitoring for suspicious activities and managing security becomes more focused and manageable.

2. Hardening Your Host Environment

Best Practice: The security of Docker containers is intrinsically linked to the security of the host system. Even the most secure container configuration might be compromised if the underlying host is vulnerable. Therefore, it’s crucial to harden your host environment with a series of best practices aimed at bolstering overall security.

Key Steps for Host Hardening:

Regularly Update OS and Kernel:

  • Keep the host operating system and kernel up-to-date with the latest security patches. Regular updates fix vulnerabilities that could be exploited by attackers.

Enable Firewalls and Network Isolation:

  • Use firewalls to control the inbound and outbound traffic to and from the host system.
  • Employ network isolation techniques, such as VLANs or network segmentation, to limit the network footprint of your Docker containers.

Disable Unnecessary Services and Daemons:

  • Turn off any services and uninstall programs on the host that are not essential for its operation. Reducing the number of active services and programs minimizes the potential attack surface. A sketch of these hardening steps follows below.
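For example, on an Ubuntu/Debian host these steps might look like the following sketch (the ufw rules and the cups service are placeholders; adapt them to what your host actually runs):

# Apply the latest security patches
sudo apt update && sudo apt upgrade

# Basic firewall: deny inbound by default, allow only what you need (e.g. SSH)
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable

# List running services, then disable anything non-essential (example: cups)
systemctl list-units --type=service --state=running
sudo systemctl disable --now cups.service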

3. Refrain from making the Docker daemon socket accessible

Typically, the Docker daemon is accessible through a Unix socket located at /var/run/docker.sock. There’s also an option to configure the daemon to listen on a TCP socket, enabling remote connections to the Docker host from another device. However, it is advisable to steer clear of this configuration, as it introduces an extra vulnerability. Inadvertently exposing the TCP socket to your public network could potentially grant anyone the ability to send commands to the Docker API without requiring physical access to your host. Unless your specific use case necessitates remote access, it is best to keep TCP disabled.
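To verify the daemon is reachable only through the local Unix socket, you can check for Docker’s conventional TCP ports, 2375 (plain) and 2376 (TLS); a quick sketch:

# No output means the daemon is not exposed over TCP
sudo ss -lntp | grep -E '2375|2376'

# The active endpoint should be unix:///var/run/docker.sock
docker context ls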

4. Keeping Docker Updated

Best Practice: Staying current with the latest Docker releases is a fundamental yet often overlooked aspect of Docker security. Docker, like any other software, is regularly updated to address security vulnerabilities, enhance performance, and add new features. Running an outdated version of Docker can leave your system exposed to known vulnerabilities that have been fixed in later releases.

Implementing Regular Updates:

Most Linux distributions include Docker in their package repositories. Use your distribution’s package manager (like apt for Ubuntu/Debian or yum for CentOS/Red Hat) to update Docker.

sudo apt update && sudo apt upgrade docker-ce

This command will update Docker to the latest version available in your repository.

Consider enabling automatic updates for Docker if your environment allows. This ensures that you always run the latest version without manual intervention.
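On Debian/Ubuntu hosts, one way to do this is with unattended-upgrades. The excerpt below is a sketch that assumes Docker was installed from Docker’s own apt repository; confirm the actual origin string on your system with apt-cache policy docker-ce before relying on it:

sudo apt install unattended-upgrades

# /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Origins-Pattern {
        "origin=Docker";
};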

Warning: Be cautious with automatic updates in critical environments, as they can introduce unexpected changes. Before applying updates in a production environment, test them in a staging or development environment to ensure they do not disrupt your applications.

5. Leveraging User Namespaces for Enhanced Container Isolation

Best Practice: User namespaces are a powerful feature in Docker that enhances container isolation by segregating user IDs between the host and containers. This segregation helps prevent container processes from gaining elevated privileges on the host system.

To run containers with reduced user privileges, you have three options (only the first uses user namespace remapping proper; the other two simply run the container process as a non-root user):

a) Configure user namespace for all containers that are running on your server by following this step:

  1. Configure the Docker Daemon: Edit the Docker daemon configuration file (usually located at /etc/docker/daemon.json) and add the following lines:
{   "userns-remap": "default" }

This setting tells Docker to automatically create a new user and group (usually named dockremap) and use it for user ID mapping.

2. Restart Docker Daemon: Apply the changes by restarting the Docker service:

sudo systemctl restart docker

3. Running Containers with User Namespaces: When you run a new container, Docker automatically applies the user namespace mapping. For example:

docker run -it your_image

In this container, the root user (UID 0 inside the container) will be mapped to a non-root user on the host, as defined by the userns-remap setting.
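To confirm the remapping is active, check the subordinate ID range Docker created and observe that root inside a container shows up as a high-numbered UID on the host (a quick sketch; the 100000 base value depends on your /etc/subuid):

# Subordinate ranges created for the remap user
grep dockremap /etc/subuid    # e.g. dockremap:100000:65536

# Root inside the container appears as the remapped UID on the host
docker run -d --rm --name userns-test alpine sleep 60
ps -o user,pid,cmd -C sleep   # owned by e.g. 100000, not root
docker stop userns-test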

b) Run the container as a non-root user per container run:

At runtime, use the -u option of the docker run command, e.g.:

docker run -u 4000 alpine

Or in your Docker Compose file:

version: '3.8'

services:
  alpine:
    image: alpine
    user: "4000"

Here the alpine container will run as the non-root user with UID 4000.

c) Set a non-root user when building the Docker image:

At build time, simply create a user in the Dockerfile and switch to it. For example:

FROM alpine
# Create a non-root user (Alpine uses BusyBox adduser/addgroup,
# not the groupadd/useradd found in Debian-based images)
RUN addgroup -S myuser && adduser -S -G myuser myuser

# <do whatever must be done as root here, e.g. installing packages>

# Switch to the non-root user
USER myuser

# Start the app as the non-root user
CMD [ "node", "app.js" ]

Important Considerations:

Enabling user namespaces might introduce compatibility issues with certain containers or configurations that expect specific user/group ID settings.

Some Docker features, like certain volume mounts, may require additional configuration to work correctly with user namespaces.
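For example, a bind-mounted host directory must be owned by the remapped UID range for container root to write to it (a sketch; /srv/app-data and the 100000 base UID are placeholders, taken from your /etc/subuid):

# Make the directory writable by container root (host UID 100000)
sudo chown -R 100000:100000 /srv/app-data
docker run -v /srv/app-data:/data your_image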

6. Implementing AppArmor for Enhanced Security

Best Practice: AppArmor helps confine what containers can do. After creating a custom AppArmor profile, apply it to a Docker container like this:

With Docker run:

docker run --security-opt apparmor=your_profile_name your_image

With Docker Compose file:

services:
  your_service:
    image: your_image
    security_opt:
      - apparmor=your_profile_name

Replace your_profile_name with the name of your AppArmor profile and your_image with your Docker image name.

Tip: this technique only works if AppArmor is installed and enabled on the host.

Tip: you can follow this article to generate your own custom AppArmor profiles; a minimal sketch follows below.
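For illustration, here is a minimal sketch of a custom profile (the docker-restricted name and the deny rules are examples; real profiles are usually generated and refined with tools such as aa-genprof):

# /etc/apparmor.d/docker-restricted
#include <tunables/global>

profile docker-restricted flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network,
  file,

  # Example restrictions: no raw sockets, no reading /etc/shadow
  deny network raw,
  deny /etc/shadow rwklx,
}

Load it with apparmor_parser before referencing it from docker run:

sudo apparmor_parser -r -W /etc/apparmor.d/docker-restricted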

7. Dropping Unnecessary Capabilities with --cap-drop

Best Practice: In Docker, containers run with a default set of capabilities that grant them certain privileges on the host system. While these capabilities are necessary for many common tasks, they can pose a security risk, especially when running untrusted containers. To mitigate this risk, Docker provides the --cap-drop flag, which allows you to remove specific capabilities from a container, thereby reducing its potential impact on the host system.

Implementing --cap-drop:

With Docker run:

docker run --cap-drop=NET_BIND_SERVICE --cap-drop=SETFCAP your_image

With Docker Compose file:

version: '3.8'
services:
  your_service:
    image: your_image
    cap_drop:
      - NET_BIND_SERVICE
      - SETFCAP

You can also drop all capabilities and add back only the ones you need:

With Docker run:

# Run a container with limited capabilities
docker run --cap-drop all --cap-add NET_BIND_SERVICE --cap-add SETFCAP myimage

With Docker Compose file:

version: '3.8'
services:
  myservice:
    image: myimage
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - SETFCAP

Warning: remember not to run containers with the --privileged flag; it adds ALL Linux kernel capabilities to the container.

Tip: identify which capabilities your container actually needs for its operation and drop the rest. This might require understanding the application’s requirements and testing to ensure functionality is not impacted; the sketch below shows one way to inspect what a container ends up with.
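One way to see what a container actually holds is to read its capability mask and decode it on the host (a sketch; capsh comes with the libcap package and may need to be installed):

# Print the effective capability mask of the container's main process
docker run --rm --cap-drop all --cap-add NET_BIND_SERVICE alpine grep CapEff /proc/self/status

# Decode the mask on the host (paste the value printed above)
capsh --decode=0000000000000400   # => cap_net_bind_service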

8. Add the --no-new-privileges flag

Best practice: Always run your Docker containers with --security-opt=no-new-privileges to prevent privilege escalation via setuid or setgid binaries.

With Docker run:

# Run a Docker container with no new privileges
docker run --security-opt=no-new-privileges myimage

With Docker Compose file:

services:
  myservice:
    image: myimage
    security_opt:
      - no-new-privileges
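To confirm the flag took effect, read the NoNewPrivs field from inside a container (a quick check using the alpine image):

# NoNewPrivs should be 1 when the option is active
docker run --rm --security-opt=no-new-privileges alpine grep NoNewPrivs /proc/self/status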

9. Read-Only Filesystems

Best Practice: Where possible, run containers with a read-only filesystem. This limits the ability of an attacker to write malicious files.

With Docker run:

docker run --read-only myimage

With Docker Compose file:

services:
  myservice:
    image: myimage
    read_only: true
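Many applications still need somewhere writable, such as /tmp. A common pattern, assuming your app only writes temporary files, is to pair --read-only with a tmpfs mount:

docker run --read-only --tmpfs /tmp myimage

Or in the Compose file:

services:
  myservice:
    image: myimage
    read_only: true
    tmpfs:
      - /tmp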

10. Disable inter-container communication

Best Practice: Docker normally allows arbitrary communication between the containers running on your host. Each new container is automatically added to the docker0 bridge network, which allows it to discover and contact its peers and also to connect to the internet.

Keeping inter-container communication (ICC) enabled is risky because it could permit a malicious process to launch an attack against neighboring containers.

You should increase your security by launching the Docker daemon with ICC disabled (using the --icc=false flag):

dockerd --icc=false

This can also be done by modifying the Docker daemon configuration file (usually located at /etc/docker/daemon.json):

{
  "icc": false
}

If certain containers need to communicate with each other, you can enable this interaction by manually creating networks that allow for selective connectivity between them.

With Docker run:

a) use the docker network create command to create a new network:

docker network create my_network

Tip:

If you don’t want your container to have access to resources outside this network, use the --internal flag:

docker network create --internal --driver bridge my_network

Or with a Docker Compose file:

networks:
  my_network:
    driver: bridge
    internal: true

More network options here: https://docs.docker.com/engine/reference/commandline/network_create/

https://serverfault.com/questions/830135/routing-among-different-docker-networks-on-the-same-host-machine

b) When running your containers, connect them to the network you created. This allows only those containers on the same network to communicate with each other.

docker run --network=my_network your_image

With Docker Compose file:

version: '3.8'

services:
  your_service:
    image: your_image
    networks:
      - my_network

networks:
  my_network:
    driver: bridge
    # Additional network configuration can go here

Tip:

If you don’t want your container to have any network access, attach it to the built-in none network:

docker run --network="none" ubuntu

This command starts an Ubuntu container with no network connectivity. Remember that without network access, the container won’t be able to reach the internet or any other network resources. This is typically used for containers that need to be isolated for security reasons or to test non-networked behavior.


11. Setting Memory and CPU Limits

Best Practice: Limiting resources is crucial for container management. You wouldn’t want a container to exhaust all the host’s resources. Here’s how you can set memory and CPU limits:

With Docker run:

docker run -m 512m --cpus 2 your_image

With Docker Compose file:

version: '2.4'  # mem_limit is supported in version 2.4 and later
services:
  your_service:
    image: your_image
    mem_limit: 512m
    cpus: '2'

This command limits the container to 512 MB of memory and 2 CPUs.
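You can confirm the limits were applied with a one-shot docker stats (your_container is a placeholder for the container name or ID):

# The MEM USAGE / LIMIT column should show the 512MiB cap
docker stats --no-stream your_container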

12. Managing Container Health with the HEALTHCHECK Instruction

Best Practice: Maintaining the health of containers is crucial, especially for important ones. Docker’s HEALTHCHECK instruction, added to the Dockerfile before the image is built, lets you define how the health of your container is checked; Docker then tracks and reports the container’s health status. Here’s a basic example:

HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1

This HEALTHCHECK instruction checks the health of the container every five minutes. If the check fails (curl cannot get a response from the container’s web server), the container is marked unhealthy; note that curl must be present in the image for this to work. A restart policy reacts to the container exiting rather than to its health status, so pair the health check with one to recover from crashes:

With Docker run:

docker run --restart on-failure:5 your_image

With Docker Compose file:

version: '3.8'

services:
  your_service:
    image: your_image
    restart: on-failure:5

This ensures that if the container exits unexpectedly, Docker will attempt to restart it up to five times. Note that on a standalone engine a restart policy does not act on a running-but-unhealthy container; for that you need an orchestrator (Docker Swarm, Kubernetes) or a companion tool that watches health status.
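You can check the health state Docker has recorded with docker inspect (your_container is a placeholder for the container name or ID):

# Prints starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' your_container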

To go further:

https://learn.microsoft.com/en-us/azure/governance/policy/samples/guest-configuration-baseline-docker#general-security-controls

Conclusion

Implementing these strategies, particularly with the provided code examples, significantly improves the security posture of Docker environments when running untrusted containers. Continual learning and adaptation of these practices are essential in maintaining robust security.

I’m Security Architect / CTO & part time Web security teacher at Panthéon-Sorbonne University, Paris.

I write about IT security and business. If you find this article compelling, please do not hesitate to express your appreciation by clapping, sharing, and following me here or on LinkedIn (https://www.linkedin.com/in/fabiensoulis/). Should you have any questions or wish to contribute to the enhancement of the content, feel free to leave a comment :)

If you want to secure your e-mails from spoofing attacks and easily troubleshoot email delivery issues, feel free to visit my company’s website and book a call with me and my team. : https://www.dmarc-expert.com/offers
