Docker and Proxy
Working behind a corporate proxy can be a painful experience. Even simple attempts to access web resources are often thwarted.
When it comes to Docker, there are a couple of configuration options we should know about. In this post I'd like to give an overview of their effects and what to keep in mind.
There are three different places where a proxy configuration can be applied:
- docker client
- docker daemon
- container runtime
Docker consists of a client and a daemon, which don't have to reside on the same host (see the DOCKER_HOST env var). In case there is a proxy between the two, you need to configure the docker client accordingly. Luckily, the client adheres to the convention of respecting the standard proxy environment variables: HTTP_PROXY, HTTPS_PROXY and NO_PROXY.
Setting those on the machine of the client can be as easy as exporting the environment variables:
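For example, on the client machine (proxy host and port are placeholders for your corporate proxy):

```shell
# Route the docker client's connection to the daemon through the proxy.
# proxy.example.com:3128 is a placeholder address.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Bypass the proxy for local addresses.
export NO_PROXY=localhost,127.0.0.1
```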
The second place is the docker daemon. A proxy configured here affects how the daemon itself connects to network resources.
The main use case is to be able to pull and push images from and to registries through a proxy, e.g. Docker Hub.
On the machine where the docker daemon is running, we configure the proxy according to this page of the official documentation: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Create a systemd drop-in directory for the docker service:
$ sudo mkdir -p /etc/systemd/system/docker.service.d
Create a file called
/etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
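The drop-in could look like this (the proxy address is a placeholder):

```ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
```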
Or, if you are behind an HTTPS proxy server, create a file called
/etc/systemd/system/docker.service.d/https-proxy.conf that adds the HTTPS_PROXY environment variable:
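Again a sketch with a placeholder address:

```ini
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:3129"
```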
If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable.
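A sketch, assuming an internal registry at registry.internal.example.com (placeholder):

```ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,registry.internal.example.com"
```

After changing the drop-ins, flush the changes and restart the daemon: `sudo systemctl daemon-reload && sudo systemctl restart docker`.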
The third place, the proxy configuration for the container runtime, was the most revealing for me to set up.
It injects the proxy-relevant environment variables into each container at runtime. Pretty neat is that it also affects build time, as building each layer is effectively performed in a container. The injected environment variables are HTTP_PROXY, HTTPS_PROXY, FTP_PROXY and NO_PROXY (in both upper- and lowercase variants).
This configuration is actually done on the machine of the docker client, in the home directory of the user executing the docker commands. We set it up according to the documentation here: https://docs.docker.com/network/proxy/
On the Docker client, create or edit the file
~/.docker/config.json in the home directory of the user which starts containers. Add JSON such as the following, substituting the type of proxy with
ftpProxy if necessary, and substituting the address and port of the proxy server. You can configure multiple proxy servers at the same time.
You can optionally exclude hosts or ranges from going through the proxy server by setting a
noProxy key to one or more comma-separated IP addresses or hosts. Using the
* character as a wildcard is supported, as shown in this example.
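A sketch of such a config.json (proxy host, port and the noProxy entries are placeholders):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3129",
      "noProxy": "*.internal.example.com,127.0.0.0/8"
    }
  }
}
```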
Save the file.
When you create or start new containers, the environment variables are set automatically within the container.
So basically you can build any* regular Dockerfile behind a proxy and each layer is provided with the proxy information (e.g.
RUN sudo apt-get install ... will respect it).
That means you will never have to do the following to your Dockerfile:
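The pattern meant here is threading the proxy through build args; a sketch of what such a Dockerfile snippet typically looks like (not the author's original, which is not shown here):

```dockerfile
# Anti-pattern: baking proxy settings into the image via build args
ARG http_proxy
ARG https_proxy
ARG no_proxy
ENV http_proxy=${http_proxy} \
    https_proxy=${https_proxy} \
    no_proxy=${no_proxy}
```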
Having those ARGs in place will actually result in different images depending on where you build your image, since the proxy values flow into the build and thus into the resulting layers.
* not quite: executables which do not take those environment variables into consideration will fail. A famous example would be running a Java process, e.g. Gradle, since the JVM does not honor these variables by default.
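For the JVM case, proxy settings typically have to be passed as system properties instead; a sketch for Gradle via gradle.properties (host, port and the non-proxy hosts are placeholders):

```properties
# gradle.properties: the JVM does not read HTTP_PROXY/HTTPS_PROXY
systemProp.http.proxyHost=proxy.example.com
systemProp.http.proxyPort=3128
systemProp.https.proxyHost=proxy.example.com
systemProp.https.proxyPort=3128
systemProp.http.nonProxyHosts=localhost|*.internal.example.com
```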
In most cases the proxy configuration for the docker daemon and the container runtime go hand in hand: when building images you probably need to access a docker registry as well as web resources at build time.
The exception to this could be if we are solely using an internal registry but still require resources from behind the proxy, or vice versa.
Applying the configuration for the docker client is a rather special case in my experience. Also, switching between different docker daemons (remote or local) can easily conflict with the container runtime proxy configuration.