
Server administration and tool management via Docker

Constantin Mathe
7 min read · Jul 1, 2020


As a server administrator you know how hard it is nowadays to keep an overview of your server configuration and, more importantly, of all your installed software. Many applications or services you want to install require different dependencies to work. In general this means that over time the number of installed packages grows very quickly, which results in more dependencies and more mess in your server environment.

And it gets even worse, because this huge amount of software is not in a static or stable state. Within a short period of time the dependencies get more and more mixed up due to different updates and versions. This makes it more or less impossible to keep an overview of the installed software packages.

At some point you become genuinely afraid to install or update packages, because there is a high risk of crashing currently running software by pulling in new dependencies.

Docker

Luckily, at least for my personal use cases, I found a very nice solution to this problem, and it is based on Docker. If you haven't played with Docker yet, now is the best time to download it and get started.


In a nutshell, Docker allows you to pack and ship your applications in so-called containers, which are fully preconfigured and ready to run. You can find nearly every kind of application or service already preconfigured as a Docker container, ready to use on your machine.

To get a better overview, Docker provides Docker Hub, a big repository where most containers are located. If you like, you can compare it more or less to the app store on your mobile phone.
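For example, pulling and starting a preconfigured web server from Docker Hub takes only two commands. The image name below is just an illustration; any image from the Hub works the same way:

# download the official nginx image from Docker Hub
docker pull nginx

# start it in the background and map host port 8080 to container port 80
docker run -d -p 8080:80 nginx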

Now you may be thinking: why should I choose Docker if I can also install everything in a virtual machine? The answer is easy: the low memory footprint.

With Docker you have the ability to replace most of your required installations with very small containers. These containers start in nearly zero time and have very low memory usage.
Each container normally includes a completely encapsulated environment with all dependencies, packed into a few MB in size.

But how do you use it on your server? And what are the benefits of using Docker?

Reason number one for me is the clean separation of tool environments. For example, I can host several Java web applications that are completely isolated from each other. Containers allow us to reduce possible side effects to zero by keeping each application in its own container environment.
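As a rough sketch of that isolation, two web applications can simply run in separate containers side by side. The container names, image, and host ports below are placeholders, not my actual setup:

# two independent Tomcat instances, each with its own encapsulated environment
docker run -d --name app1 -p 8081:8080 tomcat:9
docker run -d --name app2 -p 8082:8080 tomcat:9

Each instance gets its own libraries and configuration, so updating or removing one never touches the other.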

Tryouts

Often I'm in the situation that I want to quickly try out a new tool to verify whether it's a suitable solution for one of my project use cases. In the past I handled this by using different VMs. Now, instead of VMs, I just start the desired container, and within a few seconds the instance is running and I can do my first tests and tryouts on it. This was a complete game changer for me.

Running on your server

I'm using an Ubuntu Linux machine running the latest stable Ubuntu LTS release. On this machine I only have a small bundle of web servers installed natively: Apache (to serve static files) and Nginx for dynamic reverse proxy configurations.

Image by author

All other tools or required programs run in separate Docker containers, which are completely isolated from each other.

So the idea is to start your container and then point a reverse proxy configuration at it. What I do is create a new subdomain, for example fancyservice.mydomain.com, and then redirect the traffic via a local reverse proxy (in my case Nginx) to the appropriate container.

This setup brings a lot of flexibility, because I can swap the underlying container very quickly, and via Nginx I have a lot of options, for example to cache responses or to control access in various ways.
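As a minimal sketch of such a setup (assuming, purely for illustration, that the container publishes its web port on local port 8081), the Nginx side could look roughly like this:

server {
    listen 80;
    server_name fancyservice.mydomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # forward everything for this subdomain to the local container port
        proxy_pass http://127.0.0.1:8081;
    }
}

Swapping the service later only means starting a different container on that port; the subdomain and the Nginx configuration stay untouched.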

How to keep control of your environment


Of course, even with Docker you quickly end up with a lot of containers on your system, and you need to monitor things like CPU usage and memory consumption.

To keep control of your containers I can recommend two different approaches:
a simple one and a more complex, professional one.

Glances

To keep it simple, I can really recommend the tool "Glances". Glances is open-source Python software which you can install via apt or pip.

Glances is a cross-platform monitoring tool which aims to present a large amount of monitoring information through a curses or Web based interface. The information dynamically adapts depending on the size of the user interface.

— Website: https://github.com/nicolargo/glances

Image by https://nicolargo.github.io/glances/

pip install glances

With Glances you can see all the necessary information on one page. If you have running Docker containers, they are automatically shown here with their CPU and memory usage. This is a great way to monitor your containers.
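If you prefer a browser view, Glances also ships a web mode. A minimal way to start either interface could look like this (61208 is the Glances default web port):

# curses UI in the terminal
glances

# web UI, served on port 61208 by default
glances -w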

Grafana & InfluxDB

If you want, you can collect all Docker machine stats, push them to InfluxDB, and set up a Grafana monitoring page.
I can recommend installing the tool Telegraf and enabling the Docker stats collector in its config file (telegraf.conf). This provides a very easy way to collect all the necessary information and automatically push it to your InfluxDB.
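A minimal sketch of the relevant telegraf.conf sections might look like the following; the InfluxDB URL and database name are assumptions for a default local setup:

# output: push metrics to a local InfluxDB instance
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"

# input: collect container stats from the local Docker daemon
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"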

Just as a remark: Grafana and InfluxDB can of course also be hosted in containers themselves.
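For example, both can be started from their official Docker Hub images; the local-only port bindings below are just one possible choice:

docker run -d --name influxdb -p 127.0.0.1:8086:8086 influxdb
docker run -d --name grafana -p 127.0.0.1:3000:3000 grafana/grafana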

Security concerns

Docker is a very good solution for keeping all your services and hosted applications in an isolated and fairly secure environment, but Docker also has some negative points I would like to discuss here.

1. Root permissions

For me the biggest negative point about Docker is its root-only implementation: the Docker daemon always runs with root permissions, and so each of your containers is also running with root permissions.
The Docker team is currently working on a rootless version of Docker, but unfortunately, at the time of writing this post, it is only an experimental feature and not yet released.
So you need to keep in mind that each container you start is running with the highest system privileges.

There are some workarounds to get rid of these root privileges. For example, instead of Docker you can use Podman.

Podman itself writes on their website:

What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Simply put: `alias docker=podman`. More details here.

So in general it could be a complete replacement for use cases where Docker is not secure enough. And the command syntax should be exactly the same if you use the alias: alias docker=podman.
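As a small sketch (the image name is just an example), rootless usage as an unprivileged user could look like this:

# no root daemon involved; runs under your normal user account
alias docker=podman
docker run -d -p 8080:80 docker.io/library/nginx

Note that in rootless mode you can only publish unprivileged host ports (1024 and above), which fits nicely with the reverse proxy setup described above.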

2. Exposed container ports

Running containers on a server implies the usage of ports. So what you typically do is map a container-specific port to a host port.

docker run … -p 8080:80 xyz/abc

The above example maps port 8080 on your host to port 80 inside the container. Typically you do this for every container that serves a web application.
While this container is running, your host keeps port 8080 open for connections, which provides additional ground for possible attacks or port scans.
I recommend using an external web server like Nginx or Apache to configure a reverse proxy for each container.

For security hardening, I recommend changing the Docker command to something like this:

docker run … -p 127.0.0.1:8080:80 xyz/abc

This binds host port 8080 to the loopback interface only, so the container is reachable just from your local machine. After starting the container this way, you can set up a reverse proxy which listens on localhost:8080 and exposes the service to the web. This also keeps the holes in your firewall very small. If you start your containers this way, an external port scan will not be able to detect the Docker containers.
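To double-check the binding (a quick sanity check, not part of the setup itself), you can list the listening sockets on the host:

# the container port should only show up bound to 127.0.0.1
ss -tlnp | grep 8080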

Example NGINX configuration:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    ## Access only via localhost to this port
    proxy_pass http://127.0.0.1:8080;
}

Here you can see that in this example configuration all requests are passed through to the Docker container, which is listening on 127.0.0.1:8080. This also gives us more control over the reachability and security of the service. We can, for example, configure a web application firewall which automatically blocks the most common attacks.
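As one very small building block in that direction (just a rate limit, not a full web application firewall), Nginx can throttle clients that hammer the service; the zone name and limits below are arbitrary examples:

# in the http {} context: allow 10 requests per second per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# inside the location block shown above
limit_req zone=perip burst=20 nodelay;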

If you now combine these techniques, you end up with a very interesting solution from a security point of view.
