If you run SSHD in your Docker containers, you’re doing it wrong!

Tushar Sappal · Tech Bits · Jan 25, 2019

When people start using Docker, they often ask: “How do I get inside my containers?” and other people will tell them “Run an SSH server in your containers!” but that’s a very bad practice. This article covers why it’s wrong and what one should do instead.

Your containers should not run an SSH server

… unless your Docker container is an SSH server :)

It's tempting to run an SSH server, because it gives you an easy way to "get inside" the container. Virtually everybody in our craft has used SSH at least once. Most of us use it on a daily basis and are familiar with public and private keys, password-less logins, key agents, and sometimes even port forwarding and other niceties.

With that in mind, it's not surprising that people will suggest running SSH inside your containers.

Let's say that you are building a Docker image for a Redis server or a Java web service. There are a few questions I would like to ask:

What do you need SSH for?

Most likely, you want to do backups, check logs, maybe restart the process, or tweak the configuration. We will see how to do all of those things without SSH later in this post.

How will you manage keys and passwords?

Most likely, you will either bake those into your image, or put them in a volume. Think about what you will have to do when you want to update the keys or passwords. If you bake them into the image, you will need to rebuild your images, redeploy them, and restart your containers. Not the end of the world, but not very elegant either.

Putting the credentials in a volume and managing that volume separately is somewhat better, but it still has significant drawbacks. You have to make sure the container does not have write access to the volume; otherwise, it could corrupt the credentials (preventing you from logging into the container!), which would be even worse if those credentials were shared across multiple containers. If only SSH could live elsewhere, that would be one less thing to worry about, right?

How will you manage security upgrades?

The SSH server is pretty safe, but still, when a security issue arises, you will have to upgrade all the containers using SSH. That means rebuilding and restarting all of them. Again, if SSH could be elsewhere, that would be a nice separation of concerns, wouldn’t it?

Do you need to “just add the SSH server” to make it work?

No. You also need to add a process manager, for instance Monit or Supervisor, because Docker watches exactly one process per container. If you need multiple processes, you have to add one at the top level to take care of the others. In other words, you're turning a lean and simple container into something much more complicated. And if your application stops (whether it exits cleanly or crashes), you will have to get that information from your process manager instead of from Docker.
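To make this concrete, here is a minimal sketch (with a hypothetical image name fooservice and container name myapp) of how Docker reports the lifecycle of the single process it supervises. Wrap your app behind a process manager and Docker only ever sees the manager, never the app:

# Docker supervises exactly one process: the container's PID 1
docker run -d --name myapp fooservice      # "fooservice" and "myapp" are placeholders
docker wait myapp                          # blocks until PID 1 exits, then prints its exit code
docker events --filter container=myapp     # the "die" event carries the exit status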

You are in charge of putting the app inside a container, but are you also in charge of access policies and security compliance?

In smaller organizations, that doesn't matter too much. But in larger groups, if you are the person putting the app in a container, there is probably a different person responsible for defining and managing remote access policies. In that case, you definitely don't want to put an SSH server in your container.

But How Do I…?

Back up my data?

Your data should live in a volume. You can then run another container and, with the --volumes-from option, give it access to that volume. The new container is dedicated to the backup job and has access to the required data.

Added benefit: if you need extra tools to make your backups or ship them to long-term storage (s3cmd or the like), you can install them in the special-purpose backup container instead of the main service container. It's cleaner.
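As a rough sketch (assuming the data lives under a /data volume; the exact path depends on your image), a backup run could look like this:

# The service container declares /data as a volume
CID=$(docker run -d -v /data redis)
# A one-off backup container shares that volume; s3cmd and friends would live here,
# not in the Redis image
docker run --rm --volumes-from $CID ubuntu tar czf - /data > redis-backup.tar.gz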

Check logs and configs?

Use a volume! Yes, again. If you write all your logs under a specific directory, and that directory is a volume, then you can start another "log inspection" container (with --volumes-from, remember?) and do everything you need there.

Again, if you need special tools (or just a fancy ack-grep), you can install them in that other container, keeping your main container in pristine condition. Doesn't that feel clean?
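For example (assuming a hypothetical image fooservice that writes its logs under /var/log/myapp, declared as a volume):

CID=$(docker run -d -v /var/log/myapp fooservice)
# Throwaway "log inspection" container; install ack-grep or anything else in it,
# the main container stays untouched
docker run --rm -it --volumes-from $CID ubuntu bash -c 'tail -n 100 /var/log/myapp/*.log'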

Restart my service?

Virtually all services can be restarted with signals. When you issue /etc/init.d/foo restart or service foo restart, it almost always results in a specific signal being sent to a process. You can send that signal yourself with docker kill -s <signal>.
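For example, nginx re-reads its configuration when it receives SIGHUP, so (assuming a container named my-nginx) a reload is simply:

# Reload nginx's configuration without restarting the container
docker kill -s HUP my-nginx
# Many other daemons use SIGHUP or SIGUSR1 for this; check your service's documentation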

Some services won't listen to signals, but will accept commands on a special socket. If it is a TCP socket, just connect over the network. If it is a UNIX socket, you will use… a volume, one more time. Set up the container and the service so that the control socket lives in a specific directory, and that directory is a volume. Then you can start a new container with access to that volume, and it will be able to use the socket.

"But this is complicated!" Not really. Let's say that your service foo creates a socket in /var/run/foo.sock, and requires you to run fooctl restart to restart cleanly. Just start the service with -v /var/run (or add VOLUME /var/run in the Dockerfile). When you want to restart it, run the exact same image, but with the --volumes-from option and an overridden command. It looks like this:

# Starting the service
CID=$(docker run -d -v /var/run fooservice)
# Restarting the service with a sidekick or sidecar container
docker run --volumes-from $CID fooservice fooctl restart

Cleanliness at its best :)

Edit my configuration?

If you are making a durable change to the configuration, it should be done in the image: if you only change it inside a running container, the next container you start will come up with the image's old configuration, and your changes will be lost. So, no SSH access for you!

“But I need to change my configuration over the lifetime of my service; for instance to add new virtual hosts!”

In that case, you should use… wait for it… a volume! The configuration should live in a volume, and that volume should be shared with a special-purpose "config editor" container. You can use anything you like in that container: SSH plus your favorite editor, a web service accepting API calls, a crontab fetching the information from an outside source; whatever.

Again, you’re separating concerns: one container runs the service, another deals with configuration updates.
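A minimal sketch, assuming a hypothetical image fooservice whose configuration lives under /etc/myapp (declared as a volume) and a made-up worker_count setting:

CID=$(docker run -d -v /etc/myapp fooservice)
# Tweak the config from a throwaway "config editor" container sharing that volume
docker run --rm --volumes-from $CID ubuntu \
    sed -i 's/worker_count = 4/worker_count = 8/' /etc/myapp/myapp.conf
# Then ask the service to reload, for instance with a signal (see above)
docker kill -s HUP $CID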

Wrapping up

Is it really Wrong (with a capital W) to run an SSH server in a container? Let's be honest: it's not that bad. It's even super convenient when you don't have access to the Docker host but still need to get a shell within the container.

But as we have seen, there are plenty of ways to get all the features we want without running an SSH server in the container, and with a much cleaner architecture.
