Developer virtual environment with Docker + sshd + PAM
Docker is a popular tool for deploying services. I will not dwell on its advantages and disadvantages; the fact remains that it is a modern and very convenient technology. I use Docker to support web hosting services, both the regular ones (apache, mysql, redis) and the extra services (mail, DNS, control panel). This applies both to mass hosting and to services for private customers with custom setups. One of the most important advantages for me is the ability to use Docker as a “package manager” and a “way of sweeping the dirt under the rug” (I mean the individual environment and dependencies) for client applications and client environments.
One of the peculiarities of my traditional customers is what I call “discontinuous integration”: the client usually performs one-time, non-periodic work on the site or application. This has two consequences. The first is that the technologies may not be updated for years, and the application may depend on old or custom libraries. The second is that the developers doing this one-time work are often different people; there is no single development center. This in turn leads to many more consequences. Ultimately, I provide clients with a combined service that allows the development and debugging of sites and applications. I leave the question of separating the test and production environments on the client’s conscience.
So, I run a variety of containers with client applications: for example, the apache web server, the uwsgi application server, or php-fpm. The client application code is not included in the containers and is mounted separately. The task is to let the client “enter” the server with exactly the program environment in which their application is running.
The simplest solution that came to my mind was a wrapper around the ‘docker exec’ command. The wrapper has to find out whether terminal emulation is needed, determine in which container the command must be run, and actually exec it. I put users who want to see only their own environment into a special group, compile a table mapping each user to a container, and use a “Match Group dockerexecwrapper” block with a “ForceCommand” directive inside it in sshd_config. However, during development I found a more elegant solution, although it depends on a third-party program.
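For reference, the sshd_config part of that first approach might look like this (the wrapper script name and path are hypothetical, only the group name is taken from the text above):

```
# sshd_config fragment: for members of the dockerexecwrapper group,
# ignore the requested command and always run the wrapper, which decides
# which container to `docker exec` into.
Match Group dockerexecwrapper
    ForceCommand /usr/local/bin/docker-exec-wrapper
```

The wrapper can inspect SSH_ORIGINAL_COMMAND and the presence of a tty to decide whether to pass `-it` to `docker exec`.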
I found the PAM module https://github.com/flant/pam_docker, which solves my problem. What does it do? The module puts the authenticated process inside the namespaces of the target container, in accordance with its configuration. Working at the PAM level makes a lot of things easy: su, cron, and many other programs use PAM. There are no extra levels of abstraction, no chains of running shells, no chains of pseudo-terminals and wrappers. For my own purposes, I even dockerized an SSH server with the pam_docker module.
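A rough sketch of how the module plugs into the PAM stack, assuming it registers as a session module; the exact line and module options are illustrative, so consult the module’s README for the real syntax:

```
# /etc/pam.d/sshd (fragment) — illustrative: after authentication,
# pam_docker moves the session into the container mapped to this user
# in /etc/security/docker.conf.
session    required     pam_docker.so
```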
The following things should be considered for a dockerized SSH container with pam_docker:
1. The container’s SSH host keys must be generated once and copied into the container on every build. Otherwise the host keys will be different each time, and that will be a problem for connecting clients.
2. The container must have access to /var/run/docker.sock of the host Docker daemon. This was not as easy as it sounds: I had to create an entrypoint.sh just to create a symlink.
3. The users’ home directories must be mounted inside the container if SSH key-based authorization is to keep working, because key authorization in SSH happens before PAM is invoked.
4. The passwords (i.e. /etc/shadow) must be available in this container. With the default PAM configuration, they are checked before switching to the target container.
5. An identical user/group database with the same uids and gids must be maintained between the pam_docker container and the containers users are supposed to move into. Alternatively, accept that users will enter those containers with the uid/gid from the SSH container’s configuration.
6. You may need to edit /etc/security/docker.conf without having to restart the container.
7. The container must be started with the --privileged and --pid=host options.
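Put together, the constraints above might translate into a run command like this; the image name is hypothetical, and the mount list simply mirrors the points above (docker.sock for access to the host daemon, /home for key-based auth, passwd/shadow for an identical user database and password checks, docker.conf bind-mounted so it can be edited without a restart):

```shell
docker run -d --name ssh-gateway \
  --privileged --pid=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home:/home \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/shadow:/etc/shadow:ro \
  -v /etc/security/docker.conf:/etc/security/docker.conf \
  -p 2222:22 \
  sshd-pam-docker
```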
Then I divided users into those who want to see only their own environment and those who want to choose an environment manually. Next, I created containers with preinstalled environments, with the same set of software as on the running web servers; technically, I just adjusted the corresponding Dockerfile. To make the containers persistent, I combined these environments with the cron service. I killed two birds with one stone! When creating such containers there is a choice: make a unique container per environment per user, or make the containers unique per environment but shared by all system users. This matters because the entries in /etc/passwd have to be duplicated either for just one user or for everyone. The same goes for groups.
For example, a container with php 7.1:
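The original embedded file is not available here, so this is a minimal hypothetical sketch of such an environment container: php 7.1 with cron as the foreground process to keep the container running, as described above.

```dockerfile
# Illustrative only — base image choice and package list are assumptions.
FROM php:7.1-fpm
RUN apt-get update \
    && apt-get install -y cron \
    && rm -rf /var/lib/apt/lists/*
# If per-user containers are chosen, the user/group entries
# for that user would be duplicated here.
# cron in the foreground keeps the environment container persistent.
CMD ["cron", "-f"]
```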
And all of this together can be combined into a docker-compose file:
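The original compose file is not available here; the following is a hypothetical sketch under the assumptions above (service names, build paths, and the port are illustrative), combining the sshd gateway with one shared environment container:

```yaml
version: "2"
services:
  sshd:
    build: ./sshd
    privileged: true
    pid: "host"
    ports:
      - "2222:22"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home:/home
      - /etc/passwd:/etc/passwd:ro
      - /etc/shadow:/etc/shadow:ro
      - ./docker.conf:/etc/security/docker.conf
  env-php71:
    build: ./env-php71
    container_name: env-php71
    volumes:
      - /home:/home
```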
The next task ahead of me is to create a wrapper around the “sudo su” command for users who want to choose their environment manually. It will work like python’s virtualenv.