Docker as an Integrated Development Environment
Building a portable IDE to run anywhere (as long as you’ve got Docker)
Motivation
Most people who write code install software on their local machine (Apache Web Server, Nginx, PHP, Node.js, Java, Scala, (Neo)Vim, Eclipse and many, many more) in order to perform development tasks.
It’s inevitable: I’ll get a new machine that I’ll need to set up and configure to make it just the way I’m used to. When setting up a new machine I’ve got to remember what I need, download all those things, install them, configure them and make sure they work. Undoubtedly, I’ll forget a setting or a package and it’ll come back to bite me in the behind. A lot of people automate this with tools like Ansible; even then, the automation takes time to build and maintain. One benefit of using Docker for this is that the automation comes as part of the process.
A single dependency — Docker. Just like when building software applications, bundling your dependencies with your package is very much a good thing. As long as I have access to Docker, I can work — even on other people’s machines. I can simply download my IDE image and off I pop.
Developer Experience. In terms of setup cost, especially initially, it’s not actually that much different to installing everything locally and you end up with (in my opinion) a much more flexible working environment. The real benefits come from the re-use of the image.
The IDE
Base Image
Choosing a base image can be quite daunting. I’m always a fan of Alpine Linux for my application containers, so that’s what I chose. Alpine also has the benefit of being tiny in size, making the IDE even more portable. In total, once everything is installed and configured, the image is under 400MB and I’m sure there are some improvements I could make. In a new Dockerfile, start with the below:
FROM alpine:latest
The Basics
The core of my environment is built to support my workflow. I’m an avid fan of (Neo)Vim and use it as my editor, with git (& git-perl) for source code management. I use tmux for splitting my terminal window into many panes and for ‘tabs’, and ZSH as a nicely configurable shell. I also need to install bash and ncurses for the tmux plugin manager to work, and the OpenSSH client for SSH’ing to things. cURL, less and man are other useful utilities.
RUN apk add -U --no-cache \
neovim git git-perl \
zsh tmux openssh-client bash ncurses \
curl less man
I like to use Oh My ZSH! with zsh, so I need to install that too:
RUN curl -L https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh | zsh || true
And I can’t forget to copy my zshrc, vimrc and tmux.conf into my environment:
COPY zshrc .zshrc
COPY vimrc .config/nvim/init.vim
COPY tmux.conf .tmux.conf
Now that my tmux and vim configs exist, I can install all my plugins:
# Install Vim Plug for plugin management
RUN curl -fLo ~/.config/nvim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
# Install plugins
RUN nvim +PlugInstall +qall >> /dev/null
# Install Tmux Plugin Manager
RUN git clone https://github.com/tmux-plugins/tpm .tmux/plugins/tpm
# Install plugins
RUN .tmux/plugins/tpm/bin/install_plugins
Executing Docker Commands
You might be thinking “but if I want to run Docker commands then I need a terminal on the host”. Nope. Docker is pretty neat: the client uses a Unix socket to communicate with the daemon. When I launch the container, I can mount that socket into it and issue commands through it, as long as I’ve got the client installed. Add docker to the apk add command and, when running the image, simply pass -v /var/run/docker.sock:/var/run/docker.sock. Then, in the container, any docker command should work as expected.
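Putting that together, launching the IDE with the socket mounted looks like this (a sketch; ls12styler/ide is the image built throughout this article):

```shell
# Mount the host's Docker socket so the client inside the IDE container
# talks directly to the host's daemon
docker run -it --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    ls12styler/ide:latest

# Then, inside the container, this lists containers running on the HOST,
# including the IDE container itself:
docker ps
```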
I also make use of docker-compose; however, this isn’t a package that can be installed with apk. I have to add py-pip to my apk add line in the Dockerfile and install docker-compose with pip. (I also add shadow, which provides the useradd, groupadd and usermod commands used later on, since Alpine’s BusyBox doesn’t include them, and su-exec, which the entrypoint script uses.)
RUN apk add -U --no-cache \
neovim git git-perl \
zsh tmux openssh-client bash ncurses \
curl less man \
docker py-pip shadow su-exec
RUN pip install docker-compose
Now I can use docker-compose inside my IDE too.
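One caveat worth flagging (my own observation, not part of the original setup): because docker-compose talks to the host’s daemon through the mounted socket, any volume paths in a docker-compose.yml are resolved on the host, so they must be valid host paths, just as with plain docker run:

```shell
# Hypothetical project checked out under the mounted /workspace
cd /workspace/my-project

# These go through /var/run/docker.sock to the host daemon, so the
# containers they start are siblings of the IDE, not children of it
docker-compose up -d
docker-compose ps
docker-compose down
```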
Docker Mounts
Since the container knows nothing about the underlying filesystem structure, I need to be aware of how things are laid out on the host machine if I want to be able to share my project directories. Paths for mounts need to be absolute on the host machine, not on the IDE container, as the host filesystem is what the daemon knows about. When launching new containers from within the IDE, you simply specify the path like so:
docker run -it --rm \
-v /path/to/host/workspace:/workspace \
ls12styler/ide:latest
User Permissions & su-exec
Disclaimer: security & user permissions are probably two of the biggest topics when it comes to Docker containers, and they’re not exactly straightforward. I’m no expert; what I’ve found and done works for me.
I actually want to be able to edit files within the IDE, and by default all commands run as the root user inside the container, which enables this. But it also gives me permission to edit files that my current user doesn’t (& shouldn’t) have permission to modify, like files in /etc.
What I’ve done is create a new user called me and installed everything within the container in that user’s home directory, setting $HOME to the same directory too:
# Create a user called 'me'
RUN useradd -ms /bin/zsh me
# Do everything from now in that users home directory
WORKDIR /home/me
ENV HOME /home/me
This is all well and good, but I need to actually map the new user & group IDs to my local user. I’ve created a script to use as the entrypoint (entrypoint.sh) into the container when it runs. It creates a group with the same ID as my local group and then modifies the me user to have the same ID as my local user (the local IDs are passed to the container on start via environment variables added to my docker run command):
#!/bin/sh
# Get the host user & group IDs, defaulting to 9001
USER_ID=${HOST_USER_ID:-9001}
GROUP_ID=${HOST_GROUP_ID:-9001}
# Change 'me' uid to host user's uid
if [ ! -z "$USER_ID" ] && [ "$(id -u me)" != "$USER_ID" ]; then
# Create the user group if it does not exist
groupadd --non-unique -g "$GROUP_ID" group
# Set the user's uid and gid
usermod --non-unique --uid "$USER_ID" --gid "$GROUP_ID" me
fi
When I installed and copied everything over in the Dockerfile, I was actually doing it as the root user, so everything I put in /home/me is owned by root. If I’m going to run as me in the container, then the .zshrc, .vimrc & .gitconfig won’t be picked up by their counterpart programs due to permissions issues. So in my entrypoint.sh, I run a chown on everything in that directory to set the permissions correctly (I also needed to chown /var/run/docker.sock to be able to use docker):
# Setting permissions on /home/me
chown -R me: /home/me
# Setting permissions on docker.sock
chown me: /var/run/docker.sock
su-exec (switch user & execute) is a program that I use as the last thing in the entrypoint.sh to launch me into a tmux shell as the me user:
exec /sbin/su-exec me tmux -u -2 "$@"
Add the script to the container and set it as the default command to be run when running the container:
# Entrypoint script switches u/g IDs and `chown`s everything
COPY entrypoint.sh /bin/entrypoint.sh
# Set working directory to /workspace
WORKDIR /workspace
# Default command, can be overridden
CMD ["/bin/entrypoint.sh"]
Running the below run command will pass the extra environment variables required for user/group ID mapping and permission setting:
docker run -it --rm \
-v /path/to/host/workspace:/workspace \
-e HOST_USER_ID=$(id -u $USER) \
-e HOST_GROUP_ID=$(id -g $USER) \
ls12styler/ide:latest
Git Config
I could have just baked my name and email address into the .gitconfig copied into the container, but that would be bad. Not only would my email address and name be exposed in my source code, it’s also less portable (not that you couldn’t change these details later on). At the top of the entrypoint.sh script, I can use git to set its own config:
#!/bin/sh
# Git config
if [ ! -z "$GIT_USER_NAME" ] && [ ! -z "$GIT_USER_EMAIL" ]; then
git config --global user.name "$GIT_USER_NAME"
git config --global user.email "$GIT_USER_EMAIL"
fi
...
And when running the container I can pass a couple more environment variables to set up my git user information:
docker run -it --rm \
-v /path/to/host/workspace:/workspace \
-e HOST_USER_ID=$(id -u $USER) \
-e HOST_GROUP_ID=$(id -g $USER) \
-e GIT_USER_NAME="My Name" \
-e GIT_USER_EMAIL="me@email.com" \
ls12styler/ide:latest
Secrets
Secrets management is an important part of building any software. One thing I’d like to do to further improve the portability of my IDE is fetch my SSH keys and other security-related things from something like HashiCorp Vault. I’d probably have to run such a service myself, so in lieu of having this, I currently distribute my secrets onto the host and then bind mount my SSH keys into the container:
docker run -it --rm \
-v /path/to/host/workspace:/workspace \
-e HOST_USER_ID=$(id -u $USER) \
-e HOST_GROUP_ID=$(id -g $USER) \
-e GIT_USER_NAME="My Name" \
-e GIT_USER_EMAIL="me@email.com" \
-v ~/.ssh:/home/me/.ssh \
ls12styler/ide:latest
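That run command has grown quite long, so it’s worth wrapping in a small shell function. This is a sketch of my own devising; the ide name and the IDE_DRY_RUN switch are made up for illustration:

```shell
#!/bin/sh
# Hypothetical wrapper around the long `docker run` invocation.
# Usage: ide [workspace-path]. Set IDE_DRY_RUN=1 to print the command
# instead of executing it.
ide() {
    workspace=${1:-$PWD}
    set -- docker run -it --rm \
        -v "$workspace":/workspace \
        -e HOST_USER_ID="$(id -u)" \
        -e HOST_GROUP_ID="$(id -g)" \
        -e GIT_USER_NAME="My Name" \
        -e GIT_USER_EMAIL="me@email.com" \
        -v "$HOME/.ssh":/home/me/.ssh \
        ls12styler/ide:latest
    if [ "${IDE_DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Dry run: prints the docker command without starting a container
IDE_DRY_RUN=1 ide /path/to/host/workspace
```

Dropping that in a .zshrc means launching the IDE from any project directory is just `ide`.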
Everything Else
You’ll note that in the IDE, I don’t install any of the compile/runtime dependencies such as Node.js or Scala themselves. The trick here is to separate these ‘environments’ into their own self-contained images.
There’s an abundance of these images available, but finding the right one can be tricky. Plus, developers always have their own preferred way of doing things too. As an example, I built my own Scala image (based on another user’s) for use in my development environment. The differences to the original are mainly around setting the working directory and default command when the image starts.
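A minimal sketch of such an environment image might look like the following (hypothetical; the author’s actual image is based on another user’s and isn’t reproduced here, and the sbt version is just an example):

```dockerfile
# Hypothetical minimal Scala/SBT environment image
FROM openjdk:8-jdk-alpine

# sbt needs bash; sbt itself fetches the Scala version each project declares
RUN apk add -U --no-cache bash curl \
    && curl -fL https://github.com/sbt/sbt/releases/download/v1.2.8/sbt-1.2.8.tgz \
    | tar xz -C /usr/local --strip-components=1

# Match the convention used when running it: mount the project at /project
WORKDIR /project
CMD ["sbt"]
```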
When I’m working on a Scala app, I run one terminal window split into two panes: one runs the Scala/SBT container and the other runs Vim. To run the Scala container I can run:
docker run -it --rm \
-v /path/to/host/workspace/project:/project \
ls12styler/scala-sbt
In Conclusion
As with anything, there’s probably room for improvement. Linux user namespaces would help solve the user-permission issues, meaning I probably wouldn’t need to create a user & group or set file permissions on startup. Docker volumes could assist with this too, and would also make the container more isolated from the host machine, removing the need for host-bound mounts.
What I have works at the moment and I’m successfully running it both at home and at work. The resulting Docker image is less than 400MB in size, compared to IntelliJ IDEA, which comes in at 654MB zipped. When I get the chance, I’ll take a look at making it even more portable, hopefully including secrets management and better permissions/user handling.