Dockerfile security tuneup

Ross Fairbanks
Microscaling Systems
Jan 4, 2017


I recently watched two great talks on container security: one by Justin Cormack from Docker at Devoxx Belgium, and one by Adrian Mouat from Container Solutions at GOTO Stockholm. We were already following many of their suggestions, but there was still room for improvement, so we decided it was a good time to give our Dockerfiles a security tuneup.

Official images

We’re longtime users of Alpine Linux, as we prefer its smaller size and reduced attack surface compared with Debian or Ubuntu based images, so we were already using the official alpine image as the base for all our images. However, an added benefit of the official images is that Docker has a team dedicated to keeping them up to date and following best practices.

We primarily develop in Go, but we also use Ruby for some web and scripting tasks. For these images we’re now using ruby:2.3-alpine as the base. This image installs Ruby from source rather than using the Alpine package. Because Ruby follows semantic versioning, the 2.3 release line will receive security updates from the Ruby core team, and the Docker packaging team will update the 2.3-alpine tag accordingly.

Otherwise we’d need to install Ruby from source ourselves and track each new Ruby release, or use the Alpine package and track when Alpine packages a new version of Ruby.
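In practice this means the only version we pin ourselves is the minor release line in the FROM instruction. A minimal sketch of the top of such a Dockerfile:

```dockerfile
# Track the 2.3 release line; patch-level security updates arrive
# automatically when the 2.3-alpine tag is rebuilt upstream.
FROM ruby:2.3-alpine
```

Pinning to the minor version rather than an exact patch version is the trade-off here: rebuilds pick up security patches without us editing the Dockerfile.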

Notifications & Webhooks

As the Ruby example shows, it’s important to rebuild your image whenever the underlying base image receives a security update. This is a plug for how our MicroBadger notifications can help with this. We use notifications for the alpine and ruby official images; these post to Slack so we know when a public image we care about has changed.

Image change notification in Slack

We also use notifications to trigger our automated builds to be rebuilt whenever their base image has changed. Docker Hub also has this feature but our notifications can be used with any system that supports webhooks such as your CI system or security scanner.

Non privileged user

One of the key differences between containers and virtual machines is that containers share the kernel with the host. By default Docker containers run as root, which creates a breakout risk: if a container running as root is compromised, the attacker has root access to the host.

You can mitigate this risk by running your containers as a non privileged user. Here’s an example of doing this for a Rails app.

# Create working directory.
WORKDIR /app
# Copy Rails app code into the image
COPY . ./
# Create non privileged user, set ownership and change user
RUN addgroup rails && adduser -D -G rails rails \
    && chown -R rails:rails /app
USER rails

Security scanning

I think security scanning is an area where container registries can really add value. As well as storing your images, registries can regularly scan them for vulnerabilities. Docker provides security scanning for official images and for private images hosted on Docker Cloud.

We also really like Clair from CoreOS: it’s open source, and it’s used for the security scanning in the Quay.io registry. Clair support for Alpine has recently been merged, which is great news, and hopefully it will be available in Quay soon. There are also specialist scanners like Twistlock and Aqua, which are usually paid products.

From the Docker scanning, our Go images got a clean bill of health: we copy a single binary into the image, and the only dependency is the CA certificates bundle, which we need to make HTTPS connections. Our Rails apps have far more dependencies, since Ruby is an interpreted language: we need to install all the Ruby gems our app uses, plus any operating system packages those gems require.
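A minimal sketch of what such a Go image can look like (the binary name and Alpine tag are illustrative, not our actual Dockerfile):

```dockerfile
FROM alpine:3.5
# CA certificates are the only runtime dependency, needed so the
# binary can make outbound HTTPS connections.
RUN apk add --no-cache ca-certificates
# The statically linked binary is compiled outside the image and
# copied in; "myapp" is a placeholder name.
COPY myapp /myapp
# Run as the unprivileged "nobody" user that Alpine ships with.
USER nobody
ENTRYPOINT ["/myapp"]
```

With so little in the image, there is correspondingly little for a scanner to flag.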

The scanning found critical vulnerabilities in libxml2 and libxslt. These are build-time dependencies of the Nokogiri gem, an XML and HTML parser. The gem uses C extensions for performance that need to be compiled, but once the gem is installed, libxml2 and libxslt are no longer needed.

So we now remove all our build dependencies. This rather complex set of commands shows how we do it.

# Cache installing gems
WORKDIR /tmp
COPY Gemfile* /tmp/
# Update and install all of the required packages.
# At the end, remove build packages and apk cache
RUN apk update && apk upgrade && \
    apk add --no-cache $RUBY_PACKAGES && \
    apk add --no-cache --virtual build-deps $BUILD_PACKAGES && \
    bundle install --jobs 20 --retry 5 && \
    apk del build-deps

The Gemfile lists the gems that need to be installed, and the Gemfile.lock pins the dependency chains. Because these files are copied into /tmp in their own layer, the bundle install command only runs when the Gemfile has changed; otherwise the Docker build cache is reused. This is useful because installing the gems is time consuming and uses a lot of bandwidth.

The RUN command is multiline, so only a single layer is added to the image, containing both the apk and gem packages. The build packages are added as a virtual package so they can all be removed in one step once the install has completed.

Update: Compiled binary may still be vulnerable

Although there is some benefit in removing the build dependencies, a compiled binary for the gem’s C extension remains in the image. This binary is harder for the scanner to inspect, but it may well still contain the vulnerability.

So it’s also necessary to check whether the vulnerabilities have been fixed in the library itself. In the case of Nokogiri we’re running v1.6.8, which is up to date with security patches for libxml2 and libxslt. There is also a possible issue with the CVE metadata for these packages, which I’ve reported to the Docker Scanning team.

Thanks to Justin Cormack and Stephen Day at Docker for flagging this up. Thanks also to x_17 on Reddit for the tip to use the --no-cache flag when installing Alpine packages.

Automated builds

An important aspect of container security is that images should be rebuilt whenever there are security updates for the image or any of its base images. Automated builds help with this because the image is linked to a Git repository: a build is triggered whenever commits are pushed to a tracked branch. As we saw earlier, an automated build can also be triggered when its base image changes.

Our Ruby images are the simpler case here, as the automated build can run the same Dockerfile we use locally. For our Go images we need to compile the binary before adding it to the image; locally we use a Makefile for this.

For an automated build we can use a build hook and compile the binary using a docker container. These golang-builder images from CenturyLinkLabs and the Prometheus project both look like good options.

To invoke the builder image you can use a build hook. Build hooks can also be used to add dynamic metadata to your images, which is something I blogged about recently.

It wasn’t possible to cover all the topics in the videos, so please do watch them if you get a chance. Finally, we’ve just added private repo support to MicroBadger! So you can now also use notifications with your private images on Docker Hub.

If you’ve read this far and liked reading, then consider pressing the like button. My understanding is that is what the button is for, and it’s a shame to waste it.

Check out MicroBadger to explore image metadata, and follow Microscaling Systems on Twitter.



Interested in Linux containers, data center efficiency, and reusable rockets. Platform Engineer @GiantSwarm