
Development Environments with Docker

TL;DR There is no Docker way to build your local development environment. I built an (awesome) Frankenstein and it works. You can too.

Dec 31, 2015 · 10 min read

It is one thing to have a vision and an entirely different thing to realize that vision. Since Docker day zero we’ve dreamt about 15-second project ramp-up times, versioned development environments, and all those sexy operations idioms like “rolling deployments” or “software defined infrastructure.” Those of us in the large-scale server software game were all suddenly more impassioned than ever about defining, refining, and commodifying terms and tools like “orchestration,” “service discovery,” and everything elastic.

I think the sudden upsurge in interest was due to the wonderful new interface and abstraction that Docker provided between application and infrastructure. Developers could start talking about infrastructure without knowing much about infrastructure, and operators could invest less time figuring out how to install and manage software. There is something buried in that simpler vision that seems to make people happy. It makes us all feel more productive.

The real world can be a harsh place. Do not believe for a moment that adopting a new technology will be without pain points or mandatory learnings. Having built all sorts of weird environments, examples, and projects over the last few years I can say that Docker is no exception to that rule. But I will say that what you learn from one experience will usually be directly applicable to the next situation regardless of the stack and role of the project. The power you gain with Docker has to be earned with practice, meditation and exposure to varied problems.

Over the last year I’ve poured myself into teaching Docker fundamentals in my book, Docker in Action.

One thing that I have watched almost every new adopter struggle with is Dockerizing a real development environment, and then subsequently understanding the relationship between that environment and others like live beta or production. Everyone jumps into build or environment engineering thinking it will be made magically simple by Docker. It isn’t entirely their fault. Most “Dockerizing” tutorials cover building a single image and involve packaging an existing tool for use in a container. Understanding that is valuable, but Dockerizing components of a development environment is another thing entirely. Having counted myself among the hopeful ignorant, I’d like to share an experience.

I have a long and deep relationship with Java and its ecosystem, but this is not a story about Java. Instead I’ve focused my application development on Go and Node. I’m moderately experienced with Go and actively working on honing my skill in that arena. One of the most difficult things for me to pick up whenever I jump into a new stack is a proper workflow. The challenge is often compounded by my distaste for installing software on my laptop. This drives me to do everything with Docker, or, in an earlier time, Vagrant.

by Renee French (CC 3.0 Attribution)

The project I was working on was a plain REST service written in Go. It was based on gin, and had library and service dependencies on Redis and NSQ. This meant that I had a few libraries to import and that running locally would require running live local instances of Redis and NSQ. Making matters a bit more interesting, I also had a few static resources that I wanted to serve from NGINX.

For the uninitiated, Go is a programming language, but there is also a command line multi-tool named “go.” It is used for everything from dependency management and compilation to running unit tests and a whole slew of other tasks. Aside from Git and a decent editor, it is the only tool you need to work on a Go project. There is only one problem: I don’t want to install Go on my laptop. In fact, the only two tools that I want to install on my laptop are Git and Docker. Having this constraint limits the potential for compatibility problems in other environments and decreases any barriers to entry for new developers.
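
For a sense of just how much that one tool covers, here is a hypothetical session on a machine that does have Go installed. The rest of this article is about getting the same workflow without installing any of it:

# fetch the dependencies of the current project
$ go get -d ./...
# run the unit tests
$ go test ./...
# compile and install the binary into $GOPATH/bin
$ go install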

This particular project has runtime service dependencies. That means that the toolkit will need to include Docker Compose for simple environment definitions and orchestration. This is about the point where most people start to feel uncomfortable. Now what should you do? Start creating a Dockerfile or a docker-compose.yml? Well, first I’ll show you what I did, then I’ll help you understand how I came to that design.

In this case I wanted my local builds to be fully automated. I hate manually walking through the steps, and my vim configuration is pedestrian. I only want to control the running environment at a “running or not” level. The goal of my local development environment was rapid iteration, not to produce production quality or even sharable Docker images. That being the case, I wrote a Dockerfile that produced images containing Go, Node, and one of my favorite continuous build tools, Gulp. The Dockerfile did not inject the code, or even a Gulpfile, into the image. Instead, it defined a volume in an established GOPATH (the location of a Go workspace root). Finally, I set the entrypoint for the image to gulp and set the default command to watch. The resulting image is certainly not what I would call a build artifact. For that matter, the only thing of value that this environment provides is a running instance that helps us determine if the code works. That is perfect for my use-case. I’ll leave the production of “artifacts” to another build entirely.

Next I defined the local development environment with Compose. I defined all of my service dependencies using off the shelf images from Docker Hub, and linked them appropriately with a “target” service. That service referenced my new Dockerfile to build from, bound my local source directory to the mount point where my new image would expect it, and exposed a few ports so I can test it. Eventually, I also added a service that periodically ran through a series of integration tests against my target service. Finally I added a service running NGINX with a volume mounted configuration file and the static assets. Using volumes in that way allowed me to iterate on the configuration and assets without rebuilding an image.

$ cat ./service/local.df
FROM golang:alpine
RUN apk --update add --no-cache git nodejs
RUN npm install --global gulp
ENV GOPATH=/go PATH=$PATH:/go/bin
VOLUME ["/go/src/github.com/.../myproj", "/go/pkg", "/go/bin"]
WORKDIR /go/src/github.com/.../myproj
# Bring in dependencies in the image
RUN go get github.com/bitly/go-nsq && \
    go get github.com/codegangsta/cli && \
    go get github.com/gin-gonic/gin
CMD ["gulp"]
$ cat ./service/gulpfile.js
var gulp = require('gulp');
var child = require('child_process');
var server = null;

gulp.task('default', ['watch']);

gulp.task('watch', function() {
  gulp.watch('./**/*.go', ['fmt', 'build', 'spawn']);
});

gulp.task('fmt', function() {
  return child.spawnSync('go', ['fmt']);
});

gulp.task('build', function() {
  return child.spawnSync('go', ['install']);
});

gulp.task('spawn', function() {
  if (server)
    server.kill();
  server = child.spawn('myproj');
  server.stderr.on('data', function(data) {
    process.stdout.write(data.toString());
  });
  server.stdout.on('data', function(data) {
    process.stdout.write(data.toString());
  });
});
$ cat docker-compose.yml
web:
  image: nginx
  volumes:
    - ./web/assets:/var/www
    - ./web/config:/etc/nginx/conf.d
integtest:
  build: ./integ
  links:
    - service
service:
  build: ./service
  dockerfile: local.df
  volumes:
    - ./service/src/:/go/src/github.com/.../myproj
  links:
    - nsqd
    - redis
nsqd:
  image: nsqio/nsq
  ...
redis:
  image: redis
  ...

The result of all of this is a local development environment that lands on your computer with a git clone, and is running and continuously iterating from the moment you type:

docker-compose up -d

I’ll never need to rebuild an image, or restart a container. Every time a .go file is changed Gulp will rebuild and restart my service from within the running container. Mission accomplished.

Was it simple to create this environment? Absolutely not, but it was achievable. Would it have been simpler to install Go, Node, and Gulp locally and skip the containers? Probably in this case, but only if I was still using Docker for the service dependencies. And I’d hate it. If I did, I’d have to manage versions of those tools. I’d have shitty environment variables and build artifacts all over the place. I’d have to educate my peers on an environment that would obviously drift from canon. There would be less centralized versioning.

You might not like the environment I described above, or have different needs for your project. Good. That is the best reason to have read it. I guess the point of this whole article is that there is only one wrong way to use any tool like Docker: using it without thinking about the problem you are trying to solve.

What follows are a few questions I asked myself in designing this environment, considerations, and a few potential answers. When you’re sitting down to Dockerize your workspace you could do worse than answering these for yourself.

When you think about builds and your environment, what terms do you consider first?

This is by far the most important question. In this scenario I had a few options. I could have used the go program inside one-off containers. That would look something like:

# get dependencies
$ docker run --rm -v "$(pwd)":/go/src/github.com/allingeek/myproj \
    -w /go/src/github.com/allingeek/myproj golang:1.5 go get -d -v
# start the other services
# build and link
$ docker run --rm -e CGO_ENABLED=0 \
    -v "$(pwd)":/go/src/github.com/allingeek/myproj \
    -v "$(pwd)"/bin:/go/bin \
    golang:1.5 go install github.com/allingeek/myproj
# run the program stand alone
$ docker run --rm -v "$(pwd)"/bin/myproj:/bin/myproj alpine myproj
# to iterate, make changes and repeat the last two steps

Keep in mind that most of the boilerplate in this example can be hidden by a shell alias or function. It feels like Go is actually installed on my machine. It also keeps me in touch with the Go workflow and build artifacts. This sort of thing is also great for projects that are not services, but proper library or software projects.
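
As a sketch of that idea, a small shell function like the one below (the mount point mirrors the example above and is otherwise arbitrary) makes the containerized tool feel local:

# run the go tool from a container against the current directory
go() {
  docker run --rm \
    -v "$(pwd)":/go/src/app \
    -w /go/src/app \
    golang:1.5 go "$@"
}
# then "go test ./..." or "go fmt ./..." behave as if Go were installed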

If you already use some other set of tools like Gulp, make, ant, or shell scripts on your host then you can always use those as your primary and use Docker as the target of one of those tools.
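
For example, a throwaway wrapper script (the name and targets here are hypothetical) can remain the tool you type while Docker does the actual work:

$ cat build.sh
#!/bin/sh
# format and test inside containers, then produce the image
docker run --rm -v "$(pwd)":/go/src/app -w /go/src/app golang:1.5 go fmt ./...
docker run --rm -v "$(pwd)":/go/src/app -w /go/src/app golang:1.5 go test ./...
docker build -t local/myproj .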

Alternatively, I could have a more Docker oriented experience by defining and controlling my build with docker build. That would look more like:

$ cat Dockerfile
FROM golang:1.5-onbuild
# start the other services
# install dependencies, build, and link
$ docker build -t local/myproj .
# run the program
$ docker run --rm local/myproj
# to iterate, make changes and repeat the last two steps

There are a few nice things about using Docker to control the build. Tying it with image builds leverages existing build scaffolding. Dockerfile builds use caching so they will only repeat the minimum set of build steps (if you’ve written a sharp Dockerfile). Last, these builds produce images that can be shared with other developers.

In this case I used the onbuild image of the golang repository as a base. That includes some nice logic to download dependencies. This method also results in a Docker image that could easily be used in other environments, though not likely production. The issue with this approach is that in a production-grade image you’d likely take steps to avoid a large image and include some init script to validate state prior to launch and monitor the service.
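
One common way to get a smaller image, sketched here with hypothetical file names, is to compile a static binary in a throwaway golang container and copy only that binary into a tiny runtime image:

# build a statically linked binary on the host through a container
$ docker run --rm -v "$(pwd)":/go/src/app -w /go/src/app \
    -e CGO_ENABLED=0 golang:1.5 go build -o myproj
$ cat Dockerfile.release
FROM alpine:3.2
COPY myproj /bin/myproj
ENTRYPOINT ["/bin/myproj"]
$ docker build -f Dockerfile.release -t local/myproj:release .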

Interestingly, Docker itself uses a blend of build scripts, Makefiles, and Dockerfiles. The build system that they have in place is fairly robust. It handles various degrees of testing, linting, etc., and the production of build artifacts for many operating systems and architectures. In this case the container is a tool used to produce binaries (like the first option), but it does so from within a locally built image.

Expanding on the docker build option you could define a whole development environment using Compose.

$ cat Dockerfile
FROM golang:1.5-onbuild
$ cat docker-compose.yml
service:
  build: .
  links:
    - redis
    - nsq
redis:
  image: redis
nsq:
  image: nsqio/nsq
# install dependencies, build, link, launch dependency services, run
$ docker-compose up -d
# to iterate, make changes and then
$ docker-compose build && docker-compose up -d

Compose is all about environment management. So it is no surprise that this feels cleaner if you want to see things running. It wires everything together, manages volumes intelligently, triggers builds when images are missing, and aggregates log output. I selected this option to simplify working with the service dependencies and because it produces my chosen build artifact…

What type of build artifacts do you want?

The build artifact I wanted in this example was a running container. Either Compose or docker would have been appropriate tools to that end. In your scenario you might prefer to have a distributable image, or you might prefer that the build produce a binary on your host operating system.

If you prefer to walk away with an image then you’re going to need to make sure that either your sources or pre-built binaries are injected into that image at build time. There are no bind-mounted volumes at build time. This also means that you’ll need to rebuild your image with each iteration.

If you want some artifact that was generated inside of a container, you’ll need to leverage bind-mounted volumes. This is simple enough to accomplish on the command line with docker or with a fancy Compose environment. But keep in mind that the build won’t happen (or at least the artifacts won’t be delivered) until a container is run. This means you can’t just use docker build.
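
A minimal sketch of that pattern, assuming you want the compiled binary to land in ./bin on the host:

# go install writes to /go/bin, which is bind mounted from ./bin on the host
$ docker run --rm \
    -v "$(pwd)":/go/src/app \
    -v "$(pwd)"/bin:/go/bin \
    -w /go/src/app \
    golang:1.5 go install
# the artifact (named after the package directory) is now on the host
$ ls ./bin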

Summary

There is no Docker way to construct a development environment. Docker is a composable tool, not a holy book. Instead of trying to copy someone else’s Docker based build system, take the time to learn the tool, meditate on your needs, and then create an environment where you’ve used Docker to reduce your pain points.

Remember, if you need a jump on Docker fundamentals my book, Docker in Action will be in print shortly and is currently available through the Manning Early Access Program.

Start off 2016 by getting your hands dirty. Happy New Year!

I'm a cofounder of Topple, a technology consulting, training, and mentorship company. I'm also a Docker Captain and a software engineer. https://gotopple.com