Docker @ EPIC

Hugues · Published in epicagency · Jan 3, 2017

Thou shalt contain all your apps

Unless you’ve been living under a rock, you know that containers are all the rage now. Docker, rkt, LXC… everybody’s using them. And rightfully so. When correctly implemented, containers are a great way to isolate your services, install pre-configured applications (hello Discourse!) and secure components of your stack without too much overhead.

There are pros and cons, obviously, and for every success story you read, someone is drowning in 1000 microservices and complaining loudly about it. Often, what you hear about is startups splitting up their application or using containers to scale across thousands of servers at the flick of a CLI prompt. Much less often do you hear about web agencies using them, and how.

That’s why I wanted to talk a bit about how we use Docker at EPIC and where we’re going with it.

Before coming to the port

We’ve had a pretty standard evolution regarding deployment: FTP^H^H^H ahem, no, not FTP, never… rsync, then git and SSH. We stayed at that stage for quite a while: push to a centralized server, pull on the production server, and all was good, mostly because we were fewer than five people at the time and most, if not all, deployments went through one person (usually me).

That worked, but initial deployments were a bit of a PITA because they involved a lot of manual steps (create a database, create a user, update nginx’s config, set up a deploy key).

Once we started to hire more developers and add more sites to our portfolio, the bottleneck of having only one or two people doing deployments became too much, so we looked for better solutions. We stumbled upon DeployBot, which offered lots of ways to deploy, but one in particular stood out: SSH, which let us keep our exact workflow, only automated.

That was much better: aside from the initial setup, which still fell to the two back-end developers, deploys could be done by anyone on the team.

There were still two issues:

  • The assets
  • The first deploy

Until a couple of months ago, the way we deployed compiled assets was a bit messy. Since we had the full repository on the production server, assets were compiled live on deploy.

This has multiple problems. First, it means we have to install everything needed to compile assets on every production server: compilers for native extensions, a full Node/Sass/Ruby/whatever toolchain, which, security-wise, is not so great.

Then, it means we also have to install the dev dependencies of every project, and those can grow quickly (thanks, JS developers!).

Finally, it prevents atomic deployments: while git updates are very fast and, for our purposes, virtually atomic, waiting for gulp to compile and move files around is definitely not, leaving the site in a broken state for up to a minute on complex projects.

And the initial setup was still the work of a few people, which hurt planning and was prone to errors and oversights.

The boat has docked

A few months ago we decided to take a second look at Docker. The tools had matured and, more importantly, Docker had become a first-class citizen in GitLab, so it was a natural fit since we had been running our own instance for quite some time.

We set aside a few days to work on proofs of concept and use cases for our workflow.

Surprisingly, it went very well and we had a working scheme in very little time.

What we found out during those days was that Docker allowed us to:

  • Have reproducible environments: a website deployed on staging behaves exactly the same in production because each and every file is the same;
  • Deploy atomically: no more waiting for compiled assets; once the image is pulled, the site is restarted on the new image in a matter of seconds, every time;
  • Improve security: even if a website is compromised, there is no way it can come close to another client’s files. Previously, we had to be very careful about users and permissions to keep things as secure as possible;
  • Simplify maintenance: instead of maintaining servers, users, tools and permissions, we now have bare machines with only Docker on them.

The shipyard

So how did we do it? Most container stories deal with deploying the same service to many servers in many datacenters, but our needs are pretty much the opposite: many different services (sites), each deployed once, on a few servers. So what’s the point?

Obviously, clustering tools are not very useful for us (too bad, Kubernetes looks really cool!), but that doesn’t mean containers aren’t for agencies, far from it.

It starts with a CI tool; we use GitLab CI, but others like Drone or Travis would certainly fit the bill. The CI is responsible for the following:

  • Building the assets
  • Building an image out of the site
  • Publishing the image to a private registry
  • Pulling the image on a production server and restarting the site

Depending on the technology used and the needs of the client, additional steps can be performed, such as unit or integration testing.
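
To make that concrete, here is a minimal sketch of what such a pipeline can look like in a .gitlab-ci.yml. It is an illustration rather than our actual configuration: the job names, npm scripts, production host, container name and volume path are all made up, and the $CI_REGISTRY_* variables assume GitLab’s built-in container registry.

```yaml
# Hypothetical .gitlab-ci.yml sketch, not the real EPIC pipeline.
stages:
  - assets
  - image
  - deploy

build_assets:
  stage: assets
  image: node:latest
  script:
    - npm install
    - npm run build              # e.g. gulp compiling CSS/JS into ./dist
  artifacts:
    paths:
      - dist/                    # hand the compiled assets to the next job

build_image:
  stage: image
  image: docker:latest
  services:
    - docker:dind                # Docker-in-Docker so the runner can build images
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # the Dockerfile (not shown) copies the application code and dist/ into the image
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy_production:
  stage: deploy
  script:
    # assumes an SSH deploy key is configured for the runner
    - ssh deploy@prod.example.com "docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - ssh deploy@prod.example.com "docker rm -f my-site || true"
    - ssh deploy@prod.example.com "docker run -d --name my-site -v /srv/my-site/uploads:/app/public/uploads $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  only:
    - master
```

The heavy lifting, from compiling assets to building the image, happens on the CI runner; the production server only has to pull the image and restart the container, and needs nothing installed besides Docker itself.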

So now we have solved the assets issue: on the production server (actually, in the container) we only have the files needed to run the site, nothing more. But, almost for free, we have also solved the initial setup. Because of the way Docker works, the first and the 100th deploy are basically the same operation; we just have to create a folder for persistent files (usually uploads), which the CI does, and we’re good to go.
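
To give an idea of how little per-site state that leaves on the server, here is a minimal, hypothetical docker-compose.yml for a single site (the registry URL, paths and ports are made up):

```yaml
# Hypothetical per-site compose file on the production host.
# The only persistent, per-site state is the bind-mounted uploads folder;
# everything else ships inside the image built by the CI.
version: "2"
services:
  web:
    image: registry.example.com/client/my-site:latest
    restart: always
    ports:
      - "127.0.0.1:8080:80"      # fronted by a reverse proxy on the host
    volumes:
      - /srv/my-site/uploads:/app/public/uploads
```

First deploy or hundredth, the operation is the same: make sure the uploads folder exists, then pull the new image and recreate the container.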

Gitlab CI + Docker = ❤️

Port city

While I specifically described how we use Docker for deployments at EPIC, it is far from the only use we have for it. We actually went “all in” with Docker: our development environment makes heavy use of containers for everything from exposing test sites to running gulp and providing database instances to everyone.
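
As a rough sketch (the images, ports and service names below are illustrative, not our exact stack), such a development setup can be wired together with a throwaway docker-compose.yml:

```yaml
# Hypothetical development compose file, for illustration only.
version: "2"
services:
  web:
    image: php:7-apache            # the test site, exposed locally
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html
    depends_on:
      - db
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: dev     # throwaway credentials, local use only
      MYSQL_DATABASE: my_site
  assets:
    image: node:latest             # runs gulp in watch mode next to the site
    working_dir: /app
    volumes:
      - .:/app
    command: sh -c "npm install && npm run watch"
```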

We also install most, if not all, of our internal services through Docker. Thanks to the kind maintainers of tools such as GitLab, Sentry, Discourse and many others, installing large, complex apps is mostly pain-free and lets us focus on building great experiences for our customers.

Sailing away

While we are already happy with how things turned out, we still have some kinks to iron out. Some processes are a bit unstable and would benefit from more work, and we still have to truly implement “zero-touch” deployments.

Zero-touch is especially important to us because we really want anyone at EPIC to be able to deploy a site from scratch, autonomously and with as little configuration as possible. That means provisioning a database, syncing files between environments, setting up HTTPS (this is almost completely transparent now thanks to Let’s Encrypt and Traefik, but could be automated further) and possibly spinning up a new VPS if needed.
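
The HTTPS part is a good example of how little is left to automate. A minimal sketch of such a setup, using Traefik 1.x-style labels (the domain, image tag and file names are made up, and the traefik.toml holding the entry points and Let’s Encrypt/ACME settings is not shown), could look like this:

```yaml
# Hypothetical reverse-proxy setup, illustrative only.
version: "2"
services:
  traefik:
    image: traefik:1.1
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik watch containers as they start
      - ./traefik.toml:/traefik.toml                # entry points + ACME configuration
      - ./acme.json:/acme.json                      # issued certificates
  my-site:
    image: registry.example.com/client/my-site:latest
    labels:
      - "traefik.frontend.rule=Host:my-site.example.com"   # Traefik routes this host to the container
      - "traefik.port=80"
```

With something like this in place, exposing a new site over HTTPS mostly comes down to adding two labels to its container; Traefik picks it up through the Docker socket and requests a certificate from Let’s Encrypt.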

By forcing us to rethink how updates are made and “frozen” into an image, Docker also keeps us away from dangerous shortcuts such as changing files directly on the production server or doing upgrades live. We are also much more conscious of what should and should not go into the final image, and we better understand the structure of the frameworks we use.

In the end, containers are a great way to publish websites, not just applications, and while not everything is flowers and rainbows, they have definitely improved the way we work at EPIC.
