U Can’t Touch This?!

I was 15 when MC Hammer conquered the charts and made the world a better place wearing parachute pants. More than 20 years later, containers make the (devops) world a better place. One important aspect of containers is certainly that they bundle up all application-level dependencies, providing a guarantee that an app runs exactly the same way, from dev to prod.

Another important aspect is the isolation guarantees. In the past decade, a lot of work went into making the Linux kernel namespace-aware and establishing a throttling/accounting mechanism (aka cgroups).

Let’s now have a concrete look at how isolation works in DC/OS. If you want to try out the following yourself, you’ll need two things: a DC/OS 1.8 cluster and the step-by-step deep-dive instructions.

A legacy app

First we have a look at a so-called legacy app. Legacy in this context means an app that actually makes you money, today. For whatever reason you can’t or don’t want to create a, say, Docker image for it right now, so you’d launch it in DC/OS as a service, something like this:
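A minimal Marathon app spec along these lines could look as follows; note that the id and cmd here are made-up placeholders for illustration, not the exact app from the walkthrough:

    {
      "id": "/legacy-app",
      "cmd": "while true; do date; sleep 5; done",
      "cpus": 0.5,
      "mem": 128,
      "instances": 1
    }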

Note that I’m executing a script here that captures the world view, that is, information about the user and processes, CPU and memory, as well as networking stats. While the script is somewhat useful, many Linux tools are not cgroups-aware, and hence we end up using systemd-cgtop as well as manually inspecting the cgroups.
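To give you a rough idea, a minimal sketch of such a world-view script could look like this (the actual script from the deep-dive instructions differs; this only illustrates the kind of data gathered):

    #!/usr/bin/env bash
    # Sketch of a "world view" script: user and processes, CPU, memory, networking.
    echo "=== user and processes ==="
    id
    ps aux | head -n 10
    echo "=== CPU ==="
    grep -c ^processor /proc/cpuinfo
    echo "=== memory ==="
    free -m
    echo "=== networking ==="
    ip addr show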

Docker

Next, we look at a containerized app: someone (hopefully your CI/CD pipeline) has created a Docker image, and you’d launch it as a DC/OS service, for example like the following:
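A sketch of such an app spec, roughly along the lines of what Marathon expected in DC/OS 1.8 (the id and image are placeholders, not the app from the walkthrough):

    {
      "id": "/docker-app",
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "nginx:1.11",
          "network": "BRIDGE",
          "portMappings": [
            { "containerPort": 80, "hostPort": 0 }
          ]
        }
      },
      "cpus": 0.5,
      "mem": 128,
      "instances": 1
    }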

The world view from within the container now looks like this:

Note that PID 1 is the actual app we’re executing, and that although CPU and memory cgroups are in place (defined by the cpus and mem fields in the app spec), the standard tools and system files used to determine available resources, in our case free and /proc/cpuinfo, show the same results as on the DC/OS agent, that is, the host.
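If you want to see the limits that are actually enforced, you can read the cgroup files directly from within the container; the paths below assume the cgroup v1 layout used by DC/OS 1.8-era agents:

    # free and /proc/cpuinfo report the host's resources, but the cgroup
    # files reflect the limits actually set for this container.
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # memory limit in bytes
    cat /sys/fs/cgroup/cpu/cpu.shares                 # relative CPU weight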

Universal container runtime

The last case is (from the user’s perspective) the same as the previous one in that a Docker image is required to run the app. However, since we’ve introduced the universal container runtime in DC/OS, we can run Docker (and appc) containers directly, which means one less moving part (the Docker engine) and a more uniform handling of resources. To use the universal container runtime, let’s submit the following app spec:
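A minimal sketch of such a spec, assuming the Marathon syntax of that era where a container of type MESOS together with a docker.image triggers the universal container runtime (again, id and image are placeholders):

    {
      "id": "/ucr-app",
      "container": {
        "type": "MESOS",
        "docker": {
          "image": "nginx:1.11"
        }
      },
      "cpus": 0.5,
      "mem": 128,
      "instances": 1
    }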

Note the type field in the above app spec; if you want to learn more, the Mesos docs have all the details.


Hope you had some fun learning a bit about container handling in DC/OS; I plan to provide some more details on this topic in the near future.