NewRelic with Docker, Moby and AWS CloudFormation

I recently thought about giving Docker for AWS a try, predominantly because of a planned switch to AWS CloudFormation. This article is not meant to explain the reasons for going this way; there is enough food for thought there to write another article. Nope, this is about my journey, which turned out to be more exciting than originally expected when I decided to pick this stack and eventually ran into a nasty problem with NewRelic.

I have been a NewRelic enthusiast (wow, am I really saying this?) for quite some time, and so came my recent decision to equip the company I am currently working for with this beauty.

An easy goal, or so it seemed. Apparently it wasn't that easy.

Docker offers a really nice and easy quick start for this route: it basically ships the entire CloudFormation template you need to get started. The template has some really nice elements. It points to predefined AMIs, smartly invokes a handful of AWS-related helper containers, and enables Docker Swarm out of the box. What an amazing idea. The template spins up Alpine-based instances (Alpine on the Moby kernel, Moby Linux aws-v17.03.1-ce-aws2) and runs a couple of preconfigured containers to get your swarm up and running. It takes care of internal network communication and gets you up to speed like a charm.
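For reference, spinning the stack up is a single CloudFormation call. The template URL and parameter names below are the ones I remember from the Docker for AWS quick start and may have changed since, so treat them as assumptions and verify against the current docs:

aws cloudformation create-stack \
  --stack-name docker-swarm \
  --template-url https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair \
               ParameterKey=ManagerSize,ParameterValue=3 \
               ParameterKey=ClusterSize,ParameterValue=5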

Sounds good? Yes, it does... but unfortunately much less so when you need to get newrelic-infra installed natively.

Quite a big deal!


So let’s have a look at what we have here.

We are initially launching our services in Europe, thus this is the exact AMI pulled by Docker's CloudFormation template: Moby Linux aws-v17.03.1-ce-aws2 (ami-3f994050).

After a little exchange with some NewRelic folks, it became pretty clear that there is no support for Alpine, nor will there be in the midterm. Too bad, since NewRelic is a significant part of our infrastructure. The only thing left was to try things on our own. Some options could be excluded right away: newrelic-infra has no open-source sources available, hence there was no way to compile things ourselves, and no way to wait for NewRelic to maybe, or maybe not, support Moby. That left only a few things to look at:

  1. Inject Debian by utilising debootstrap and chroot
  2. Learn how Docker's CloudFormation template works and reconstruct all needed prerequisites on another Linux distribution (Debian, CentOS, etc.)

First things first. :) Getting debootstrap in place appeared to be quite handy and looked like no big deal of a change. Being a bit naive at this point, we gave it a shot (see the sketch below). It did not take long until we had to accept that it would turn out to be much more problematic than just mounting procfs into the Debian tree. It additionally felt kinda awkward and not very stable.
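For the record, the attempt looked roughly like this; a minimal sketch, assuming debootstrap is installable on the host, with paths and the Debian suite picked purely for illustration:

# Bootstrap a Debian tree next to the running system and chroot into it.
debootstrap stable /mnt/debian http://deb.debian.org/debian
mount -t proc proc /mnt/debian/proc
mount --rbind /sys /mnt/debian/sys
mount --rbind /dev /mnt/debian/dev
chroot /mnt/debian /bin/bash
# ...then install newrelic-infra inside the chroot following NewRelic's Debian docs.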

So we went for the second option, took a deeper look into the template's configuration, and decided to pick CentOS Linux release 7.3.1611 (Core).
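Since AMI IDs differ per region and rotate over time, here is a hedged way to look one up instead of hard-coding it; the name filter is an assumption, so verify the result against the official CentOS marketplace listing:

aws ec2 describe-images \
  --owners aws-marketplace \
  --region eu-west-1 \
  --filters 'Name=name,Values=CentOS Linux 7*' 'Name=architecture,Values=x86_64' \
  --query 'Images[].{Id:ImageId,Name:Name}' \
  --output table

The returned AMI ID then replaces the Moby one in the template's region mapping.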


It is quite nice to see that the initial configuration mounts docker.sock and bin/docker into some of the AWS helper containers (docker4x/*-aws) so that they do not have to carry Docker themselves. This is a great way to do things, but it fails when you pick another AMI than Alpine, because your Docker binaries logically become non-executable within those containers: the glibc-linked binary of a CentOS host will not run inside the musl-based Alpine containers.

Docker containers mounting docker binary, docker lib as well as docker.sock
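Reconstructed from the template, the helper containers are started with mounts roughly like this; image name and tag are illustrative, not copied verbatim from the template:

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /var/lib/docker:/var/lib/docker \
  docker4x/init-aws:latest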

The way to approach this seemed quite easy: how about bypassing this dependency by running another container with Alpine and Docker (a Docker-in-Docker container) and providing its Docker capabilities to our AWS containers?

Docker-in-Docker mounting lib/docker
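A minimal sketch of that idea, assuming the official docker dind image in a version matching the swarm; the container name, tag, and shared host path are my choices, not requirements:

# Alpine-based Docker-in-Docker whose /var/lib/docker lives in a shared
# host path that the docker4x/*-aws containers can mount as well.
docker run -d --privileged --name dind \
  -v /var/lib/docker-dind:/var/lib/docker \
  docker:17.03-dind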

Kinda cool. The only thing I found a bit frustrating was not being able to mount the binary this way, too. Since I wanted to get this problem solved quickly, I decided to be pragmatic and simply copy the Docker binary onto our host.

Copy Docker binary onto host system
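One pragmatic way to do that copy, assuming the official docker image ships the client under /usr/local/bin (the tag is an assumption):

# Create a stopped container from the official docker image and copy the
# client binary out onto the CentOS host.
docker create --name docker-bin docker:17.03.1-ce
docker cp docker-bin:/usr/local/bin/docker /usr/bin/docker
docker rm docker-bin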

Finally, we had to make sure to set proper access rights on our copies of lib/docker, bin/docker and docker.sock (see the sketch after the next caption). Once this was done, we were actually able to run things smoothly.

Final tweaks on init-aws and guide-aws
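Putting it together, the tweak boils down to loosening permissions and pointing the helper containers at the host binary, the shared lib directory and the socket; the modes, paths and tag below are illustrative, not the exact values from our template:

# Access rights on the shared pieces (modes are illustrative).
chmod 0755 /usr/bin/docker
chmod 0660 /var/run/docker.sock
chmod -R 0755 /var/lib/docker-dind

# Example for guide-aws; init-aws gets the same mounts.
docker run -d \
  -v /usr/bin/docker:/usr/bin/docker \
  -v /var/lib/docker-dind:/var/lib/docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker4x/guide-aws:latest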

An optional modification was our decision to extract SSH key generation into a separate, external service and to provision our keys directly onto the host systems. However, you should also be able to keep using the SSH key generation through docker4x/shell-aws.

Adapt shell-aws for sshkey generation

This is the alternative way of doing things. (We place our SSH keys onto our hosts beforehand. No claim for perfection here ;))
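As a sketch of that alternative, this is roughly what runs on the host during bootstrap; the login user and key path are assumptions tied to our setup:

# Generate the host's SSH keys ourselves instead of via docker4x/shell-aws.
ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
ssh-keygen -t ed25519 -N '' -f /etc/ssh/ssh_host_ed25519_key

# Install the pre-distributed public key for the login user (here: centos).
install -d -m 0700 -o centos -g centos /home/centos/.ssh
install -m 0600 -o centos -g centos /tmp/team.pub /home/centos/.ssh/authorized_keys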

Eventually, there was one tiny change on meta-aws and l4controller-aws regarding bin, lib and sock … and there you go.

Finally, something surprisingly interesting happened with our SSH service on port 22 when building things on a CentOS distribution: unlike Moby, CentOS runs its own sshd on that port. This made us define 2222 as the SSH port instead. That's a change you might want to make in your template, if needed:

{
  "CidrIp": "0.0.0.0/0",
  "FromPort": "2222",
  "IpProtocol": "tcp",
  "ToPort": "2222"
}

Here you can see the full template:
