Optimising our Infrastructure to bring new languages into our stack

Zak Knill
Attest Product & Technology
5 min read · Dec 1, 2017

Bringing a new technology into your existing stack can be a daunting process, but with the right tooling, things get considerably easier.

At Attest, we have always tried to use the best tool for the job [insert cliché], especially when that tech is fun to use. At the beginning of our journey, when we only had a very small team, the right tool was a couple of Java monoliths. They gave our team huge benefits from really basic things, like not having to deal with transactionality in a distributed system. We knew this wasn't going to be a sustainable model for our systems, so it came to a point where we wanted to stop pushing more features into our monoliths and start breaking components out into well-defined (much smaller) services.

Some of our backend engineering team had experience with Go, and made a case for a new service to be written in it. This led to a POC, which morphed into a new service (you can read about that here). We knew we needed to make some changes to our pipeline and infrastructure to make it easier to bring new languages into our stack, so we looked at our existing architecture and distilled it down.

Simplified version of our pipeline

At a simplified level, we noticed three main areas that we needed to spruce up a bit to ease the transition to new tech.

  • Dev environment: all the tooling, IDEs, local dev processes.
  • Pipeline: safely delivering code — continuously.
  • Infrastructure: running and monitoring our apps.

We spent some time working on each of these areas to ensure that we could seamlessly integrate new tech into our stack, with two main goals:

  1. Increase the speed of delivery.
  2. Be language agnostic.

Infrastructure

When we had only a few apps, written in a single language, things were simpler, but not necessarily better. Many of the components associated with a distributed system lived inside our apps. Service discovery was through load balancers and DNS. Circuit breaking, tracing, TLS, connection pooling etc. were implemented in application code. If you're bringing a new language in, you don't want to have to re-implement these features, or find new libraries that do them for you; even though the languages are different, you're still violating the DRY principle.

Instead, we saw that the things we would normally associate with infrastructure were things we were getting for free (free in the sense that new services don't need to re-implement them, they can just exploit them). The instances our services ran on, and the containerisation and scheduling technologies, were already language agnostic. What if all the application code features that weren't business value features could also be language-agnostic, infrastructure-level technologies? Well, they can!

The split between our infrastructure and application level features, and the tech used for each one.

We opted to push as much as we could down into the infrastructure layer. Any service (in any language) that was wrapped in a docker container and could be deployed onto our EC2 instances using ECS would get a bunch of freebies:

  • Service discovery from Consul + Linkerd
  • Distributed tracing from Zipkin + Linkerd
  • Connection pooling, retries, circuit breaking and TLS from Linkerd

You can read more about our linkerd setup here.
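To make that concrete, the sketch below shows roughly how a linkerd (1.x) config could wire those freebies together: a Consul namer for service discovery, a Zipkin telemeter for tracing, and an HTTP router sitting in front of the services. The hostnames, ports and datacenter name here are illustrative assumptions rather than our actual config.

    # Illustrative linkerd (1.x) config; hostnames, ports and the datacenter
    # name (dc1) are assumptions, not real values.
    namers:
    - kind: io.l5d.consul        # service discovery backed by Consul
      host: consul
      port: 8500

    telemetry:
    - kind: io.l5d.zipkin        # ship trace spans to Zipkin
      host: zipkin
      port: 9410
      sampleRate: 1.0

    routers:
    - protocol: http
      # route /svc/<name> to the Consul service <name> in datacenter dc1
      dtab: |
        /svc => /#/io.l5d.consul/dc1;
      servers:
      - ip: 0.0.0.0
        port: 4140

Because all of this lives in linkerd's config rather than in any one service, a new service in any language picks it up without pulling in a single library.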

Moving more of these things into a language-agnostic layer meant that we could consider new languages based on their merits, and not based on the preexisting practices and code that we had. This gave us considerably more flexibility and allowed us to make better decisions.

Dev environment

When using tech like docker, and its ecosystem, you already get a bunch of language-agnostic things for free. We already had our dev environment apps wrapped in containers and talking to local databases and other services using docker networks. The dev environment was probably the area that got the smallest amount of attention. What we did do was drag some of our prod tooling down into dev, namely linkerd. Docker runs your apps, and linkerd manages service discovery, networking etc. for you in dev. In combination with linkerd you can also do cool things like seamlessly proxy any request for a service that isn't running on your local machine to one running in a cloud environment, as if it were running locally.

Proxying requests into dev env using linkerd and docker networks

We opted to use docker-compose yaml files for easier setup, because no one likes running a docker run command with 500 flags / args. Instead, you define it all in a docker-compose.yaml, run docker-compose up, and things just work™️.
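As a rough sketch of what that can look like (the service names, image tags and ports below are placeholders rather than our real setup), a dev compose file pairing an app with a local linkerd might be something like:

    # Illustrative docker-compose.yaml; image tags, names and ports are placeholders.
    version: "3"
    services:
      linkerd:
        image: buoyantio/linkerd:1.3.5      # version tag is just an example
        command: /config.yaml               # linkerd takes its config path as an argument
        volumes:
          - ./linkerd.yaml:/config.yaml     # e.g. a config like the one sketched earlier
        ports:
          - "4140:4140"                     # outgoing proxy port
      my-service:
        image: attest/my-service:local      # hypothetical local build of the app
        environment:
          - http_proxy=http://linkerd:4140  # send outbound HTTP calls through linkerd
        depends_on:
          - linkerd

Requests for services that aren't running locally can then be routed by linkerd to a cloud environment, as described above.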

Pipeline

Jenkins. Who doesn't love Jenkins? We've got a lot of love for Jenkins! We didn't need to make too many drastic changes here either. All of our build and deployment pipelines are in code, and using Jenkins' shared library feature, every command is a high-level function call.

Jenkinsfile sample

All we needed to do was add another high-level function (backed by lower-level build commands) to the library we used, and any new service could be built and deployed by our existing pipeline. 🎉
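To give an idea of the shape of this (the library name and step names below are hypothetical rather than our actual ones), a Jenkinsfile for a new service could be as small as:

    // Hypothetical Jenkinsfile; the shared library and step names are illustrative.
    @Library('attest-pipeline-library') _

    node {
        checkout scm
        stage('Build') {
            // shared-library step wrapping the lower-level docker / go build commands
            buildGoService name: 'my-service'
        }
        stage('Deploy') {
            // shared-library step wrapping the ECS deployment commands
            deployToEcs name: 'my-service', environment: 'production'
        }
    }

Each step hides the lower-level build and deployment commands inside the shared library, so supporting a new language means adding one new step, not a whole new pipeline.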

None of these changes are particularly drastic, but small changes like these allowed us to choose new languages and technologies without worrying about new overheads. By optimising the pipelines that deliver and run our applications, we can make decisions based on the merits of the tech rather than on whether it would create more work for ourselves.

TL;DR

We chose a bunch of technologies to try and distil our application code down to only the business logic that we cared about:

  • Docker — containerisation FTW. But also for docker-compose, and docker’s networking support (in dev).
  • Linkerd+Consul for service discovery
  • Linkerd+Zipkin for distributed tracing
  • Linkerd for connection pooling, TLS, circuit breaking and retries.
  • Jenkins for pipeline as code support, heavily exploiting its shared library features.

I gave a talk on this topic at the London Go User Group; you can watch it here.
