Resisting the tech hype: our back-end will not be micro-services

Mathieu Besançon
Equisense
May 17, 2017

This article is a retrospective on the architecture choices for the back-end we developed at Equisense. It also grew out of a discussion on this excellent article on Hype Driven Development.

A shiny new tech out there?

As we were beginning to think about building our own back-end, the excitement of a greenfield project was hard to contain. What technologies were we going to use? How would we organize ourselves for a project with shared ownership? How would we deploy, now and in production? How would we store data? How would we transition from our legacy system?

Each of these questions surfaced at least four or five solutions seen on GitHub, Medium, Twitter, or one of our favorite newsletters. As both product and development time were limited, trade-offs were mandatory to move fast, which meant learning to accept quick-and-dirty answers on some issues while keeping a red flag on them for later.

Lots of people were asked for their opinion on the back-end, but few actually contributed to it, so listening to everything meant trying to comply with everyone’s vision of code, each influenced by their own language and domain.

“It’s not really DRY, is it?”

“That’s not really following the 12-factor app principles”

“I should just be able to clone the repo, make modifications and deploy with one command”

While I did not disagree with most of these assertions at the time they were made, what I have learned is to know when to listen to them and when to just make things work before making them perfect.

Go constructs allow for great adaptability without becoming unreadable to external eyes

Let’s deploy!

Let’s be honest, deployment and infrastructure have never been the strong suit of any of us at Equisense. Some have always used auto-magic solutions like Heroku or AWS Elastic Beanstalk. I saw two problems with these platforms:

  • Strong and sometimes outdated opinions on the tools you should use for development (or on vendoring tools, if you look at Heroku). I strongly dislike deployment information polluting the repository, such as being required to commit all vendored folders.
  • Very limited visibility on what is going on while deploying (uploading a zipped project to run the app, really?)

This also contradicted the high transparency of how things work in Go when a project is built and run. Once you understand every step of the process (without needing a degree in systems engineering), why bother re-hiding it behind magic?

Time to build services

It had been planned since the first discussions on the back-end architecture to build different services for different purposes. As we went through the steps of developing the core functionality of the back-end, it just felt right to split the logic into different packages while compiling everything into the same binary. The deployment process is then:

  • Build the binary with the appropriate options
  • Add it to a Docker “scratch” image
  • Push to a Docker repository
  • Pull and run the image on a server / cluster.

With these steps, the whole deployment takes about a minute from validation to run, which is totally fine given our update frequency.
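To make this concrete, here is roughly what those four steps can look like. This is an illustrative sketch, not our exact setup: the registry address, image name, and build flags are placeholders.

```sh
# 1. Build a statically linked Linux binary ("appropriate options"):
#    CGO_ENABLED=0 avoids libc, so the binary can run in an empty image.
CGO_ENABLED=0 GOOS=linux go build -o api .

# 2. Add it to a Docker "scratch" image, with a Dockerfile as small as:
#      FROM scratch
#      COPY api /api
#      ENTRYPOINT ["/api"]
docker build -t registry.example.com/api:latest .

# 3. Push to a Docker repository.
docker push registry.example.com/api:latest

# 4. Pull and run the image on a server / cluster.
docker pull registry.example.com/api:latest
docker run -d registry.example.com/api:latest
```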

Once we started working on features involving heavier computation, we split them into a separate repository and binary. The main API project had grown big enough, and these features were a totally different concern. So we had two reasons here:

  • Infrastructure: the computing power used for computation will spike and should not hinder the performance and speed of core API features.
  • Logic and organization: different people could work on the two projects without having to synchronize heavily.

While discussing with friends building real micro-services, I noticed the motivation for the decision to spin off a service was mostly the latter. We stated earlier that deployment should not invade development or the code. I believe the inverse statement holds too: development decisions should not heavily influence infrastructure, nor should team organization. “Heavily” is the key word here; of course the split becomes natural when 100 developers are working concurrently on the platform, or if you’re reaching 2 billion lines of code (which only Google seems to enjoy).

If you haven’t reached that point yet, there are some effects you must be aware of.

For time-sensitive tasks, the network is a pain.

Just look at the numbers: a round trip within a data center takes on the order of half a millisecond, against nanoseconds for an in-process call. Network calls are far more costly and failure-prone than making your CPU pay a slightly higher cost (with horizontal scaling to handle the load) while each call stays on the same machine. Think twice before making data bounce around.

One characteristic of micro-services is loose coupling

Loose coupling turns the link between components into an implementation-agnostic contract. If that is the only part you need, go for lighter solutions:

  • Use interfaces when needed, allowing everything to compile into the same binary (sketched after this list).
  • Ship docker-compose or similar solutions to still host everything on the same machines.
  • Leverage Go plugins to use some teams’ work as an independent but already compiled component.
  • Make different components communicate over HTTP even within a single binary (define endpoints for each service, with one master process mapping the ports).
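As a minimal sketch of the first point (every name here is hypothetical), a trend computation can hide behind an interface so the calling code never knows whether the implementation lives in the same binary or behind a network hop:

```go
package main

import "fmt"

// TrendComputer is the implementation-agnostic contract between the API
// and the trend logic (hypothetical name, for illustration only).
type TrendComputer interface {
	ComputeTrend(sessionIDs []string) (float64, error)
}

// localTrends satisfies the contract in-process: it compiles into the
// same binary as the caller, no network involved.
type localTrends struct{}

func (localTrends) ComputeTrend(sessionIDs []string) (float64, error) {
	// A stand-in for the real computation over the sessions.
	return float64(len(sessionIDs)), nil
}

// serveTrends depends only on the interface, so swapping localTrends for
// an HTTP-backed implementation later requires no change here.
func serveTrends(tc TrendComputer, sessionIDs []string) {
	trend, err := tc.ComputeTrend(sessionIDs)
	if err != nil {
		fmt.Println("trend unavailable:", err)
		return
	}
	fmt.Println("trend:", trend)
}

func main() {
	serveTrends(localTrends{}, []string{"session-1", "session-2"})
}
```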

Don’t neglect the code overhead

You can build micro-services using some of the nice frameworks out there, or using plain HTTP clients and endpoints, which is probably easier and carries less cognitive overhead, at least for a few services. Either way, it means defining and implementing an API on the back-end service and a client on the other side. This is not always worth it for simplistic services. If your external service just runs one stupid computation on provided input, server-less solutions might be more appropriate.
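To make that overhead concrete, here is a sketch of what even a one-function service costs with plain net/http: a wire contract, a server handler, and a mirror-image client. The /square endpoint is a made-up example, not one of our services.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Wire contract: even a one-function service needs request/response types.
type SquareRequest struct {
	X float64 `json:"x"`
}
type SquareResponse struct {
	Y float64 `json:"y"`
}

// Server side: an endpoint to define, implement, and keep compatible.
func squareHandler(w http.ResponseWriter, r *http.Request) {
	var req SquareRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	json.NewEncoder(w).Encode(SquareResponse{Y: req.X * req.X})
}

// Client side: the mirror image, plus error handling for the network hop.
func square(url string, x float64) (float64, error) {
	payload, err := json.Marshal(SquareRequest{X: x})
	if err != nil {
		return 0, err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var out SquareResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return 0, err
	}
	return out.Y, nil
}

func main() {
	http.HandleFunc("/square", squareHandler)
	go http.ListenAndServe(":8080", nil)
	time.Sleep(100 * time.Millisecond) // crude wait for the server to start
	fmt.Println(square("http://localhost:8080/square", 3))
}
```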

And again, deployment

As mentioned, deployment and infrastructure can be (are?) a pain in the neck for many developers. Each running service has its own version and must be deployed at its own pace. And because your services are loosely coupled, it’s up to you to make sure the contract established between them still holds when deploying. As the number of connections in the system grows quadratically with the number of nodes (services), this can quickly become a nightmare, especially while you’re going through heavy modifications of your model (as most start-ups can expect to be for quite some time).

One example: a computation (not-so-micro-)service

Equisense is currently re-adapting and extending its algorithms to help horse-riders understand how they rode during sessions recorded with our Motion sensor. Some algorithms compute metrics from one given session (we’ll label these aggregation), while another group leverages several sessions to provide riders with a wider view of their track record (regrouped as trend). The two groups are not called at the same time, nor for the same cases, so in a perfectly micro-serviced architecture, we would have split them into two services running separately and called from the main API gateway.

But why would we actually do that, except for the hype?

  • Because what they do is different? Nah: with this thinking, you would quickly slide into splitting absolutely everything into separate services.
  • Because they’ll cause high loads on servers? That’s a scaling problem, not a logic one.

In my opinion, one question that could justify splitting off a service is: if service XYZ is down, which features become unavailable, and is that acceptable? Let’s try with some examples:

  • If my authentication service is down, it would not be acceptable for the API to give no response at all; I expect at least a meaningful error message with a 500 HTTP status.
  • If my API gateway is down, the team’s data scientists still expect to be able to access both aggregation and trend functions to test them on new data.
  • If the service handling session aggregation is down, not having access to trends does not make the situation worse; users can still access their key information through the main API, simply not the analysis of the session data.

This process makes your architecture choices more conservative when it comes to spinning up new services, and it avoids creating overly complex architectures just to follow your intuition that two parts are “logically different”. For that purpose, just use packages.
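In our case, that could translate into a repository layout like this hypothetical one, where the aggregation/trend split is a package boundary rather than a network boundary:

```
backend/
├── main.go        # the single API binary, wiring routes to both packages
├── aggregation/   # package aggregation: metrics from a single session
└── trend/         # package trend: analysis across several sessions
```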

If you share a similar opinion, or if you have deployed or manage a micro-service architecture, don’t hesitate to react here or on Twitter. Oh, and we’re recruiting, by the way.


Mathieu Besançon is a PhD student solving large-scale decision problems for smart grids @ INRIA Lille & Polytechnique Montréal.