Above the Fog: Demystifying Microservices

Chris Romano
6 min read · Jan 16, 2020


(co-authored w/ Ben Mizel https://www.linkedin.com/in/ben-mizel/)

Buzzwords in tech are a dime a dozen, and it can be difficult to know which ones actually hold water. Here’s one that, unless you’ve been living under a rock, you’re sure to have encountered: microservices. Let me start by saying this one definitely holds water: an Olympic-sized pool’s worth.

In the days of old, web applications were built with what’s called a “monolithic” structure: a system design pattern in which all the application’s functions exist within a single, deployed instance. When developers scaled their service, they deployed additional instances of this monolith. Simple, but perhaps not an efficient use of resources. It won’t be news to you that applications encompass a swath of functions, all of which serve different amounts of traffic. Scaling an entire application based on the needs of one subfunction is like adding the whole spice cabinet when all you really needed was a bit more salt. How can we separate these flavors?

Enter our buzzword, microservices: a system design pattern that decouples the tightly wound functions of a monolithic application into an appropriate number of smaller sub-applications. You can get more salt without the pepper. An e-commerce website is the classic example. To the user, the workflow is the same: log in, browse products, add them to the cart, and submit orders all in one interface. Behind the scenes, however, each step in this workflow comprises its own microservice, all of which communicate with one another to preserve the seamless interface you’ve worked so hard to supply.

Monolithic vs. microservice architecture
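
To make the split concrete, here’s a minimal sketch of two such services, assuming Node.js with Express. The service names, ports, and routes are our own illustration, not a prescription (and each service would be its own process in a real system; they share a file here only for brevity):

```javascript
const express = require('express');

// Products service: owns browsing, and nothing else.
const products = express();
products.get('/products', (req, res) => {
  // A real service would query its own product database here.
  res.json([{ id: 1, name: 'Salt' }, { id: 2, name: 'Pepper' }]);
});
products.listen(4001);

// Orders service: owns ordering, deployed and scaled independently.
const orders = express();
orders.use(express.json());
orders.post('/orders', (req, res) => {
  // A real service would persist the order to its own database here.
  res.status(201).json({ orderId: Date.now(), items: req.body.items });
});
orders.listen(4002);
```

Because each service is its own deployable unit, you can run ten replicas of the products service and a single replica of orders.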

So why deconstruct the wheel into multiple, smaller wheels? I’ve touched on one reason already: scalability. Naturally, more people will view our products than actually purchase them. Now we can scale out our product-browsing service without also having to scale our ordering service, the implications of which are massive. We’re making more efficient use of the computing resources at our disposal, which ultimately means that company money is spent in proportion to service demand. And scalability is just the tip of the iceberg.

A microservice architecture also facilitates more streamlined Continuous Integration/Continuous Deployment (CI/CD) of updates and patches. If you’re not familiar with CI/CD, consider a traffic circle: where an intersection with stop signs brings traffic to a halt every time, a traffic circle allows cars to be continuously integrated into the intersection. Likewise, instead of shutting an application down to update, test, and redeploy it, CI/CD encompasses the software, strategies, and checks necessary to keep the traffic circle from coming to a halt. Teams building or updating a service won’t disrupt one another, and a new service version can be deployed alongside the previous one.

Traffic circle === CI/CD

What happens when a service’s business logic demands a unique tech stack? A microservice architecture affords developers the opportunity to choose the exact technologies that meet their needs.

Finally, assuming there’s a tool to see inside the microservice network (more on that later), discrete services allow developers to better isolate failures.

Sold yet? Not so fast — distributed services have some drawbacks as well, and these can all be neatly generalized into one word: complexity. One does not simply wake up, decouple their services, and bask in glory. Migrating from a monolith, or designing a microservice architecture from scratch, is a beast in its own right. What constitutes its own service? How will the API of each service be formulated? What communication protocol(s) will be used? How much time will it take? How will our CI/CD pipeline change?

And for those already working in a microservice environment, what challenges do they face? The main difficulty is the lack of windows through which to see inside the network and monitor communications. When a client request comes in, the server that initially handles it oftentimes needs to communicate with another service, and another, and another, and so on. We can think of this series of communications as an inter-service conversation. When a developer wants to take a closer look at these exchanges, however, they are left in the dark. There is nothing built into microservice networks that “associates” one HTTP request with another — nothing that says “request C was initiated by request B, which was initiated by request A.” They all simply appear to be discrete requests.
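
A toy example makes the gap obvious. Below, service A handles a client request by calling service B, using Node’s core http module (the ports and paths are made up for illustration). Nothing in the request B receives links it back to the client request that set the chain in motion:

```javascript
const http = require('http');

// Service B: all it sees is a bare request. There's no built-in field
// saying "this was triggered by a client request that hit service A."
http.createServer((req, res) => {
  console.log('B received:', req.method, req.url);
  res.end('inventory data');
}).listen(5002);

// Service A: handles the client's request by asking B for inventory.
// From B's perspective, A's call is just another discrete request.
http.createServer((req, res) => {
  http.get('http://localhost:5002/inventory', (fromB) => {
    fromB.resume();
    fromB.on('end', () => res.end('order placed'));
  });
}).listen(5001);
```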

It was this “missing link” — the absence of information connecting requests with their predecessors — that recently posed a significant challenge to our team’s ability to identify the sources of stress in a microservices application. We knew which components were handling the heaviest loads, and we knew which components had sent those requests, but that information was of little use without a way of knowing which components had originated the request chains in the first place.

Peering inside the network

The solution, it turned out, was implementing “context propagation” — the exchange, or “propagation,” of a unique identifier, or “context,” from one HTTP request to another. In Node.js applications, this is done by attaching a global trace object with unique headers to incoming HTTP requests and persisting that trace object across subsequent asynchronous resources using a Node API called async_hooks. If an incoming request has no context, it’s assumed to be new, and a context is applied. If the request initiates any subsequent requests, the context is propagated to them and, because the context is unique, the requests can correctly be identified as an associated conversation. Now, if these conversations are being logged to a database, the requests can be sorted by their context and analyzed however the developer sees fit.
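
Here’s a stripped-down sketch of that mechanism, assuming an Express-style middleware and an `x-trace-id` header. These names are our illustration, not necessarily what the published package uses:

```javascript
const asyncHooks = require('async_hooks');
const crypto = require('crypto');

// Map each asynchronous resource to the trace context of the request
// that (directly or transitively) spawned it.
const contexts = new Map();

asyncHooks.createHook({
  // When a new async resource is created, inherit its parent's context.
  init(asyncId, type, triggerAsyncId) {
    if (contexts.has(triggerAsyncId)) {
      contexts.set(asyncId, contexts.get(triggerAsyncId));
    }
  },
  // Drop the entry once the resource is gone, so the map doesn't leak.
  destroy(asyncId) {
    contexts.delete(asyncId);
  },
}).enable();

// Middleware: reuse the incoming trace ID if one was propagated to us;
// otherwise the request is new, so mint a fresh context.
function tracer(req, res, next) {
  const traceId =
    req.headers['x-trace-id'] || crypto.randomBytes(8).toString('hex');
  contexts.set(asyncHooks.executionAsyncId(), { traceId });
  next();
}

// Helper for outbound calls: attach the current context's ID so the
// next service can join the same conversation.
function tracedHeaders() {
  const ctx = contexts.get(asyncHooks.executionAsyncId());
  return ctx ? { 'x-trace-id': ctx.traceId } : {};
}

module.exports = { tracer, tracedHeaders };
```

Any outbound request made while handling a traced request, e.g. `http.get({ host, path, headers: tracedHeaders() }, cb)`, then carries the same ID, and logging that ID alongside each request is what lets you sort the stored conversations by context later.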

To offer another analogy, context propagation offered a way to essentially give each new request a baton. When that request completed its leg from one microservice to another, it passed the baton — in the form of a correlating ID — to the next request to carry. So by the time the final request — the anchor leg of the relay — completed its leg, we could know it belonged to the same “team” as every request that had carried the same baton.

Gold medal context propagation team from the last Olympics

The insight this data offered into the behavior of our system was as helpful as we hoped it would be, making it easier for us to debug communications and, more importantly, isolate failures. At this point, it really feels like data we can’t live without.

And if it seems like information that could be of use to any devs out there, we’d highly recommend giving context propagation a try. You’re welcome to play around with the npm package we published, for which we also built Chronos, an open-source visualization tool that surfaces not only communication data but the health of your microservices as well (https://github.com/oslabs-beta/Chronos). And if you don’t have a microservices architecture of your own to test it with, you can check out a basic dummy app on our GitHub repo.

Whatever you do, don’t look at microservices as some impenetrable black box. Like anything else, they can be made accessible when you have the right tools.

So, go forth and scale, continuously integrate and deploy, choose your dream stack, and isolate those failures!
