Consuming Baby’s First Microservice…at scale!

Toss some containers at the problem. No, really. Do it.

Anyone who knows me also knows that I hate servers. Like, I really hate them, and my goal in life is to make them as disposable and inconsequential a loss as possible. Many cloud providers get that about me, and about the many people like me.

If you know me, you also know that my passion for containers is paralleled only by my passion for telling people when it’d be more prudent to be using them. You may have read several pieces to this effect by me, detailing some adventures in containerization, but until now I’ve focused on the high-level reasoning and implementation, and then given you a case study of a project I built using Rancher to manage my app’s various microservices (effectively, a web UI and two APIs).

Here’s something a little more practical, and a decent template for getting started yourself.

Let’s say you want to build a web app that graphs stock quotes based on whatever ticker symbol you pop in:

Consider the components, and how they work together logically in this case:

There are effectively two ways of building this app: 1) with finite scalability, or 2) built to scale almost infinitely.

This particular app is almost useless, and to anticipate traffic is to invite embarrassment, but for the sake of argument, let’s say it also gave you some dank insights into your portfolio and it caught on.

Because it’s a mega-popular application, I might begin to think of it less as a single web app and more as three components:

  1. a Web UI that…
  2. retrieves data from a database by way of…
  3. consuming a RESTful service

In this case, step 1 is the HTML page that (step 2) POSTs the ticker symbol to your API, which (step 3) acts on the database based on that data.

The reason you wouldn’t just write all of this into a single Sinatra app (like I did initially) is that it wouldn’t scale as effectively as it would with a separate entrypoint for each of these components.
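
To make the comparison concrete, here’s a rough sketch of that single-app version, assuming classic Sinatra; the routes, the inline form, and the placeholder response are mine for illustration, not the real project’s code:

```ruby
# monolith.rb -- the "one Sinatra app does everything" version (a sketch,
# not the real project): one process serves the page, the API, and the
# data access.
require 'sinatra'
require 'json'

get '/' do
  # In the real app this page renders the ticker form and the Canvas.js
  # graph; an inline form keeps the sketch self-contained.
  '<form action="/quotes" method="post"><input name="symbol"><button>Graph</button></form>'
end

post '/quotes' do
  symbol = params['symbol'].to_s.upcase
  halt 400, 'missing ticker symbol' if symbol.empty?
  # ...query the database and external sources right here, in-process...
  content_type :json
  { symbol: symbol, dataPoints: [] }.to_json
end
```

It works, but every concern scales (and fails) together, which is exactly the problem.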

Allow me to demonstrate:

In this first example, your app is not split out into three services; it’s just three instances of a monolithic application that happens to serve three functions. You can make this highly available, but it’d be less performant, you couldn’t granularly scale components as required, and there’d be little handling for a failed component (the entire instance of the app might go offline, for example).

By splitting the components into their own services that you’d distribute across your fleet, you address these availability/resiliency/scalability concerns:

Each service instance can connect to an instance of another service; if one instance goes down, the other components aren’t tied as directly to that service’s availability.
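
From the web service’s side, that decoupling can be as simple as pointing at a name or a load balancer instead of a specific container. In this sketch, `API_URL` is my own placeholder for whatever address the orchestrator or load balancer hands the web UI:

```ruby
# In the web UI service: talk to "the API service," not one particular
# container. API_URL is a placeholder injected via the environment (e.g.,
# a load balancer or service alias sitting in front of the API containers).
require 'net/http'
require 'uri'

API_URL = ENV.fetch('API_URL', 'http://api:4567')

def fetch_quote_json(symbol)
  uri = URI("#{API_URL}/api/quotes")
  Net::HTTP.post_form(uri, 'symbol' => symbol).body
end
```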

You would, of course, plan for failure™ and have some plan in place for replacing failed containers with new instances (Rancher, for example, allows you to rebuild a container when a service check, such as an HTTP check, fails).
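
The cheapest way to feed that kind of service check is to give each service an endpoint the scheduler can poll. This is a generic sketch (the route name is mine), not anything Rancher-specific:

```ruby
# A trivial HTTP health check for a service; if this stops returning 200,
# the orchestrator treats the container as failed and replaces it.
require 'sinatra'
require 'json'

get '/healthz' do
  content_type :json
  { status: 'ok' }.to_json
end
```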

My application in this sample has three components:

  1. A web UI where a Canvas.js graph is rendered. This service takes input from a form (the ticker symbol), and POSTs it to the API, which returns the data that populates the plots.
  2. The API service. It takes the POSTed symbol, queries a handful of external data sources and a database (I’ll detail this next), does some cursory processing to clean and standardize the data, and builds and returns a JSON response that gets parsed by Canvas.js back in the web service (see the sketch after this list).
  3. Data sources: In this case, there’s a highly available database that stores processed metrics used to calculate underlying value, along with some analysis results on earnings reports for registered users. This data is informed by external APIs (in this case Quandl, Yahoo Finance, and a handful of other financial services APIs), and things like quotes are returned as-is back to the web service.
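
Here’s roughly what the API service’s main route could look like. It’s a sketch under some assumptions: `QUOTE_SOURCE_URL` stands in for whichever external quote API you actually call (I’m not reproducing Quandl’s or Yahoo Finance’s real endpoints here), its response shape is assumed, and the `{ x:, y: }` pairs are just the data-point format the Canvas.js chart consumes on the web UI side:

```ruby
# api.rb -- sketch of the API service's main route. QUOTE_SOURCE_URL is a
# placeholder for an external quote API; its response shape is assumed.
require 'sinatra'
require 'json'
require 'net/http'
require 'uri'

QUOTE_SOURCE_URL = ENV.fetch('QUOTE_SOURCE_URL', 'http://quotes.example.com')

post '/api/quotes' do
  symbol = params['symbol'].to_s.upcase
  halt 400, { error: 'missing ticker symbol' }.to_json if symbol.empty?

  # Pull raw quotes from the external source, then do the cursory cleanup:
  # reshape each record into the { x:, y: } points the chart expects.
  raw = JSON.parse(Net::HTTP.get(URI("#{QUOTE_SOURCE_URL}/quotes/#{symbol}")))
  points = raw.fetch('quotes', []).map { |q| { x: q['date'], y: q['close'] } }

  content_type :json
  { symbol: symbol, dataPoints: points }.to_json
end
```

The database lookups for the stored metrics and earnings analysis would slot in alongside the external call; I’ve left them out to keep the sketch short.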

You can see from the above that this is a simple application, but one with independent segments that, if decoupled, could scale in a snap! You can bundle each of these components into a container image (or write up a Dockerfile that, for example, pulls updated data from a git repo before rebuilding an image), and then load balance connections from one service to the other. That way, while decoupled, the services can still communicate without one set of containers being totally dependent upon the other, avoiding some kind of cascading service failure just because a container crashed.
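
As a starting point for that bundling, a Dockerfile for the API service could be as small as the sketch below; the base image, gem setup, and entrypoint are assumptions about the project layout, not the real build:

```dockerfile
# Dockerfile (sketch) for the API service; adjust the Ruby version, gems,
# and entrypoint to match your actual project.
FROM ruby:3.2-slim

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

EXPOSE 4567
CMD ["bundle", "exec", "ruby", "api.rb", "-o", "0.0.0.0"]
```

Build one image per service (web UI, API), run as many replicas of each as you need, and hand the other services the load balancer’s address rather than any one container’s.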