Micro Service Composition

This weekend sees me travelling off to Waterford for a second year, with Gabriel Cebrian, a recent joiner to TES who came on board as part of the Blendspace acquisition.

This year I’m talking about some of our recent work at TES, rebuilding a core part of our technology platform using a micro service architecture.

Why micro services? Well, it’s an evolution of all the thinking and work I’ve done since I was at ITV 7 or 8 years ago, trying to bring their disparate IT systems together via the enterprise equivalent, ‘Service Oriented Architecture’. The differences between the two are largely semantic (plus the absence of software companies and management consultants driving a marketing message over substance, and {} instead of <>), and I can cover them in another post.

A lot of what I did when at MailOnline pushed this even further, but we didn’t really get it right. What I’m talking about in this post is the 3rd generation attempt.

To get a good feel for what a micro service is, and isn’t, I’d suggest you read some of these:


Compoxure tries to solve a deceptively simple problem: if you break a page up into a more distributed set of underlying services, how do you bring them back together to create a single page?

Break a page apart? What are you talking about? Well, consider this:

A simple resource page on TES.

This somewhat simple page (which is very important from an SEO perspective), is clearly made up of a number of disparate parts.

  1. Site masthead and navigation
  2. Resource summary and download
  3. Reviews
  4. Author panel
  5. Recommended / Related resources

Now, the interesting thing about each of these sections is that they each have different data behind them, and they each change at very different rates.

The site masthead needs to be re-used across the site, so it really shouldn’t be stored or managed by the application that renders this page.

The reviews panel is also generic, and can be used across the site, not just on resources. It also changes (from a data and caching perspective) whenever a user leaves a review, which is at a different rate to the summary above it.

So, imagine a world where we’ve drunk the micro service kool aid, and so create a set of distinct services — all micro ;) — that do one thing.

  1. Masthead service — for a given input URL it will provide you with the masthead and navigation.
  2. Resource Summary — for a given resource ID it will give you back the JSON data that represents a resource, or an HTML fragment that represents that data rendered on the page.
  3. Resource Reviews — for a given resource ID it will give you back the JSON data that is the reviews, or an HTML fragment that represents the reviews rendered. It also exposes an endpoint for creation of a new review, as well as reporting abuse (e.g. a review that is inappropriate).
  4. Author Service — as above but for authors.
  5. Recommended Resource Service — for a given resource and user combination, send back a list of resources you think the user would also be interested in.

Great — these can all be built, have their own data sources (kept in sync where necessary by asynchronous messaging in the background — e.g. as a resource is created or changed it drops a message onto an MQ) and be deployed independently — even by different teams.
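As a sketch of that background sync, here is roughly how a consuming service might apply those MQ messages to its own local copy of the data. The message types and shapes here are made up for illustration, not our actual schema:

```javascript
// Hypothetical sketch: a downstream service (e.g. Resource Reviews)
// keeps its own read model in sync by consuming resource messages
// from a queue. An in-memory Map stands in for its real data store.
const localStore = new Map();

// Called once per message taken off the queue
function onResourceMessage(message) {
  if (message.type === 'resource.created' || message.type === 'resource.updated') {
    localStore.set(message.resourceId, message.payload);
  } else if (message.type === 'resource.deleted') {
    localStore.delete(message.resourceId);
  }
}

// e.g. the resource service drops this on the MQ when a resource is created
onResourceMessage({
  type: 'resource.created',
  resourceId: '12345',
  payload: { title: 'Fractions worksheet' }
});
```

The point is that each service owns its data and only ever hears about changes asynchronously, so no service calls another synchronously just to stay consistent.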

Now, imagine you’re the team that has the actual responsibility to build this page. How do you put it together?

The typical option list is as follows:

  1. Build a ‘Resource Front End’ application. Have it respond to the original request, and then make a number of service calls for JSON data from each of the services, and then render the entire page.
  2. Use Ajax. Build a single page app that calls back to the various services for JSON and renders client side.
  3. Use ESI. Use Edge Side Includes — serving a fairly flat response and letting the edge server compose HTML together.
  4. Use SSI. Same as above, but closer to home with Server Side Includes.

Now, most organisations go for one of the first two. But both have problems.

If you go with the ‘Front End’ application, you’ve just recreated an application that is in its first stages of becoming a monolith, or at the very least a single choke point for change.

To make a meaningful change to any part of the page, you’ll need to modify and test not just the service but also the app that brings it all together. Worse, because you’re rendering the whole page in one go, an error in any one of the disparate services (e.g. the masthead) will actually cause the entire page not to render, so every change to every service needs testing against the page. You then start implementing code to call the services in parallel, circuit breaker patterns, etc. etc.
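To give a feel for what that extra machinery looks like, a circuit breaker can be sketched in a few lines of plain JavaScript. This is illustrative only; in production you’d use a hardened library:

```javascript
// Minimal circuit breaker sketch: after `threshold` consecutive
// failures the breaker "opens" and calls fail fast (without touching
// the struggling service) until `resetAfter` milliseconds have passed.
function createBreaker(fn, { threshold = 3, resetAfter = 5000 } = {}) {
  let failures = 0;
  let openedAt = null;

  return async function guarded(...args) {
    if (openedAt !== null && Date.now() - openedAt < resetAfter) {
      throw new Error('circuit open'); // fail fast while open
    }
    try {
      const result = await fn(...args);
      failures = 0;       // any success resets the breaker
      openedAt = null;
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= threshold) openedAt = Date.now(); // trip the breaker
      throw err;
    }
  };
}
```

Multiply that by timeouts, parallel fan-out and per-service error handling, and the ‘Front End’ app quickly accumulates a lot of plumbing that has nothing to do with the page itself.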

The second, the Single Page Application approach, is fantastic if you’re behind the firewall, or delivering an application rather than a content-heavy site that depends on SEO. You could render HTML on both client and server, React or Airbnb Rendr style, but this means you have to bet the farm: you’re forever stuck using one of these approaches across your entire estate, as well as Node.js.

This is not very micro service-ey. Note that if you are behind the firewall, I’d strongly recommend using this approach above all others: it’s by far the simplest for developers to reason about and understand.

ESI and SSI are interesting — as they point us towards a possible solution, but both have their own challenges when used at scale. What if we could just put something in our HTML markup (e.g. from a CMS) that instructs a layer in our architecture to do the composition for us? But in a way that is simple to understand, decoupled and reliable? It could be its own micro service.

Compoxure is a composition proxy. It’s built as express middleware, so you can build your own application (there is an example in the source) that has all of your configuration, logging etc. vs having to use ours.
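Wiring it up might look something like the sketch below. The exact configuration shape lives in the example app in the repo; the hosts, key names and values here are illustrative assumptions, not the definitive API:

```javascript
// Illustrative Compoxure-style configuration sketch. Check the example
// app in the repo for the real shape; everything here is an assumption
// for the sake of the sketch.
const config = {
  backend: [{
    pattern: '.*',                        // which incoming requests to proxy
    target: 'http://cms.internal:3000',   // backend serving the bare-bones HTML
    timeout: 5000,
    quietFailure: true                    // collapse failed fragments rather than erroring
  }],
  parameters: {
    servers: {
      // would resolve {{server:resource}} in the cx-url markup
      resource: 'http://resource-service.internal:3001'
    }
  }
};

// In your own express app you would then mount the middleware,
// roughly: app.use(require('compoxure')(config));
```

Because it is just middleware, your own application keeps ownership of logging, config management and deployment; the proxy is one more small, replaceable piece.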

It’s deceptively simple:

Compoxure deployment architecture

You put Compoxure in front of a backend that serves HTML that presents the bare bones of the page (as much or as little as makes sense). If you put it in front of your CMS it could actually serve quite a lot of the page.

The HTML contains simple markup that instructs Compoxure to go and get some additional content on the way through (formatted to make it easier to read):

<div cx-url='{{server:resource}}/resource-summary/{{param:resourceId}}'
     cx-cache-key='resource:{{param:resourceId}}'
     cx-timeout='1s'
     cx-statsd-key='resource_summary'>
</div>

This declaration will be parsed by Compoxure, which will do the following if you request /teaching-resources/resource-12345/:

  • Check the cache at resource:12345 for content. This is currently Redis only, but you could build an adaptor for something else.
  • If miss or stale:
      • Make a call out to the resource server (defined in config so it can be changed centrally), at /resource-summary/12345.
      • It will wrap this call in a circuit breaker, along with a short timeout, so if the service starts to struggle it won’t hammer it into the ground.
      • If there is any issue, it will simply close up the space (unless configured to display the error, e.g. in development).
      • If it is a good response, it will add it to the cache, and render the response inside the div (or replace the div entirely if you tell it to).
  • It will call the logger and statsd functions (which you wire up to your own handlers), so you can keep a very close eye on what it is doing and how it is performing.
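The per-fragment flow above boils down to something like this. It’s a deliberately simplified sketch (no staleness handling, circuit breaker or stats), with the cache and the HTTP call injected so the example stays self-contained:

```javascript
// Simplified sketch of the fragment resolution steps described above.
// `cache` is any Map-like store (Redis in the real thing) and
// `fetchFragment` stands in for the HTTP call to the backing service.
async function resolveFragment(cacheKey, cache, fetchFragment) {
  const cached = cache.get(cacheKey);
  if (cached !== undefined) return cached;  // cache hit: serve straight away

  try {
    const html = await fetchFragment();     // call out to the service
    cache.set(cacheKey, html);              // populate the cache on success
    return html;
  } catch (err) {
    return '';                              // on any failure, close up the space
  }
}
```

The key property is the last line: a failing fragment degrades to an empty space rather than taking the whole page down with it.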

What this means is that you can now safely make as many changes as you like to the resource summary service, and provided it continues to render HTML when asked, the risk of any one change breaking the entire page is drastically reduced.

To learn more, check out the GitHub repo — it’s public — and any ideas or contributions are of course very welcome.


If you’re coming to Waterford for NodeConf, I’ll be talking about this and another tool called Bosco, which solves the parallel question: if you have a set of distributed services, how do you manage the static assets like JS and CSS?

See you in Waterford!