Your API Gateway should actually be a Message Queue

Or why we need Digital Osmosis

Gratus Devanesan
Code Smells
3 min read · May 16, 2018

Conventional API Gateways handle difficult concerns like routing and provide a uniform layer that gives outside applications access without requiring them to understand what is going on inside.

Conventional API Gateways operate on the request/response pattern that most of the internet is based on. But this creates the problem of having to provide a relatively quick response so that the browser doesn’t time out. That becomes unnatural when the interior uses an event-based architecture for cleaner decoupling, or otherwise triggers longer-running tasks.

Instead, the gateway should be a set of message queues organized into different channels. The interior should have a set of serverless functions (or microservices) listening to specific channels, waiting to process information arriving on those channels.
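As a rough illustration, here is what a channel consumer could look like if the gateway were backed by AWS SQS and the interior by Lambda functions. The channel name, message shape, and processOrder helper are assumptions made for the sketch, not anything the pattern itself prescribes.

```typescript
// Hypothetical sketch: a serverless function subscribed to one channel
// ("orders") of a queue-based gateway. Assumes AWS SQS + Lambda; the
// message shape and processOrder() are placeholders.
import { SQSEvent, SQSHandler } from "aws-lambda";

interface OrderMessage {
  orderId: string;
  payload: unknown;
}

// Placeholder for whatever domain logic sits behind this channel.
async function processOrder(message: OrderMessage): Promise<void> {
  console.log(`processing order ${message.orderId}`);
}

// Lambda pulls batches of messages from the "orders" channel; nothing
// outside the system ever calls this function directly.
export const handler: SQSHandler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    const message: OrderMessage = JSON.parse(record.body);
    await processOrder(message);
  }
  // Messages handled without throwing are removed from the queue by the
  // Lambda/SQS integration; failures are retried or dead-lettered.
};
```

The caller never waits on this function; it only gets an acknowledgement that the message was accepted into the channel.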

Digital Osmosis

Why do I call it Digital Osmosis? In cells, nutrients are pulled through the membrane by way of a concentration gradient. A request/response, in contrast, forces a message into the system. Having a message-queue-based entry point means that the interior pulls in a message when it wants to, and only the messages that it wants. The analogue of the concentration gradient is the ratio of serverless functions listening on a channel to the messages entering it.

Auto scaling based on message backlog will often lead to more consistent and reliable performance than scaling based on memory or CPU usage. Additionally, occasional downtime will not lead to any message losses.
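In practice this means the scaling signal is the queue depth (or message age) itself. A toy sketch of that logic, where the per-worker throughput and bounds are made-up numbers rather than recommendations:

```typescript
// Backlog-based scaling sketch: derive a worker count from the current
// queue depth instead of from CPU or memory. All figures are assumed.
interface ScalingConfig {
  messagesPerWorkerPerMinute: number; // assumed sustained throughput of one worker
  minWorkers: number;
  maxWorkers: number;
}

function desiredWorkerCount(queueDepth: number, cfg: ScalingConfig): number {
  const needed = Math.ceil(queueDepth / cfg.messagesPerWorkerPerMinute);
  return Math.min(cfg.maxWorkers, Math.max(cfg.minWorkers, needed));
}

// Example: 1,200 pending messages, each worker drains ~100 per minute.
console.log(
  desiredWorkerCount(1200, {
    messagesPerWorkerPerMinute: 100,
    minWorkers: 1,
    maxWorkers: 50,
  })
); // -> 12
```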

Clients need to become event based

This goes hand in hand with HTTP/2 Server Push. Instead of forcing a request onto the server and demanding a response, the client needs to be designed to listen to the server and process messages as they arrive.
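HTTP/2 push itself is not directly scriptable from the browser, so in practice this listening pattern is usually built with Server-Sent Events or WebSockets. A minimal sketch using SSE, where the /events endpoint and the message shape are assumptions:

```typescript
// Minimal event-driven client sketch using Server-Sent Events.
// The /events endpoint and ServerMessage shape are assumptions.
interface ServerMessage {
  type: string;
  data: unknown;
}

const events = new EventSource("/events");

// The client no longer asks "what changed?"; it reacts whenever the
// server decides to push something down the channel.
events.onmessage = (event: MessageEvent<string>) => {
  const message: ServerMessage = JSON.parse(event.data);
  console.log(`server pushed ${message.type}`, message.data);
};

events.onerror = () => {
  // EventSource reconnects automatically; just note the interruption.
  console.warn("event stream interrupted, retrying...");
};
```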

This is important from the perspective of modern real-time architectures. When we build platform-centric applications the server knows more than the client; in dynamic, multi-user, concurrent situations the client often does not know what to ask for. State changes continuously as many users interact at the same time with the same data or process.

We need to flip the traditional request/response architecture on its head. The server should push data and pull in data. The server in effect becomes the facilitator.
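To make the facilitator role concrete, here is a rough sketch of a server that pulls messages from an internal channel and pushes them out to every connected client. It uses the ws WebSocket library; the subscribe() helper is a stand-in for whatever queue client the interior actually uses.

```typescript
// Facilitator sketch: pull from an internal channel, push to all clients.
import { WebSocketServer, WebSocket } from "ws";

// Stand-in for a real queue subscription (assumption): in practice this
// would be backed by SQS, RabbitMQ, Kafka, etc. Here it emits a demo
// message every few seconds so the sketch is self-contained.
function subscribe(channel: string, onMessage: (body: string) => void): void {
  setInterval(() => {
    onMessage(JSON.stringify({ channel, changedAt: Date.now() }));
  }, 5000);
}

const wss = new WebSocketServer({ port: 8080 });

// Whenever the interior publishes a state change, fan it out to every
// client that is currently connected and listening.
subscribe("state-changes", (body) => {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(body);
    }
  }
});
```

Note that no client asked for any of these messages; the server pulls them in from the interior and decides what to push out.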

There is one more change

The server has become a facilitator, which changes the fundamental client/server relationship. But the next step is that the server is completely influenced by the waves of messages coming in from many clients.

In the old architectures the client and server acted as a singular, insulated pair. In new architectures we can no longer have insulated pairs as many clients would want to share state. The server now becomes the facilitator or controller, always working with many clients, instead of dedicating itself to a single client with each request.

This is a significant paradigm shift, but I feel a very necessary one.
