A new and fast approach to microservices
Unleash the raw power of Data-Driven Design to build a microservices system
REST API development can be a mess
Almost every modern application is powered by a set of REST API services that communicate with the backend servers. The microservices pattern is widely known and used.
The common implementation is to build lots of almost-cloned standalone web applications, each with slightly different logic hard-coded into it.
Then each of them has to be built and deployed as its own daemon, service or container and, as a consequence, the system ends up distributed across lots of physical or virtual servers.
Reading these few lines, it’s easy to see that this way of operating quickly escalates the effort required to deploy, coordinate, manage, monitor and maintain all those different entities.
The problem
But what if we wanted a way to have a brand-new REST API method alive and kicking in minutes instead of weeks?
What if we didn’t want to bother with coding web servers, authentication, request validation and request limits?
And what if we didn’t want to worry about building, deploying, logging, monitoring or scaling either?
Minimizing duplicated code is one of the best practices in software design. The idea is to extend this concept to a whole set of microservices and end up with just one service. It can be deployed as a service, daemon or container, and it includes all the common operations itself.
Let’s do it
So I created a microservice the standard way, using .NET Core (C#), and gave it the ability to be deployed as a Linux daemon, a Windows service or a Linux container. Then I added request authentication, validation, queueing, logging, throttling and caching, along with the tools needed to communicate with different systems. I deployed it on AWS as a Linux container with autoscaling (and also as a Windows service on two on-premises virtual servers, but I will focus on the AWS deployment).
And nothing else. This microservice, in the end, does absolutely nothing.
What we did
Well, I deployed a load-balanced, scalable, monitored and completely useless microservice. But let’s take a closer look at this useless piece of software.
As we have already seen in the last picture, the microservice has a load balancer in front of it and is part of an Auto Scaling Group.
As I said before, there are other common components in this microservice:
- It implements our proprietary authentication methods, but it can be modified and adapted to any authentication scheme
- It can safely encrypt and decrypt pieces of information
- It is “account aware”: it can detect the account related to the request from the authentication token or from the URL path, and can refuse requests from accounts that are not allowed
- It tracks request sizes and can refuse requests that are too big
- It can validate a request’s JSON body against a JSON Schema, refusing malformed ones and returning precise details about the error
- It is CORS compatible
- It relies on a Redis cache to coordinate the request flow, so it can throttle requests at the cluster level. Redis is also used to cache authentication tokens, which speeds up the authentication layer
- Basic logging information for every request is automatically sent to an Elasticsearch server for monitoring and debugging purposes
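The article doesn’t show how the cluster-level throttling is implemented, but coordinating it through Redis typically means sharing a counter per account and time window across all nodes. Here is a minimal Python sketch of that fixed-window pattern, with an in-memory stand-in for the Redis client so it runs anywhere (the key naming and per-minute limit are assumptions, not the article’s actual scheme):

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client, supporting just the
    INCR/EXPIRE semantics this sketch needs."""
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def incr(self, key):
        value, expires_at = self.store.get(key, (0, None))
        if expires_at is not None and time.monotonic() >= expires_at:
            value, expires_at = 0, None  # window expired, start over
        value += 1
        self.store[key] = (value, expires_at)
        return value

    def expire(self, key, seconds):
        value, _ = self.store.get(key, (0, None))
        self.store[key] = (value, time.monotonic() + seconds)

def allow_request(redis, account, limit_per_minute):
    """Fixed-window throttle: every node in the cluster increments the
    same Redis counter, so the limit applies cluster-wide."""
    window = int(time.time() // 60)          # current one-minute window
    key = f"throttle:{account}:{window}"
    count = redis.incr(key)
    if count == 1:
        redis.expire(key, 120)               # let old windows die off
    return count <= limit_per_minute

r = FakeRedis()
results = [allow_request(r, "acme", 3) for _ in range(5)]
print(results)  # first three allowed, the rest refused
```

Against a real Redis, `INCR` and `EXPIRE` are atomic enough for this pattern, which is why a single shared cache can coordinate throttling across an entire Auto Scaling Group.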
At this point, we can say that it does nothing, but it could do a great many things!
Instruct it how to get things done
I needed a way to assign Actions (methods) to the cluster. Each Action should be “passed” to the cluster at run-time and should contain all the info needed to execute it. The most convenient place to put the Action info was a relational database, in my case MySQL.
I created two tables: ActionTemplates and ActionScripts.
Inside ActionTemplates I put all the Action properties, but it’s inside the ActionScripts table that the magic happens.
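The exact columns of the two tables aren’t listed in the article; based on the template, method and authtype properties used later, a plausible minimal schema might look like this (sketched here with SQLite for portability, with any column beyond those three being an assumption):

```python
import sqlite3

# Hypothetical schema: the article only says that ActionTemplates holds
# the Action properties and ActionScripts holds the logic.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ActionScripts (
    id     INTEGER PRIMARY KEY,
    script TEXT NOT NULL            -- the Lua source, stored as data
);
CREATE TABLE ActionTemplates (
    id        INTEGER PRIMARY KEY,
    template  TEXT NOT NULL,        -- e.g. /example/{email}
    method    TEXT NOT NULL,        -- e.g. GET
    authtype  TEXT NOT NULL,        -- e.g. bearer
    script_id INTEGER REFERENCES ActionScripts(id)
);
""")
conn.execute("INSERT INTO ActionScripts (id, script) VALUES (1, '-- Lua logic here')")
conn.execute(
    "INSERT INTO ActionTemplates (template, method, authtype, script_id) "
    "VALUES ('/example/{email}', 'GET', 'bearer', 1)")

# At request time the host resolves the route to its script:
row = conn.execute(
    "SELECT t.template, t.method, s.script "
    "FROM ActionTemplates t JOIN ActionScripts s ON s.id = t.script_id").fetchone()
print(row)
```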
The Magic of Data-Driven Design
The core of this whole system is the ability to manage an Action’s logic as data.
In contrast to the standard “clone-build-deploy” pipeline, where logic is hard-coded in the microservices, here there is no hard-coded logic at all. Action logic is dynamically loaded from the database as if it were just another one of the Action’s properties.
I chose Lua as the scripting language because I tested it and found it easy to embed and really fast to execute. I wanted the Lua code to contain only logic, so at request-execution time I pass two objects to the Lua function: Context and Tools.
The Context object allows the Lua function to read request data and write response data; it also contains information about the active environment (e.g. Preproduction). It is the equivalent of the request context in hard-coded logic. The Tools object allows the Lua function to interact with the host and other systems: it can make REST calls, run database queries, invoke Lambda functions, read and write from S3, and more.
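In the real system the scripts are Lua hosted in .NET; the underlying pattern of storing logic as a string and executing it against a Context and a Tools object can nonetheless be sketched in Python (all class and method names below are hypothetical, not the article’s API):

```python
class Context:
    """Equivalent of the request context: read the request, write the response."""
    def __init__(self, path_params, environment):
        self.path_params = path_params
        self.environment = environment   # e.g. "Preproduction"
        self.status = 200
        self.body = {}

class Tools:
    """Gateway to the host and other systems (REST, DB, S3, ...)."""
    def __init__(self, db):
        self._db = db
    def db_query(self, email):
        return self._db.get(email)

def run_action(script_source, context, tools):
    # The action's logic arrives as data (a string loaded from the
    # database) and is executed against the two injected objects,
    # mirroring how the host executes the stored Lua scripts.
    exec(compile(script_source, "<action>", "exec"),
         {"Context": context, "Tools": tools})

SCRIPT = """
items = Tools.db_query(Context.path_params["email"])
if items is None:
    Context.status = 404
    Context.body = {"Message": Context.path_params["email"] + " was not found"}
else:
    Context.status = 200
    Context.body = {"Message": "OK", "Items": items}
"""

ctx = Context({"email": "known@email.com"}, "Preproduction")
tools = Tools({"known@email.com": [{"something": "abc"}]})
run_action(SCRIPT, ctx, tools)
print(ctx.status, ctx.body)
```

The important property is that `SCRIPT` is plain data: changing the behavior of the cluster means updating a row, not rebuilding and redeploying a binary.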
How to create a web method
Now that I have all the tools, it’s time to create a web method: for example, a basic method that authenticates using our standard REST API OAuth token, receives an email address and looks up some information related to that email in the database. The database access credentials are retrieved from another web service.
First, I add the following script in ActionScripts:
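The original Lua script isn’t reproduced here, but from the description and the responses shown below its logic can be inferred. This is a hedged Python rendering of what such a script plausibly does, with the Tools calls stubbed out and a made-up credentials endpoint (every name here is an illustration, not the article’s code):

```python
class StubTools:
    """Minimal stand-in for the Tools object the host would inject."""
    def rest_get(self, url):
        # Would call the credentials web service; stubbed for the sketch.
        return {"user": "api", "password": "secret"}
    def db_query(self, creds, email):
        # Would run the real lookup using the fetched credentials.
        known = {"known@email.com": [{"something": "abc", "otherthing": "cba"}]}
        return known.get(email)

def action(context, tools):
    # 1. Fetch DB credentials from another web service (hypothetical URL).
    creds = tools.rest_get("https://internal/credentials")
    # 2. Look up the email taken from the URL path.
    email = context["email"]
    items = tools.db_query(creds, email)
    # 3. Shape the response exactly like the examples below.
    if items is None:
        return 404, {"Message": email + " was not found"}
    return 200, {"Message": "OK", "Items": items}

print(action({"email": "unknown@email.com"}, StubTools()))
```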
And then I insert a row in ActionTemplates with:
- template = /example/{email}
- method = GET
- authtype = bearer
And that’s all folks.
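Matching an incoming URL against a template like /example/{email} is the one piece of routing the host has to do itself. A minimal sketch of how a router might turn the template into a pattern and extract the path parameters:

```python
import re

def compile_template(template):
    """Turn a route template like /example/{email} into a regex
    that captures each {name} placeholder as a named group."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    return re.compile("^" + pattern + "$")

route = compile_template("/example/{email}")
match = route.match("/example/unknown@email.com")
print(match.groupdict())  # {'email': 'unknown@email.com'}
```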
When I call GET /example/unknown@email.com, I will receive as a response:
404 — { "Message": "unknown@email.com was not found" }
Whereas, when I call it with an email address that exists, I will receive as a response:
200 — { "Message": "OK", "Items": [{ "something": "abc", "otherthing": "cba" }] }
Adding new web methods will require very little effort.