The Evolving Architecture

Mohammed Brückner
Serverless and Low Code pioneers
7 min read · Sep 7, 2020

This article is part of #ServerlessSeptember. You’ll find other helpful articles, detailed tutorials, and videos in this all-things-Serverless content collection. New articles from community members and cloud advocates are published every week from Monday to Thursday through September (2020).

Find out more about how Microsoft Azure enables your Serverless functions here.

Agile mirrors how systems evolve in nature: iterations that are often more experimentation than anything else, because the output is not perfectly projected but yet to be seen. Changes arrive as gradual iterations, leaving you with an ever-evolving system in which you can steer your way forward, responding to learnings along the way and to the overall environment with its climate and challenges.

And sure enough I am a believer in this approach.

Therefore, I will take my first-draft Babyname Chatbot architecture and lay out how and when to deal with its inherent constraints, (baby) step by (baby) step, adjusting to the situation at hand, first and foremost demand.

To maximize speed and get to the outcome I want, I bank entirely on serverless.

Plus, I put my bets on low-code, as I agree with the philosopher Kelsey Hightower, who wrote cunningly: “any code you write is a liability”.

In that spirit, let me walk you through a typical architecture evolution story. You will see that the architecture choices are (“business”) scenario based.

Phase 1 — New Beginnings

The initial data flow via Integromat

In a nutshell, and to set the scene, there are two main tools involved in the baseline solution.

ChatFuel, which is the chatbot user interface orchestrator (working on top of Facebook Messenger).

And Integromat, which is an excellent visual low-code integration SaaS tool.

ChatFuel calls an Integromat endpoint (an “Integromat webhook”), which triggers a flow to run. A Google Sheet serves as a database for way too many records containing precious baby boy names. For more details, see my article.
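To make the webhook handshake concrete, here is a minimal sketch of the payload a backend sends back so ChatFuel can render it as chat bubbles. The `{"messages": [{"text": …}]}` shape follows ChatFuel's JSON API; `buildChatfuelReply` and the name list are my own illustrative inventions, not part of the original flow.

```javascript
// Sketch: build the JSON payload a webhook returns so ChatFuel can render
// it as chat bubbles (ChatFuel's JSON API "messages" shape). The helper
// name and the sample names are made up for illustration.
function buildChatfuelReply(names) {
  return {
    messages: names.map((name) => ({ text: `How about "${name}"?` })),
  };
}

// Example: what the Integromat flow (or any backend) would send back.
const reply = buildChatfuelReply(['Adam', 'Bilal']);
console.log(JSON.stringify(reply));
```

Whatever runs behind the webhook, Integromat today or something else tomorrow, only this response contract matters to ChatFuel, which is what makes the later backend swaps possible.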

Scenario for Phase 1: I just want to see whether there is demand for this service at all and everything needs to be as cost efficient as possible.

Pros: I can build and maintain the flow visually, copy and paste parts of it, and somebody else, technical or not, could come in and adjust the flow on my behalf without having to read a compendium first. All in all, it was fast to build and effortless to get started with.

Cons: The Google Sheets API responds very slowly at times, in particular if results are not yet cached on the Google side, resulting in timeouts in ChatFuel, which has a hard timeout of about 5 seconds. Further, there are a couple of severe constraints to bear in mind: Integromat allows only so many operations, depending on the plan you are on. If our baby boy name chatbot goes viral, this integration will run out of resources in no time.
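One way to cope with a slow backend under a hard chat timeout is to race the lookup against a deadline and fall back gracefully. This is a generic sketch, assuming a roughly 5-second ChatFuel limit; `withDeadline` and the fake `slowSheetLookup` are illustrative names, not part of any SDK.

```javascript
// Sketch: guard a slow backend call with a hard deadline, mirroring
// ChatFuel's ~5 s webhook timeout (scaled down here for the demo).
function withDeadline(promise, ms, fallback) {
  const timeout = new Promise((resolve) => setTimeout(() => resolve(fallback), ms));
  return Promise.race([promise, timeout]);
}

// A stand-in for the Google Sheets lookup that sometimes takes too long.
function slowSheetLookup() {
  return new Promise((resolve) => setTimeout(() => resolve(['Adam', 'Bilal']), 200));
}

withDeadline(slowSheetLookup(), 50, ['(please try again)']).then((names) => {
  console.log(names); // the fallback wins because the lookup needs 200 ms
});
```

Answering with a friendly fallback message is usually better than letting ChatFuel cut the conversation off silently.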

Phase 2 — The Serverless Era

We would enter this phase only if the following scenario materialized.

Scenario for Phase 2: There is a modest uptake of the service and it is not yet clear whether the given (capacity) constraints suffice. Some preventive action is due, without blowing everything up just yet. I am now closely monitoring load.

Here is what’s going to happen.

So far I have used SaaS (Software as a Service) tools. In the next phase, I will start using serverless services as well. Serverless and SaaS seem very close, and of course SaaS is serverless by nature. The key difference is that serverless services are not necessarily restrictive about letting you influence their runtime environment or parameters, which SaaS tools are. For example, serverless databases let you configure performance- and reliability-related parameters in detail, e.g. read throughput or redundancy. Any SaaS would shield that level of detail away from you. Some control of the underlying bits can be quite useful, like, in this chatbot's case, bringing API Management in front of the Integromat endpoint.

Phase 2 of the serverless masterplan

In my case I am adding Azure API Management to the mix.
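As a flavor of what that control looks like, here is a sketch of an API Management policy that rate-limits callers, caches responses, and caps how long the backend (the Integromat webhook) may take. The element names are standard APIM policy elements; the concrete numbers are placeholders, not tuned values.

```xml
<!-- Sketch of an APIM policy; limits and durations are placeholder values. -->
<policies>
  <inbound>
    <base />
    <!-- Throttle abusive clients before they burn Integromat operations. -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Serve repeat questions from cache instead of hitting the backend. -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <backend>
    <!-- Cap the backend wait below ChatFuel's ~5 s hard timeout. -->
    <forward-request timeout="4" />
  </backend>
  <outbound>
    <base />
    <cache-store duration="3600" />
  </outbound>
</policies>
```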

Pros: API Management delivers useful features such as Distributed Denial of Service (DDoS) attack protection, logs, monitoring, support for different authorization modes, OpenAPI contracts with all the documentation perks that come with them, routing rules, and caching rules. The ability to limit the time the backend API has to respond will be especially useful in this case because…

Cons: …all the other underlying constraints still exist. Google Sheets still has a hard time acting on a sheet with millions of entries in a (consistently) timely manner. The Integromat plan could still run out of reserved resources at any time.

Phase 3 — Getting nervous here

We would enter this phase only if the following scenario materialized.

Scenario for Phase 3: The service is gaining traction and a re-vamp is necessary to keep it alive and kicking.

Now I am adding Logic Apps for all the workflow orchestration and CosmosDB for the persisted baby names data.

Phase 3 means heavy changes

The CosmosDB engine used is “Core”, the SQL flavor (formerly known as “DocumentDB”). Do note that Logic Apps currently (Sept 2020) only offers a connector for the SQL engine. Of course you could use MongoDB as the CosmosDB engine as well; in that case, however, input/output within the Logic App flow would have to happen via Azure Functions, using e.g. the Node.js mongoose module and everything else you know and love from the MongoDB universe.
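With the Core (SQL) engine, queries are parameterized SQL sent as a `{ query, parameters }` spec, the shape the `@azure/cosmos` SDK accepts. The sketch below only builds such a spec; the field name `c.name`, the prefix filter, and `nameQuery` itself are assumptions about the data model, not taken from the article.

```javascript
// Sketch: a parameterized Cosmos DB (Core/SQL API) query for baby names,
// in the { query, parameters } shape the @azure/cosmos SDK accepts.
// Field and function names here are illustrative assumptions.
function nameQuery(prefix, limit) {
  return {
    query: 'SELECT TOP @limit c.name FROM c WHERE STARTSWITH(c.name, @prefix)',
    parameters: [
      { name: '@limit', value: limit },
      { name: '@prefix', value: prefix },
    ],
  };
}

const spec = nameQuery('Al', 10);
console.log(spec.query);
// Against a real account you would then run something like:
// const { resources } = await container.items.query(spec).fetchAll();
```

The same spec works whether the caller is an Azure Function or, via the connector, a Logic App step.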

All data residing so far in the Google Sheet is now pulled into CosmosDB. That can be done with another serverless tool, Azure Data Factory (ADF). (It's a one-time exercise, so don't worry too much about Data Factory costs, which would be something to eyeball if you were to run ADF pipelines constantly.)

Pros: With Logic Apps, the integration workflow is now on a pay-per-use plan and no longer bound to a resource restriction. Since Logic Apps is fairly priced, no problem there. For CosmosDB, a free tier lowers the burden, too. In general it will be faster to query and, first and foremost, more predictable in terms of query duration. Since Logic Apps is a visual low-code tool like Integromat, you and others can chime in and tweak the flow comparably easily, still avoiding having to maintain code.

Cons: The whole solution now starts to cost serious money. In return you get better scalability and more ways to tweak and tune your flow. Logic Apps has its own limitations too, however; there are ways to spread the load, but still. What will hit you harder, though, is cost: with 500k+ operations, that cost will be tangible.

Optional: Phase 4 — Cost & Control

We would enter this phase only if the following scenario materialized.

Scenario for Phase 4: With the crazy demand for the service now being measurable, it's wise to perform low-level optimizations and stay in full control of what is going on at every step, sacrificing the merits of not owning code.

As a reminder, you could have added code in the form of Azure Functions to the existing integration flow at any time before this; Logic Apps and Functions work seamlessly together. I stayed away from that so far simply because, as noted above, any code you write is a liability.

This optional phase 4 however is all about code, so here we go:

Phase 4 means business in terms of code ownership — talking Durable Functions and default Functions

The needed workflow orchestration is now done via a Durable Function. The workflow actions in this all-code flavor are done via Azure Functions.

That could happen in a micro-service manner, meaning every step is an independent function. It could, however, also be one, or at least very few, monolithic function(s).
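The chaining pattern behind a Durable Function orchestration can be sketched with a plain generator: the orchestrator yields each activity result and a tiny driver feeds it back in. This is a toy simulation only; real code would use the `durable-functions` npm package, and `runOrchestration` and both activities are illustrative inventions.

```javascript
// Toy simulation of the function-chaining pattern behind Durable Functions.
// The orchestrator yields each step's result; the driver echoes it back,
// just as the Durable runtime replays activity outputs into the generator.
function* orchestrate(activities) {
  const names = yield activities.fetchNames('A'); // step 1: query names
  const ranked = yield activities.rankNames(names); // step 2: rank them
  return ranked[0]; // step 3: pick the winner
}

function runOrchestration(gen) {
  let step = gen.next();
  while (!step.done) {
    step = gen.next(step.value); // hand each activity result back to the flow
  }
  return step.value;
}

// Stand-in "activities", each of which would be its own Azure Function.
const activities = {
  fetchNames: (prefix) => ['Adam', 'Aaron'].filter((n) => n.startsWith(prefix)),
  rankNames: (names) => [...names].sort(),
};

const best = runOrchestration(orchestrate(activities));
console.log(best); // "Aaron": fetch -> rank -> pick first
```

Each `yield` marks a natural seam: in the micro-service variant every activity becomes its own deployable function, while the monolithic variant keeps them all in one.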

Pros: Thanks to a generous free tier for serverless functions, and with some tweaking and tuning tricks, you might be able to shave off cost. Of course you have to add the cost of maintaining code on top of that. The main benefit, however, is the gained freedom to modify every interaction at every step as you desire, e.g. with CosmosDB.

Cons: You already had a product to manage the minute you started to offer the baby name service. Now you have a (growing?) code base to maintain, too.

That’s it for now! In reality, architecture work is never done, and this thing could evolve further, expand in functionality, and so on. I hope this exercise showcases what an evolving architecture might mean in practice, featuring agility and the burden of revamping everything all the time.

If you want to learn more about strategy mixed with tech, please go ahead and read my many other articles.

Or drop me a tweet, if you please.


Author of "IT is not magic, it's architecture", "The DALL-E Cookbook For Great AI Art: For Artists. For Enthusiasts."- Visit https://platformeconomies.com