Serverless — the future?

From 3-tier architectures to thick-client apps connected to cloud-based microservice functions

zenobius zeto
7 min read · Aug 27, 2018

For as long as I can remember there's been a general understanding in the tech community that the future of computing lies in an internet-connected thin-client architecture.

You know … the vision of the future that sees a proliferation of lightweight cloud-connected devices. The story goes that these so-called "thin clients" display the user interface, while the business logic and heavy lifting happen in a cloud-based server farm somewhere. It's a typical 3-tier architecture, with the weight of the heavy lifting residing in the back-end.

But I think that something else is sneaking up on us, and an alternate future of software architectures is materializing — based on serverless architectures.

This future is characterized by rich thick-client applications talking directly to cloud data stores, plus small cloud-based micro-services for protected workloads and service orchestration.

What does this look like, and how is it different?

OK, so picture a typical 3-tier architecture:

The user interface really just serves as a convenient way for the user to drive functions in the back-end. But this is inefficient to build — there’s a huge amount of duplication.

If you want to add a new feature to the product you have to touch every layer through the stack, from the user interface over the wire to the server and then again over to the data-store. Look at all of the layers:

3-tier architecture — That’s a lot of layers to touch, just to add a new feature.

Now, imagine that your user interface application can contain all of the orchestration logic that currently sits in the backend, and can talk directly to a range of cloud-enabled services to perform unique functions, such as authentication, storage, notifications etc.

Now when you want to add a new feature, you only need to add it to your front-end code (and potentially call out to cloud services where required). Look how few layers you need to touch now:

Serverless architecture — cuts out the middle-man and brings the UI closer to the data store
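As a sketch of what that front-end orchestration can look like, here is a hypothetical flow where the UI composes calls to cloud services directly. The service clients (`auth`, `storage`, `notifier`) and the `uploadAvatar` function are stand-ins I've invented for illustration, not a real SDK:

```javascript
// Hypothetical front-end orchestration: the UI composes calls to
// cloud services directly, with no bespoke back-end in the middle.
// `auth`, `storage` and `notifier` are stand-ins for real SDK clients
// (e.g. Auth0, S3, a push service), injected here for testability.
async function uploadAvatar(user, file, { auth, storage, notifier }) {
  // 1. Get a short-lived credential from the auth service.
  const token = await auth.getToken(user);

  // 2. Talk straight to the storage service with that credential.
  const url = await storage.put(`avatars/${user.id}`, file, token);

  // 3. Fire a notification, again directly from the client.
  await notifier.send(user.id, `Avatar updated: ${url}`);
  return url;
}
```

The point is that each step targets a different cloud service, and the only "server" logic is whatever those services run internally.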

Performing protected/secret actions

You’ll still write some code that runs in the cloud, but this code footprint will be much smaller. Some code must run remotely because it requires access to secrets you cannot trust to run in someone’s browser.

For example: with most credit card payment providers, taking the final step to charge a credit card payment requires a call to the card provider’s API using the merchant’s secret API key. Everything else, such as entering card details, can be done by the user’s browser talking directly to the payment provider — which is great from a PCI compliance perspective as well.

To make the final call to authorise the payment, you can't load that secret API key into a user's browser. Likewise, you can't trust a user's browser to call through to the database to grant the user access to purchased content — or anyone could write some JS code inside their browser to grant themselves access without paying!

So, in the serverless model you write a tiny micro-service in the cloud that invokes the card payment once the front-end has orchestrated and lined up all of the order and payment details. If the payment succeeds, this service then grants the user access to the purchased content in the database. This service has access to touch parts of the database that the user’s front-end cannot.
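A minimal sketch of that protected function, with the payment provider and database clients injected as hypothetical stand-ins (real code would use e.g. the Stripe SDK and your datastore's client, with the secret key read from the function's environment, never the browser):

```javascript
// Hypothetical cloud function: finalize a card payment, then grant
// access. Runs server-side because it needs the merchant's secret
// API key and write access the browser must never have.
async function finalizePurchase(order, { payments, db }) {
  // Charge the card; `payments` holds the secret key server-side.
  const charge = await payments.charge({
    token: order.cardToken,    // tokenized card details from the browser
    amount: order.amountCents,
  });
  if (!charge.paid) {
    throw new Error("payment declined");
  }
  // Only on success: grant access in a part of the database the
  // front-end cannot write to.
  await db.grantAccess(order.userId, order.courseId);
  return { ok: true, chargeId: charge.id };
}
```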

Use case — the architecture

Here’s an architectural diagram showing just some of the cloud micro-services that we use:

Notice that each service we use in the cloud does one thing, and does it extremely well. Some of the services are 3rd party (like Auth0, S3, Firebase and Stripe), and some are custom code that we've written.

Our custom code footprint in the cloud is absolutely minimal — limited to just a few node.js functions running inside AWS Lambda. These functions perform protected actions that we can’t trust a user’s browser to orchestrate, such as:

  • finalizing card payments and granting users access to the courses they have paid for
  • triggering emails to a segment of our users (we can't trust code in our users' front-ends to do this, because it would expose the email addresses of other users and allow any authenticated user to trigger emails to anyone)
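The email-segment case can be sketched the same way. The key point is that the function, not the browser, resolves the recipient list, so addresses never leave the back-end. As before, the function name and injected clients here are hypothetical:

```javascript
// Hypothetical protected function: fan out an email to a user segment.
// The recipient list is resolved server-side, so email addresses are
// never exposed to any browser.
async function emailSegment(caller, segment, message, { db, mailer }) {
  if (caller.role !== "admin") {
    throw new Error("forbidden"); // only trusted callers may trigger sends
  }
  const recipients = await db.emailsInSegment(segment);
  await Promise.all(recipients.map((addr) => mailer.send(addr, message)));
  return recipients.length;
}
```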

What’s driving the serverless trend?

This is now possible because:

  • we have rich browser MVC frameworks like AngularJS that enable us to write large and complex web applications, with full build systems and test runners.
  • almost every service you can imagine is now HTTP enabled and supports common authentication token exchange protocols like JWT. This means that your user’s browsers can send credentials to interact directly with the different cloud services you use.
  • there are 3rd party cloud services for a huge range of functions — all of whom are focused on doing one thing and doing it extremely well. You can now outsource a huge range of rich functionality to cloud services such as:
    - User Authentication
    - Protected File Download / Upload
    - Credit Card Payments
    - Notifications (email, SMS, push)
    - Real Time Streaming Database Access with offline sync capability
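In practice, that token exchange usually means the browser attaches a JWT as a bearer credential on every direct call to a service. A minimal sketch of building such a request (the endpoint URL is made up, and a real app would pass the result to `fetch`):

```javascript
// Hypothetical helper: build an authenticated request to a cloud
// service using a JWT obtained from the auth provider. Most
// HTTP-enabled services accept the token as a Bearer credential
// in the Authorization header.
function authedRequest(url, jwt, body) {
  return {
    url,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${jwt}`, // the service verifies this token
    },
    body: JSON.stringify(body),
  };
}
```

Usage would look like `const req = authedRequest(endpoint, jwt, order); fetch(req.url, req);`.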

And then there's user preference. Mobile hardware these days is powerful, and people prefer a richer, faster user experience. Unfortunately, a ubiquitous, always-reliable internet hasn't yet arrived: users still regularly suffer degraded or dropped connections. So our general preference is for richer, more powerful applications on our mobile devices that can deal with offline scenarios and sync data when returning online.

Is this secure?

Absolutely, as long as you make it so.

It’s all in the design. When introducing a new cloud-service, you need to design with security in mind — understanding that users will interact with that service directly using their credentials. You need to understand how to lock down what they can/can’t read and write.

If you are using a 3rd party service then you have to understand how you can lock down access to that service. For example, with Firebase (a real-time streaming database), you write custom security rules that are executed by the database engine to determine which users can read or write from different parts of the database.
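For example, a minimal Firebase Realtime Database rule set might restrict each authenticated user to their own subtree (the `users` path here is illustrative, not from our actual schema):

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read":  "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

The database engine evaluates these expressions on every read and write, so the client's credentials, not any middle-tier code, decide what it can touch.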

If you are writing your own micro-services, then you need to ensure that your services check the authenticity of the security token passed and verify the rights of the user.

So what defines serverless architecture?

There’s no exact definition, but I think there are some basic characteristics that define a system as serverless:

  • Operators do not need to run and maintain back-end servers themselves
  • The vast majority (~95%+) of the code-base resides in the front-end
  • The code that resides in the cloud is only the code that absolutely must — as an example, for security purposes some work must be done with access to secrets that the user’s browser cannot be trusted with
  • The front end acts as the orchestrator calling a rich array of cloud-based services to perform specific functions — such as taking a credit card payment, giving access to protected resources, shooting off emails or push notifications in response to events

It’s important to note that this doesn’t mean there aren’t any servers. Of course there are, but someone else is managing, securing, maintaining and patching them, taking the load and responsibility off your shoulders and freeing you up to focus on building your unique product.

Has It Been Worth It?

Absolutely! Using a serverless approach for A Cloud Guru, we were able to get a fully interactive online social learning platform to market in a fraction of the time that it would have taken us in the past. We can now innovate quickly, and add new features to the user interface continuously — without having to touch so many layers of code for each new feature.

It's not without its drawbacks, of course; this wild departure from traditional architectures has introduced its own set of challenges.

We’re learning a lot, and we’ll follow up with future blog posts going into more depth about the technologies we’ve used and the challenges we’ve had to overcome.
