Serverless in your Microservice Architecture

When to use Functions as a Service

Gratus Devanesan
Code Smells
6 min read · Aug 28, 2018


Mike Roberts, on Martin Fowler‘s blog, has a thoughtful take on the meaning of the word “Serverless” by separating the idea of serverless into 1) general cloud based infrastructures that don’t require the conscious provisioning of servers (databases on demand, container services etc.), and 2) functions as a service like AWS Lambda, Google Cloud Functions, and Azure Functions.

We will be comparing the latter, Functions as a Service, with longer-lived alternatives (containers, dedicated VMs, and physical servers) in view of an overall microservices architecture.

What is Function as a Service?

Spring Cloud Data Flow uses the term Task and describes Tasks as “short lived microservices”. Functions, in FaaS, are essentially this — short-lived microservices.

Microservices because they are extremely granular — a function should just do one thing, and short lived because all the underlying and supporting infrastructure is completely torn down on completion.
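To make that concrete, here is a minimal sketch of such a function written as an AWS Lambda handler in TypeScript. The handler signature and event shape follow the standard API Gateway proxy integration; the greeting logic itself is purely illustrative.

```typescript
// A minimal, stateless function: it does exactly one thing, and the runtime
// behind it exists only for the duration of the invocation.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // One narrow capability: return a greeting for the caller.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```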

Comparisons

The table below tries to illustrate the difference in terms of effort required to scale and the general expected lifetime of the underlying runtime and dedicated infrastructure.

Comparing lifetime and time to scale for different implementations

For Functions, the lifetime of the underlying infrastructure is just milliseconds longer than the actual execution time of the function. Provisioned containers can run for a long time, behaving much like tiny VMs, and in theory don’t need to be torn down. Generally, though, each new code commit that needs to go to production will trigger a deploy that tears down the existing container and builds a new one.

Dedicated VMs, like EC2 instances, would generally survive new code deploys (see OpsWorks lifecycle events as an example) and are typically only torn down as part of some maintenance upgrade.

Physical servers, with their high upfront investment and high maintenance overhead, are generally provisioned with the intention of being kept for years, often over a decade.

Similarly, Functions scale almost instantly, as the underlying runtime is provisioned on a per-request basis, and for practical purposes cloud providers like Google and AWS offer effectively unlimited scale. Containers, provided we are using something like Pivotal Cloud Foundry or Heroku, allow us to scale within seconds.

If your EC2 instance is supported by Elastic Beanstalk or similar, then you can scale from n to n+1 instances within minutes.

Physical servers naturally involve ordering, delivery, mounting, configuration, hardening, and so on, which may take weeks or even months.

As load goes up, Functions will become increasingly expensive

Cost is another factor. FaaS are highly cost efficient for low loads, as you only pay for the actual processing (and a few milliseconds after). Containers are still fairly cheap, but you have to run an instance 24/7 even if there are long stretches with no load at all. FaaS costs increase linearly with load, and once we cross a threshold it makes more sense to scale always-running containers. The question is whether the load is sudden or distributed.

If a high load is distributed over a larger time period and varies cyclically, an always-available service will be better. If the load is localized in time and comes in sudden bursts, even at high volume, FaaS might make more financial sense.
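As a rough sketch of that threshold calculation, you can compare the pay-per-invocation cost of a function against the flat monthly cost of an always-on container. The prices, memory size, and average duration below are illustrative assumptions, not current quotes:

```typescript
// Illustrative break-even estimate: at what monthly request volume does an
// always-on container become cheaper than pay-per-invocation functions?
// All rates below are assumed placeholders -- substitute your provider's real prices.
const PRICE_PER_GB_SECOND = 0.0000166667; // assumed FaaS compute rate
const PRICE_PER_MILLION_REQUESTS = 0.2;   // assumed FaaS request rate
const CONTAINER_MONTHLY_COST = 30;        // assumed cost of one always-on container

function faasMonthlyCost(requests: number, avgDurationMs: number, memoryGb: number): number {
  const computeCost = requests * (avgDurationMs / 1000) * memoryGb * PRICE_PER_GB_SECOND;
  const requestCost = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  return computeCost + requestCost;
}

// Print a few volumes to find the approximate crossover point.
for (const requests of [100_000, 1_000_000, 5_000_000, 10_000_000]) {
  const faas = faasMonthlyCost(requests, 500, 0.25); // 500 ms average, 256 MB
  console.log(`${requests} req/month: FaaS ~ $${faas.toFixed(2)}, container = $${CONTAINER_MONTHLY_COST}`);
}
```

Plugging in your provider's real rates and your own traffic profile gives the crossover point the paragraph above describes.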

Using something like OpsWorks, scaling can be auto-configured for EC2 instances, but it is still pretty coarse, as EC2 instances need significant time to boot up.

With physical servers we need to provision for the maximum load on day one; what’s worse, we need to estimate that maximum load on day zero.

Tradeoffs and Benefits

FaaS provide a route to easier maintenance and potentially lower cost; the trade off is slightly slower performance.

Performance

The biggest impact on performance comes from cold start times: a new container has to be provisioned and a new runtime has to be stood up. With a traditional containerized app (Spring Boot, Express), by contrast, the server is always ready to process the incoming request. Cold start times are generally in the hundreds of milliseconds, while the function itself runs for 1–2 s, so the cold start accounts for roughly 5%–15% of the total runtime (a penalty you incur whenever a fresh container has to be spun up). This is something that needs to be properly estimated and evaluated when choosing FaaS as an option.
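One lightweight way to observe that overhead (a sketch, assuming a Node.js runtime on AWS Lambda; the response shape is arbitrary) is to record a timestamp at module load, which only runs when a fresh container starts, and compare it with the handler invocation:

```typescript
// Rough cold-start probe: module scope runs once per container, the handler
// runs once per invocation, so the first invocation's delta approximates
// the initialization overhead.
const containerInitAt = Date.now(); // executed only during a cold start
let invocationCount = 0;

export const handler = async (): Promise<{ statusCode: number; body: string }> => {
  invocationCount += 1;
  const msSinceInit = Date.now() - containerInitAt;
  // On a cold start, invocationCount === 1 and msSinceInit includes runtime setup.
  return {
    statusCode: 200,
    body: JSON.stringify({ coldStart: invocationCount === 1, msSinceInit }),
  };
};
```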

Maintainability

Each function is easier to maintain, but as a whole a set of independently deployed functions will put a larger maintenance burden on a team than a single service covering many functions.

Even if a group of functions lives in a single repo, each will need its own build script and deployment pipeline, and we will need to monitor multiple logs.

Lower Costs

Lower costs are a real benefit, provided the functions are not constantly used. The trade-off here is easy to calculate and estimate: if you believe your functions will be called constantly, FaaS is not the right approach; if you believe they will be called only occasionally, it is.

Security

This is a big win. A malicious hacker can’t gain a lasting foothold on an ephemeral instance — securing the underlying hosts now becomes primarily a concern for the infrastructure provider. Naturally, there are other vulnerabilities; in some aspects FaaS increase the risks, and code-level vulnerabilities are still there, but overall it provides a favourable trade-off.

Designing APIs as a blend of “server” based services and functions

I put servers in quotation marks because I don’t mean physical infrastructure. I mean applications that listen for incoming HTTP requests and are constantly running in a stateless container (Heroku, PCF, etc.). I guess the proper way to differentiate would be SaaS + FaaS, where the software in SaaS implies some generic scaffolding, a bit of inherent composition, and more than one capability.

Set up a Gateway

A Gateway is a must. It can handle things like throttling, and it is where you can manage access concerns (ideally something like Akamai will handle DoS attacks for you, but your gateway could in theory do it as well). I don’t think a function should be directly accessible via the web.
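As a sketch of what that looks like in practice (assuming AWS, with the infrastructure defined in TypeScript using CDK v2; the resource names, throttle limits, and asset path are placeholders), a function can sit behind an API Gateway stage with basic throttling instead of being exposed directly:

```typescript
// Sketch: put a function behind an API Gateway stage with throttling so the
// function itself is never reachable directly from the web.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

export class GatewayStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const accountsFn = new lambda.Function(this, "AccountsFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/accounts"), // assumed build output path
    });

    // The gateway owns the cross-cutting concerns: throttling here, auth below.
    new apigateway.LambdaRestApi(this, "AccountsApi", {
      handler: accountsFn,
      deployOptions: {
        throttlingRateLimit: 100, // steady-state requests per second (assumed)
        throttlingBurstLimit: 20, // short burst allowance (assumed)
      },
    });
  }
}
```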

Abstract away authentication

Authentication should be handled as a separate service and the function should be exposed only to authenticated users. This can be easily achieved using a JWT. Ideally, the gateway would validate the token and then pass the request to the function if the token is valid.
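A minimal sketch of that token check, written as a Lambda token authorizer the gateway can call before invoking any function (it assumes a shared HMAC secret in an environment variable; a production setup would more likely verify against the identity provider's JWKS):

```typescript
// Sketch of a gateway-level token authorizer: validate the JWT once, then
// forward selected claims so the function itself only handles authorization.
import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from "aws-lambda";
import * as jwt from "jsonwebtoken";

const SECRET = process.env.JWT_SECRET ?? ""; // assumed environment variable

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  const token = event.authorizationToken.replace(/^Bearer /, "");
  // Throws on an invalid or expired token, which the gateway turns into a denial.
  const claims = jwt.verify(token, SECRET) as jwt.JwtPayload;

  return {
    principalId: claims.sub ?? "anonymous",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        { Action: "execute-api:Invoke", Effect: "Allow", Resource: event.methodArn },
      ],
    },
    // Pass selected claims downstream so the function can authorize the request.
    context: { scope: String(claims.scope ?? "") },
  };
};
```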

Define authorization

The authorization model should sit at the function level. Each API endpoint (and, if we are using REST, each verb on each endpoint) should have granular authorization using scopes or claims (or a similar authorization pattern), and should be able to check, statelessly, on each request that the caller is allowed to invoke it. This means each request needs the relevant information bundled with it, so the application does not need to hold a user authentication session. As above, a JWT lends itself ideally to this type of scenario.
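A minimal sketch of that per-verb check inside a function, assuming the gateway forwards the validated token's scopes in the authorizer context (the scope names and endpoint are illustrative):

```typescript
// Sketch: the function authorizes each request itself, statelessly, using the
// scopes the gateway forwarded from the validated JWT.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Required scope per verb on this endpoint -- granular, per function.
const REQUIRED_SCOPE: Record<string, string> = {
  GET: "accounts:read",
  POST: "accounts:write",
  DELETE: "accounts:admin",
};

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const granted = String(event.requestContext.authorizer?.scope ?? "").split(" ");
  const required = REQUIRED_SCOPE[event.httpMethod];

  if (!required || !granted.includes(required)) {
    return { statusCode: 403, body: JSON.stringify({ error: "insufficient scope" }) };
  }
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```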

Look at bounded contexts, runtimes, and daily load volumes

Now that we have a gateway, have abstracted out authentication, and have a model for authorization, we can look at designing our APIs in a way that maximizes efficiency and minimizes cost without affecting the downstream consumers.

When to use FaaS

Use FaaS for infrequent requests. A scheduled task that does some data aggregation is a good example: the response time is manageable, and we don’t need a service that consumes resources 24/7 if it will only run for seconds each hour (or every 15 minutes).
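For instance (a sketch using AWS CDK; the construct and the aggregation function it wires up are assumptions), a rule can trigger the aggregation function on a fixed schedule so nothing runs between invocations:

```typescript
// Sketch: run an aggregation function every 15 minutes instead of keeping a
// service alive around the clock.
import { Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

export class AggregationSchedule extends Construct {
  constructor(scope: Construct, id: string, aggregateFn: lambda.IFunction) {
    super(scope, id);

    // The rule is the only thing that is "always on"; the function only exists
    // (and only costs money) while it runs.
    new events.Rule(this, "Every15Minutes", {
      schedule: events.Schedule.rate(Duration.minutes(15)),
      targets: [new targets.LambdaFunction(aggregateFn)],
    });
  }
}
```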

Use FaaS for asynchronous downstream processing, especially if each function follows its own business logic and may be owned by a different team. An example is account application processing: Savings, Chequing, Investment, and Credit accounts may all be processed differently, and the user may not need an immediate response. The front end can say the application has been submitted — and once the application has been processed, the user will be notified. Look at this article for an example of such a flow.
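One of those downstream consumers might look like the following sketch (the queue wiring, event shape, and business logic are assumptions): each account type gets its own small function on its own queue, owned by its own team.

```typescript
// Sketch of one consumer in the fan-out: the front end publishes an
// "application submitted" event, and each account type is processed
// asynchronously by its own function.
import { SQSEvent } from "aws-lambda";

interface AccountApplication {
  applicationId: string;
  accountType: "savings" | "chequing" | "investment" | "credit";
  customerId: string;
}

// This function only knows about credit applications; savings, chequing and
// investment applications are handled by separate functions on their own queues.
export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    const application = JSON.parse(record.body) as AccountApplication;
    // ... apply credit-specific business rules, then notify the customer.
    console.log(`processed credit application ${application.applicationId}`);
  }
};
```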

How to use FaaS

Ensure that each function has a short runtime and is stateless.
Functions should call other functions in a way that the execution of nested functions doesn’t result in long runtimes for the initial function. Consider AWS Step Functions as an example, or use a message-based workflow (see the sketch after this list).
Abstract away authentication, hide behind a gateway, but let the function handle authorization.
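The sketch below shows the Step Functions approach with AWS CDK (the three functions and their names are assumptions): the state machine carries the orchestration, so no function sits idle waiting for another to finish.

```typescript
// Sketch: chain short-lived functions with Step Functions so the initial
// function never pays for the runtime of the ones it triggers.
import { Construct } from "constructs";
import * as sfn from "aws-cdk-lib/aws-stepfunctions";
import * as tasks from "aws-cdk-lib/aws-stepfunctions-tasks";
import * as lambda from "aws-cdk-lib/aws-lambda";

export function buildApplicationWorkflow(
  scope: Construct,
  validateFn: lambda.IFunction,
  scoreFn: lambda.IFunction,
  notifyFn: lambda.IFunction
): sfn.StateMachine {
  // Each step invokes a small, stateless function; the state machine, not the
  // functions, holds the long-running orchestration state.
  const validate = new tasks.LambdaInvoke(scope, "ValidateApplication", { lambdaFunction: validateFn });
  const score = new tasks.LambdaInvoke(scope, "ScoreApplication", { lambdaFunction: scoreFn });
  const notify = new tasks.LambdaInvoke(scope, "NotifyCustomer", { lambdaFunction: notifyFn });

  return new sfn.StateMachine(scope, "ApplicationWorkflow", {
    definition: validate.next(score).next(notify),
  });
}
```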

When not to use FaaS

Don’t use FaaS if you expect a single function to run for a long time. In that case, use something like Spring Cloud Data Flow, where the “Task” is not bound by the platform’s constraints and can essentially run forever.

Don’t use FaaS if you expect high, continuous volume.

Conclusion

FaaS provide a good option for modern architectures, but like all solutions they are no panacea and need to be used with the trade-offs in mind.
