Serverless is not a Silver Bullet — But it’s Close

Elliot Forbes
3 min read · May 28, 2018


Over the past few months, I’ve been a huge proponent of all things serverless. I’ve been using it heavily in my personal projects and really enjoying some of the major benefits that serverless can bring to a project.

I’ve seen a number of evangelists, myself included, praise the architecture and how it can revolutionize the way we design and build applications. It’s a sentiment I still fully stand behind, but as with all things, it’s not always rainbows and unicorns.

The Benefits

The benefits of serverless are plentiful. In large enterprise environments, it can improve the utilization of the underlying hardware that our FaaS offerings run on, whilst also boosting developer productivity.

Developers are able to isolate business logic into single functions and deploy them to a platform that can independently scale that logic based on demand.
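To make that a little more concrete, here’s a rough sketch of what one of those isolated units of business logic might look like as an AWS Lambda handler written in Python. The event shape and the order-totalling logic are purely hypothetical:

```python
import json


def handler(event, context):
    """A single, independently deployable unit of business logic.

    The platform (AWS Lambda in this sketch) runs as many copies of this
    function as demand requires; we never provision or manage the servers.
    """
    # Hypothetical payload: an order submitted through an HTTP front door
    # such as API Gateway.
    order = json.loads(event.get("body", "{}"))

    total = sum(item["price"] * item["quantity"] for item in order.get("items", []))

    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": total}),
    }
```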

This is just one of the many benefits that I’ve outlined in a previous article:

The Drawbacks

However, whilst I’m a massive advocate for Serverless in general, it should be noted that it isn’t a silver bullet. There will be scenarios that are inherently ill-suited for utilizing a serverless architecture.

By utilizing a serverless architecture, we are merely shifting some of our development problems elsewhere in our systems.

Connection Pooling

Imagine you have a database sitting somewhere, serving critical production data to your various microservices and applications. This database only has finite resources, and it can only handle so many concurrent connections before it starts rejecting new connection requests.

Spikes in serverless usage can generate thousands of connection attempts in a very short space of time, and this can exhaust all but the very largest of databases unless adequate measures are put in place to ensure your database scales correspondingly.
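A common first line of defence is to create the database connection outside of the function handler, so that warm invocations of the same container reuse it rather than opening a fresh connection on every request. Here’s a minimal sketch of that idea, assuming a PostgreSQL database accessed via psycopg2, connection details supplied through environment variables, and a hypothetical orders table:

```python
import os

import psycopg2

# Created once per container, outside the handler, so that warm invocations
# reuse the same connection instead of adding a new one to the database's
# connection count. The environment variable names here are hypothetical.
connection = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    connect_timeout=5,
)


def handler(event, context):
    # Each invocation borrows the container's existing connection.
    with connection.cursor() as cursor:
        cursor.execute("SELECT count(*) FROM orders")
        (order_count,) = cursor.fetchone()
    return {"orderCount": order_count}
```

This only helps within a single container though; a spike that fans out to hundreds of containers still means hundreds of connections, which is where a dedicated pooling layer in front of the database comes in.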

The good news is that some of the very smart people over at Spotinst have managed to come up with a way to implement connection pooling in a serverless context. This is definitely worth checking out:

However, databases aren’t the only thing that will see issues with hundreds, if not thousands, of instances of your function all trying to connect to them concurrently. Message brokers are in the same boat, as is any other service your functions interface with. The point is that, without taking these things into consideration, when you deploy your glorious new serverless application to production, you may end up getting called at 3am when your systems die.

The Considerations of Scale

When developing systems that need to deal with incredible scale, you have to ensure that every part of your application can handle that scale.

Whilst I have certainly been quick to advocate for the architecture in previous articles, I have seen a number of people expect serverless to solve world-hunger-level issues within their systems. It can certainly alleviate some of the strain on your systems, but without adequate precautions you will simply offset that strain onto other parts of the system.
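One blunt but effective precaution, purely as an illustration of what “adequate precautions” can mean, is to cap how many copies of a function may run at once, so that a traffic spike gets throttled at the edge rather than fanning out into unbounded pressure on everything downstream. With AWS Lambda this can be done through reserved concurrency, for example via boto3; the function name and limit below are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 50 concurrent executions. Requests beyond that are
# throttled by the platform instead of piling onto the database or message
# broker sitting behind the function. Name and limit are illustrative only.
lambda_client.put_function_concurrency(
    FunctionName="process-orders",
    ReservedConcurrentExecutions=50,
)
```

The trade-off, of course, is that you are deliberately rejecting work at peak, so this is a safety valve rather than a substitute for scaling the rest of the system.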

The Hidden Costs

One of the main allures of serverless is typically the cost model of the architecture style. You get 1 million AWS Lambda executions free every month; however, the cost of associated services can start to mount up quickly.
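To put some rough shape on that, here’s a back-of-envelope sketch. Every number in it is an assumption for the sake of the arithmetic rather than a quoted rate, so treat the official pricing pages as the source of truth:

```python
# Back-of-envelope monthly estimate. All prices are illustrative assumptions,
# and the compute free tier is ignored for simplicity.
requests_per_month = 10_000_000
free_lambda_requests = 1_000_000

lambda_price_per_million = 0.20            # assumed $ per 1M requests beyond the free tier
lambda_price_per_gb_second = 0.0000166667  # assumed $ per GB-second of compute
api_gateway_price_per_million = 3.50       # assumed $ per 1M API Gateway requests

# 500 ms average duration at 128 MB of memory per invocation (assumed).
gb_seconds = requests_per_month * 0.5 * (128 / 1024)

billable_requests = max(requests_per_month - free_lambda_requests, 0)
lambda_cost = (
    (billable_requests / 1_000_000) * lambda_price_per_million
    + gb_seconds * lambda_price_per_gb_second
)
gateway_cost = (requests_per_month / 1_000_000) * api_gateway_price_per_million

print(f"Lambda:      ${lambda_cost:,.2f}")
print(f"API Gateway: ${gateway_cost:,.2f}")
```

Even with these toy numbers, the Lambda invocations themselves are the cheap part; the gateway sitting in front of them costs several times more, and that’s before data transfer, logging and storage are accounted for.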

There was an excellent article written on this by @Amiran Shachar which can be found here:

Conclusion

Hopefully, this article has made you aware of some of the shortcomings of serverless architectures and some of the considerations you will have to make in order to ensure your path to success is smooth.

Going forward, software architects need to consider potential bottlenecks and ensure the resiliency and scalability of every part of their distributed systems. Without these considerations, there is potential for your systems to crumble underneath you.

If you wish to support me then please feel free to follow me on Twitter: @elliot_f , or check out my latest book:
