Custom Runtimes in AWS Lambda. A good idea? Yes, but only if you have to!

Ben Ellerby
Aug 5 · 3 min read
Custom Runtimes — Lambda Layers

At the end of 2018 I was tasked with building out a 100% Serverless infrastructure to replace a 7-million-line, decade-old monolith. The only constraint was that the client wished to continue with PHP…

Luckily, AWS released Lambda Layers with Runtime API support at the tail end of 2018, allowing a compiled binary to act as a runtime on Lambda. This removed the need for the hacky “Node.js spawning a PHP process” solution. But it does not eliminate all the complexity…

After building our custom runtime layer (see my article Serverless Anything), we were away!


The Good

Lambda Layers are a nice abstraction and play well with the Serverless Framework (SLS). They can also be deployed separately from your function code, which reduces the deploy time of individual functions: the whole PHP binary does not need to be zipped and sent to AWS on every line change.
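As a rough sketch of how that wiring looks in SLS (the service, layer, and handler names here are made up for the example, not our actual config), the layer and the function that uses it live in the same serverless.yml:

```yaml
# Illustrative serverless.yml sketch; names are hypothetical.
service: php-on-lambda

provider:
  name: aws
  runtime: provided      # custom runtime, supplied by the layer

layers:
  php:
    path: layers/php     # directory containing bootstrap + the PHP binary

functions:
  hello:
    handler: handler.hello
    layers:
      - { Ref: PhpLambdaLayer }   # CloudFormation ref SLS generates for the "php" layer
```

Because the layer is a separate CloudFormation resource, redeploying the function zips only your handler code, not the runtime binary.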

The layers were also committed to the codebase, so many of the developers on the project didn’t have to think about the complexity they added; it was abstracted away nicely, even though deployment times were a little slow.


The Bad

As you will notice in the above article, the build process is not trivial and is prone to errors. Every update to PHP, or change to the underlying AMI of AWS Lambda, requires a rebuild, and the steps can vary with changes in PHP or the AMI/OS.

One example of this was a change in AMI version impacting the OpenSSL version used in the build process: the AMI shipped OpenSSL 1.0.2k binaries while we were using 1.0.1k headers to compile PHP. The process needed to be adapted to isolate the build from this change, and the debugging process was (to be blunt) long.
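One way to isolate a build from that kind of AMI drift (a sketch, not the exact process we used) is to compile inside a container image that mirrors the Lambda execution environment, so the host’s libraries never leak into the binary. The image name and PHP version below are illustrative assumptions:

```shell
# Sketch: compile PHP inside a Lambda-like build container, so host
# OpenSSL/AMI changes can't affect the result. Image and version are
# assumptions for the example.
docker run --rm -v "$PWD":/build -w /build lambci/lambda:build-provided \
  bash -c '
    curl -sSL https://www.php.net/distributions/php-7.3.8.tar.gz | tar xz
    cd php-7.3.8
    ./configure --prefix=/build/php-bin --without-pear
    make -j"$(nproc)" && make install
  '
# /build/php-bin/bin/php is now linked against the container's libraries,
# which match what Lambda provides at run time.
```

Pinning the build to a container image also makes the rebuild reproducible when PHP or the AMI changes.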

It didn’t feel like Serverless but more Serverful, without the convenience of being able to SSH in and debug.

On top of this, the complexity added in general by a custom runtime, the confusion caused to the team by the bootstrap file (the interface between the AWS Runtime API and our binary), and the lack of tooling for local development with custom runtimes all make it a challenge, especially for teams new to Serverless.
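For context, the bootstrap file is just an executable that loops over the Runtime API: fetch the next invocation, run the handler, post the result. A minimal sketch is below; the `/opt/bin/php` and `handler.php` paths are assumptions about how a layer might be laid out, not our actual layout:

```shell
#!/bin/bash
# Minimal custom-runtime bootstrap sketch (paths are illustrative).
set -euo pipefail
while true; do
  HEADERS="$(mktemp)"
  # Ask the Runtime API for the next invocation event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '[:space:]' | cut -d: -f2)
  # Run the handler: here, the PHP binary shipped in the layer
  RESPONSE=$(/opt/bin/php /var/task/handler.php "$EVENT_DATA")
  # Post the handler's response for this request id
  curl -sS -X POST -d "$RESPONSE" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response"
done
```

Small as it is, this file is where the Runtime API contract, the layer layout, and error handling all meet, which is why it caused the team the most confusion.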



Conclusion

AWS Lambda layers have many and varied use cases. The approach of providing custom runtimes via layers is much better than the previous hacks and it opens up legacy migration & support for teams who can’t move from a particular unsupported language.

That said, the lack of examples, the complexity added to your build process, and the impact on the local development experience make a natively supported runtime the go-to option whenever there is a choice.

It seems we are not free of the complexities of servers yet, but using a natively supported runtime helps us get closer to that Nirvana.

Serverless Transformation

Tools, techniques, and case studies of using serverless to release fast and scale optimally.

Written by Ben Ellerby

Architect developer, working with startups to launch MVPs and with large corporates to deliver at startup speed. Currently changing the way I build with Serverless.

