Custom Runtimes in AWS Lambda. A good idea? Yes, but only if you have to!

Ben Ellerby
Serverless Transformation
3 min read · Aug 5, 2019
Custom Runtimes — Lambda Layers

At the end of 2018 I was tasked with building out a 100% Serverless infrastructure to replace a 7-million-line, decade-old monolith. The only constraint was that the client wished to continue with PHP…

Luckily, AWS released Lambda Layers with custom Runtime API support at the tail end of 2018, allowing a compiled binary to act as a runtime on Lambda. This removed the need for the hacky “Node.js spawning a PHP process” solution. But it does not eliminate all the complexity…

After building our custom runtime layer (see my article Serverless Anything), we were away!
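For readers who haven’t seen a custom runtime before, the heart of such a layer is a bootstrap executable that Lambda launches and that loops against the Runtime API: fetch the next event, run the handler, post back the result. The snippet below is a minimal, hypothetical PHP sketch of that loop, not our production file. It assumes the layer ships a PHP binary at /opt/bin/php with the curl extension enabled, and that the function package contains a handler.php defining handler($event).

#!/opt/bin/php
<?php
// Minimal sketch of a custom runtime bootstrap (illustrative only).
$api = getenv('AWS_LAMBDA_RUNTIME_API');
require getenv('LAMBDA_TASK_ROOT') . '/handler.php'; // a real bootstrap resolves the _HANDLER env var instead

while (true) {
    // 1. Long-poll the Runtime API for the next invocation event.
    $requestId = null;
    $ch = curl_init("http://$api/2018-06-01/runtime/invocation/next");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $header) use (&$requestId) {
        if (stripos($header, 'Lambda-Runtime-Aws-Request-Id:') === 0) {
            $requestId = trim(substr($header, strlen('Lambda-Runtime-Aws-Request-Id:')));
        }
        return strlen($header);
    });
    $event = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // 2. Run the handler shipped in the function package.
    $result = handler($event);

    // 3. Post the result back so Lambda can return it to the caller.
    $ch = curl_init("http://$api/2018-06-01/runtime/invocation/$requestId/response");
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($result));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}

Lambda picks this file up because a custom runtime is simply a layer (or package) containing an executable named bootstrap, with the function’s runtime set to provided.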

The Good

Lambda layers are a nice abstraction and play well with the Serverless Framework (SLS). They can also be deployed separately from your function code, which keeps deploy times for individual functions down: the whole PHP binary does not need to be zipped and sent to AWS for every small code change.
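As a rough illustration of that separation (service, layer and function names here are made up, not the project’s actual configuration), a serverless.yml along these lines declares the runtime layer once and has each function reference it, so only the small function package is re-uploaded when code changes:

service: php-on-lambda

provider:
  name: aws
  runtime: provided            # the custom runtime comes from the layer, not AWS

layers:
  php:
    path: layers/php           # directory holding the bootstrap file and the compiled PHP binary

functions:
  hello:
    handler: handler.php       # passed to the bootstrap via the _HANDLER environment variable
    layers:
      - { Ref: PhpLambdaLayer }   # CloudFormation reference the framework generates for the "php" layer

Because the layer is versioned and published separately, updating the PHP binary is a layer deploy, while day-to-day function deploys stay small.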

The layers were also committed to the codebase, so many of the developers on the project didn’t have to think about the complexity they added. It was abstracted away nicely, even though deployment times were a little slow.

The Bad

As you will notice in the above article, the build process is non-trivial and error-prone. Every update of PHP or change to the underlying AMI of AWS Lambda requires a rebuild, and the steps can vary with changes in PHP or the AMI/OS.

One example of this was a change in AMI version that affected the OpenSSL version used in the build process: the AMI shipped 1.0.2k binaries while we were compiling PHP against 1.0.1k headers. The build had to be adapted to isolate it from this change, and the debugging process was (to be blunt) long.

It didn’t feel like Serverless but more Serverfull — without the convenience of being able to ssh and debug.

On top of this, the general complexity added by a custom runtime, the confusion caused to the team by the bootstrap file (the interface between the AWS Custom Runtime API and our binary), and the lack of tooling for local development with custom runtimes make it a challenge, especially for teams new to Serverless.

Conclusion

AWS Lambda layers have many and varied use cases. Providing custom runtimes via layers is much better than the previous hacks, and it opens up legacy migration and support for teams that can’t move away from a language Lambda does not natively support.

That said, the lack of examples, the complexity added to your build process, and the impact on the local development experience make a natively supported runtime the go-to option whenever there is a choice.

It seems we are not free of the complexities of servers yet, but using a natively supported runtime helps us get closer to that Nirvana.
