Payam Moghaddam
Aug 23, 2017

Thank you for your response, Tim.

Your points are true and valid, and they represent an alternative approach to tackling Serverless development. You are correct that start-up time can be optimized; however, that optimization comes at the cost of the development experience.

If we could make cold start-up times negligible, we would be able to scale fast enough that callers would not notice the difference. That is why many people are focused on writing really small, focused, and fast functions: to keep cold start-up times negligible. That is very much your point, and it is why your criticism that I have given “bad advice on actually writing the lambda” has merit.

However, we are not there yet, for two reasons. First, we do not yet have all the capabilities needed to make a full-featured Lambda function start cold quickly, as the KMS decryption overhead I mentioned illustrates. Second, writing such tiny functions carries development-time overhead. As in my example, I needed an IP address parser; it is not viable for me to rewrite such a capability just to make it slightly faster. I still want to develop quickly and use what the community has already built. Arguably, I could have avoided Lodash; however, I preferred the faster development experience it gave me.
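To make the KMS point concrete, here is a minimal sketch of the usual pattern: decrypt the secret once at module load, so the decryption cost lands on the cold start and is reused on every warm invocation. This is not the code from my story; the AWS SDK v3 client, the ENCRYPTED_API_TOKEN variable name, and the handler shape are illustrative assumptions.

```typescript
import { KMSClient, DecryptCommand } from "@aws-sdk/client-kms";

const kms = new KMSClient({});

// Kicked off at module load (cold start); warm invocations reuse the resolved promise.
// ENCRYPTED_API_TOKEN is a hypothetical environment variable holding a base64 ciphertext.
const tokenPromise: Promise<string> = kms
  .send(
    new DecryptCommand({
      CiphertextBlob: Buffer.from(process.env.ENCRYPTED_API_TOKEN ?? "", "base64"),
    })
  )
  .then((res) => Buffer.from(res.Plaintext as Uint8Array).toString("utf-8"));

export const handler = async (event: unknown) => {
  // On a warm container this await resolves immediately; only the cold start pays for KMS.
  const token = await tokenPromise;
  // ... use the decrypted token to process the event ...
  return { statusCode: 200 };
};
```

Even with this pattern, the KMS round-trip is still part of the cold start itself, which is exactly the overhead I was referring to.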

The ideas presented in this story are oriented around having a familiar development experience for a focused feature (e.g. CloudTrail parsing), one that is rapid to develop, easy to understand, and not locked into any platform.
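As a sketch of what I mean by “not locked into any platform”: keep the feature logic in a plain function and make the Lambda handler a thin adapter around it. The types, names, and simplified event shape below are assumptions for illustration, not the code from the story.

```typescript
// Plain, portable logic with no AWS-specific dependencies.
export interface TrailRecord {
  eventName: string;
  sourceIPAddress: string;
}

export function summarizeRecords(records: TrailRecord[]): Record<string, number> {
  // Count events per source IP address.
  return records.reduce<Record<string, number>>((acc, r) => {
    acc[r.sourceIPAddress] = (acc[r.sourceIPAddress] ?? 0) + 1;
    return acc;
  }, {});
}

// The only part that knows about Lambda: a thin handler delegating to the portable logic.
export const handler = async (event: { Records?: TrailRecord[] }) => {
  return summarizeRecords(event.Records ?? []);
};
```

The same summarizeRecords function could just as easily run in a container, a cron job, or a test suite, which is the kind of flexibility I was aiming for.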

If you have a use case that needs both fast cold start-ups and fast scalability, then yes, you need to trade off some development experience and make the function as focused, vanilla, and fast as possible.
