Awfully Thorough Guide to Choosing the Best Serverless Solution, Part 1.1: AWS Lambda

Ilya Kritsmer
Jul 15

If you’re responsible for building cloud solutions, your life is far from easy. Chances are you’re shooting in the dark, or, more precisely, shooting into blinding light. Amazon, Microsoft, Google and IBM — to name just the leading “big four” vendors — see their cloud businesses as major revenue engines and are therefore investing heavily to push that portion of their portfolios forward. As a result, the number of cloud computing offerings grows exponentially every year, flooding DevOps engineers, architects, CTOs and information management decision-makers with a seemingly endless stream of potential new services, evolving technologies and — frankly — a whole lot of marketing b*lshit aimed at catching their attention and securing their business.

A new series of blog posts is coming to help you sort through the chaos and compare today’s cloud options. We’ll divide the options into categories in a concise and logical manner and tackle NoSQL database offerings from the leading providers, a variety of message queues, stream engines, API management systems, Big Data solutions and more. The series kicks off today with the concept that gets the most attention and generates the most hype these days. That’s right: serverless computing. Let’s jump right in.

So, what is Serverless Computing?

“Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.” — Wikipedia

Sounds great, doesn’t it? Assuming you know the general pros, cons and challenges of serverless computing for your specific architecture as well as the use case, you’re likely wondering which serverless computing offering to choose. There is more than one grade of “serverless” and in future posts we’ll examine some other services purists may say are not strictly serverless but something in between (like purgatory). In this post, we’ll offer information and observations regarding the most “serverless” approach possible — FaaS, or Function as a Service — to help you decide which option is best for you.

TL;DR: The Overall Winner

Photo: Dave Simonds

Amazon, Microsoft, Google and IBM have each unveiled their own FaaS solutions, with differing capabilities when it comes to Event Triggers, Supported Languages, Tooling and Debugging, Monitoring and Logging, Performance and Scaling, Security and Pricing.

The clear winner is, unsurprisingly, Amazon with its AWS Lambda. The pioneer FaaS offering is best in class in practically every aspect besides logging.

The second-place spot goes to Azure Functions from Microsoft, which is investing heavily to close the (still considerable) gap, specifically in the area of language support.

IBM Cloud Functions, part of the larger Apache OpenWhisk platform, offers excellent language support, but the IBM ecosystem is very closed off and isolated, which doesn’t appear to concern Big Blue.

Google Cloud Functions comes in last, with a solution that feels like the stepchild of the family. Important features like security and scaling are still in a pre-release state, even though Google’s offering launched back in 2016.

Other FaaS solutions are also available, from Cloudera, Oracle (RIP) and others, but because they remain far behind the “big four” options, or, in the case of Oracle, partner with the leaders, we don’t include them in our roundup.

So, let’s shed light on our rankings by drilling down to look at the Amazon, Microsoft, Google and IBM offerings in more detail.

Amazon’s AWS Lambda

Amazon became a pioneer in FaaS when it announced its AWS Lambda offering in November 2014. Over the years, Lambda, or Lambda Functions, has become synonymous with serverless computing, leaving other vendors no choice but to implement similar services of their own. Amazon defines Lambda Functions as an event-driven, serverless computing service that runs code in response to events and automatically manages the computing resources required by that code. Simple as that sounds, think about the events your application might generate and what it takes to develop lightweight, cost-effective pieces of code that run independently and handle those events. An event trigger might be your own custom code or any other AWS service your application uses, such as SQS.
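
To make that concrete, here is a minimal sketch of what an SQS-triggered handler might look like in Python; the function logic, field names and message format are illustrative, not taken from any real project:

```python
import json

def handler(event, context):
    """Minimal sketch of an SQS-triggered Lambda handler (names are illustrative).

    Lambda delivers SQS messages in the event's 'Records' list; each record's
    'body' field holds the raw message payload (assumed to be JSON here).
    """
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])
        # Process the message; here we just print it, which lands in CloudWatch Logs.
        print(f"Processing order {payload.get('orderId')}")
    return {"processed": len(records)}
```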

Note that security, and more specifically IAM, is handled at the Lambda creation stage. What is less obvious is that, by default, a Lambda does not have a publicly accessible URL. To create one, you need to expose the function either through an API Gateway (more on that — here) or through an ALB, as announced at re:Invent 2018.
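
For illustration, here is a boto3 sketch of one piece of that wiring: granting a hypothetical API Gateway API permission to invoke the function, which is what actually makes it reachable from outside. The function name and ARNs below are made up:

```python
import boto3

lambda_client = boto3.client("lambda")

# Sketch: allow an (illustrative) API Gateway REST API to invoke the function.
# Without a resource policy statement like this, the Lambda has no publicly
# reachable entry point.
lambda_client.add_permission(
    FunctionName="my-function",            # hypothetical function name
    StatementId="allow-apigateway-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:eu-west-1:123456789012:abcdef1234/*/GET/orders",  # illustrative ARN
)
```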

Lambda offers 70 templates that can handle event triggers from some of the most popular AWS services, such as Kinesis.

Event Triggers

Lambda supports multiple event triggers, which can be either events of the same type from multiple Kinesis shards or triggers from different services, as shown above. Every trigger can be tested easily via the GUI by configuring a sample event and then running the Lambda in test mode.
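
As a rough illustration of what the console’s test mode does under the hood, the following boto3 sketch invokes a function with a hand-crafted, Kinesis-style test event. The function name and payload are invented, and the event is heavily simplified compared to a real Kinesis record:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Simplified Kinesis-style test event; real records carry additional fields.
test_event = {
    "Records": [
        {
            "kinesis": {
                "partitionKey": "user-42",
                "data": "eyJjbGljayI6ICJidXkifQ==",  # base64 of {"click": "buy"}
            }
        }
    ]
}

response = lambda_client.invoke(
    FunctionName="my-function",                      # hypothetical function name
    Payload=json.dumps(test_event).encode("utf-8"),
)
print(response["Payload"].read().decode("utf-8"))    # the function's return value
```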

Supported Languages

AWS Lambda natively supports Java, Go, PowerShell (yes, PowerShell), Node.js, C#, Python and Ruby, and provides a Runtime API (an HTTP API through which custom runtimes receive invocation events from Lambda and send response data back), which allows you to author functions in any additional programming language. This official AWS Blog post demonstrates how to use it for a classic “Hello World!” written in C++. It’s cool, but not unique — IBM has a similar feature that leverages Docker (more on that later).
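
To give a feel for what a custom runtime actually does, here is a simplified Python sketch of the bootstrap loop against the Runtime API. In practice you would write this in the language you are bringing (C++ in the blog post’s example), and the handler logic here is just a stand-in:

```python
import json
import os
import urllib.request

# The Runtime API endpoint is passed to the runtime via this environment variable.
API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

while True:
    # 1. Block until Lambda hands us the next invocation event.
    with urllib.request.urlopen(f"{API}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Run the actual handler logic (trivial "Hello World!" stand-in here).
    result = json.dumps({"message": "Hello World!", "echo": event})

    # 3. Post the result back for this specific request id.
    urllib.request.urlopen(
        urllib.request.Request(
            f"{API}/invocation/{request_id}/response",
            data=result.encode("utf-8"),
            method="POST",
        )
    )
```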

It is possible either to write function code inline, upload it as a .ZIP, or pull it from S3. The latter makes integration with CI/CD trivial, and indeed there are plenty of solutions that let you make Lambda deployment part of a CI/CD pipeline. Even Azure DevOps, the Microsoft-owned platform, has plugins that work seamlessly with AWS Lambda.
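
As a sketch of the deployment step such a pipeline might run, the following boto3 call points an existing function at a freshly built .ZIP that is assumed to have already been uploaded to S3 (bucket, key and function name are illustrative):

```python
import boto3

lambda_client = boto3.client("lambda")

# Deploy a new build of the function code from an S3 artifact.
lambda_client.update_function_code(
    FunctionName="my-function",                       # hypothetical function name
    S3Bucket="my-build-artifacts",                    # illustrative bucket
    S3Key="builds/my-function-2019-07-15.zip",        # illustrative key
    Publish=True,  # publish a new version so the deployment is traceable
)
```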

Tooling and Debugging

Poor, incomplete tooling is one of the main barriers to widespread serverless adoption. Amazon was somewhat late to address this issue, but has recently invested a lot to improve the situation. The most important part of software development — local debugging — is now available through toolkits for some of the most popular IDEs, including JetBrains’ PyCharm and IntelliJ IDEA, Visual Studio and Visual Studio Code (with very limited functionality). The AWS CLI obviously supports Lambda too.

Monitoring and Logging

Automatic, built-in logging is available through AWS CloudWatch. Every Lambda comes with its own dedicated CloudWatch log group, which shows Lambda’s internal logs (e.g., run time and memory consumed) along with custom code logs, but this is far from an ideal approach. In practice, a centralized logging solution is almost mandatory here, whether a self-managed ELK stack or a third-party service like Loggly.
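
As a small illustration, a handler along these lines emits structured (JSON) log lines, which are much easier to ship to a centralized solution later; the fields chosen are just an example:

```python
import json
import logging

# Anything written via the logging module (or print) ends up in the function's
# dedicated CloudWatch log group, alongside Lambda's own START/END/REPORT lines.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(json.dumps({
        "request_id": context.aws_request_id,
        "remaining_ms": context.get_remaining_time_in_millis(),
        "event_keys": list(event.keys()),
    }))
    return {"ok": True}
```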

Additionally, there is a very useful monitoring dashboard that provides valuable insights, such as the most expensive calls. There is also an option to send alerts on Lambda timeouts and in other situations — take a look at Yan Cui’s comprehensive guide, “How to monitor Lambda with CloudWatch metrics,” for even more tips and tricks.
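
For instance, an alert on failures (timeouts are reported as errors too) can be wired up as a CloudWatch alarm on the function’s Errors metric. Here is a boto3 sketch with an invented SNS topic and function name:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm whenever the function records at least one error in a 60-second window.
cloudwatch.put_metric_alarm(
    AlarmName="my-function-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # illustrative ARN
)
```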

Advanced Features

External SDKs and frameworks are often necessary to handle events effectively, and until recently they had to be packaged with every single Lambda Function. To eliminate this obvious waste of resources, Amazon recently introduced Lambda Layers, which define an immutable “layer” holding common runtime libraries or shared custom code that can then be used across multiple Lambda Functions.
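
A rough sketch of the workflow: publish a shared-dependencies layer once, then attach it to a function instead of bundling the libraries into every deployment package. The layer name, zip contents and runtime are assumptions made for illustration:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the layer; the zip is assumed to contain a python/ directory
# with the packaged dependencies.
with open("common-deps-layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="common-deps",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the published layer version to a (hypothetical) function.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)
```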

Performance and Scaling

Lambda Functions have certain limitations in terms of function code size, runtime per invocation, memory per invocation and so on. The full list can be seen here. From a classic performance standpoint, the most interesting limits are concurrency and execution time: at the moment, 1,000 concurrent executions are allowed (why not 1,024?), with a maximum execution time of 15 minutes per invocation. Concurrency can be configured for the whole account or for an individual function. AWS’ official Understanding Scaling Behavior documentation discusses this in more detail. Spoiler: AWS Lambda is far from infinitely scalable.
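
Reserving concurrency for a single function is a one-liner; the sketch below caps an (illustrative) function at 100 concurrent executions so a traffic spike on it cannot exhaust the account-wide limit:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve a slice of the account-wide concurrency pool for this function.
lambda_client.put_function_concurrency(
    FunctionName="my-function",          # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```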

Another well-known performance issue, the so-called “cold start,” is inherent to any FaaS offering, including Lambda. The actual time it takes a Lambda to start depends strongly on parameters like runtime language, package size, VPC configuration and others, but the cold-start penalty will always be there. The team at Coinbase did a great job examining the issue thoroughly. A trivial workaround is to schedule a periodic invocation of the target Lambda function, but this is not without its flaws; Epsagon CTO Ran Ribenzaft dives deep into it in this article.
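
That workaround usually looks something like the sketch below: a scheduled rule invokes the function with a marker payload, and the handler returns immediately so the warm-up calls stay cheap. The “warmup” key is just a convention invented here for illustration:

```python
def handler(event, context):
    # Scheduled "keep warm" invocations carry a marker payload; short-circuit
    # so they consume almost no billed duration.
    if event.get("warmup"):
        return {"warmed": True}

    # ... normal request handling goes here ...
    return {"ok": True}
```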

Security

AWS takes security seriously. To demonstrate that, the company recently published the Security Overview of AWS Lambda whitepaper, which presents “a deep dive of the AWS Lambda through a security lens.” It comprehensively reviews Lambda’s architecture internals, including the isolation between functions and the underlying MicroVMs, gives a short list of compliance information and provides links for further reading.

Besides that, AWS Lambda leverages the standard AWS IAM role model, which is configured at the Lambda definition stage and can easily be modified later. Every function has a policy, called an execution role, that grants it permission to access AWS services and resources.
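
A minimal boto3 sketch of creating such an execution role might look like this. The role name is invented, and real functions typically need additional, narrowly scoped policies on top of the basic logging permissions shown here:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow the Lambda service to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-function-execution-role",           # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant permission to write logs to CloudWatch via the AWS-managed policy.
iam.attach_role_policy(
    RoleName="my-function-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```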

When assigning a Lambda an HTTPS endpoint via API Gateway and (probably) exposing it to the outside world, it is crucial to use the comprehensive set of API Gateway security features, including authentication, authorization and API keys (see the full review here). Note that the recently released Lambda/ALB integration does not let you leverage all of those features, but it is much cheaper at scale, as we’ll discuss below.

Last but not least, let’s consider VPC, aka Virtual Private Cloud (those who are not familiar with the concept can learn more here). Using anything other than the default VPC with Lambda has strong drawbacks, the first of which is a dramatically longer cold start (up to 10 seconds!). This great blog post examines the issue more thoroughly.

Pricing

The Lambda pricing model is based on the number of requests, the duration of each request and the amount of memory the Lambda needs to handle each request. In other words, the bill is defined by the number of requests plus the number of GB-seconds consumed. The Free Tier offers 1M requests and 400,000 GB-seconds per month (as of July 2019).
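
A back-of-the-envelope sketch, using the published prices as of mid-2019 ($0.20 per 1M requests and $0.0000166667 per GB-second; verify against the current price list before relying on these numbers) and ignoring the Free Tier:

```python
# Hypothetical workload: 50M invocations/month, 200 ms each, 512 MB allocated.
requests_per_month = 50_000_000
avg_duration_s = 0.2
memory_gb = 0.512

# GB-seconds = requests * duration * allocated memory.
gb_seconds = requests_per_month * avg_duration_s * memory_gb

request_cost = requests_per_month / 1_000_000 * 0.20     # $ per 1M requests
compute_cost = gb_seconds * 0.0000166667                  # $ per GB-second

print(f"GB-seconds: {gb_seconds:,.0f}")
print(f"Estimated monthly cost: ${request_cost + compute_cost:,.2f}")
```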

Shut up and take my money, you say? If you are not operating a service that has to run at scale constantly, you are right. Otherwise, a Lambda-based architecture would likely cost many times more than a service running in containers or VMs. Take a look at this insightful Yan Cui post to better understand common pitfalls of the Lambda pricing structure. A much cheaper way to run Lambda at scale is with ALB, rather than API Gateway, as this excellent post shows.

Wrap Up and Further Reading

Amazon offers a best-in-class FaaS solution. Language support, tooling, DevOps integration and security — everything but logging — are second to none. If you are new to FaaS and are choosing a vendor, go for AWS Lambda.

To read more, take a look at Best Practices for Working With AWS Lambda Functions, AWS Lambda FAQs and the comprehensive AWS Lambda Documentation.

Next Post in FaaS Series

Written by Ilya Kritsmer

Seasoned CTO, owner of a consulting company, mentor. Lover of acid jazz, single malt whisky and cats.
