In Depth AWS Lambda Overview
Every day, more and more people use AWS Lambda to achieve scalability, performance, and cost efficiency, and to serve millions or even trillions of requests every month, all without managing the underlying infrastructure and with automatic scaling to thousands of concurrent requests per second. It is fair to say that AWS Lambda is one of the most important services AWS offers today.
AWS Lambda is an event-driven, serverless compute service that lets you run code without provisioning or managing servers, and that can extend other AWS services with custom logic. Lambda can be triggered automatically in response to many kinds of events: HTTP requests through Amazon API Gateway, changes to data in an Amazon S3 bucket or an Amazon DynamoDB table, state transitions in AWS Step Functions, or direct API calls made with the AWS SDKs.
Lambda runs code on a highly available compute infrastructure and performs all of the administration of the underlying platform, including server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, and logging. With Lambda, you can simply upload your code and configure when and how to invoke it; Lambda takes care of everything else required to run your code with high availability.
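The "just upload your code" model comes down to writing a handler function. Here is a minimal sketch of a Python Lambda handler; the function name and event fields are illustrative, not prescribed by the text:

```python
import json

def lambda_handler(event, context):
    # Lambda calls the handler with the triggering event (a dict) and a
    # context object carrying runtime information (request ID, time left, ...).
    name = event.get("name", "world")
    return {
        # Shaped like an API Gateway proxy response: status code plus a JSON body.
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Invoked through API Gateway, `event` would carry the HTTP request details; invoked from S3 or DynamoDB, it would carry the change records instead.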
When to use Lambda?
AWS Lambda is a suitable compute platform for many application scenarios, provided that you can write your application code in the languages and runtime environments the service supports. If you want to focus only on your application code and business logic, and leave server maintenance, provisioning, and scaling to someone else at a good price, then AWS Lambda is well worth the migration.
Lambda is a great fit for building APIs: together with API Gateway, it lets you achieve fast time to market and optimize costs. There are many different ways to use Lambda functions, and a range of serverless design patterns that anyone can follow depending on their needs.
A wide variety of tasks can be implemented with Lambda functions. You can create cron jobs with the help of CloudWatch Events and automate recurring processes. Apart from the memory and execution-time limits, there are few restrictions on usage flows, and growing into full-fledged microservices applications built on Lambda is nice and smooth.
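A scheduled ("cron") function of this kind can be sketched as follows. This assumes a CloudWatch Events scheduled rule (for example with the schedule expression `rate(1 hour)`) targeting the function; scheduled events arrive with source `"aws.events"` and an ISO-8601 `"time"` field. The handler body is a placeholder:

```python
import datetime

def lambda_handler(event, context):
    # CloudWatch Events delivers the firing time as an ISO-8601 UTC string,
    # e.g. "2019-03-01T12:00:00Z".
    fired_at = datetime.datetime.fromisoformat(event["time"].replace("Z", "+00:00"))
    # ... the periodic work (cleanup, report generation, ...) would go here ...
    return {"ran_at": fired_at.isoformat()}
```

No server sits idle between firings; Lambda spins the function up only when the rule fires.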
The typical image-resizing example shows that Lambda lets us create service-oriented actions that don't have to be running all the time. Lambda functions are a good choice even for distributed systems.
So: if you have no need to provision and manage compute resources, try AWS Lambda; if you don't execute heavy, resource-consuming processing, try AWS Lambda; if your code doesn't need to run every second, try AWS Lambda.
So far so good. But the managed runtime environment model of AWS Lambda doesn't show what's under the hood: it intentionally hides many implementation details from the user, making some existing cloud security best practices irrelevant.
As with almost every AWS service, Lambda follows the shared responsibility model for security and compliance between AWS and the customer. This model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service runs. For AWS Lambda, AWS manages the underlying infrastructure and foundation services, the operating system, and the application platform. You are responsible for the security of your code, the storage and accessibility of sensitive data, and identity and access management (IAM) to the Lambda service and within your function.
The figure below shows the shared responsibility model for AWS Lambda. AWS responsibilities appear in orange and customer responsibilities appear in blue. AWS assumes more responsibility for applications deployed to Lambda.
Lambda Runtime Environment
The biggest benefit is that when Lambda executes a function on your behalf, it manages the provisioning of the resources necessary to run your code. This lets developers focus on business logic and writing code rather than administering systems.
The Lambda service is divided into two planes. In networking, as Wikipedia describes it, the control plane is the part of a network that carries signaling traffic and is responsible for routing. More specifically, a control plane is the master component responsible for making global decisions about provisioning, maintaining, and distributing a workload.
For Lambda, the two planes are the control plane and the data plane, and each serves a distinct purpose. The control plane provides the function management APIs (CreateFunction, UpdateFunctionCode) and manages integrations with all AWS services. The data plane controls the Invoke API that runs Lambda functions. When a Lambda function is invoked, the data plane allocates an execution environment to that function, or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment.
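From the API caller's side, the split looks like this. In the sketch below, `client` stands for a boto3 Lambda client (`boto3.client("lambda")`); the function names, role ARN, and handler path are illustrative assumptions, while `CreateFunction` and `Invoke` are the real API operations named above:

```python
import json

def deploy_function(client, name, role_arn, zip_bytes):
    # Control-plane call: CreateFunction registers the code package and
    # configuration with the Lambda service.
    return client.create_function(
        FunctionName=name,
        Runtime="python3.7",
        Role=role_arn,
        Handler="handler.lambda_handler",
        Code={"ZipFile": zip_bytes},
    )

def run_function(client, name, payload):
    # Data-plane call: Invoke allocates (or reuses) an execution environment
    # for the function and runs the code there.
    resp = client.invoke(
        FunctionName=name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(resp["Payload"].read())
```

Management traffic and invocation traffic thus go through entirely different API surfaces, which is exactly the control-plane/data-plane split.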
AWS Lambda provides support for multiple programming languages through the use of runtimes, including Java 8, Python 3.7, Go, Node.js 8, .NET Core 2, and others. Lambda maintains these runtimes, including updates, security patches, and other maintenance. You can use other languages in Lambda by implementing a custom runtime. For custom runtimes, maintenance becomes your responsibility, including making sure that the runtime you provide includes the latest security patches.
But how do things work? How are our functions executed under the hood?
Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and then destroyed. Each execution environment hosts one concurrent invocation, but is reused in place across multiple serial invocations of the same function. Execution environments run on hardware-virtualized virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. MicroVMs are packed onto an AWS-owned and managed hardware platform (Lambda Workers). Execution environments are never shared across functions, and microVMs are never shared across AWS accounts.
This isolation is achieved using several techniques. At a high level, each execution environment contains a dedicated copy of the following items:
- The function code
- Any Lambda layers selected for your function
- The function runtime
- A minimal Linux userland based on Amazon Linux
Execution environments are isolated from one another using:
- cgroups — Constrain resource access by limiting the CPU, memory, disk throughput, and network throughput available to each execution environment
- namespaces — Group process IDs, user IDs, network interfaces, and other resources managed by the Linux kernel. Each execution environment runs in a dedicated namespace
- seccomp-bpf — Limit the syscalls that can be used from within the execution environment
- iptables and routing tables — Isolate execution environments from each other
- chroot — Provide scoped access to the underlying filesystem
Along with AWS proprietary isolation technologies, these mechanisms provide strong isolation between execution environments. This isolation ensures that environments are not able to access or modify data that belongs to other environments.
Although multiple execution environments from a single AWS account can run on a single microVM, microVMs are never shared or reused between AWS accounts. At this time, AWS Lambda uses two different mechanisms for isolating microVMs: EC2 instances and Firecracker. EC2 instances have been used for Lambda guest isolation since 2015. Firecracker is a new open source hypervisor developed by AWS especially for serverless workloads, and was introduced in 2018. The underlying physical hardware running microVMs will be shared by workloads from multiple accounts.
Storage and State
Even though Lambda execution environments are never reused across functions, a single execution environment can be reused for invoking the same function, potentially existing for hours before it is destroyed.
Each Lambda execution environment also includes a writeable file system, available at /tmp. This storage is not accessible to other execution environments. As with the process state, files written to /tmp remain for the lifetime of the execution environment. This allows expensive transfer operations — such as downloading machine learning (ML) models — to be amortized across multiple invocations.
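The amortization pattern described above can be sketched as a small helper (the names and the `fetch` callable are illustrative, not part of any Lambda API): the expensive download runs only on a cold start, and warm invocations find the file already in `/tmp`.

```python
import os

def load_cached(path, fetch):
    # Download the artifact only if it is not already present in the
    # execution environment's /tmp; warm invocations of the same function
    # reuse the environment and skip the expensive transfer.
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(fetch())
    with open(path, "rb") as f:
        return f.read()

# Inside a handler this might look like (download_model is hypothetical):
#   model_bytes = load_cached("/tmp/model.bin", download_model)
```

Because the environment can live for hours, the cache survives across many invocations, but code should never assume it will: a cold start always begins with an empty `/tmp`.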
Invoke Data Path
The Invoke API can be called in two modes: event mode and request-response mode. Event mode queues the invocation for later execution. Request-response mode immediately invokes the function with the provided payload, and returns the response. In both cases, the actual function execution is done in a Lambda execution environment, but the payload takes different paths.
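The two modes can be sketched from the caller's side. In the snippet below, `client` stands for a boto3 Lambda client (`boto3.client("lambda")`); the `InvocationType` values `"Event"` and `"RequestResponse"` are the real API names for the two modes, while the wrapper function itself is illustrative:

```python
import json

def invoke(client, function_name, payload, asynchronous=False):
    resp = client.invoke(
        FunctionName=function_name,
        # "Event" queues the invocation and returns immediately (HTTP 202);
        # "RequestResponse" runs the function and returns its response.
        InvocationType="Event" if asynchronous else "RequestResponse",
        Payload=json.dumps(payload).encode("utf-8"),
    )
    if asynchronous:
        return resp["StatusCode"]  # any function response is discarded
    return json.loads(resp["Payload"].read())
```

Note that in event mode the caller gets back only an acceptance status; as the text explains below, the function's own return value is thrown away by the worker.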
For request-response invocations, the payload passes from the API caller — such as Amazon API Gateway or the AWS SDK — to a load balancer, and then to the Lambda invoke service. This service identifies an execution environment for the function, and passes the payload to that execution environment to complete the invocation. Traffic to the load balancer passes over the internet, and is secured with TLS. Traffic within the Lambda service (from the load balancer down) passes through a Lambda internal VPC within a single AWS region.
Event invocations can be executed immediately or queued for processing. In some cases, the queue is implemented with Amazon Simple Queue Service (Amazon SQS), and passed back to the Lambda invoke service by an internal poller process. Traffic on this path is secured with TLS, but no additional encryption is provided for data stored in Amazon SQS. For event invokes, no response is returned, and any response data is discarded by the worker. Invocations from Amazon S3, Amazon SNS, CloudWatch events, and other event sources follow the event invoke path in the Lambda service. Invocations from Amazon Kinesis and DynamoDB streams, SQS queues, Application Load Balancer, and API Gateway follow the request-response path.
You can monitor and audit Lambda functions with many AWS methods and services, including the following services:
- Amazon CloudWatch
CloudWatch reports metrics such as the number of requests, the execution duration per request, and the number of requests resulting in an error.
- Amazon CloudTrail
CloudTrail enables you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, providing a complete event history of actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
- AWS X-Ray
X-Ray’s end-to-end view of requests as they travel through your application shows a map of the application’s underlying components, so you can analyze applications during development and in production.
- AWS Config
With AWS Config, you can track configuration changes to the Lambda functions (including deleted functions), runtime environments, tags, handler name, code size, memory allocation, timeout settings, and concurrency settings, along with Lambda IAM execution role, subnet, and security group associations.
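As a small example of the CloudWatch side of this, the sketch below sums the `Errors` metric for one function over a recent window. Here `cloudwatch` stands for a boto3 CloudWatch client (`boto3.client("cloudwatch")`); the `AWS/Lambda` namespace and the `Errors` metric are real, while the wrapper itself is illustrative:

```python
import datetime

def lambda_error_count(cloudwatch, function_name, hours=24):
    end = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",          # Lambda publishes its metrics here
        MetricName="Errors",             # invocations that ended in an error
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=end - datetime.timedelta(hours=hours),
        EndTime=end,
        Period=3600,                     # one datapoint per hour
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in resp["Datapoints"])
```

The same pattern works for the other per-function metrics Lambda publishes, such as `Invocations` and `Duration`.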
AWS Lambda offers a powerful toolkit for building secure and scalable applications. Many of the best practices for security and compliance in AWS Lambda are the same as in all AWS services, but some are particular to Lambda. As of March 2019, Lambda is compliant with SOC 1, SOC 2, SOC 3, PCI DSS, U.S. Health Insurance Portability and Accountability Act (HIPAA), etc. As you think about your next implementation, consider what you have learned about AWS Lambda and how it might improve your next workload solution.
- Security Overview of AWS Lambda whitepaper
- Feature image — https://devclass.com/2018/10/12/aws-promises-everyone-their-15-minutes-of-lambda-excecution/