Breaking down the AWS Lambda Shared Responsibility Model

Costas Kourmpoglou, Airwalk Reply
Apr 27, 2021

This is the first of a series of posts that summarise the most common security areas that our teams have addressed. The aim is to give you a high-level, non-exhaustive overview of the core areas of Lambda security, along with a reusable pattern that can serve as a starting point or a sandbox environment.

AWS have released many white papers over the years, ranging from the security pillar of the Well-Architected Framework to detailed industry compliance artefacts. Arguably, at the core of the security and compliance of these services, we always find the shared responsibility model.

It serves as a great point of reference for understanding the responsibilities of AWS and the Customer. It can be used as the basis of a threat modelling, security control, or team ownership dialog.

Lambda offers fantastic flexibility, cost, and scaling benefits. It can cater to a variety of workloads; however, you might hear people say:

“I reach for a Lambda function every time I want to ̶e̶s̶c̶a̶l̶a̶t̶e̶ ̶m̶y̶ ̶p̶e̶r̶m̶i̶s̶s̶i̶o̶n̶s̶. run a script.”

We will take a closer look at the shared responsibility model of AWS Lambda and walk through AWS’ and the Customer’s responsibilities.

AWS Lambda Shared Responsibility Model

Security “of” the cloud — AWS’ responsibilities

Components that are AWS responsibilities are outlined on the bottom half of the diagram. The responsibility of the security of the Execution Environment and Runtime Language falls within AWS’ remit. However, we recognise this as a shared control.

Firstly, the Customer can provide their own runtime: building a custom runtime or an OCI image shifts the responsibility towards the Customer. Secondly, if an AWS-provided runtime has reached its end of life, it’s the Customer’s responsibility to upgrade to a supported runtime. In certain cases, AWS will continue to support EOL runtimes. E.g., Python 2 reached end of life on 1 January 2020, however AWS will continue providing critical security updates for Python 2.7 on AWS Lambda until June 1, 2021.

The second component that we identify as a shared control is the Networking Infrastructure. Lambdas by default start in an AWS Lambda Service VPC. The caveats to that are:

  • Egress is open to everywhere by default, and you cannot apply security groups.
  • Communication with other resources in your account happens over the Internet.

Depending on your organisation’s threat model, risk appetite, and potentially the regulatory environment you operate in, deploying Lambdas outside of your organisation’s VPCs may not be acceptable.

When deploying inside a VPC, the responsibility for the Networking Infrastructure partially shifts towards the Customer, as the function will no longer solely exist in the AWS Lambda Service VPC.

Intermission — How Lambda works

Lambda runs on a fleet of Amazon EC2 instances — using Firecracker — in AWS-managed accounts. This diagram explains how it all fits together, from bare metal to the Lambda Sandbox. When you execute a function outside of a VPC, the Lambda can access the internet by default, but it’s not “dual-homed”: to access private or otherwise VPC-bound resources, you’d need to make them publicly available, and you cannot utilise PrivateLink. This is one of the reasons why many workloads will need to be deployed in a VPC.

Lambda’s Execution Context

How Lambda works inside your VPC

With the introduction of AWS Hyperplane, each Lambda is still launched inside the AWS Lambda Service VPC Account(s), however, the connection flow is mapped directly to an ENI (Elastic Network Interface) in your VPC.

Network Interfaces in the Customer’s VPC map to Hyperplane ENI

For some visual aid, the diagram from the AWS Compute Blog shows what happens when you place a Lambda inside your VPC. In turn, this means that the Customer is responsible for the Security Groups and the availability of ENIs in their VPC. The ENI is attached cross-account to your VPC. While the networking of Lambda outside of the Customer’s VPC is managed by AWS, the Customer is still responsible for planning the capacity and availability on their side, as well as its security. Concretely, this means the size of the VPC, its subnets, and the Security Groups, respectively.

The good news is that with Hyperplane, your function scaling is no longer directly tied to the number of network interfaces. I.e., there is no one-to-one mapping between a function and an ENI. Functions in the same account, with the same security group to subnet mapping, will use the same ENI.

Security “in” the cloud — The Customer’s Responsibilities

Moving on to the top half of the shared responsibility model, we’ll focus on the Customer’s responsibilities: Resource Configuration, Customer Function Code and Libraries, and Identity & Access Management.

Resource Configuration:

Outside of the configuration of the resources — memory, timeout, and concurrency — another feature of Lambda is the reusable, writable 512MB /tmp directory. The same execution context, including the /tmp directory, might be reused across invocations. While this can reduce the execution time of a Lambda, e.g. by downloading a machine learning model once and effectively caching it, the same pattern has been used to chain vulnerabilities and gain persistence on an otherwise ephemeral environment. The Customer might need to restrict the conditions under which the /tmp directory is reused.
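The caching side of this behaviour can be illustrated with a minimal sketch. The names `MODEL_PATH`, `download_model`, and the payload are hypothetical; on Lambda the cache directory would literally be /tmp, but the sketch uses the local temp directory so it runs anywhere:

```python
import os
import tempfile

# Hypothetical cache location; on Lambda this would be under /tmp.
MODEL_PATH = os.path.join(tempfile.gettempdir(), "model.bin")

def download_model(path):
    # Stand-in for fetching a large artefact, e.g. from S3.
    with open(path, "wb") as f:
        f.write(b"model-weights")

def handler(event, context=None):
    # The file survives when the execution context is reused, so the
    # download only happens on a cold start. The same persistence is
    # what an attacker-planted file in /tmp would rely on.
    if os.path.exists(MODEL_PATH):
        source = "cache"
    else:
        download_model(MODEL_PATH)
        source = "download"
    return {"model_source": source}
```

The second “warm” invocation skips the download entirely, which is exactly the behaviour that makes /tmp both a performance win and a persistence risk.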

Two more areas which we’ll include as part of the resource configuration are environment variables and secrets. Although these topics overlap with IAM and function code, it’s worth mentioning that environment variables are part of Lambda’s function configuration. You can get the content of the environment variables of a function with `get-function-configuration`. This can be problematic if your secrets management pattern involves storing secrets in environment variables. You’ll most likely need to consider using AWS KMS or an alternative pattern altogether.
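Although the CLI call itself needs a real account, the shape of its response can be worked with offline. Below is a small audit sketch, assuming a hypothetical function name and an illustrative keyword heuristic, that flags secret-looking variable names in a configuration shaped like the `get-function-configuration` response:

```python
# Heuristic keyword list; chosen for illustration only.
SUSPECT_KEYWORDS = ("SECRET", "TOKEN", "PASSWORD", "API_KEY")

def suspect_env_vars(function_config):
    # The dict shape mirrors the get-function-configuration response:
    # {"Environment": {"Variables": {...}}}
    variables = function_config.get("Environment", {}).get("Variables", {})
    return sorted(
        name for name in variables
        if any(keyword in name.upper() for keyword in SUSPECT_KEYWORDS)
    )

# Hypothetical function configuration with a secret stored in plain text.
sample = {
    "FunctionName": "orders-api",
    "Environment": {"Variables": {"DB_PASSWORD": "hunter2", "LOG_LEVEL": "INFO"}},
}
print(suspect_env_vars(sample))  # prints ['DB_PASSWORD']
```

Anything this kind of check can find, any principal with `lambda:GetFunctionConfiguration` can read too, which is the core of the problem.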

Customer Function Code and Libraries:

The Lambda runtime, code, and any other libraries are the Customer’s responsibility. The associated risks largely fall under the following categories:

  • Vulnerable libraries and code
  • Lambda layers and their permissions

Vulnerabilities in, and functionality of, the Customer’s code are their responsibility. Secure software development processes apply: static code analysis, dependency scanning, reviews, etc. In addition, Lambda layers and their respective IAM permissions need to be fine-tuned, if in place. Lambda layers allow you to package libraries and runtimes as a ZIP file. You can then configure a function to use this layer, without needing to package the dependencies every time. This is a great feature that allows code to be reused and increases the flexibility of Lambda. Layers also come with their own set of permissions that can allow other accounts to use them. For complex deployments or to enforce a gatekeeper integrity control, you might want to consider code signing.
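As a sketch of keeping layer sharing deliberate rather than accidental, here is a hypothetical serverless.yml fragment (the layer name, directory, and account ID are placeholders) that publishes a layer and restricts which accounts may use it:

```yaml
# serverless.yml (fragment) — names and account ID are hypothetical.
layers:
  sharedDeps:
    path: layer            # directory packaged as the layer ZIP
    allowedAccounts:       # only these accounts may use the layer version
      - "111111111111"

functions:
  api:
    handler: handler.main
    layers:
      - { Ref: SharedDepsLambdaLayer }
```

Listing explicit accounts instead of `"*"` keeps the layer’s resource policy from quietly becoming a public distribution channel for your dependencies.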

Identity & Access Management

The Lambda Resource Policy

Since Lambdas can be triggered by different services, they also have their own resource policy. If you define a resource policy with a Principal of ‘*’ without any conditions, anyone who knows the ARN can invoke the Lambda. In that regard, if you’re not familiar with the AWS IAM model, you can think of a Lambda as an S3 bucket that can execute code. The only IAM permission required in order to invoke a Lambda function is “lambda:InvokeFunction”. Lambda integrates with approximately 140 AWS services. This increases flexibility by allowing cross-account or even anonymous invocation; however, it also increases the attack surface.

If an IAM Identity within your account is allowed to add an integration, then it can invoke the Lambda. Make sure that you limit who can change a Lambda’s resource policy and that you have visibility of which Identities or AWS services can invoke your Lambdas.
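To make the contrast with a wide-open Principal concrete, the following sketch builds a scoped resource-policy statement, shaped like the statements that `aws lambda add-permission` produces, that lets S3 invoke a function only on behalf of one bucket in one account. All ARNs and the account ID are hypothetical placeholders:

```python
def s3_invoke_statement(function_arn, bucket_arn, account_id):
    # Mirrors the shape of a statement added via `aws lambda add-permission`.
    return {
        "Sid": "AllowS3InvokeFromOneBucket",
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "lambda:InvokeFunction",
        "Resource": function_arn,
        # Without these conditions, any bucket that can reach the service
        # principal could trigger the function (the confused-deputy risk).
        "Condition": {
            "StringEquals": {"AWS:SourceAccount": account_id},
            "ArnLike": {"AWS:SourceArn": bucket_arn},
        },
    }

stmt = s3_invoke_statement(
    "arn:aws:lambda:eu-west-1:111111111111:function:my-fn",
    "arn:aws:s3:::my-trigger-bucket",
    "111111111111",
)
```

The two conditions together pin the invocation to a single source resource in a single account, rather than trusting the S3 service principal globally.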

The Lambda Execution Role

For the Lambda service to create ENIs in our VPC, it will need permissions to:

  • Create and Delete ENIs
  • Assign and Unassign private IP addresses

This is evident in the default minimum-permissions managed policy `AWSLambdaVPCAccessExecutionRole`, which the Serverless framework also attaches by default when deploying inside a VPC. The resources in this policy cannot be easily restricted, which is why it appears quite permissive. However, you can use permissions boundaries or specific IAM conditions to limit deployments to a specific set of subnets.
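As an illustration of such a condition (the subnet ARN, region, and account ID are placeholders), an identity policy along these lines could scope ENI creation to approved subnets:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateEniOnlyInApprovedSubnets",
      "Effect": "Allow",
      "Action": "ec2:CreateNetworkInterface",
      "Resource": "*",
      "Condition": {
        "ArnEquals": {
          "ec2:Subnet": [
            "arn:aws:ec2:eu-west-1:111111111111:subnet/subnet-0abc1234"
          ]
        }
      }
    }
  ]
}
```

The `ec2:Subnet` condition key ties the otherwise broad `Resource: "*"` to a known set of subnets, which is usually the tightest practical scoping for this action.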

Finally, since Lambdas are themselves resources, they are able to trigger other Lambdas. If your function or role is able to invoke any function, it can be problematic, especially in a cross-account context. E.g., the managed policy `AWSLambdaRole` allows you to trigger functions in other AWS accounts as well (of course, the resource policy of the target function comes into play). This could be utilised as an exfiltration method, and the invocation will not even appear in your CloudTrail, as it is not your function that is being run.


Deployment roles depend on the AWS and Amazon services that the application needs. If you’re deploying using CloudFormation, you can query the CloudTrail events of your already-deployed stack as a starting point.

Going back to our VPC requirement, we will also need to ensure that Lambda deployments are limited only to specific VPCs and their respective resources. This way we can enforce the environments in which deployments of Lambda functions can take place. This is achieved by deploying a Service Control Policy and/or an IAM Boundary Policy.
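A hypothetical SCP along these lines could enforce the VPC requirement. The VPC ID is a placeholder, and because negated condition operators match when the key is absent, this also denies creating functions with no VPC configuration at all:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLambdaOutsideApprovedVpc",
      "Effect": "Deny",
      "Action": [
        "lambda:CreateFunction",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "lambda:VpcIds": "vpc-0abc1234"
        }
      }
    }
  ]
}
```

Covering `UpdateFunctionConfiguration` alongside `CreateFunction` matters, as otherwise a compliant function could be moved out of the VPC after deployment.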

We’ve built a solution to showcase what this can look like. You can also use it as a starting point when building your own secure, reusable pattern. Take a look at our GitHub repo. We’re using the Serverless framework to manage the deployment of the infrastructure and the packaging of the application.

The infrastructure is defined in CloudFormation but deployed using the Serverless framework. This way we separate the infrastructure and application code whilst keeping everything in one place. We can then reuse the infrastructure components, e.g. Subnets, to reference where our Lambdas will be deployed.
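As a sketch of that referencing pattern, a serverless.yml fragment can pull the VPC details from a separately deployed CloudFormation stack’s outputs via the Serverless framework’s `${cf:stackName.outputKey}` variable syntax. The stack name and output keys here are hypothetical:

```yaml
# serverless.yml (fragment) — stack and output names are hypothetical.
provider:
  name: aws
  runtime: python3.8
  vpc:
    securityGroupIds:
      - ${cf:core-network.LambdaSecurityGroupId}
    subnetIds:
      - ${cf:core-network.PrivateSubnetAId}
      - ${cf:core-network.PrivateSubnetBId}
```

Keeping the network stack’s outputs as the single source of truth means every function lands in the approved subnets without duplicating IDs across services.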


We’ve talked about the core areas of the shared responsibility model and what you’ll need to consider when securing and operating workloads using AWS Lambda. You might be thinking that by deploying a Lambda inside a VPC you’ll be missing out on the flexibility; however, once you establish your applications’ trust boundaries and services, you can build reusable foundations.

The Good — We love the flexibility of Lambda, IAM conditions can limit you to launching in a VPC, and SAM and the Serverless framework make Lambda development a breeze.

The Bad — You will most likely need to provide a secure network boundary that includes VPC endpoint policies to limit what resources can be accessed.

The Ugly — There is groundwork that needs to be done before developing Lambdas in a VPC. This might create a steep learning curve for all the components that will need to be put in place. We’ll cover VPC endpoints and VPC endpoint services in the next article.

Although you might need to invest in building the foundations when deploying Lambdas to a VPC, you can reuse and extend the infrastructure once you’re comfortable with how your workload operates, resulting in your entire infrastructure having a greater level of security. The shared responsibility model is yet another way of framing most security, compliance, and operational dialogs within an organisation.