Timothy Jones
Sep 4 · 7 min read

Most Lambda deployments use environment variables to pass configuration to the deployed function. It’s the recommendation that AWS makes in the documentation, and it’s an attractive choice because it’s easy, encrypted at rest, and allows for flexibility in how you get the values there in the first place.

There are already many articles with good recommendations about Lambda configuration. Why should you read this one?

Instead of comparing and contrasting approaches, this is a how-to guide for anyone whose primary goal is minimising cost without compromising scalability or security. If you have additional or different needs, I recommend reading those other articles as well.

This guide is aimed at small to medium teams working in contexts where security matters, but fine-grained permission management might not.

Just tell me the answer

If you’re here from Google and just want a recommendation, feel free to skip to the end for the summary. If you want the detailed rationale behind the recommendation, read on.


Secrets in Environment Variables

Storing configuration in environment variables is fine for details that aren’t secret (like API endpoint locations, hostnames, public keys, etc). However, it’s not so effective for configuration that is potentially sensitive like passwords or API keys.

Even though Lambda environment variables are encrypted at rest, they’re visible to anyone who has the permissions to see the Lambda in the console. This isn’t great, as it violates the principle of least privilege — there’s no need for sensitive data to be easily accessed by people or services that don’t need it.

There’s a practical problem too — having environment variables store sensitive data next to non-sensitive data makes it much more likely that values are accidentally exposed by logs from CloudFormation or general execution. It’s harder for people to remember to protect sensitive data if they’re used to it displaying in the course of their daily work, especially if it is displayed next to non-sensitive data.

It’s worth noting that even though the Serverless Secrets Plugin uses this approach, the documentation starts with the following warning:

IMPORTANT NOTE: As pointed out in the AWS documentation for storing sensible information Amazon recommends to use AWS KMS instead of environment variables like this plugin.

That is, AWS recommends not putting plaintext secrets in your environment variables, and I agree.


Environment Variables Encrypted With KMS

As the quote suggests, AWS recommends encrypting environment variables with KMS beforehand. This means the encrypted version is all that would be exposed in the console. The approach is straightforward: you use KMS to encrypt the value before putting it into the environment variable, and then you decrypt it with KMS inside your Lambda code. Here’s a nice walkthrough.

If all you care about is price and security, this approach is perfect. However, since we’re looking for something that scales nicely, it would be better if we had some centralised configuration.


Centralising the Configuration

AWS offers two main approaches for centralised configuration: Secrets Manager and Systems Manager’s Parameter Store (confusingly abbreviated as SSM Parameter Store).

Secrets Manager

Secrets Manager is a great fit if you need detailed control over when and where each secret can be used. However, with both storage and access costs, it is considerably more expensive than the free basic storage of Parameter Store.

Because of the increased cost, I’m not considering Secrets Manager here. However, if it’s right for your needs, this tutorial is a good starting point.


Secrets in SSM’s Parameter Store

Parameter Store has a couple of nice features, including hierarchical parameters. This means you can name parameters with a path for ease of retrieval. For example, if you name your parameters with namespaces separated by slashes:

aws ssm put-parameter \
    --name "/your/app/some_value" \
    --type "SecureString" \
    --value "foo is a value"

aws ssm put-parameter \
    --name "/your/app/other_data" \
    --type "SecureString" \
    --value "another value"

Then you can give your function permissions to retrieve all parameters under that path:

- Effect: Allow
  Action:
    - ssm:GetParameters
    - ssm:GetParameter
  Resource: 'arn:aws:ssm:<REGION>:<ACCOUNT>:parameter/your/app/*'

Getting Parameters at Runtime

Another neat feature is the ability to retrieve the parameters at runtime. This is nice — it decouples deployment and configuration. If you want to do this, here’s a helper gist I wrote that smooths the process for node Lambdas.

However, Parameter Store has a drawback: the throughput is very low. Each parameter retrieved with get-parameters counts as one request and you’re only allowed 100 requests per second.

You can increase this limit, but doing so raises the price, and still only allows 1,000 requests per second.

Aside: If you go this route, I strongly recommend loading the Parameter Store parameters outside your handler, so that they will be loaded once per Lambda instance, not once per handler execution.


Reading Encrypted Values From Parameter Store at Deploy-Time

So, for nicer scaling, we want centralised configuration. Parameter Store’s free standard tier is attractive, but retrieving values at runtime won’t scale because of the requests-per-second limit.

What about reading the encrypted data (without decrypting) at deploy time? Since Parameter Store is backed by KMS, we could read Parameter Store’s secure parameters at deploy time without decrypting, and decrypt at runtime using the KMS API.

If you’re using the serverless framework, this is actually really straightforward:

${ssm:/path/to/secureparam~false} 

The false says “please don’t decrypt this”.

However, I prefer not to use the Serverless framework — and using SecureString params like this isn’t supported by CloudFormation.

Aside: if you’re using the Serverless framework, and want to go this route, note that you will need to remember the parameter’s ARN to use as encryption context during decryption. More details here.
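In code, supplying that encryption context looks something like the following sketch. This assumes aws-sdk v2; the `PARAMETER_ARN` key is the encryption context SSM uses for SecureString values, and the ARN itself is a placeholder you would substitute with your parameter’s ARN:

```javascript
// Sketch: decrypting a SecureString value that was read (still encrypted)
// at deploy time. SSM encrypts SecureStrings using the parameter ARN as
// KMS encryption context, so the same context must be passed to decrypt.

// Pure helper: build the encryption context SSM uses for SecureStrings.
function ssmEncryptionContext(parameterArn) {
  return { PARAMETER_ARN: parameterArn };
}

async function decryptSecureStringValue(base64Ciphertext, parameterArn) {
  const AWS = require('aws-sdk'); // deferred require keeps the helper standalone
  const kms = new AWS.KMS();
  const result = await kms
    .decrypt({
      CiphertextBlob: Buffer.from(base64Ciphertext, 'base64'),
      EncryptionContext: ssmEncryptionContext(parameterArn),
    })
    .promise();
  return result.Plaintext.toString('utf8');
}

module.exports = { ssmEncryptionContext, decryptSecureStringValue };
```

Without the matching encryption context, the `kms:Decrypt` call fails even when the caller has permission on the key.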


Secure Parameters and CloudFormation

If you’re using CloudFormation directly, then SecureString parameters are not supported. You can’t use them as parameters:

# Broken CloudFormation:
Parameters:
  SomeParam:
    Type: 'AWS::SSM::Parameter::Value<String>'
    Default: '/path/to/secureparam/'

## ERROR ##
# An error occurred (ValidationError) when calling the CreateChangeSet
# operation: Parameters [/path/to/secureparam] referenced by template have
# types not supported by CloudFormation.

Nor can you use them outside the approved locations:

# Broken CloudFormation:
Resources:
  ReceiveLambda:
    Type: AWS::Serverless::Function
    Properties:
      Environment:
        Variables:
          VAR_NAME: '{{resolve:ssm-secure:/path/to/secureparam:1}}'

## ERROR ##
# Failed to create the changeset: Waiter ChangeSetCreateComplete failed:
# Waiter encountered a terminal failure state Status: FAILED. Reason: SSM
# Secure reference is not supported in:
# [AWS::Lambda::Function/Properties/Environment/Variables/VAR_NAME]

However, you can use String parameters, which are not encrypted by Parameter Store. So, I propose encrypting parameters with KMS first.

Aside: The excellent StackMaster tool adds better support for SSM’s Parameter Store (and many other useful features). However, it still decrypts SecureString parameters, which means they won’t be encrypted in the Lambda console.


Recommendation: KMS + Parameter Store Strings

Instead of using Parameter Store’s SecureStrings to abstract away the KMS step, I recommend using KMS to encrypt strings that go into Parameter Store as plain String parameters. This provides:

  • Low cost ($1 per month for a KMS customer managed key)
  • Scalability (no runtime rate limits, since parameters are only read at deploy time)
  • Security (only ciphertext is ever visible in the Lambda console)
  • Flexibility of use (works with plain CloudFormation)

This has some drawbacks:

  • Not completely free
  • Config only updated at deploy time (but for extra credit, you could use Parameter Store’s triggers to kick off deployment whenever the parameters change)

Manual Encryption of Parameters With KMS

First, you’ll need to create a customer managed key in KMS. Currently, this costs $1 per month, but as long as you have at least three secrets, it’s still cheaper than Secrets Manager.

This command will insert an encrypted String parameter into Parameter Store:

aws ssm put-parameter \
    --type String \
    --name '/YOUR/PARAM/NAME' \
    --value $(aws kms encrypt \
                --output text \
                --query CiphertextBlob \
                --key-id <YOUR_KEY_ID> \
                --plaintext "PLAIN TEXT HERE")

Your user must have kms:Encrypt permission on the key you created above. Note that KMS IAM permissions need the key ARN, and won’t work with the key alias ARN.


Use Parameter Store in CloudFormation

There are a few ways to get your SSM Parameters into your CloudFormation stack. Here’s one pattern I particularly like:

Parameters:
  SomeParameter:
    Type: AWS::SSM::Parameter::Value<String>
    Default: '/your/param/name' # This is your parameter name

Resources:
  ReceiveLambda:
    Type: AWS::Serverless::Function
    Properties:
      Environment:
        Variables:
          SOME_PARAMETER: !Ref SomeParameter

Decrypt at Runtime Using KMS in Your Lambda

First, you will need the following permission on your Lambda’s execution role:

- Effect: Allow
  Action:
    - kms:Decrypt
  Resource:
    - <KEY_ARN> # Note: the key ARN, not the key alias ARN

Then, you can decrypt the parameters. Here’s a code snippet that will do it in node:

That’s it!

Aside: Now all you have to do is be careful not to log or otherwise expose the decrypted secret.


Summary

To have cheap, scalable configuration without unnecessarily exposing your secrets:

  • Manually encrypt your parameters using KMS before saving them to Parameter Store as a plain string (here’s a bash script to make this ultra-easy)
  • Use the Parameter Store in your deployment lifecycle, ending up in an environment variable on the Lambda
  • Decrypt the environment variable at runtime using KMS (here’s an example node.js module you could crib from).
  • Do the decryption on function load instead of in the handler to minimise KMS calls.

I’ve written and shared a bash script to help with creating KMS-encrypted parameters. If you’re using node for your Lambdas, I’ve shared this module as a starting point for your decryption.


Better Programming

Advice for programmers.

Thanks to András Bubics and Zack Shapiro
