Serverless: Managing environment variables efficiently with stages

Hussain Ali Akbar
May 26, 2020


I recently started exploring the Serverless framework for managing and deploying our Lambda functions and found it to be extremely powerful. Almost everything I could wish for is available out of the box, and the remaining cases are handled through its extensive list of plugins.

I still, however, had to do some things differently in order to make the most of the framework, and I plan on documenting those in a series of posts.

So, let's get down to the first one, which is managing environment variables efficiently!

Background:

Environment variables like API access keys or API endpoints can change frequently or contain sensitive information, so they cannot be checked into source control and therefore need to be managed independently.

Let's Start!

Let's begin by setting up a project from scratch:

serverless

and the CLI will take care of the rest. We'll be using Node.js and AWS for this post, but feel free to use anything you want. Once the project is created, you should see 3 files:

- .gitignore
- handler.js
- serverless.yml

This serverless.yml is the main file responsible for the configuration of all your Lambda functions. It's got a lot of stuff inside, so let's clean it up a bit and add a few more items:

service: serveless-medium

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::XXXXXXXXXXXX:role/lambda-role

functions:
  hello:
    handler: handler.hello

  1. Service is the name that you provided while setting up the project. This will be used to create the CloudFormation stack.
  2. Provider contains the settings related to AWS or whichever cloud service you're using. I have added the region that I am deploying to as well as the role that needs to be attached to my Lambda functions.
  3. Functions contains all the functions that are included in your Serverless app and their related configurations. The handler: handler.hello mapping simply points to the hello export of handler.js, as sketched below.
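
For reference, here is a minimal sketch of what handler.js might contain (the file that the Serverless template actually generates may differ slightly):

// handler.js (sketch)
'use strict';

module.exports.hello = async (event) => {
  // The "hello" export must match the "handler: handler.hello"
  // mapping in serverless.yml.
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from Lambda!', input: event }),
  };
};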

Go ahead and run serverless deploy to see your CloudFormation stack being deployed and the Lambda function being created.

Adding Environment Variables:

Adding environment variables to your Serverless app is easy. You can either define them at the provider level, in which case all the functions will get that environment variable, or you can define them at the function level so that only that specific function gets it, like this:

# serverless.yml
service: serveless-medium

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::XXXXXXXXXXXX:role/lambda-role
  environment:
    API_ENDPOINT: www.api.example.com

functions:
  hello:
    handler: handler.hello
    environment:
      API_KEY: MY_SECRET_API_KEY

Run serverless deploy again and you should see that your environment variables have been added to the Lambda function:

environment variables have been added in the function
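
Inside the function, these variables are available on process.env regardless of whether they were defined at the provider or the function level. Here is a quick sketch, reusing the variable names from the config above (the response body is just illustrative):

// handler.js (sketch) -- reading the variables defined in serverless.yml
'use strict';

module.exports.hello = async () => {
  const endpoint = process.env.API_ENDPOINT; // provider-level variable
  const apiKey = process.env.API_KEY;        // function-level variable

  return {
    statusCode: 200,
    body: JSON.stringify({ endpoint, keyConfigured: Boolean(apiKey) }),
  };
};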

While defining the variables directly in serverless.yml is easy to do, it's generally not good practice, as you'll have to check these environment variables into source control.

Enter Serverless Variables:

Serverless provides a way to define variables that we can use to do all sorts of things in our config file through the ${} syntax. It also provides a way to reference variables in another .yml or .json file using ${file(name.yml)}.

So, let's create an env.json file and add our environment variables to it (you can use a .yml file as well if you prefer):

// env.json
{
  "API_ENDPOINT": "www.api.example.com",
  "API_KEY": "MY_SECRET_API_KEY",
  "ANOTHER_KEY": "ANOTHER_SECRET_KEY"
}

I have added an extra variable just to make sure that our variables are indeed being loaded from this file. Now let's update the serverless.yml:

# serverless.yml
service: serveless-medium

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::XXXXXXXXXXXX:role/lambda-role

functions:
  hello:
    handler: handler.hello
    environment: ${file(env.json)}

We've replaced the environment variables with the file variable, which references our env.json file. To see how this resolves, there's a command, serverless print, that goes through all the variables and prints the final config that Serverless will use:

service: serveless-medium
provider:
  region: ap-southeast-1
  name: aws
  runtime: nodejs12.x
  role: arn:aws:iam::XXXXXXXXXXXX:role/lambda-role
functions:
  hello:
    handler: handler.hello
    environment:
      API_ENDPOINT: www.api.example.com
      API_KEY: MY_SECRET_API_KEY
      ANOTHER_KEY: ANOTHER_SECRET_KEY

To verify, run serverless deploy and check the environment variables in the Lambda:

our new environment variables have been uploaded

Loading the variables based on different Environments:

In most cases, we have different environments like dev, staging, and production, and our variables differ for each of these environments. We can handle this through Serverless's stages functionality, which allows us to deploy the same service for different environments using different environment variables!

When you run serverless deploy, by default you're passing the “dev” stage as an argument. This is visible in our CloudFormation stack, as the stage name is appended to the end of our service name:

the default “dev” stage

as well as in the Lambda function name, which is serveless-medium-dev-hello.

Running serverless deploy --stage staging deploys another stack on CloudFormation for our staging environment:

stack for staging environment is deployed

and creates a Lambda function named serveless-medium-staging-hello.

Serverless allows us to access this stage argument via ${opt:stage}. So let's go ahead and use that.

First, create 2 separate environment files, one for each environment, called env.dev.json and env.staging.json:

// env.dev.json
{
  "API_ENDPOINT": "www.api.dev.example.com",
  "API_KEY": "MY_SECRET_API_KEY_FOR_DEV",
  "ANOTHER_KEY": "ANOTHER_SECRET_KEY_FOR_DEV"
}

// env.staging.json
{
  "API_ENDPOINT": "www.api.staging.example.com",
  "API_KEY": "MY_SECRET_API_KEY_FOR_STAGING",
  "ANOTHER_KEY": "ANOTHER_SECRET_KEY_FOR_STAGING"
}

Then update the serverless.yml:

# serverless.yml
service: serveless-medium

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::XXXXXXXXXXXX:role/lambda-role

functions:
  hello:
    handler: handler.hello
    environment: ${file(env.${opt:stage, self:provider.stage}.json)}

The opt:stage variable will resolve to “staging” and our staging environment file will be loaded if we run serverless deploy --stage staging. It will be undefined if we just run serverless deploy, in which case the self:provider.stage variable will be used instead, which defaults to “dev”, so our dev environment file will be loaded.

Verify the output by running serverless print and serverless print --stage staging. Once done, deploy both stacks by running serverless deploy and serverless deploy --stage staging and verify the Lambda functions:

dev environment variables
staging environment variables

That's all there is to it, really! This is how we can decouple our configuration from our code in our Serverless functions. Be sure to add the config files to .gitignore so that you don't accidentally commit them!
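
Assuming the file names used in this post, the .gitignore entries might look like this:

# .gitignore -- keep the environment files out of source control
env.json
env.*.json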

What's next?

This approach has one major flaw, though. Our config files will have to stay on our local systems, where they can get lost, stolen, deleted, or misplaced! So we definitely need to keep them somewhere safe like AWS S3 rather than only on our local machines. Downloading the file from a backup location by hand every time you need to deploy isn't very efficient either.

In the next part, let's look at how we can automate this entire process!

PS: The source code for this part has been uploaded to GitHub for reference.

This article is a part of my 5 Article Series on the Serverless Framework!

Part 1: Serverless: Managing environment variables efficiently with stages

Part 2: Serverless: Managing config for different environments with S3 and Bash Scripts

Part 3: Serverless: Creating light and lean function packages

Part 4: Serverless: Breaking a large serverless.yml into manageable chunks

Part 5: Serverless: Reusing common configurations across functions
