Creating Isolated Serverless Environments Using AWS, Terraform, and Jenkins

David Ramanauskas
Published in Slalom Build
Aug 14, 2020

Serverless development has unlocked a number of patterns that were impractical when dealing with environments that billed by the hour rather than by usage. This post will cover an opinionated pattern of creating a vending machine for serverless environments that can quickly stand up completely isolated environments within the same cloud account.

This post will focus on AWS using Terraform, but in theory the same pattern can apply to any major cloud provider or IaC tool.

Why Serverless Needs Isolated Environments

With traditional bill-by-the-hour cloud server solutions, a typical workflow may look like this:

Develop Locally -> Test Locally -> Push Code -> Test/Deploy

Serverless solutions, especially those that leverage AWS Lambda, can be difficult for a developer to test locally. It’s not as simple as running the code in your local terminal or deploying a web server to localhost. Serverless code is much more opinionated and oftentimes cannot be run locally. Code needs to be deployed out to the cloud in order to test functionality. Typically, this is done by deploying to a shared development environment within AWS.

In a collaborative team with multiple engineers, a shared environment can quickly get messy. For example, a QE engineer needs to test a specific version (Version A) of a service in the dev environment. At the same time, a software engineer is developing the latest feature for that same service (Version B). The QE engineer needs Version A deployed to dev so that they can test it, but the software engineer wants to deploy Version B to dev to make sure their changes work in AWS. Without proper coordination, the two engineers end up stepping on each other’s toes by deploying different versions of the same software to the same environment.

The two engineers may also have difficulty determining which branch is currently deployed. This problem becomes exponentially more difficult with larger development teams.

We can combat this issue by giving each engineer their own AWS account. However, this can be a costly solution and lead to an enormous number of AWS accounts across the business. Most businesses are not set up to handle a large number of cloud accounts. The security overhead alone is enough to make anyone’s head spin. Instead, the goal should be to create isolated environments all within a single AWS account.

Individualized environments enable developers to work much more quickly and efficiently

Why Vended Environments Were Impractical

Historically, it hasn’t been practical to give every engineer a prod-like environment, for a few reasons:

  • It’s not cost effective (per hour billing, physical server space, etc.)
  • There’s added overhead to building multiple prod-like environments (before infrastructure-as-code (IaC) this could take days with lots of room for error)
  • High potential for naming conflicts for serverless environments if deployed to a single account (each Lambda would need a unique name, etc.)

Serverless architecture combined with IaC tools like Terraform can easily mitigate these problems.

Billing Model

One of the key benefits of a serverless offering is the billing model. Serverless environments bill by request, execution time, or storage volume, not by the hour. This enables us to create many environments that each receive only a small amount of traffic without incurring large hourly charges.

Pay for value:
Pay for consistent throughput or execution duration rather than by server unit.

Overhead

With the introduction of cloud computing and IaC, the overhead to create any number of isolated prod-like environments has been significantly reduced. Aside from any initial overhead in writing Terraform code and configuring a deployment pipeline, any additional overhead to create extra environments is next to zero. Run your vending machine and within minutes an isolated, properly namespaced environment is at your fingertips.

Let’s look at how to do exactly that.

Ease of Environment Creation and Clean Up

Infrastructure Code

The first and most important problem to solve is Terraform variable namespacing. We not only need to create a separate Terraform workspace, but also need to make sure that certain resource names are unique.

Let’s take S3, for example. Regardless of workspace, S3 bucket names must be unique. Therefore, if an S3 bucket is defined in the Terraform code, its name will need to be properly namespaced to differentiate it from the bucket in the dev environment. We can do this by concatenating a unique identifier onto the S3 bucket name; with this method, each environment gets its own unique identifier. See the S3 Terraform code below. For more information on Terraform local variables, see here.

Note: We will be referring to the unique identifier in Terraform as namespace.

Utilizing local variables in Terraform, we can concatenate a unique string (the namespace) onto the end of all the necessary values.
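A minimal sketch of what that could look like (the aws_s3_bucket resource label "example" is illustrative; bucket_name and namespace follow the walkthrough later in this post):

    variable "bucket_name" {
      type = string
    }

    locals {
      # Append the namespace, when present, so each environment gets a unique
      # bucket name, e.g. "my-bucket" becomes "my-bucket-demo".
      bucket_name = var.namespace == "" ? var.bucket_name : "${var.bucket_name}-${var.namespace}"
    }

    resource "aws_s3_bucket" "example" {
      bucket = local.bucket_name
    }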

We also need to declare namespace as its own variable.

Setting namespace’s default to an empty string allows the namespacing functionality to be an optional feature
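A sketch of that declaration might look like this (the description text is illustrative):

    variable "namespace" {
      type        = string
      description = "Optional unique identifier appended to resource names"
      # Defaulting to an empty string keeps namespacing optional
      default     = ""
    }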

As such, your tfvars file would remain unchanged from a normal implementation of an S3 resource. The namespace would be populated at runtime rather than having a hardcoded value.

No need to populate the namespace variable at this time
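For example, a tfvars file for the sketch above could contain nothing more than the usual values (my-bucket is a placeholder that matches the walkthrough later in this post):

    # terraform.tfvars -- no namespace entry required; it is supplied at runtime
    bucket_name = "my-bucket"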

Now that we know how to structure our Terraform code to utilize an optional variable for namespacing, let’s look at how to populate that variable.

Generating a Namespace Value

We want the namespacing functionality to be optional so that we can deploy to either a shared environment like dev or prod (without namespacing) or a personalized environment (with namespacing). Therefore, we want an easy way to populate the namespace without having to hardcode a value in the tfvars file.

One of the best ways to do this is to have the Terraform workspace name double as the namespace. We can use Terraform’s native workspace interpolation. Below is an example of our local variable declaration with workspace interpolation. You can read more about Terraform Workspaces here.
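The original declaration isn’t reproduced here, but a sketch of it, building on the earlier locals block, might look like the following (the fallback to var.namespace in the default workspace is an assumption):

    locals {
      # Non-default workspaces use the workspace name as the namespace;
      # the default workspace falls back to var.namespace (normally "").
      namespace   = terraform.workspace == "default" ? var.namespace : terraform.workspace
      bucket_name = local.namespace == "" ? var.bucket_name : "${var.bucket_name}-${local.namespace}"
    }

With this in place, deploying from the default workspace leaves names untouched, while any other workspace automatically suffixes them.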

Even though this is not the only way to populate the namespace, there are a number of benefits to doing it this way:

  • Workspace name and namespace are always in sync which allows for greater traceability (what resources belong to which workspace, etc.)
  • Very little human intervention needed; highly automatable
  • 100% Terraform native

If a developer wanted to create a separate, isolated environment, a sample workflow would now look something like this:

  1. Initialize your Terraform project terraform init
  2. Create a new workspace terraform workspace new demo
  3. Apply your Terraform code terraform apply
  4. Since the workspace is no longer default, Terraform will append the workspace name to bucket_name
  5. bucket_name is now my-bucket-demo

Vending Machine Pipeline

Now that the infrastructure side of things is done, we need to make this into a “push button” solution: enter your workspace name, hit a button, and a namespaced environment is spun up in minutes. We can do this using any CI/CD tool, but in this specific case we’ll be looking at Jenkins.

Here’s what a vending machine pipeline might look like at a high level:

Build & Deploy -> Run Terraform -> Run Tests (optional) -> Cleanup (optional)

A more detailed breakdown of the pipeline stages:

  • Build & Deploy — Build or compile your serverless code. For example, building and pushing your Lambda bundle to S3, or building and pushing a Docker image to ECR
  • Run Terraform — Apply your Terraform code in the workspace of your choosing. If TF_WORKSPACE is left as default, deploy to a shared environment (dev, qa, prod, etc.)
  • Run Tests — Optional stage for when a QE engineer needs to run tests in a fresh, isolated environment; determined by the value of RUN_TESTS
  • Cleanup — Optional stage determined by the value of DESTROY. If set to true, rather than create a new environment, the pipeline destroys whichever environment is specified by TF_WORKSPACE

Jenkinsfile

The Jenkinsfile describes, in a programmatic fashion, the job that Jenkins will execute. More information can be found here. The file is a collection of steps that build and deploy code, call Terraform to create infrastructure, leverage user input to approve the Terraform plans, and call various testing scripts.

HashiCorp provides a guide for running Terraform in automation, which can be found here.

Below is a sample Jenkinsfile using pseudocode:
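Since the original snippet isn’t embedded here, the following is a hypothetical declarative-pipeline sketch of the same idea; the script paths (./scripts/build_and_push.sh, ./scripts/run_tests.sh) are placeholders for your own build and test steps, and the workspace select-or-create and plan-approval steps are assumptions based on the stage descriptions above:

    pipeline {
        agent any

        parameters {
            string(name: 'TF_WORKSPACE', defaultValue: 'default',
                   description: 'Terraform workspace (doubles as the namespace)')
            booleanParam(name: 'RUN_TESTS', defaultValue: false,
                         description: 'Run tests against the freshly vended environment')
            booleanParam(name: 'DESTROY', defaultValue: false,
                         description: 'Destroy the TF_WORKSPACE environment instead of creating it')
        }

        stages {
            stage('Build & Deploy') {
                when { expression { !params.DESTROY } }
                steps {
                    // Build the Lambda bundle and push it to S3, or push a Docker image to ECR
                    sh './scripts/build_and_push.sh'
                }
            }
            stage('Run Terraform') {
                when { expression { !params.DESTROY } }
                steps {
                    sh 'terraform init -input=false'
                    // Select the target workspace, creating it on first use
                    sh "terraform workspace select ${params.TF_WORKSPACE} || terraform workspace new ${params.TF_WORKSPACE}"
                    sh 'terraform plan -out=tfplan -input=false'
                    // Pause for a human to approve the plan before applying
                    input message: 'Apply this Terraform plan?'
                    sh 'terraform apply -input=false tfplan'
                }
            }
            stage('Run Tests') {
                when { expression { params.RUN_TESTS && !params.DESTROY } }
                steps {
                    sh './scripts/run_tests.sh'
                }
            }
            stage('Cleanup') {
                when { expression { params.DESTROY } }
                steps {
                    sh 'terraform init -input=false'
                    sh "terraform workspace select ${params.TF_WORKSPACE}"
                    sh 'terraform destroy -auto-approve -input=false'
                }
            }
        }
    }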

An example of a simple vending machine style pipeline

Static Data Environments

What about applications that require some form of sample data to work? Sometimes that data is loaded into the shared environment and engineers don’t want to migrate all that data into every new environment that’s stood up. The solution here is simple: automate!

If our application has a backend database that requires sample data in order to be useful, we can create a data seed library. For example, we can build a library that, when called, automatically loads a set of example data into our environment using our application’s own APIs. Seeding the data through our own APIs both provides us with sample data and acts as a test of proper API functionality.

From here, we can create a new stage in our environment vending machine pipeline that calls this library immediately after all the infrastructure has been built and deployed. Once the static data is loaded, any and all tests can be run against the environment.
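As a sketch, such a stage could slot into the stages block of the Jenkinsfile above; seed_data.sh and its --environment flag are hypothetical stand-ins for the seed library’s entry point:

    stage('Seed Data') {
        when { expression { !params.DESTROY } }
        steps {
            // Load example data through the application's own APIs
            sh "./scripts/seed_data.sh --environment ${params.TF_WORKSPACE}"
        }
    }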

Benefits of Serverless Vended Environments

By combining optional namespacing capabilities in your Terraform project with automated deployment via CI/CD tooling, we now have a streamlined, push-button way to spin up isolated environments in a matter of minutes.

This will help enable developers in a number of ways:

  • Quickly roll out any number of environments needed for a smooth workflow
  • Easily test code changes before deploying to shared environments
  • Eliminate having to wait for the right moment to deploy and test code
  • Always have a fresh environment to work in (avoid any data contamination or unwanted changes)
  • Ensure all parts of the application are tested in unison via data seeding (infrastructure, application code, test code, test data, etc.)
  • Provide push button clean-up of old environments

Feel free to ask any questions in the comments for specifics on how certain features were implemented!
