Serverless Applications with AWS Lambda and API Gateway using Terraform

Paul Zhao · Paul Zhao Projects · Feb 16, 2021 · 19 min read
Diagram of project infrastructure

As we kick off this project, let us discuss why we would adopt a serverless application rather than provision a server or instance.

Here we will discuss the pros and cons of serverless applications.

Pros:

1 No server management is necessary

One of the biggest advantages of serverless applications is that they reduce an organization's human and financial costs by freeing it from server management. Those resources can instead be allocated to what matters most to the business.

2 Developers are only charged for the server space they use, reducing costs

For developers specifically, deploying a serverless application lets them focus on code rather than on how to reduce costs or how much capacity a server may require, both of which add to a company's operational overhead. Billing, meanwhile, is dynamic, precise, and real-time: you pay only for the compute you actually use.

3 Quick deployments and updates are possible

From a deployment point of view, serverless also makes it quick to upload code, either all at once or one function at a time. Overall, it accomplishes the same work with less time and fewer resources.

4 Code can run closer to the end user, decreasing latency

Because the provider can run functions in locations that are physically closer to end users, requests travel a shorter distance and latency is reduced.

Cons:

As the saying goes, there is no panacea. Having covered the strong points of serverless applications, we will now dive into their shortcomings.

1 Testing and debugging become more challenging

Because a serverless application's backend is managed by the cloud provider, developers do not have direct access to it, which makes debugging and troubleshooting more complicated. This is compounded by the fact that the application is broken up into separate, smaller functions.

2 Serverless computing introduces new security concerns

Serverless providers often run code from several customers on a single server at any given time, a situation known as 'multitenancy' (think of several companies leasing and working in a single office at the same time). If tenants are not isolated properly, this can lead to security issues such as data exposure, as well as noisy-neighbor effects on application performance.

3 Serverless architectures are not built for long-running processes

Serverless functions are not designed to run for long periods. For long-running workloads, you may end up paying more than you would with a traditional server-based application.

4 Performance may be affected

Because it is not constantly running, serverless code may need to 'boot up' when it is invoked, and this cold-start time can degrade performance. Code that is invoked regularly stays ready to go, so those requests get a 'warm start' instead.

5 Vendor lock-in is a risk

Serverless applications are managed by a cloud provider, which creates reliance on that specific provider's services. Switching providers later can be challenging, since each vendor offers slightly different features and workflows.

Usages of Serverless Architecture:

Firstly, serverless architecture lets developers build lightweight, flexible applications that can be expanded or updated quickly.

Secondly, for workloads with inconsistent or unpredictable end-user traffic, serverless architecture avoids paying for idle capacity.

Lastly, developers can push code partially or fully with a serverless architecture; that flexibility in deployment clearly has its merits.

Prerequisites

For this walkthrough, you need the following:

  • An AWS account with a non-root IAM user (for security)
  • AWS CLI installed
  • RHEL 8.3 running in Oracle VirtualBox on Windows 10, accessed via PuTTY
  • Terraform installed

Let us work on them one by one.

Creating a non-root user

Per AWS best practice, the root user should not be used for everyday tasks, not even administrative ones. Instead, the root user is used to create your first IAM user, groups, and roles. After that, securely lock away the root user credentials and use them only for the few account and service management tasks that require them.

Notes: If you would like to learn more about why the root user should not be used for day-to-day operations, and about AWS accounts in general, please find more here.

Log in as the root user
Create a user under the IAM service
Choose programmatic access
Create the user without tags
Keep the credentials (Access key ID and Secret access key)

Installing AWS CLI

Visit here and download the macOS pkg installer

Download MacOS pkg installer
Install it successfully

To verify your AWS CLI installation:

$ aws --version
aws-cli/2.0.46 Python/3.7.4 Darwin/19.6.0 exe/x86_64

To use the AWS CLI, we need to configure it with an AWS access key, secret access key, default region, and output format:

$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:
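As an optional check that the CLI is now using the new IAM user's credentials, you can ask AWS who you are:

$ aws sts get-caller-identity

This returns the account ID and the ARN of the IAM user whose access keys you just configured.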

Set up RHEL 8.3 by Oracle Virtual Box on Windows 10 using putty

First, download Oracle VirtualBox for Windows 10 by clicking Windows hosts.

Second, download the RHEL ISO.

Let us make it work now!

Open the Oracle VirtualBox application and follow the instructions here to install RHEL 8.3 as shown below

Oracle VM VirtualBox

Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer account with Red Hat, you have to wait for some time before registering the system; otherwise, you may receive errors.

Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.

Login RHEL 8.3

Click Activities and open a terminal

Open terminal

Notes: In order to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the setting shown below.

Bridged Adapter selected

Now we will get the IP address that we will use to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the one to use)

IP address

Then we will install PuTTY.

ssh-keygen with a password

Creating a password-protected key looks something like this:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RXPnUZg/fGgRGTOxEfbo3VOMo/Yp4Gi80has/iR4m/A pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| o . %X.|
| . o +=@ |
| . B++|
| . oo==|
| .S . o...=|
| . .oo o . ..|
| o oo=.. . o |
| +o*o. . |
| .E+o |
+----[SHA256]-----+

To view the private key:

$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAwoavXHvZCYPO/sbMD0ibtkvF+9/NmSm2m/Z8wRy7O2A012YS98ap
8aq18PXfKPyyAMNF3hdG3xi1KMD7DSIb/C1gunjTREEJRfYjydOjFBFtZWY78Mj4eQkrPJ
.
.
.
-----END OPENSSH PRIVATE KEY-----

Notes: You may take advantage of the RHEL GUI to send the private key to yourself as an email, then open the mail on Windows and copy the private key from it
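If you intend to authenticate from PuTTY with this key pair rather than with a password, the matching public key also has to be present on the RHEL side. Assuming the default key path used by ssh-keygen above, a minimal way to do that for the current user is:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys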

Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file

Ansiblekey.pem

Then open PuTTY Key Generator and load the private key ansiblekey.pem

Load private key in putty key generator

Then save it as a private key named ansible.ppk

We now open PuTTY, load the private key ansible.ppk, and use the IP address we saved previously, 192.168.0.18, as the Host Name (or IP address)

Load private key in putty

We then move on to the Session category and enter the IP address

IP address saved

For convenience, we may save it as a predefined session as shown below

Saved session

You should see the pop-up below the first time you log in

First time log in

Then enter your username and password to log in. You will see the image below after logging in.

Login successfully

To install Terraform, use the following commands.

Install yum-config-manager to manage your repositories.

$ sudo yum install -y yum-utils

Use yum-config-manager to add the official HashiCorp Linux repository.

$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

Install.

$ sudo yum -y install terraform

Notes: In case of an incorrect symbolic link setup, please check out this link. Also, you may need to log in again after changing the symbolic link.

To verify the Terraform installation:

$ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/aws v3.21.0

Enough theory; let us now get our hands dirty.

Building the Lambda Function Package

1 Create a folder in which we will store all project-related components

$ mkdir terraform-lambda
$ cd terraform-lambda/

2 Generate a main.js file

$ vim main.js

'use strict'

exports.handler = function (event, context, callback) {
  var response = {
    statusCode: 200,
    headers: {
      'Content-Type': 'text/html; charset=utf-8',
    },
    body: '<p>Hello world!</p>',
  }
  callback(null, response)
}
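As an optional sanity check before packaging, you can invoke the handler locally with Node.js (assuming Node.js is installed on the machine where main.js was written); the callback simply prints the response object to the terminal:

$ node -e "require('./main').handler({}, {}, (err, res) => console.log(res))"

You should see the statusCode, headers, and HTML body defined above echoed back.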

3 Terraform is not a build tool, so the zip file must be prepared by a separate build step before we deploy it with Terraform

We now zip our main.js file in the current folder:

$ zip ./terraform-lambda.zip main.js
updating: main.js (deflated 29%)

4 Then we need to push this file to an S3 bucket in AWS

First, we create an S3 bucket using the AWS CLI:

$ aws s3api create-bucket --bucket=terraform-lambda-serverless-project --region=us-east-1
{
    "Location": "/terraform-lambda-serverless-project"
}

Then we upload the terraform-lambda.zip file into this S3 bucket:

$ aws s3 cp terraform-lambda.zip s3://terraform-lambda-serverless-project/v1.0.0/terraform-lambda.zip
upload: ./terraform-lambda.zip to s3://terraform-lambda-serverless-project/v1.0.0/terraform-lambda.zip

Notes: We use the v1.0.0 key prefix here to prepare for versioning later on

To double-check the object we created inside the S3 bucket:

$ aws s3api list-object-versions --bucket terraform-lambda-serverless-project
{
    "Versions": [
        {
            "ETag": "\"e2e02c770d0fcdbb35851a35b4d49475\"",
            "Size": 764,
            "StorageClass": "STANDARD",
            "Key": "v1.0.0/terraform-lambda.zip",
            "VersionId": "null",
            "IsLatest": true,
            "LastModified": "2021-02-15T20:31:30.000Z",
            "Owner": {
                "DisplayName": "zhaofeng871112",
                "ID": "ae85ae64c26f9a6e11c2ca324773534b26979b36c683d72b10ccf330175cbe77"
            }
        }
    ]
}

Creating the Lambda Function

We now move on to create our Lambda function in a lambda.tf file:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_lambda_function" "example" {
  function_name = "ServerlessExample"

  # The bucket name as created earlier with "aws s3api create-bucket"
  s3_bucket = "terraform-serverless-example"
  s3_key    = "v1.0.0/example.zip"

  # "main" is the filename within the zip file (main.js) and "handler"
  # is the name of the property under which the handler function was
  # exported in that file.
  handler = "main.handler"
  runtime = "nodejs10.x"

  role = aws_iam_role.lambda_exec.arn
}

# IAM role which dictates what other AWS services the Lambda function
# may access.
resource "aws_iam_role" "lambda_exec" {
  name = "serverless_example_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

Notes:

s3_bucket = "terraform-serverless-example"
s3_key = "v1.0.0/example.zip"

Both of these need to match what we created previously. In my case, they should be as shown below:

s3_bucket = "terraform-lambda-serverless-project"
s3_key = "v1.0.0/terraform-lambda.zip"

Also, the first part of the handler needs to match the main.js file name:

handler = "main.handler"

In my case, it is as shown above; however, you are free to give the .js file any name and adjust the handler accordingly.
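Before initializing, you may optionally run terraform fmt in the project folder; it rewrites the .tf files into Terraform's canonical indentation and alignment and prints the names of any files it changed:

$ terraform fmt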

Now we can start to provision our infrastructure with Terraform.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.28.0...
- Installed hashicorp/aws v3.28.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
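With the provider installed, terraform validate is an optional but quick way to catch syntax and reference errors before planning anything:

$ terraform validate
Success! The configuration is valid.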

To check which resources will be created when we run terraform apply, we can use terraform plan:

$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

  # aws_iam_role.lambda_exec will be created
+ resource "aws_iam_role" "lambda_exec" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = ""
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "serverless_example_lambda"
+ path = "/"
+ unique_id = (known after apply)
}
# aws_lambda_function.example will be created
+ resource "aws_lambda_function" "example" {
+ arn = (known after apply)
+ function_name = "ServerlessExample"
+ handler = "main.handler"
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ last_modified = (known after apply)
+ memory_size = 128
+ package_type = "Zip"
+ publish = false
+ qualified_arn = (known after apply)
+ reserved_concurrent_executions = -1
+ role = (known after apply)
+ runtime = "nodejs10.x"
+ s3_bucket = "terraform-lambda-serverless-project"
+ s3_key = "v1.0.0/terraform-lambda.zip"
+ signing_job_arn = (known after apply)
+ signing_profile_version_arn = (known after apply)
+ source_code_hash = (known after apply)
+ source_code_size = (known after apply)
+ timeout = 3
+ version = (known after apply)
+ tracing_config {
+ mode = (known after apply)
}
}
Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Now that we are clear about what to expect, we run terraform apply to build the infrastructure:

$ terraform apply
.
.
.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_iam_role.lambda_exec: Creating...
aws_iam_role.lambda_exec: Creation complete after 1s [id=serverless_example_lambda]
aws_lambda_function.example: Creating...
aws_lambda_function.example: Still creating... [10s elapsed]
aws_lambda_function.example: Creation complete after 15s [id=ServerlessExample]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

After the function is created successfully, try invoking it using the AWS CLI

$ aws lambda invoke --region=us-east-1 --function-name=ServerlessExample output.txt
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

Check the output for success

$ cat output.txt
{"statusCode":200,"headers":{"Content-Type":"text/html; charset=utf-8"},"body":"<h1>Test our environment</h1>"}

Cross-check the Lambda function in the AWS console

Lambda function created

Configuring API Gateway

Next, we move on to configure our API Gateway

Create an api_gateway.tf file and configure the root "REST API" object:

$ vim api_gateway.tf

resource "aws_api_gateway_rest_api" "example" {
name = "ServerlessExample"
description = "Terraform Serverless Application Example"
}

The “REST API” is the container for all of the other API Gateway objects we will create

All incoming requests to API Gateway must match a configured resource and method in order to be handled. Append the following to the api_gateway.tf file to define a single proxy resource:

resource "aws_api_gateway_resource" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
parent_id = aws_api_gateway_rest_api.example.root_resource_id
path_part = "{proxy+}"
}
resource "aws_api_gateway_method" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_resource.proxy.id
http_method = "ANY"
authorization = "NONE"
}

Notes: The special path_part value "{proxy+}" activates proxy behavior, which means that this resource will match any request path. Similarly, the aws_api_gateway_method block uses an http_method of "ANY", which allows any request method to be used. Taken together, this means that all incoming requests will match this resource.

Each method on an API gateway resource has an integration which specifies where incoming requests are routed. Add the following configuration to specify that requests to this method should be sent to the Lambda function defined earlier:

resource "aws_api_gateway_integration" "lambda" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy.resource_id
http_method = aws_api_gateway_method.proxy.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.example.invoke_arn
}

The AWS_PROXY integration type causes API gateway to call into the API of another AWS service. In this case, it will call the AWS Lambda API to create an "invocation" of the Lambda function.

Unfortunately the proxy resource cannot match an empty path at the root of the API. To handle that, a similar configuration must be applied to the root resource that is built in to the REST API object:

resource "aws_api_gateway_method" "proxy_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_rest_api.example.root_resource_id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "lambda_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy_root.resource_id
http_method = aws_api_gateway_method.proxy_root.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.example.invoke_arn
}

Finally, you need to create an API Gateway “deployment” in order to activate the configuration and expose the API at a URL that can be used for testing:

resource "aws_api_gateway_deployment" "example" {
depends_on = [
aws_api_gateway_integration.lambda,
aws_api_gateway_integration.lambda_root,
]
rest_api_id = aws_api_gateway_rest_api.example.id
stage_name = "test"
}

Altogether, the api_gateway.tf file should look like the one shown below:

resource "aws_api_gateway_rest_api" "example" {
name = "ServerlessFunction"
description = "Terraform Serverless Application"
}
resource "aws_api_gateway_resource" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
parent_id = aws_api_gateway_rest_api.example.root_resource_id
path_part = "{proxy+}"
}
resource "aws_api_gateway_method" "proxy" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_resource.proxy.id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "lambda" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy.resource_id
http_method = aws_api_gateway_method.proxy.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.example.invoke_arn
}
resource "aws_api_gateway_method" "proxy_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_rest_api.example.root_resource_id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "lambda_root" {
rest_api_id = aws_api_gateway_rest_api.example.id
resource_id = aws_api_gateway_method.proxy_root.resource_id
http_method = aws_api_gateway_method.proxy_root.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.example.invoke_arn
}
resource "aws_api_gateway_deployment" "example" {
depends_on = [
aws_api_gateway_integration.lambda,
aws_api_gateway_integration.lambda_root,
]
rest_api_id = aws_api_gateway_rest_api.example.id
stage_name = "test"
}

Now run terraform apply again:

$ terraform apply

aws_api_gateway_rest_api.example: Refreshing state... [id=vuhxihydd4]
aws_iam_role.lambda_exec: Refreshing state... [id=serverless_example_lambda]
aws_api_gateway_resource.proxy: Refreshing state... [id=euwe79]
aws_api_gateway_method.proxy_root: Refreshing state... [id=agm-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_method.proxy: Refreshing state... [id=agm-vuhxihydd4-euwe79-ANY]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

  # aws_api_gateway_deployment.example will be created
+ resource "aws_api_gateway_deployment" "example" {
+ created_date = (known after apply)
+ execution_arn = (known after apply)
+ id = (known after apply)
+ invoke_url = (known after apply)
+ rest_api_id = "vuhxihydd4"
+ stage_name = "test"
}
# aws_api_gateway_integration.lambda will be created
+ resource "aws_api_gateway_integration" "lambda" {
+ cache_namespace = (known after apply)
+ connection_type = "INTERNET"
+ http_method = "ANY"
+ id = (known after apply)
+ integration_http_method = "POST"
+ passthrough_behavior = (known after apply)
+ resource_id = "euwe79"
+ rest_api_id = "vuhxihydd4"
+ timeout_milliseconds = 29000
+ type = "AWS_PROXY"
+ uri = (known after apply)
}
# aws_api_gateway_integration.lambda_root will be created
+ resource "aws_api_gateway_integration" "lambda_root" {
+ cache_namespace = (known after apply)
+ connection_type = "INTERNET"
+ http_method = "ANY"
+ id = (known after apply)
+ integration_http_method = "POST"
+ passthrough_behavior = (known after apply)
+ resource_id = "oixuhxkl23"
+ rest_api_id = "vuhxihydd4"
+ timeout_milliseconds = 29000
+ type = "AWS_PROXY"
+ uri = (known after apply)
}
# aws_lambda_function.example will be created
+ resource "aws_lambda_function" "example" {
+ arn = (known after apply)
+ function_name = "ServerlessExample"
+ handler = "main.handler"
+ id = (known after apply)
+ invoke_arn = (known after apply)
+ last_modified = (known after apply)
+ memory_size = 128
+ package_type = "Zip"
+ publish = false
+ qualified_arn = (known after apply)
+ reserved_concurrent_executions = -1
+ role = "arn:aws:iam::464392538707:role/serverless_example_lambda"
+ runtime = "nodejs10.x"
+ s3_bucket = "terraform-lambda-serverless-project"
+ s3_key = "v1.0.0/terraform-lambda.zip"
+ signing_job_arn = (known after apply)
+ signing_profile_version_arn = (known after apply)
+ source_code_hash = (known after apply)
+ source_code_size = (known after apply)
+ timeout = 3
+ version = (known after apply)
+ tracing_config {
+ mode = (known after apply)
}
}
Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_lambda_function.example: Creating...
aws_lambda_function.example: Creation complete after 6s [id=ServerlessExample]
aws_api_gateway_integration.lambda_root: Creating...
aws_api_gateway_integration.lambda: Creating...
aws_api_gateway_integration.lambda: Creation complete after 0s [id=agi-vuhxihydd4-euwe79-ANY]
aws_api_gateway_integration.lambda_root: Creation complete after 0s [id=agi-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_deployment.example: Creating...
aws_api_gateway_deployment.example: Creation complete after 1s [id=mqaxco]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Inside the AWS console, we can cross-check:

Api gateway created

Allowing API Gateway to Access Lambda

Although both the API Gateway and the Lambda function now exist, API Gateway does not yet have permission to invoke the function.

For Lambda functions, access is granted using the aws_lambda_permission resource, which should be added to the lambda.tf file created in an earlier step:

resource "aws_lambda_permission" "apigw" {
statement_id = "AllowAPIGatewayInvoke"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.example.function_name
principal = "apigateway.amazonaws.com"
# The "/*/*" portion grants access from any method on any resource
# within the API Gateway REST API.
source_arn = "${aws_api_gateway_rest_api.example.execution_arn}/*/*"
}
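Once this is applied, you can optionally confirm the permission was attached by reading the function's resource-based policy back from Lambda:

$ aws lambda get-policy --function-name ServerlessExample

The returned policy document should contain the AllowAPIGatewayInvoke statement with apigateway.amazonaws.com as the principal.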

In order to test the created API you will need to access its test URL. To make this easier to access, add the following output to api_gateway.tf:

output "base_url" {
value = aws_api_gateway_deployment.example.invoke_url
}

Now run terraform apply to deploy the latest changes:

$ terraform apply
aws_iam_role.lambda_exec: Refreshing state... [id=serverless_example_lambda]
aws_api_gateway_rest_api.example: Refreshing state... [id=vuhxihydd4]
aws_lambda_function.example: Refreshing state... [id=ServerlessExample]
aws_api_gateway_resource.proxy: Refreshing state... [id=euwe79]
aws_api_gateway_method.proxy_root: Refreshing state... [id=agm-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_method.proxy: Refreshing state... [id=agm-vuhxihydd4-euwe79-ANY]
aws_api_gateway_integration.lambda_root: Refreshing state... [id=agi-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_integration.lambda: Refreshing state... [id=agi-vuhxihydd4-euwe79-ANY]
aws_api_gateway_deployment.example: Refreshing state... [id=mqaxco]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:

  # aws_lambda_permission.apigw will be created
+ resource "aws_lambda_permission" "apigw" {
+ action = "lambda:InvokeFunction"
+ function_name = "ServerlessExample"
+ id = (known after apply)
+ principal = "apigateway.amazonaws.com"
+ source_arn = "arn:aws:execute-api:us-east-1:464392538707:vuhxihydd4/*/*"
+ statement_id = "AllowAPIGatewayInvoke"
}
Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ base_url = "https://vuhxihydd4.execute-api.us-east-1.amazonaws.com/test"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_lambda_permission.apigw: Creating...
aws_lambda_permission.apigw: Creation complete after 1s [id=AllowAPIGatewayInvoke]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

base_url = "https://vuhxihydd4.execute-api.us-east-1.amazonaws.com/test"

Now let us check out what has been deployed by visiting the base_url:

Outcome of v1.0.0
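The same check can be done from the terminal with curl against the base_url printed in the output above (the API ID vuhxihydd4 is specific to this walkthrough; yours will differ):

$ curl https://vuhxihydd4.execute-api.us-east-1.amazonaws.com/test

The response should be the HTML body returned by the Lambda function.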

A New Version of the Lambda Function

Having completed v1.0.0, we will now deploy v1.0.1.

We will update our main.js file so that it returns a different response:

'use strict'

exports.handler = function (event, context, callback) {
  var response = {
    statusCode: 200,
    headers: {
      'Content-Type': 'text/html; charset=utf-8',
    },
    body: '<title color="blue">Test our environment</title>',
  }
  // callback is sending HTML back
  callback(null, response)
}

Notes: You are free to make any change you wish, as long as it makes v1.0.1 visibly different from v1.0.0

Now we zip it again in the current folder:

$ zip ./terraform-lambda.zip main.js
adding: main.js (deflated 28%)

We will use the same S3 bucket we created previously and upload a new object under v1.0.1:

$ aws s3 cp terraform-lambda.zip s3://terraform-lambda-serverless-project/v1.0.1/terraform-lambda.zip
upload: ./terraform-lambda.zip to s3://terraform-lambda-serverless-project/v1.0.1/terraform-lambda.zip

Since we intend to deploy the new version v1.0.1, we need to add the following to lambda.tf:

variable "app_version" {
}
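The variable has no default, so Terraform will prompt for a value on every run unless one is supplied. As a side note, instead of repeating a -var flag, the value can also come from an environment variable named after the variable with a TF_VAR_ prefix, which Terraform reads automatically:

$ export TF_VAR_app_version=1.0.1
$ terraform apply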

Then locate the aws_lambda_function resource defined earlier and change its s3_key argument to include the version variable:

resource "aws_lambda_function" "example" {
function_name = "ServerlessExample"
# The bucket name as created earlier with "aws s3api create-bucket"
s3_bucket = "terraform-serverless-example"
- s3_key = "v1.0.0/example.zip"
+ s3_key = "v${var.app_version}/terraform-lambda.zip"
# (leave the remainder unchanged)
}

The terraform apply command now requires a version number to be provided:

$ terraform apply -var="app_version=1.0.1"
aws_iam_role.lambda_exec: Refreshing state... [id=serverless_example_lambda]
aws_api_gateway_rest_api.example: Refreshing state... [id=vuhxihydd4]
aws_lambda_function.example: Refreshing state... [id=ServerlessExample]
aws_api_gateway_resource.proxy: Refreshing state... [id=euwe79]
aws_api_gateway_method.proxy_root: Refreshing state... [id=agm-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_method.proxy: Refreshing state... [id=agm-vuhxihydd4-euwe79-ANY]
aws_api_gateway_integration.lambda_root: Refreshing state... [id=agi-vuhxihydd4-oixuhxkl23-ANY]
aws_api_gateway_integration.lambda: Refreshing state... [id=agi-vuhxihydd4-euwe79-ANY]
aws_lambda_permission.apigw: Refreshing state... [id=AllowAPIGatewayInvoke]
aws_api_gateway_deployment.example: Refreshing state... [id=mqaxco]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:# aws_lambda_function.example will be updated in-place
~ resource "aws_lambda_function" "example" {
id = "ServerlessExample"
~ last_modified = "2021-02-15T22:12:21.317+0000" -> (known after apply)
~ s3_key = "v1.0.0/terraform-lambda.zip" -> "v1.0.1/terraform-lambda.zip"
tags = {}
# (17 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes

aws_lambda_function.example: Modifying... [id=ServerlessExample]
aws_lambda_function.example: Modifications complete after 1s [id=ServerlessExample]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Outputs:

base_url = "https://vuhxihydd4.execute-api.us-east-1.amazonaws.com/test"

Let us test version v1.0.1 now.

v1.0.1

Rolling Back to an Older Version

Sometimes new code doesn’t work as expected and the simplest path is to return to the previous version. Because all of the historical versions of the artifact are preserved on S3, the original version can be restored with a single command:

$ terraform apply -var="app_version=1.0.0"
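Once that apply finishes, re-invoking the function (or refreshing the base_url in the browser) is a quick way to confirm the v1.0.0 response is back:

$ aws lambda invoke --region=us-east-1 --function-name=ServerlessExample output.txt
$ cat output.txt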

Clean Up

To clean up our infrastructure, we run terraform destroy (supplying the app_version variable as before):

$ terraform destroy -var="app_version=1.0.0"

Also, we need to use the AWS CLI to delete our S3 bucket, since we created it with the AWS CLI rather than Terraform:

$ aws s3 rb s3://terraform-lambda-serverless-project --force
remove_bucket: terraform-lambda-serverless-project

Conclusion

Let us recap what we have accomplished in this project, using this diagram of our infrastructure.

First, we built the Lambda function package (zipping main.js into terraform-lambda.zip) and uploaded it to an AWS S3 bucket.

Second, we created the Lambda function using the lambda.tf file.

Third, we configured API Gateway using the api_gateway.tf file.

Fourth, we allowed API Gateway to invoke Lambda by adding an aws_lambda_permission resource to the lambda.tf file.

Then we deployed a new version by adding a variable "app_version" block to the lambda.tf file and updating s3_key to "v${var.app_version}/terraform-lambda.zip" in the same file.

Finally, we ran terraform apply -var="app_version=1.0.1" to deploy version 1.0.1 and rolled back to version 1.0.0 using terraform apply -var="app_version=1.0.0".

To clean up, we ran terraform destroy -var="app_version=1.0.0".

Throughout this project, we have seen the power of Terraform to apply, update, and destroy infrastructure as code (IaC) together with AWS Lambda and API Gateway. In doing so, we have built a truly serverless infrastructure.
