
Serverless and CloudFormation: Rebuilding the Wild Rydes App

Kiki Morgan
Feb 10

Testing the benefits and limitations of using cloudformation to build a serverless app

I recently started my serverless journey by building the Wild Rydes app. Using Lambda, API Gateway, S3, DynamoDB and Cognito, I created a ride-sharing app that allowed users to request rides from unicorns. Getting the app to finally work was exhilarating. Needing a bigger challenge, I decided to rebuild the serverless app with cloudformation. Cloudformation and Infrastructure as Code (IaC) make it easy to build, destroy, and rebuild infrastructure with less time spent fiddling with the console. The AWS Wild Rydes tutorial included cloudformation templates to help set up the s3 bucket and the cognito user pool, which propelled me into module 3, setting up DynamoDB and Lambda, feeling as if the rest would be a breeze. That feeling quickly evaporated as I struggled to work around deploying a lambda function with a large local file. Things didn't get better in module 4, where my attempt to deploy a RESTful API with CORS left me absolutely bewildered. I searched for solutions online, but most examples were for AWS SAM or other tools. Throughout this journey, I would have appreciated a detailed tutorial to help eliminate some of the unavoidable confusion. So, I created one.

Jump To:

Part 1: Setting Up AWS CLI
Part 2: Create the helper stack
Part 3: Create the wildrydes stack
— — 3a: S3
— — 3b: Cognito
— — 3c: DynamoDB & Lambda
— — 3d: API Gateway
Part 4: Cleanup
Troubleshooting

Part 1: Setting Up AWS CLI

When working with AWS services, the CLI (Command Line Interface) offers greater functionality and speed than the console. Setting up the CLI takes a few extra steps, but the time it saves later on is worth it.

Create an AWS account

To work with AWS services you must have an AWS account.

Tips:

  • After creating an account, follow the directions to secure the root account.
  • Set up a budget alert in the billing console. The resources created in this tutorial should not incur a high cost, but it's better to err on the side of caution for this project and future endeavors.

Create an IAM User

  1. Navigate to the IAM console
  2. Select Users from the IAM navigation pane and click on Add User
  3. Enter a username and grant programmatic access, which is necessary for working with the CLI
  4. Add user to group with admin access.
    If one is not already created, click Create Group. Give the group a name, such as Admin, and select Administrator Access from the policy list
  5. Click Next until you get to the review page. Review your selections and click Create User
  6. On the success page, download the .csv file with the user’s Access Key and Secret Access Key. If you don’t download this file, you will have to delete this user and start over again.

Install AWS CLI

There are many ways to install the CLI. Since I have pip3 and Python 3 installed on my machine, I used the following command

pip3 install awscli --upgrade

The --upgrade option tells pip3 to upgrade the AWS CLI to the latest version.

After installation, you might need to update your PATH variable with the path to the aws executable.

  1. Find the path to the aws executable: which aws
  2. Open your shell's profile script, which might be .bash_profile, .profile or something similar
  3. Add an export command to your profile script with the directory containing the executable: export PATH=/usr/local/bin:$PATH
  4. Load the updated profile into your current session: source ~/.bash_profile

Once the CLI has been installed and PATH has been updated, verify that AWS CLI is installed correctly

aws --version

Configure AWS CLI

After installation, we need to configure the CLI to use the credentials we created earlier. The following command is the quickest way to set up your credentials

aws configure

Running this command will prompt you for four pieces of information: access key, secret access key, AWS Region, and output format

AWS Access Key ID [None]: <AccessKey>
AWS Secret Access Key [None]: <SecretAccessKey>
Default region name [None]: <Region> // e.g. us-east-1
Default output format [None]: json

Anytime aws configure is used, the provided credentials will be stored under the default profile in ~/.aws/credentials and ~/.aws/config. Anytime you run a CLI command, the default profile is used.

If you manage multiple profiles, you will need to use the --profile flag every time you run a command, or use export AWS_PROFILE=specificProfile to change the environment variable for your current bash session.
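For example, to store a second set of credentials under a named profile and use it for a single command (the profile name here is just a placeholder):

aws configure --profile adminUser
aws s3 ls --profile adminUser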

Part 2: Create the helper stack

Now that our CLI is configured correctly, the fun begins.

There are many benefits to using cloudformation, as well as a few limitations. Some of the limitations we encounter in this tutorial are:

  • Populating an s3 bucket
  • Storing lambda code

We will touch on the first limitation later; for now, let's deal with lambda code storage. When creating lambda functions, you have 3 options:

  • Put the function code directly in the yaml/json template
  • Upload the code to an existing s3 bucket beforehand, using the S3Bucket and S3Key properties to reference the lambda file in the template
  • Specify a local file in your template, using the package and deploy commands afterwards. The package command uploads the local file to an already existing s3 bucket and replaces all local file references within the template with the new s3 location. The deploy command updates the stack and its resources

If the code is under the 4 KB limit, then option 1 is the way to go; a quick sketch of that follows. If the code is over the limit, then either option 2 or 3 will work. In either case, you need a pre-existing bucket to store the lambda code. This is where we will start our cloudformation journey.
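For reference, option 1 embeds the code directly in the template with the ZipFile property. A minimal sketch with a throwaway function (the function name and role are placeholders; with ZipFile, node.js code is saved as index.js, so the handler must be index.handler):

InlineFunction:
  Type: "AWS::Lambda::Function"
  Properties:
    FunctionName: helloInline
    Handler: index.handler
    Role: ROLE-ARN
    Runtime: nodejs10.x
    Code:
      ZipFile: |
        exports.handler = async () => {
          return { statusCode: 200, body: "hello" };
        };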

To start, open up your favorite editor and create a yaml file. In our template, the sections we will be focusing on are Resources and Outputs. The resources section defines the resources that will be created in your account and is the only required section of the template. This is where we will create our s3 bucket. The outputs section defines the information we may need or want to pass to other stacks. We will be using more of this section in part 3.

To know how to define and format resources, use the template reference document. Using knowledge from the doc, we can create an empty s3 bucket.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  LambdaBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: UNIQUE-BUCKET-NAME
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: true
        IgnorePublicAcls: true

The PublicAccessBlockConfiguration property ensures that the bucket, and all objects within the bucket, stay private.

To run the stack, use the create-stack command in the CLI.

aws cloudformation create-stack --stack-name STACKNAME --template-body file://path-to-file.yaml

To check that the stack was successfully created use the describe-stacks command.

aws cloudformation describe-stacks --stack-name STACKNAME

Depending on the number of resources within a template, creating a stack can take some time. When the value of StackStatus changes from CREATE_IN_PROGRESS to CREATE_COMPLETE, the stack has been successfully created.
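To check just the status without reading through the full JSON, filter the output with the --query option:

aws cloudformation describe-stacks --stack-name STACKNAME --query "Stacks[0].StackStatus"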

Now that we’ve knocked that out of the way, we can move on to the wildRydes app itself.

Part 3: Create the wildRydes stack

There are 4 main modules within the wildRydes tutorial: Static web hosting, User management, Serverless backend, and RESTful API.

Architecture diagram for the Wild Rydes app

We want to keep the helper stack separate from the wildRydes stack, as it's near impossible to have the lambda function and its s3 bucket in the same template. So, create a new yaml file for this app.

Part 3a: Host a static website

S3 buckets are not only a great way to store files, they are also a reliable and fast way to host static websites. Using CSS, Javascript, and HTML files stored in the bucket, we will create the front end of our app.

Creating the Bucket
This step is very similar to what we did for the helper stack. This time we need to configure a bucket that allows public reads and enables website hosting.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  SiteBucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: UNIQUE-BUCKET-NAME
      WebsiteConfiguration:
        IndexDocument: index.html
Outputs:
  WebsiteURL:
    Value: !GetAtt SiteBucket.WebsiteURL
    Description: URL for static site hosted on s3

To enable website hosting, use the WebsiteConfiguration property and set index.html as the homepage for the website. !GetAtt is an intrinsic function that returns an attribute of the resource specified. In this case, we can use !GetAtt to output the website url.

The bucket policy determines who has access to buckets and their files. Because we want users to be able to interact with the site, we need to give anyone on the internet read permissions.

SiteBucketPolicy:
  Type: "AWS::S3::BucketPolicy"
  Properties:
    Bucket: !Ref SiteBucket
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal: "*"
          Action: s3:GetObject
          Resource: !Sub arn:aws:s3:::${SiteBucket}/*

!Ref returns the value of the specified resource, making it easy to grab the sitebucket name and reference it in the bucket policy. !Sub allows us to substitute variables into a string, so we can inject the sitebucket name into the resource arn.

Launch the wildrydes stack with the create-stack command, using the describe-stacks command to check StackStatus.

Populating the Bucket
This is where we run into the other limitation of cloudformation. To get this site up and running, we need to populate the bucket with the CSS, Javascript, and HTML files from 'wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website'.

There are many solutions to bypass this limitation. The easiest way is to execute the aws s3 sync command in the CLI.

aws s3 sync s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website s3://SITEBUCKET --region REGION

If the command was successfully executed, you should see a list of objects that were copied to your bucket

Testing
Using the describe-stacks command again, grab the website url from the output section. Copy and paste the url into your web browser and you should see the Wild Rydes homepage.
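To grab the url directly instead of scanning the full describe-stacks output, a --query filter works here too:

aws cloudformation describe-stacks --stack-name STACKNAME --query "Stacks[0].Outputs[?OutputKey=='WebsiteURL'].OutputValue" --output text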

Part 3b: Create a cognito user pool

Creating the User Pool
Cognito is a service that handles user authentication and authorization. By utilizing user pools, we can create a user directory for our app.

UserPool:
  Type: "AWS::Cognito::UserPool"
  Properties:
    AliasAttributes:
      - email
    AutoVerifiedAttributes:
      - email
    UserPoolName: WildRydes

The AliasAttributes property allows you to specify an attribute that end users can sign in with in place of the default option, which is username. AutoVerifiedAttributes can be set to either email or phone number; choosing email prompts the user pool to send a verification code to the specified email address when users register.

Creating the Pool Client
To integrate the user pool with our app, we need to create an app client.

UserPoolClient:
  Type: "AWS::Cognito::UserPoolClient"
  Properties:
    ClientName: WildRydesWebApp
    GenerateSecret: false
    UserPoolId: !Ref UserPool

For this tutorial we have to set the GenerateSecret property to false because secrets aren’t currently supported with Javascript apps.

Output the user pool id and app client id as we will be referencing those values later.

Outputs:
  UserPoolId:
    Value: !Ref UserPool
    Description: User pool id
  UserPoolClientId:
    Value: !Ref UserPoolClient
    Description: App client id

Run deploy to implement the new changes.

aws cloudformation deploy --template-file /path_to_template/template.yaml --stack-name existing-stack-name

Deploying an updated stack takes some time as cloudformation has to create a change set and then update the stack. The command lets you know when the stack is successfully updated.

Updating Config.js
Stored in the js folder within our SiteBucket is a config.js file. To connect the website with our userpool we must update the settings in this file with the user pool id, app client id, and region.

Unfortunately, you have to manually update the config file with the outputted values. Use describe-stacks to grab the correct values.

Download config.js to your local desktop

aws s3 mv s3://SITEBUCKET/js/config.js Desktop/config.js

Edit the file with your favorite editor or vim; it should look something like this:

window._config = {
    cognito: {
        userPoolId: 'us-west-1_O5JFdvhytg',
        userPoolClientId: '6k6nudo0cjga0jf83qmgplhyt3',
        region: 'us-west-1'
    },
    api: {
        invokeUrl: '' // e.g. 'https://rc7nyt4tql.execute-api.us-west-2.amazonaws.com/prod'
    }
};

Re-upload the file with the correct values

aws s3 mv Desktop/config.js s3://SITEBUCKET/js/config.js

Testing
To test user management:

  1. Navigate to the register page by clicking on the Giddy Up button on the site homepage.
  2. Complete registration with an email address and password.
  3. This will bring you to the verify page. Enter your email and the verification code that was sent to your email address.
  4. If verification is a success, you will be brought to the sign-in page. Log in with the email address and password you entered during the registration process.
  5. Once successful, you will be redirected to /ride.html, where you should see a notification confirming that you are authenticated.

Success! With the static site running and the userpool correctly configured, we can move on to the serverless backend.

Part 3c: Building the Serverless Backend

Creating the Table
Whenever users interact with the front end of the app and request a unicorn, the data of that request gets stored in a dynamoDB table. For this table we only need a ride attribute, RideId.

DynamoDB:
  Type: "AWS::DynamoDB::Table"
  Properties:
    AttributeDefinitions:
      - AttributeName: "RideId"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "RideId"
        KeyType: "HASH"
    ProvisionedThroughput:
      ReadCapacityUnits: 5
      WriteCapacityUnits: 5
    TableName: Rides

RideId is a string type partition key. Use AttributeDefinitions to define it as a string type. Use KeySchema and HASH KeyType to define it as a partition key. As we are using the default settings, billing mode is provisioned. This means we need to define the read and write capacity units with ProvisionedThroughput.
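If you would rather not estimate capacity at all, DynamoDB also supports on-demand billing. Swapping ProvisionedThroughput for the BillingMode property is a small change, sketched here:

DynamoDB:
  Type: "AWS::DynamoDB::Table"
  Properties:
    AttributeDefinitions:
      - AttributeName: "RideId"
        AttributeType: "S"
    KeySchema:
      - AttributeName: "RideId"
        KeyType: "HASH"
    BillingMode: PAY_PER_REQUEST
    TableName: Rides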

The table is complete, but it is nothing without a lambda function. The lambda function is triggered whenever a user requests a unicorn. It selects a unicorn, writes to the dynamoDB table, and updates the front-end app with the dispatched unicorn. It provides all the logic for the backend of our app. Before we can tackle creating the function, we must create a lambda role. A lambda function is nothing without a role.

Creating the Lambda Role
Every lambda function has an associated IAM role, detailing the permissions for the function. This function only needs access to write items to the DynamoDB table.

DynamodbRole:
  Type: "AWS::IAM::Role"
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: "dynamodbWriteAccess"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - "dynamodb:PutItem"
              Resource:
                - !GetAtt DynamoDB.Arn

The trust policy, detailed under AssumeRolePolicyDocument, is necessary to allow lambda to assume this role. Once the trust policy is added, we can add an inline policy giving lambda dynamoDB write access.

If you want to grant the function the ability to write to CloudWatch Logs, you can also attach the AWSLambdaBasicExecutionRole managed policy with the ManagedPolicyArns property (sketched below). With the role in place, we can move on to the backend logic.
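The optional logging addition sits inside the role's Properties:

ManagedPolicyArns:
  - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole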

Creating the Lambda Function
As I mentioned earlier, there are 3 ways to handle lambda code in your template. We created a separate s3 bucket in the helper stack to handle code storage.

RequestUnicornFunct:
  Type: "AWS::Lambda::Function"
  Properties:
    Code: path-to-local-file/requestUnicorn.js
    FunctionName: requestUnicorn
    Handler: requestUnicorn.handler
    MemorySize: 128
    Role: !GetAtt DynamodbRole.Arn
    Runtime: nodejs10.x

The handler property is the entry point for your lambda code, what gets run when the function gets invoked. The format for this property is the file name followed by the exported handler name, e.g. requestUnicorn.handler.

Use the code property to reference the local path to the lambda code. (Details on how to grab the lambda code below)
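For context, the entry point inside requestUnicorn.js is exported roughly like this, which is what the requestUnicorn.handler value points to:

exports.handler = (event, context, callback) => {
    // pick a unicorn, record the ride in dynamoDB, and send the response
};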

Our lambda function resource is created, but if we were to run the deploy command, we would run into an error: the lambda code doesn't exist yet. To rectify this:

  • copy and paste the contents of the requestUnicorn.js file from aws github into a local file
  • update the cloudformation stack with the path to the local file
  • use the package command to zip the code, upload it to the s3 bucket we created in the helper stack, and replace the local path with the s3 location of the file in your template
aws cloudformation package --template-file /path_to_template/template.yaml --s3-bucket bucket-name-from-helper-stack --output-template-file packaged-template.yaml
  • use deploy to update the cloudformation stack, making sure to specify the outputted file from the package command. Because we created an IAM role in this update, we must use the --capabilities flag to explicitly acknowledge the role we are creating
aws cloudformation deploy --template-file /path_to_packaged_template.yaml --stack-name EXISTING-STACK --capabilities CAPABILITY_IAM

For the rest of the tutorial, you have a few options:

  • Continue working with the original file. This means packaging the template every time you want to deploy new changes, creating a new file in your lambda storage bucket.
  • Switch to working with the packaged file. This means working with a different file but only needing to run the deploy command when new changes are made.
  • Copy the value of the code property from the packaged file, which should look like the code below, into the original file and just run deploy every time new changes are made:
Code:
  S3Bucket: BUCKETNAME
  S3Key: d92b86b3d27db8cb6f80f49d6ac22ce6

Testing
To test that the lambda function is working, copy and paste the following test event into a local file

{
    "path": "/ride",
    "httpMethod": "POST",
    "headers": {
        "Accept": "*/*",
        "Authorization": "eyJraWQiOiJLTzRVMWZs",
        "content-type": "application/json; charset=UTF-8"
    },
    "queryStringParameters": null,
    "pathParameters": null,
    "requestContext": {
        "authorizer": {
            "claims": {
                "cognito:username": "the_username"
            }
        }
    },
    "body": "{\"PickupLocation\":{\"Latitude\":47.6174755835663,\"Longitude\":-122.28837066650185}}"
}

Use the lambda invoke command to test the function

aws lambda invoke --function-name requestUnicorn --payload fileb://path_to_test_event.json response.json

response.json is the file where test execution results will be saved. When you open response.json you should see something similar to the following:

{
    "statusCode": 201,
    "body": "{\"RideId\":\"SvLnijIAtg6inAFUBRT+Fg==\",\"Unicorn\":{\"Name\":\"Rocinante\",\"Color\":\"Yellow\",\"Gender\":\"Female\"},\"Eta\":\"30 seconds\"}",
    "headers": {
        "Access-Control-Allow-Origin": "*"
    }
}

With that response we can move on to setting up the API Gateway.

Part 3d: RESTful API

Creating the Rest API
To turn our static website into a dynamic one, we need to expose an HTTP endpoint by configuring a RESTful API in API Gateway.

RestApi:
  Type: "AWS::ApiGateway::RestApi"
  Properties:
    EndpointConfiguration:
      Types:
        - EDGE
    Name: wildRydes

We want an edge-optimized endpoint so that public users have the best experience when accessing this site over the internet.

Creating the Cognito Authorizer
To authenticate API calls, we're going to connect it to the cognito user pool we created earlier.

CognitoAuthorizer:
  Type: "AWS::ApiGateway::Authorizer"
  Properties:
    IdentitySource: method.request.header.Authorization
    Name: wildRydes
    ProviderARNs:
      - !GetAtt UserPool.Arn
    RestApiId: !Ref RestApi
    Type: COGNITO_USER_POOLS

Working in the console, you can simply select Authorization for the identity/token source. Outside the console, you must specify the full header mapping expression, which for the Authorization header is method.request.header.Authorization.

Output the authorizer id and the rest api id as we will need those values to test our progress so far.

Outputs:
  RestApiId:
    Value: !Ref RestApi
    Description: Rest API Id
  AuthorizerId:
    Value: !Ref CognitoAuthorizer
    Description: Authorizer id

Run deploy to implement the new changes; don't forget the --capabilities flag.

Testing
To test the authorization configuration:

  1. Visit /ride.html on the serverless app website.
  2. Look for the ‘Successfully Authenticated’ notification. At the bottom of the popup is an authorization token, a long string of characters.
  3. Copy the token and use it in the test-invoke-authorizer command.
aws apigateway test-invoke-authorizer --rest-api-id XXXX --authorizer-id XXXX --headers Authorization='PASTEHERE'

Use describe-stacks to grab the rest-api-id and authorizer-id.

The expected output is a 200 response code and claims for the user

{
    "clientStatus": 0,
    "latency": 2,
    "claims": {
        "aud": "76sh9q637rbabe8h5ocu850bfq",
        "auth_time": "1579206976",
        "cognito:username": "janedoe-at-gmail.com",
        "email": "janedoe@gmail.com",
        "email_verified": "true",
        "event_id": "2c63b68b-1295-477f-95dd-b6c26263b566",
        "exp": "Thu Jan 16 21:36:16 UTC 2020",
        "iat": "Thu Jan 16 20:36:16 UTC 2020",
        "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_8sZUpbpqu",
        "sub": "3c785245-c159-45d8-9df3-dfaa83da180d",
        "token_use": "id"
    }
}

Creating the Ride Resource
With the API secured through our cognito user pool, the next step is to create a ride resource for the API

ApiResource:
  Type: "AWS::ApiGateway::Resource"
  Properties:
    ParentId: !GetAtt RestApi.RootResourceId
    PathPart: ride
    RestApiId: !Ref RestApi

The parent resource in this case is the API root. To get its id, use the !GetAtt function.

Creating the POST Method
Resources need one or more HTTP methods specified to execute correctly. This ride resource needs a POST method. To connect lambda and API gateway, we need to configure the POST method to use lambda proxy integration backed by the requestUnicorn function created in part 3c.

ApiMethod:
  Type: "AWS::ApiGateway::Method"
  Properties:
    AuthorizationType: COGNITO_USER_POOLS
    AuthorizerId: !Ref CognitoAuthorizer
    HttpMethod: POST
    Integration:
      IntegrationHttpMethod: POST
      Type: AWS_PROXY
      Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${RequestUnicornFunct.Arn}/invocations
    MethodResponses:
      - StatusCode: '200'
        ResponseModels:
          application/json: 'Empty'
        ResponseParameters:
          method.response.header.Access-Control-Allow-Origin: true
    ResourceId: !Ref ApiResource
    RestApiId: !Ref RestApi
  • HttpMethod: the client will use POST request to call the ride resource
  • AuthorizationType: connects the user pool created in 3b to the method
  • Integration: sets up lambda proxy integration. For type AWS_PROXY, the IntegrationHttpMethod must be POST. URI is the long path to the requestUnicorn function
  • MethodResponses is necessary to enable CORS

Enabling CORS
CORS (cross-origin resource sharing) is the mechanism browsers and servers use to control which HTTP requests can be made across origins, securing the interactions between our frontend app and backend API. Enabling CORS in the console is a simple step; in cloudformation, not so much. To do so for this resource we must:

  • Create OPTIONS method
  • Add 200 Method Response with Empty Response Model to OPTIONS method
  • Add Mock Integration to OPTIONS method
  • Add 200 Integration Response to OPTIONS method
  • Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Method Response Headers to OPTIONS method
  • Add Access-Control-Allow-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin Integration Response Header Mappings to OPTIONS method
  • Add Access-Control-Allow-Origin Method Response Header to POST method
  • Add Access-Control-Allow-Origin Integration Response Header Mapping to POST method

The IntegrationResponse requirement is handled for us in the JS code. The MethodResponse requirement is handled in the POST method. To fulfill the other six requirements in the CORS checklist, we will create an OPTIONS method. The OPTIONS method handles preflight requests, the browser's way of checking that our POST requests are safe to send.

OptionsMethod:
  Type: "AWS::ApiGateway::Method"
  Properties:
    AuthorizationType: NONE
    HttpMethod: OPTIONS
    Integration:
      IntegrationResponses:
        - StatusCode: '200'
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'POST,OPTIONS'"
            method.response.header.Access-Control-Allow-Origin: "'*'"
          ResponseTemplates:
            application/json: ''
      PassthroughBehavior: WHEN_NO_MATCH
      RequestTemplates:
        application/json: '{"statusCode": 200}'
      Type: MOCK
    MethodResponses:
      - StatusCode: '200'
        ResponseModels:
          application/json: 'Empty'
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: false
          method.response.header.Access-Control-Allow-Methods: false
          method.response.header.Access-Control-Allow-Origin: false
    ResourceId: !Ref ApiResource
    RestApiId: !Ref RestApi

Creating the Resource Policy
The hardest part is out of the way, but the API is not fully functional yet. API Gateway still needs permission to invoke the lambda function. Use the AWS::Lambda::Permission resource to add a resource-based policy.

ApiPermissions:
  DependsOn: RequestUnicornFunct
  Type: "AWS::Lambda::Permission"
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: requestUnicorn
    Principal: apigateway.amazonaws.com
    SourceArn: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${RestApi}/*/POST/ride

To prevent the resource policy from being created before the lambda function exists, use the DependsOn property; otherwise, the stack will fail.

Deploying the API
Finally, deploy the API and update the config.js file with the invokeURL!

ApiDeploy:
  DependsOn: ApiMethod
  Type: "AWS::ApiGateway::Deployment"
  Properties:
    RestApiId: !Ref RestApi
    StageName: prod
Outputs:
  InvokeUrl:
    Value: !Sub https://${RestApi}.execute-api.${AWS::Region}.amazonaws.com/prod

Because we created a rest API and method in the same template as the deployment resource, we need DependsOn to prevent stack errors.

Run the deploy command to implement the new changes, then use the describe-stacks command to grab the invokeUrl.
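If you want to smoke-test the endpoint before touching the frontend, you can call it directly with curl. INVOKE-URL and AUTH-TOKEN are placeholders; the token is the same one you copied for the authorizer test:

curl -X POST INVOKE-URL/ride -H "Authorization: AUTH-TOKEN" -H "Content-Type: application/json" -d '{"PickupLocation":{"Latitude":47.6174755835663,"Longitude":-122.28837066650185}}'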

Updating Config.js
Use the s3 mv command to move the config.js file to your local desktop, update the config file with the invokeUrl, then move the config.js file back to your sitebucket, the same process we used in 3b.

The complete config.js file should look similar to this:

window._config = {
    cognito: {
        userPoolId: 'us-west-1_O5JFdvhytg',
        userPoolClientId: '6k6nudo0cjga0jf83qmgplhyt3',
        region: 'us-west-1'
    },
    api: {
        invokeUrl: 'https://cz7i1fitfj.execute-api.us-east-1.amazonaws.com/prod'
    }
};
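If you would rather not edit config.js by hand, the whole round trip can be scripted. A rough sketch, assuming the stack name placeholder and the InvokeUrl output key used in this tutorial:

INVOKE_URL=$(aws cloudformation describe-stacks --stack-name STACKNAME --query "Stacks[0].Outputs[?OutputKey=='InvokeUrl'].OutputValue" --output text)
aws s3 cp s3://SITEBUCKET/js/config.js config.js
sed -i.bak "s|invokeUrl: ''|invokeUrl: '$INVOKE_URL'|" config.js
aws s3 cp config.js s3://SITEBUCKET/js/config.js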

Testing
To test, sign into the wildRydes website and request a unicorn. If you see a notification in the right sidebar that a unicorn has arrived, then you have successfully created a serverless application using cloudformation!

Part 4: Cleanup

The beauty of IaC is the ability to deploy and destroy resources with a few commands. To delete these stacks, we will take advantage of the delete-stack command. Cloudformation can’t delete buckets that still have objects in them. So, we have some files to delete.

Remove files from s3 buckets

Removing files from an s3 bucket is pretty simple: we need the rm command and the name of the bucket.

aws s3 rm s3://WEBSITE-BUCKET --recursive

Don’t forget to remove the files from both the website hosting bucket and the lambda code bucket. The --recursive option deletes all objects within the bucket.

Delete stacks

Now that the s3 buckets are empty, we can successfully run delete-stack. Run it twice: once for the wildrydes stack and once for the helper stack.

aws cloudformation delete-stack --stack-name STACKNAME

Troubleshooting

Cloudformation has a steep learning curve. Hopefully this tutorial has set you up to succeed, but realistically errors will occur. When you hit a rough spot, use the following resources to figure it out.

  • use the validate-template command to check whether your template file is valid yaml or json
aws cloudformation validate-template --template-body file:///path/to/test-template.yaml
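  • use the describe-stack-events command to find out exactly which resource failed and why; failed events include a status reason with the underlying error
aws cloudformation describe-stack-events --stack-name STACKNAME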

Conclusion

Using only the CLI, we built a serverless ride-sharing app that utilized s3, cognito, dynamoDB, lambda, and API gateway. Cloudformation made it easy to deploy many resources with a single file, making resources easier to manipulate and control. But we couldn't fully automate the process, as certain actions pushed up against the limitations of the service. To increase automation, we would have to pair cloudformation with a bash script or other services in the aws landscape. I hope you enjoyed this tutorial; we've only scratched the surface of Serverless and Cloudformation. Thanks for reading!

Code for these two cloudformation stacks can be found on github.

