Developing a Serverless Web Application with Session & User Management on AWS

Rebeca Chinicz · Published in The Startup · May 18, 2020 · 20 min read

A detailed walk-through

Intro

A while ago, I had to build a traditional (i.e. server-based, monolithic) web app for a project-centered course I took. Recently, during lock-down, I thought it’d be interesting to build such an app on the cloud (on AWS, specifically), and more interesting still, to build such an app with a serverless architecture. In this article, I will try to explain how to do that, step-by-step, and demonstrate why such an architecture might actually be better, depending on an app’s specific needs.

I decided to go with a common web app scenario: an online store, where users can browse a catalog, save items to their cart (before and after login) and get notified when they make a new purchase. In other words, an app that can manage users and sessions, authenticate and identify users, and trigger specific functions based on specific back-end events.

The architecture

You can check out a working, active version of the website here.

The code for this project can be found on my GitHub, so if you want to follow this step-by-step, head on over there and clone the repository.

The Services Used

  • Simple Storage Service, AKA S3 (green bucket icon) — object (i.e. file) storage service.
  • Lambda (orange Greek letter lambda) — serverless function-code to be triggered by events of your choosing.
  • Cloudwatch (magenta cloud with magnifying glass) — event logging and monitoring tool.
  • DynamoDB (blue database icon with thunder) — NoSQL database with partition & sort key and index support.
  • Simple Email Service, AKA SES (blue envelope) — handles the sending of emails. Costs $0.10 for every 1,000 emails you send. See more pricing info here.
  • CloudFront (indigo circle with smaller, inner circles and lines) — CDN (Content Delivery Network).
  • Route 53 (indigo 53 road sign) — Cloud DNS (Domain Name System)
  • API Gateway (indigo thing with “</>” in the middle) — helps to define an API’s resources, authentication/authorization, methods and integrations.
  • Cognito (red ID card with check symbol) — user and identity management.

Pre-requisites

  • An AWS account. This project shouldn’t cost more than $0.30; almost every service used here (except SES, which is more of an extra, as we’ll soon see) is eligible for free-tier use.
  • (Preferably) Some familiarity with HTML and JavaScript.

0. The site itself

The first thing we’re going to do is define what users can do. A guest (or a registered user prior to login) will be able to browse the catalog, add/remove items from their cart, view their cart, log in, sign up and verify their account after signing up. A logged-in user may also browse the catalog, add/remove items from their cart, view their cart and make a purchase.

The site will consist of six main pages: index/home, catalog, my cart, login, register/sign up and verify (for a newly-registered user to verify their account). Most of the pages share a similar structure: check whether a registered user is currently logged in; if so, set the page’s display and functions (add to cart, remove from cart, buy, etc.) to work with the API; otherwise, for a guest, get or assign them a temporary identity credential, and set the display and functions to work with a specific DB table (more on that later).

I won’t cover the code in detail here, but if you feel like you need to see it to understand it better, check out the JavaScript front end files on the GitHub repository, as I tried to write clear and instructive comments on those.
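
To make the shared page structure concrete, here is a minimal sketch of the branch each page runs on load. The function and mode names are illustrative, not the exact ones in the repository:

```javascript
// Decide which data path a page should use. The session object is what the
// Cognito JS library hands back for the current user (null/undefined for guests).
function resolveSessionMode(session) {
  // A page runs in "user" mode only when Cognito reports a valid session.
  if (session && session.isValid && session.isValid()) {
    return 'user';  // call the API Gateway endpoints with the session's token
  }
  return 'guest';   // talk directly to the guest carts DynamoDB table
}

// Example: a page's init code would branch on the result, e.g.
// if (resolveSessionMode(session) === 'user') initUserPage(); else initGuestPage();
```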

1. Hosting the website on S3

One neat feature of Amazon S3 is that it allows us not only to store the files for our site, but also to host the site from there. It does this as long as it’s a static website and the bucket (essentially the root directory of our files) allows public read access, i.e. anything in it can be viewed (but not modified) by anyone.

To get started, let’s first create a bucket:

  • Head on over to your AWS console and select S3, from Services.
  • Click Create Bucket.
  • Enter a unique name for your bucket, under Bucket Name. For bucket-naming rules click here.
  • Select the region you want your bucket to be in. I should point out that if you want all of the services used in the website to be in the same region, you should choose us-east-1 (N. Virginia), us-west-2 (Oregon) or eu-west-1 (Ireland).
  • Now, in the “Bucket settings for Block Public Access” section, make sure to uncheck “Block all public access”.
  • After that, you’ll see an alert informing you that this will make the bucket and its objects public. Check “I acknowledge that the current settings might result in this bucket and the objects within becoming public.”
  • Finally, click Create Bucket.

After the bucket has been successfully created, let’s upload the website’s files (“frontend” folder of the GitHub repository):

  • On the S3 Management Console, find your new bucket, and click on it.
  • Click on Upload.
  • Drag and drop or manually add all files in the frontend folder, such that they maintain their structure (i.e. don’t upload all files on the same level, keep the file structure), but don’t upload the folder as a folder (i.e. upload its contents, but not the folder itself).
  • Click Next, and then click the selector under “Manage public permissions”, select “Grant public read access to this object(s)” and click Next.
  • Click Next again and, finally, Upload.

It may take a while for the upload to finish. Once it does, let’s set the bucket up for static website hosting:

  • Head over to the Properties tab on the bucket page.
  • Click on Static website hosting.
  • Click on the option “Use this bucket to host a website”, and then on Save.

Voila, you now have a static website up and running. Above the options in the “Static website hosting” section, you should see an Endpoint link. If you follow this link, it’ll take you to the website, though the site shouldn’t be able to do much right now, since it relies on other services we have yet to set up.

2. Distributing and routing our site with CloudFront and Route 53

Our site currently lives in a single region, which may make it slower: no matter where our users are, they always request the same objects from the same location, which may or may not be close to them. The site also doesn’t have an SSL certificate, only works over HTTP and isn’t associated with our own domain.

This step will address these problems/concerns. If you don’t have a domain of your own, or you do but you don’t want to secure traffic with an SSL certificate and HTTPS, feel free to skip on to the next step.

Before continuing, make sure you have the S3 endpoint URL ready to be copy-pasted. Now, let’s create the app’s distribution:

  • Under Services, select CloudFront.
  • Once the Distributions page has loaded, click on Create Distribution.
  • Click Get Started, under the first (Web) option.
  • On Origin Settings, under Origin Domain Name, enter the S3 bucket endpoint URL.
  • On Default Cache Behavior Settings, under Viewer Protocol Policy, select Redirect HTTP to HTTPS.
  • On Distribution Settings, in the Alternate Domain Names (CNAMEs) text-area, enter the name of the app under your domain, e.g. jukebox.mydomain.com
  • In the SSL Certificate section, choose Custom SSL Certificate and click on Request or Import a Certificate with ACM.
  • In the window that opens, under Domain name, enter your chosen domain name for your app and then click Next.
  • Leave DNS validation selected and click Next.
  • Click Review, then Confirm and Request.
  • On Validation, click the caret next to your domain name.
  • Click Create record in Route 53, and then Create on the modal.
  • If everything went OK, you should see a message confirming the request.
  • After a short while, your certificate should be validated, and you can enter the domain name you entered six steps ago into the Custom SSL Certificate field, back on the CloudFront distribution creation page.
  • Scroll down to the bottom of the page and click Create Distribution.

It’s probably going to take a while for your new distribution’s state to change from “in progress” to “deployed” (you can check that in the Distributions part of the CloudFront Management Console), but, in the meantime, you can start creating a record for your app’s domain name on Route 53:

  • Under Services (on the top bar), click on Route 53.
  • From your dashboard, go to your Hosted Zone (if you don’t have one already, follow the steps on this guide to create one).
  • Click on your chosen domain.
  • Click Create Record Set.
  • Enter the name of your application (or whatever you want the site to be called) to the left of your domain name.
  • Choose “Yes”, next to Alias.
  • In your CloudFront Management Console, click on the new distribution and copy its Domain Name.
  • Back in the Route 53 Hosted Zones, paste that in the Alias Target field, and click Save Record Set.

Mazal tov! Your domain name is now registered and associated with your distribution, which will speed up and secure access to your S3-hosted website. If you can’t call your website through the registered domain name, you may need to force the CloudFront distribution to redeploy, and check if it’s under the right alias.

3. Table creation on DynamoDB

Now that we have a well-distributed and routed website, let’s start building the tables where we’ll store our user and session data. We’re going to create three different tables.

The first table will store the guest users’ carts. This table is temporary: a guest can only keep their cart for a maximum of two hours, which also protects this (partially public) table from accumulating stale or abusive writes. The primary key (more on DynamoDB tables and keys here) will be the IdentityID the guest gets when first entering the website, and the only other attribute will be the number-set CartItems (we don’t declare it here, since DynamoDB only requires key attributes at table creation):

  • From Services, click on DynamoDB, and then on Create table.
  • For Table name, enter JukeboxGuestCarts.
  • For Primary key, enter IdentityID, and leave it as a string.
  • Click Create.

One table down, two to go. The next table we’ll create will record the registered users’ carts. Here, the key will be the username (which in our case will always be an email), and the same one attribute as the previous table, CartItems:

  • Click on Create table.
  • For Table name, enter JukeboxUserCarts.
  • For Primary key, enter Username, and leave it as a string.
  • Click Create.

The last table will record purchases. The primary partition key will be a PurchaseID we’ll generate when recording a purchase, and the sort key will be the Username. The table will also have CartItems, Total and PurchaseDate attributes:

  • Click on Create table.
  • For Table name, enter JukeboxPurchases.
  • For Primary key, enter PurchaseId, and leave it as a string.
  • Select sort key.
  • For Sort key, enter Username, and leave it as a string.
  • Click Create.

And that’s it. Habemus database.
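
As a preview of how the site will talk to these tables, here is a hypothetical helper that builds the request you would hand to the AWS SDK’s DocumentClient to save a guest cart (the real front-end code is in the repository; only the table and attribute names below come from the steps above):

```javascript
// Builds the params for AWS.DynamoDB.DocumentClient#put. With DocumentClient,
// CartItems would normally be a number set created via docClient.createSet(itemIds);
// a plain array is shown here so the sketch stays self-contained.
function buildGuestCartPut(identityId, itemIds) {
  return {
    TableName: 'JukeboxGuestCarts',
    Item: {
      IdentityID: identityId, // the guest's Cognito identity, our partition key
      CartItems: itemIds,
    },
  };
}
```

In the real code this object would be passed as `docClient.put(params, callback)`.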

4. User management with Cognito

To get the guest part of our website working, the first thing we’re going to need is a Cognito Identity Pool to be associated with our app. Identity Pools are a great way to give guests temporary credentials, identification and, by attaching specific policies, access to specific resources. In our case, we’ll want to give our guests limited access to the guest carts table, which will help us manage their sessions. So without further ado:

  • From Services, click on Cognito (under Security, Identity and Compliance).
  • Click on Manage Identity Pools, and then Create new identity pool.
  • On Identity pool name, enter Jukebox Guest.
  • Under Unauthenticated identities, check Enable access to unauthenticated identities.
  • Click Create pool, then Allow.
  • From Services, go to IAM.
  • Click on Roles.
  • Searching by “Jukebox” (or whatever you named your Identity pool), find the role that looks like “Cognito_<pool name>Unauth_Role” and click on it.
  • Click on Attach policies, and then on Create policy.
  • Select JSON (as opposed to the visual editor), and paste the following, replacing <Guest carts table name> with the name you gave the table that records the guests’ carts:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:Delete*",
                "dynamodb:Get*",
                "dynamodb:BatchWrite*",
                "dynamodb:Update*"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/<Guest carts table name>"
        }
    ]
}
  • Click Review policy.
  • Give the policy a name you’ll remember, preferably, an adequate description, and then click on Create policy.
  • Back on the other page (“Add permissions to …Unauth_Role”), hit the refresh button at the top-right corner, above the table.
  • Search for and select the new policy you created two steps ago.
  • Click Attach policy.
  • Open the file frontend/js/config.js.
  • Navigate to the Sample code section of your Identity pool and copy the string with the comment “Identity pool ID” next to it.
  • On identity.idPoolId, paste the pool ID.
  • On identity.region, enter the region of your pool, e.g. us-east-1.
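
With those values in config.js, the front end can ask Cognito for temporary guest credentials. The helper below is hypothetical (not in the repo), and the browser-SDK usage is sketched in comments since it needs a live identity pool:

```javascript
// The identity pool ID embeds its region before the colon, so a small helper
// can keep the two config values consistent:
function regionFromPoolId(idPoolId) {
  return idPoolId.split(':')[0];
}

// In the browser, the AWS SDK is then configured roughly like this:
// AWS.config.region = regionFromPoolId(idPoolId);
// AWS.config.credentials = new AWS.CognitoIdentityCredentials({ IdentityPoolId: idPoolId });
// AWS.config.credentials.get(function (err) {
//   // On success, credentials.identityId is the IdentityID used as the
//   // guest cart's partition key.
// });
```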

All right! You can now access your website and, as a guest, add/remove items to/from your cart. Next, we’re going to set up a User Pool, to handle user management:

  • From the Cognito dashboard, select Manage User Pools, and then click on Create a user pool.
  • In Pool name, enter Jukebox, and click on Review defaults.
  • Next to App clients, click on Add app client, and then on Add an app client again.
  • Under App client name, enter JukeboxApp.
  • Under token expiration, enter 1.
  • Click on Create app client, then on Return to pool details, then on Create pool.
  • Now, copy the Pool Id you see on your User Pool’s General Settings section.
  • Back on the config.js file, paste the User Pool Id on cognito.userPoolId.
  • Under General Settings, navigate to App Clients, and copy the App client id.
  • Paste it on cognito.userPoolClientId, in config.js.
  • Finally set the region of the Cognito User Pool, on the config.js file.
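
When you’re done, the Cognito-related part of config.js should look roughly like this. All values are placeholders, and the exact key names should match the repository’s config.js:

```javascript
// Placeholder sketch of config.js; in the real file this is assigned to a
// global the page scripts read (e.g. window._config or similar).
const config = {
  cognito: {
    userPoolId: 'us-east-1_XXXXXXXXX',              // from General Settings
    userPoolClientId: 'xxxxxxxxxxxxxxxxxxxxxxxxxx', // from App clients
    region: 'us-east-1',
  },
  identity: {
    idPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
    region: 'us-east-1',
  },
  api: {
    invokeUrl: '', // filled in during section 7
  },
};
```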

And there it is: user management. Users can now log in, register and verify their account with a code they’ll receive by email upon registration. It should be noted, though, that logged-in users still have no permissions to anything, so right now they can’t actually do much.

5. Extra: sending emails with SES

If you don’t want to use Amazon SES, skip to the next section.

As I’ve mentioned before, SES is not free, but it is relatively cheap. This is an extra touch for the website: configuring SES will allow us to send account verification emails from our own chosen email address (instead of Cognito’s default address), as is recommended, and also to send emails notifying users when they’ve made a purchase, again from our own email address.
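
To make the purchase-notification part concrete, here is roughly the shape of the request addPurchase hands to SES. The helper and the email text are illustrative; see addPurchase.js for the real version:

```javascript
// Builds the params for AWS.SES#sendEmail. srcEmail is the verified address
// we'll later configure as the SRC_EMAIL Lambda environment variable.
function buildPurchaseEmail(srcEmail, username, total) {
  return {
    Source: srcEmail,
    Destination: { ToAddresses: [username] }, // usernames are email addresses
    Message: {
      Subject: { Data: 'Your Jukebox purchase' },
      Body: { Text: { Data: 'Thanks! Your total was $' + total + '.' } },
    },
  };
}
```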

To get started, let’s set up SES:

  • From Services, select Simple Email Service (SES).
  • Choose the region you like, preferably one that’s close to your client base and/or the other services.
  • Under Identity Management, click on Email addresses, and then on Verify a New Email Address.
  • Enter your email (or the email you want to use for the app).
  • After you’ve verified your email address, refresh the table and you should see its status change to verified.
  • If the account you’re using is new, you may be in the SES “sandbox”. The one consequence of this that is relevant here is that you can’t send emails to unverified addresses while in the sandbox, so follow the steps on this guide to get your account out of the sandbox.

Now that we have SES set up, let’s configure our Cognito User Pool to work with it when sending account verification emails:

  • From Services, go to Cognito.
  • Choose Manage User Pools and select this app’s User Pool.
  • Under General Settings, choose Message customizations.
  • Under SES Region, select the region of your recent SES account.
  • Under FROM email address ARN, select the email you’ve verified on that SES account.
  • On “Do you want to send emails through your Amazon SES Configuration?”, select “Yes — Use Amazon SES”.
  • Scroll down and click on Save changes.

6. The backbone of the back-end

This is where the magic happens. It’s where we replace the antiquated monolithic server with several modern, modular, independent functions. This is where things really get serverless.

For every possible operation a logged-in user can perform (add cart item, remove cart item, get cart and add purchase), and every operation that must happen without the user even realizing it (clear cart and add cart), an AWS Lambda function will be declared.

The code for the functions can be found on backend/lambda-functions. Here’s a quick overview of what each function does:

  • Add Cart: gets called after the user has logged in. Gets as part of the request data the list of items that were in the user’s guest cart, and writes them to their cart on the users’ carts table, if they weren’t there already.
  • Clear Cart: gets called after a purchase is made. Deletes the user’s cart from the users’ carts table.
  • Add Cart Item: gets called whenever the user clicks the “add to cart” button. Adds an item to the given user’s cart.
  • Remove Cart Item: gets called whenever the user clicks the “remove from cart” button. Removes an item from the given user’s cart.
  • Get Cart: gets called when the catalog and cart pages load. Returns a list of all items in the given user’s cart.
  • Add Purchase: gets called when the user clicks “Buy” and then “Confirm”. Records a purchase on the purchases table, and then sends an email to the user through Amazon SES, informing them of the purchase and its details. SES is not eligible for free-tier use, so if you’d rather skip it, feel free to comment out/delete lines 5 and 44–72 of the file addPurchase.js.
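
One pattern the user-facing handlers share is worth calling out: they should take the username from the claims that API Gateway’s Cognito authorizer attaches to the event, never from the request body, so a client can’t act on someone else’s cart. A sketch (the repo’s handlers may read this slightly differently):

```javascript
// With a Cognito User Pool authorizer on a REST API, the verified token
// claims land on event.requestContext.authorizer.claims.
function usernameFromEvent(event) {
  return event.requestContext.authorizer.claims['cognito:username'];
}

// e.g. inside a handler:
// const username = usernameFromEvent(event);
// const { Item } = await docClient.get({ TableName: 'JukeboxUserCarts',
//                                        Key: { Username: username } }).promise();
```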

I now suggest you open DynamoDB in another tab and add an item to the user carts table, with the username example@lemail.com and any cart items you wish in the CartItems attribute, keeping in mind that this attribute is a NumberSet. More info on DynamoDB data types here.

This example item will help with testing our functions, since they’re not yet connected to the website. If you want to test the SES/email purchase confirmation email functionality, I suggest you also add an example item with a real email address whose inbox you can check.

As you may have noticed, there is a seventh Lambda function in the folder, but I’ll get to it later. Let’s just say it’s not exactly essential to get the site up and running. Speaking of which, to create the aforementioned Lambda functions, repeat the following steps for each function mentioned above:

  • Head on over to Lambda, from Services, and click Create function.
  • Give the function an appropriate name.
  • The default runtime should already be the latest version of Node.js, but if it isn’t, make sure to select that.
  • Under Permissions, click on Choose or create an execution role.
  • Now, click on Services and open IAM on another tab.
  • Go to Roles and click on Create role.
  • Under Choose a use case, select Lambda and then click on Next: Permissions.
  • Click Create policy. A new tab will open.
  • Click on JSON, as opposed to the Visual Editor.
  • For any function except addPurchase, copy and paste the following, replacing <User carts table name> with the name you gave the table that records the users’ carts:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:PutItem",
                "dynamodb:Delete*",
                "dynamodb:Get*",
                "dynamodb:BatchWrite*",
                "dynamodb:Update*"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/<User carts table name>"
        }
    ]
}
  • For addPurchase, copy/paste the following, replacing <Purchases table name> with the name you gave the table that records purchases:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/<Purchases table name>"
        }
    ]
}

For addPurchase, finish creating the policy above, and then create another policy (this one lets the function send emails through SES), copy-pasting the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail",
                "ses:SendRawEmail"
            ],
            "Resource": "*"
        }
    ]
}
  • Back on the Role tab, click the refresh icon to the right, above the table.
  • Search for the new policy/policies you created, select it/them and click on Next: Tags, then Next: Review.
  • Give the role an appropriate name, that you’ll remember.
  • Back to the Lambda tab, select Use an existing role.
  • Hit refresh and select the new role you created.
  • Click create function.
  • Scroll down to the Function code section and paste the code of the specific function into the editor, which should have a file called index.js open.
  • On the bar of buttons to the right of the function name, near the top of the page, click the Select a test event selector, and then on Configure test events.
  • On the Event name, enter the name of the function.
  • On the editor, delete the existing contents and paste the contents of the file <function name>.json, from the directory backend/test-request-events. Replace the example username from the addPurchase test with a real email you have access to, if you want to test the SES functionality (assuming you want it and you’ve set it already).
  • Click Create.
  • Click Save and then Test. The function should succeed and you should be able to see its logs and monitoring information.

Lastly, we’ll configure the Lambda function addPurchase to correctly work with SES, so if you’re not going to use that, skip on down to the next section:

  • Go to Lambda, from Services.
  • Select the addPurchase function.
  • Inside the editor, edit line 5 as necessary, to state the region where your SES account is set up.
  • Beneath the editor, on Environment Variables, click Edit, and then Add environment variable.
  • For Key enter SRC_EMAIL, and for Value, enter your SES verified email address.
  • Click Save, and then Save again on the main function page.

After you’ve done that for the six functions mentioned in this section, you should have a fully functional (if isolated) serverless backend. Note that it is still not integrated with the frontend, so for now these functions are not operational, but they will be after the next section.

7. The API

Finally, we’ve arrived at the glue holding the app together: the API. Amazon’s API Gateway is extremely easy to use, and in this case it will both route each client request to the appropriate Lambda function and authenticate those requests. How? On every request the client makes through the API Gateway to our backend fleet of Lambda functions, it will add a header with the authentication token Cognito gives logged-in users, whenever the client finds itself to be a valid user in an active session.
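
Concretely, the client-side call ends up looking like the sketch below. The helper is illustrative; what’s standard is that, for a Cognito User Pool authorizer, the ID-token JWT goes in the Authorization header by default:

```javascript
// idToken is the JWT the front end gets from the current Cognito session.
function buildAuthorizedRequest(invokeUrl, path, idToken, body) {
  return {
    url: invokeUrl + path,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: idToken, // checked by the Cognito authorizer
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage: const req = buildAuthorizedRequest(config.api.invokeUrl,
//   '/add-cart-item', idToken, { itemId: 3 });
// fetch(req.url, req.options);
```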

To get this working:

  • From Services, click on API Gateway, and then on Create API.
  • On Choose an API type, click Build, on REST API.
  • Appropriately name the API and for Endpoint type, select Edge optimized, then click on Create API.
  • From the main page of the API Gateway service, select your new API.
  • On the Actions menu, select Create Resource.
  • Enter an appropriate name, such as Add Cart Item, and an appropriate path, such as /add-cart-item.
  • Check “Enable API Gateway CORS” and click on Create Resource.
  • Click on the newly created resource, and then on the Actions menu, click on Create Method.
  • From the selector, choose POST and then click the check mark.
  • Now click on the newly created method.
  • Click on the Method Request box.
  • Next to Authorization, next to “None”, click on the pencil icon, and under Cognito user pool authorizers, select your user pool’s authorizer, then click the check mark. (If no authorizer is listed, create one first: go to Authorizers in the left-hand menu, click Create New Authorizer, choose Cognito, select the Jukebox user pool, set the Token Source to Authorization, and save.)

Repeat this process for all Lambda functions created on the previous section, except that for functions clearCart and getCart, the method should be GET, instead of POST.

Now that we have our API defined, we just need to deploy it. I should point out that if you change something in the API definition and forget to deploy, the change will NOT work. So, make sure to:

  • Select Deploy API, from the Actions menu.
  • On Deployment stage, select [New stage], give it an appropriate name like “prod”, give it an appropriate description, and hit Deploy.
  • Now, from the vertical menu on the left of the page, click on Stages, and in Stages, click on the one you just created.
  • Copy the Invoke URL you see on that page.
  • Open the frontend file config.js again, paste the URL as the value of api.invokeUrl, and save.

And that’s it! Our website should be fully functional and operational by now.

8. Periodically cleaning the guest carts table

So, if our website is technically functional, what’s left that isn’t essential, but isn’t quite an extra? Making sure guest session data doesn’t stay on the guest table longer than a specific amount of time, let’s say, two hours. This is particularly important because this table is partially public.

To clear out the guest table every two hours, we’ll need to define a Lambda function which deletes and re-creates the table, and a scheduled trigger for it, with a rate of 2 hours. This function will need a role with permission to delete and create the guest carts table. Let’s start with that:

  • Head over to IAM from Services.
  • Click on Roles, then Create role, then select Lambda, then click on Next: Permissions.
  • From the managed policies attach the AWSLambdaBasicExecutionRole.
  • Click on Create policy.
  • Copy-paste the following onto the JSON editor, replacing <Guest carts table name> with the name you gave the table that records the guests’ carts:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:Delete*"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/<Guest carts table name>"
        }
    ]
}
  • Finish creating the policy, refresh the policies table on the Role tab, attach the new policy, and finish creating the role.

Now, onto the function and its trigger:

  • Go to Lambda, from Services.
  • Choose Create a function.
  • Name the function, select the Node.js runtime and add the role created above.
  • On the Designer, choose Add trigger (to the left of the function).
  • Select CloudWatch Events/EventBridge.
  • For Rule, select Create new rule.
  • Give the rule an appropriate name and description, and enter rate(2 hours) under Schedule expression.
  • Click Add.
  • On the editor, copy the code from file backend/lambda-functions/clearGuestTable.js and paste it there.
  • Scroll down to Basic Settings and click on Edit.
  • Raise the timeout to 5 seconds and click Save. We need to do this because, after being deleted, a table stays in a DELETING state for a while, during which it still exists; if we tried to re-create the table immediately after deleting it, we’d get an error indicating that the table already exists.

→ We’ll deal with that by delaying the table creation by a few seconds. That delay would be a problem if we didn’t change the function timeout in this step, because the default timeout is 3 seconds, and it often takes a little longer than that for a table to be fully deleted.

  • Back on the main function page, click Save.

You should see the trigger event to the left of the function on the Designer. Now, let’s test this function:

  • On the top-right corner, click on Select a test event and then on Configure test event.
  • Give the event a name matching the function.
  • Under Event template, select Amazon CloudWatch.
  • Click Create, and then Test.
  • If you check on DynamoDB, you should see your guest carts table has in fact just been created, and anything that was on it before, is now gone.
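
For reference, the delete-wait-recreate flow clearGuestTable.js implements can be sketched like this. The DynamoDB client is passed in here so the sketch stays self-contained; the real file creates its own client, and the throughput/billing settings below are illustrative:

```javascript
// Pure description of the guest carts table, matching what we created by hand
// in section 3. BillingMode here is an assumption; match the real file's settings.
function guestTableDefinition(tableName) {
  return {
    TableName: tableName,
    AttributeDefinitions: [{ AttributeName: 'IdentityID', AttributeType: 'S' }],
    KeySchema: [{ AttributeName: 'IdentityID', KeyType: 'HASH' }],
    BillingMode: 'PAY_PER_REQUEST',
  };
}

// dynamo is an AWS.DynamoDB client (aws-sdk v2 style, with .promise()),
// injected so the sketch doesn't depend on the SDK being installed.
async function clearGuestTable(dynamo, tableName, delayMs) {
  await dynamo.deleteTable({ TableName: tableName }).promise();
  // The table lingers in DELETING for a moment; creating it immediately
  // would fail because, as far as DynamoDB is concerned, it still exists.
  // This delay is why the function needs more than the 3-second default timeout.
  await new Promise((resolve) => setTimeout(resolve, delayMs));
  return dynamo.createTable(guestTableDefinition(tableName)).promise();
}
```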

Congrats! You now have a fully functional and maintainable, serverless website.

Final Thoughts

This type of architecture is not a general, one-size-fits-all solution that will definitely work in every case, but it is a great solution that works amazingly well in many cases.

That’s it, folks! I hope this article was helpful or at least informative in some way. Cheers!
