AWS DEPLOYMENT

How to Deploy an HTTPS Node.js, PostgreSQL, Redis Back-end, and a React Front-end to AWS

Deploying a single-instance HTTPS Node.js API back end with PostgreSQL and Redis databases, along with a React front end, to Amazon Web Services using Elastic Beanstalk, RDS, ElastiCache, Let’s Encrypt, NGINX, and Amplify, without a Load Balancer

Tiago Santos
The Startup

--

In this tutorial we’re going to go through all the steps required to deploy your back-end Node.js API, with Postgres and Redis databases, to production using AWS. I’m assuming you already have a production-ready Node.js API coded, along with a React front-end app. I’m also going to assume you’ve never used AWS before; if you have, feel free to skip the sections you’ve already done. If your API doesn’t use databases, or uses only one of the database types we’ll be deploying, skip those sections as well. Everything we’ll be doing in this tutorial is free tier eligible if your AWS account is less than one year old, so you can deploy to AWS for free. Let’s start!

1) Create an AWS account

First, you’re going to need an AWS account, so go to https://aws.amazon.com/ and create one. If you need more detailed instructions, see the official AWS documentation on account creation.

2) Create Administrator user

Since it’s a bad practice to perform any tasks with the root user, except when required, we are going to be creating an Administrator user to do our deployment with.

2.1. Sign in to AWS.

2.2. On the top navigation bar go to Services type IAM and press Enter.

The following steps are quoted from the official AWS documentation:

2.3. Enable access to billing data for the IAM admin user that you will create as follows:

2.3.a. On the navigation bar, choose your account name, and then choose My Account.

2.3.b. Next to IAM User and Role Access to Billing Information, choose Edit.

2.3.c. Select the check box to Activate IAM Access and choose Update.

2.3.d. On the navigation bar, choose Services and then IAM to return to the IAM dashboard.

2.4. In the navigation pane, choose Users and then choose Add user.

2.5. On the Details page, do the following:

2.5.a. For User name, type Administrator.

2.5.b. Select the check box for AWS Management Console access, select Custom password, and then type your new password in the text box.

2.5.c. By default, AWS forces the new user to create a new password when first signing in. You can optionally clear the check box next to User must create a new password at next sign-in to allow the new user to reset their password after they sign in.

2.5.d. Choose Next: Permissions.

2.6. On the Permissions page, do the following:

2.6.a. Choose Add user to group.

2.6.b. Choose Create group.

2.6.c. In the Create group dialog box, for Group name type Administrators.

2.6.d. Select the check box for the AdministratorAccess policy.

2.6.e. Choose Create group.

2.6.f. Back on the page with the list of groups, select the check box for your new group. Choose Refresh if you don’t see the new group in the list.

2.6.g. Choose Next: Tags.

2.7. Choose Next: Review. Verify the group memberships to be added to the new user. When you are ready to proceed, choose Create user.

Now sign out with the root user and sign in with the Administrator user we just created. We’re not going to be using the root user anymore.

3) Create an AWS key pair to SSH into EC2

Elastic Beanstalk will automatically create an EC2 instance for us when we create our app; this is where our Node.js server will be hosted. We will need to access that server to configure HTTPS on it, and also to reach our RDS database so we can run SQL commands to create and populate our tables. We connect to our EC2 instance over SSH, so our communications are encrypted and secure. But to use SSH we need a key pair established between the server (AWS EC2) and the client (our local machine, in my case my laptop).

3.1. On the top navigation bar, at the right corner make sure to choose the AWS region you want to deploy your back-end in. Throughout this entire tutorial always make sure you have this region selected. I will be using the U.S. East (Ohio) us-east-2 region for this tutorial.

3.2. On the top navigation bar go to Services type EC2 and press Enter.

3.3. On the left bar, scroll down to NETWORK & SECURITY section and click Key Pairs.

3.4. On the top right corner click Create key pair.

3.5. Enter the key pair name you want, in my case it’s aws.

3.6. On the File format choose pem.

3.7. Click Create key pair and save the file on your computer.

3.8. Now go to your bash terminal on your local machine and move your key to a safe place. For example, since I saved the key in the /mnt/c/Tiago folder, I’ll move it to the /home/tiagofbsantos/.ssh/ folder using the following command: mv /mnt/c/Tiago/aws.pem /home/tiagofbsantos/.ssh/.

3.9. After that I run the command: chmod 400 /home/tiagofbsantos/.ssh/aws.pem, to restrict the permissions of the key file (SSH will refuse a private key that is readable by others).

We have now created our key pair, we’ll be using it in the next steps.

4) Create Elastic Beanstalk (EB)

AWS Elastic Beanstalk simplifies a large chunk of the deployment of your back-end, by automatically creating and configuring several AWS resources/services that will be necessary to have a production ready API, while letting you retain fine-grained control of the resources created.

4.1. In the production folder of your Node.js API, make sure that your package.json has the following lines in it:

  "main": "server.js",
"scripts": {
"start": "node server.js"
},

Replace server.js with your main server file if you named it something else. This is how Elastic Beanstalk knows which file to use to launch your server.

4.2. Create a zip file of the production version of your Node.js API, zipping only the files and folders inside it, not the parent folder itself. Don’t include the node_modules folder, and you can also skip the .git folder; Elastic Beanstalk will run npm install for you.

4.3. On the top navigation bar go to Services type Elastic Beanstalk and press Enter.

4.4. Click on Create Application.

4.5. Type in your application name, try to keep it short as long names might create problems (see step 10.3. for more info). Mine is nodeapi.

4.6. On Platform choose Node.js, leave the defaults for Platform branch: Node.js 12 running on 64bit Amazon Linux 2 and for Platform version: 5.1.0 (Recommended).

4.7. On Application code choose Upload your code.

4.8. On Source code origin, make sure Local file is selected, click Choose file and select the zip file created in step 4.2.

4.9. Click Configure more options.

4.10. Scroll down to the Security section and click Edit.

4.11. On EC2 key pair choose the key pair we created in step 3.7. Mine is aws. And click Save.

4.12. Scroll down to Network and click Edit.

4.13. In Instance settings > Instance subnets choose the Availability Zone you want and click Save. Write it down, you’ll need it later. Mine is us-east-2c.

4.14. Click Create app at the bottom and wait until the application is created. When it’s done the screen will change to the page of the environment you just created. There will probably be a warning sign saying the health of the environment is degraded. This is because we haven’t created the databases and haven’t set up the environment variables yet. So, let's create the databases first.

5) Create a PostgreSQL database using Amazon RDS

There are two ways of creating a relational database on AWS:

  1. You can either create a relational database attached to your Elastic Beanstalk instance, by going to the configuration section of your EB environment and selecting Database. This database will be deleted if you delete your EB environment and cannot be used by other applications. This method is not recommended for production.
  2. Or you can create a relational database that is independent of your EB instance, that will persist even after you delete your EB environment, and that can be accessed by other apps you might develop in the future. We are going to choose this route in this tutorial.

To do that follow these steps:

5.1. On the top navigation bar go to Services type RDS and press Enter.

5.2. At the top right side click Create database.

5.3. On Choose a database creation method make sure you have Standard Create selected.

5.4. On Engine options select PostgreSQL.

5.5. Choose the PostgreSQL version you want, I’ll be using PostgreSQL 12.3-R1.

5.6. On Templates choose Free tier. If this option doesn’t appear, wait a little bit; Amazon makes sure the paid options render immediately while the free one doesn’t.

5.7. Choose your DB instance identifier, Master username and Master password, I’ll be using postgresdb, postgresusername and postgrespass respectively.

5.8. Disable Storage autoscaling, to make sure we always stay in the free tier.

5.9. Make sure your VPC is the same as the one you deployed your EB instance in. If you are a new user you should only have one VPC, so you can always skip VPCs checks.

5.10. Expand the Additional connectivity configuration section.

5.11. On VPC security group make sure Choose existing is selected.

5.12. Click on Choose VPC security groups and select the security group EB created. Mine looks like this: awseb-e-fuxtmctujc-stack-AWSEBSecurityGroup-G5JP5N2F64H1.

5.13. Remove the default security group.

5.14. In Availability Zone make sure to choose the same Availability Zone where you deployed your EB instance, in step 4.13. Mine is us-east-2c.

5.15. In the Database authentication section make sure Password authentication is selected.

5.16. Click on the Additional configuration section to expand it.

5.17. In Initial database name write the name you want for your database, I’ll be using postgresdbname.

5.18. You can leave the default settings for the rest and click Create database at the bottom.

5.19. Wait for AWS to create your database, the banner will turn from blue to green and it will show Successfully created database postgresdb once it’s done.

5.20. Click on View credential details and copy the Endpoint, you’ll need it later. In my case it’s postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com.

6) Create a Redis database on AWS using Amazon ElastiCache

Redis is a fast, open-source, in-memory key-value data store, we’re going to be deploying it to Amazon ElastiCache. In my API I use it to store user sessions with JSON Web Token. If your API doesn’t use a Redis database skip this step entirely and move on to step 7).

6.1. On the top navigation bar go to Services type ElastiCache and press Enter.

6.2. Click Get Started Now.

6.3. On Cluster Engine make sure Redis is selected, Cluster Mode enabled is not.

6.4. Give your database a name, mine is redisdb.

6.5. On Node type make sure to select either cache.t2.micro or cache.t3.micro if you want to use the free tier. Amazon will charge you if you choose anything else, even if you immediately delete it. We’re going to go with cache.t3.micro. Click Save.

6.6. On Number of replicas put 0 and press Enter.

6.7. Scroll down to Advanced Redis settings and give a name to your subnet. Mine is ebsubnet.

6.8. On Subnets select the subnet that is in the same Availability zone as your EB application (from step 4.13.). In my case it’s us-east-2c.

6.9. Now on Availability zones placement make sure you have Select zones selected and choose the same availability zone you chose in the last step.

6.10. In the Security section click the pencil in Security groups to edit. Select the security group EB created and deselect the default one. Click Save.

6.11. Scroll down to the bottom and click Create.

6.12. Wait for AWS to create your ElastiCache instance. Don’t forget to refresh the page once in a while, since AWS won’t notify you when it has finished creating the database.

6.13. Click on redisdb (your database name) and copy the Endpoint, you’ll need it later. Mine is redisdb.f3xrsg.0001.use2.cache.amazonaws.com.

7) Connect RDS and ElastiCache databases to an Elastic Beanstalk application

Now that we have both databases deployed, we need to make sure our code knows how to connect to them. We will do this using environment variables and URIs.

7.1. On the top navigation bar go to Services type Elastic Beanstalk and press Enter.

7.2. Click on your environment name. Mine is Nodeapi-env-1.

7.3. Now click Configuration on the left.

7.4. In the Software category click Edit.

7.5. Scroll down to Environment properties. This is where you define the environment variables your Node.js API uses in its code, including the information your code needs to connect to the databases you just created. In my code I make the connections for both databases using their respective URIs, but there are other ways to do this. For the Redis database the URI format is redis://[Redis database Endpoint (from step 6.13.)]:6379, so in my case it looks like this: redis://redisdb.f3xrsg.0001.use2.cache.amazonaws.com:6379. You add this as an environment variable by writing in the Name column the exact name you use in your Node.js code, in my case REDIS_URI, and in the Value column the value of the variable, in my case redis://redisdb.f3xrsg.0001.use2.cache.amazonaws.com:6379.

If you’ve never used environment variables in JavaScript, this is how I make the connection in my code:

const redis = require("redis");
const redisClient = redis.createClient(process.env.REDIS_URI);

As you can see, you use process.env.ENVIRONMENT_VARIABLE_NAME to access the environment variables you defined in the Elastic Beanstalk configuration.
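One practical note: when you run the same code on your local machine these variables won’t exist, so a common pattern is to fall back to local defaults. The localhost values below are assumptions for local development, not something Elastic Beanstalk provides:

```javascript
// In production these come from the Elastic Beanstalk environment
// properties; the localhost fallbacks are only for local development.
const REDIS_URI = process.env.REDIS_URI || "redis://127.0.0.1:6379";
const POSTGRES_URI =
  process.env.POSTGRES_URI ||
  "postgres://postgres:postgres@127.0.0.1:5432/devdb";

console.log(`Redis: ${REDIS_URI}`);
console.log(`Postgres: ${POSTGRES_URI}`);
```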

7.6. Now we do the same thing for the PostgreSQL database. The Postgres URI format is as follows: postgres://[user name (from step 5.7.)]:[user password (from step 5.7.)]@[RDS database Endpoint (from step 5.20.)]:5432/[database name (from step 5.17.)]. In my case this translates to postgres://postgresusername:postgrespass@postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com:5432/postgresdbname. So, let’s add our URI as an environment variable. Write POSTGRES_URI, or whatever variable name you use in your code, in the Name column. And postgres://postgresusername:postgrespass@postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com:5432/postgresdbname in the Value.

In my code I make the connection like this:

const knex = require("knex");
const db = knex({
  client: "pg",
  connection: process.env.POSTGRES_URI
});
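If you’d rather store the credentials as separate environment variables instead of one long URI, a small helper can assemble it. The function and variable names here are hypothetical, with example values mirroring steps 5.7., 5.17. and 5.20.; note the encodeURIComponent call, which matters if your password contains special characters:

```javascript
// Hypothetical helper that assembles the Postgres connection URI from
// its parts; names and values are illustrative, not part of the tutorial.
function postgresUri({ user, password, host, port = 5432, database }) {
  // encodeURIComponent guards against special characters in the password.
  return `postgres://${user}:${encodeURIComponent(password)}@${host}:${port}/${database}`;
}

const uri = postgresUri({
  user: "postgresusername",
  password: "postgrespass",
  host: "postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com",
  database: "postgresdbname"
});
// uri === "postgres://postgresusername:postgrespass@postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com:5432/postgresdbname"
```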

7.7. Now that we’ve added both URIs we can click Apply at the bottom.

7.8. Wait for it to finish; you should now see Ok as the health of your EB environment. Things aren’t actually working yet, though: for that we need to configure our security group.

8) Configure an AWS security group to allow database connections

The security group acts as a firewall on AWS. As such it will block the connections that it wasn’t configured to allow. Therefore, our EC2 instance won’t be able to connect to our databases until we explicitly allow those connections in the security group. So, let’s do that:

8.1. On the top navigation bar go to Services type EC2 and press Enter.

8.2. On the left bar, scroll down to NETWORK & SECURITY section and click Security Groups.

8.3. Select the security group Elastic Beanstalk created (the same one as in step 5.12.).

8.4. On the top right, go to Actions and click Edit inbound rules.

8.5. On the left click Add rule; on the Type choose PostgreSQL; on the Source choose Anywhere.

8.6. On the left click Add rule; on the Type choose Custom TCP; in the Port range type 6379, which is the default port for Redis; on the Source choose Anywhere.

8.7. On the left click Add rule; on the Type choose HTTPS; on the Source choose Anywhere. This will be needed for HTTPS, configured in step 10). (For a tighter setup you can restrict the Source of the two database rules to this security group itself instead of Anywhere, since only the EC2 instance needs to reach the databases.)

8.8. At the bottom click Save rules.

9) Access an EC2 instance with SSH, and an RDS database from the EC2 instance

We are now going to continue what we started in step 3).

9.1. On the top navigation bar go to Services type EC2 and press Enter.

9.2. On the left bar, scroll up to INSTANCES section and click Instances.

9.3. Select your instance, in the instance Description, on the right side you have IPv4 Public IP, copy it.

9.4. Now go to your bash terminal, in your local machine, and run the following command: sudo ssh -i [AWS private key full path (this is the key created in step 3.7.)] ec2-user@[EC2 IPv4 Public IP]. In my case this command looks like this: sudo ssh -i /home/tiagofbsantos/.ssh/aws.pem ec2-user@3.128.146.176.

9.5. You are now inside your EC2 instance. Run the command: amazon-linux-extras list. This will show you the list of Amazon approved extra packages you can install on your Amazon Linux 2 operating system. Search for the PostgreSQL version you want.

9.6. And then run the command: sudo amazon-linux-extras install [extra name]. In my case I’m installing the postgresql11 extra. So my command looks like this: sudo amazon-linux-extras install postgresql11.

9.7. Now we’re going to access the RDS PostgreSQL database we created earlier. For that run the command: psql --host=[RDS database Endpoint (from step 5.20.)] --port=5432 --username=[RDS database username (from step 5.7.)] --password --dbname=[RDS database name (from step 5.17.)]. In my case that is: psql --host=postgresdb.cu4ulyxj6kaf.us-east-2.rds.amazonaws.com --port=5432 --username=postgresusername --password --dbname=postgresdbname. This will prompt you for your RDS database password (you defined this in step 5.7.); in my case I write postgrespass and press Enter. We are now connected to our RDS database through our EC2 instance.

9.8. Run whatever SQL commands you need to build and populate the tables of your database. When you’re finished, write \q and press Enter to exit the database, then write exit and press Enter to terminate the SSH connection to EC2 and return to your local machine.

10) Configure an Elastic Beanstalk single instance Node.js back-end API to use HTTPS with NGINX and Let’s Encrypt’s Certbot

We will now test if the HTTP version of your API is working properly, and then convert it to HTTPS.

10.1. On the top navigation bar go to Services type Elastic Beanstalk and press Enter.

10.2. Click on your environment name. Mine is Nodeapi-env-1.

10.3. At the top, click on the URL of your application, written in blue with a square-and-arrow icon next to it. Mine is nodeapi-env-1.eba-vp5btvzv.us-east-2.elasticbeanstalk.com. This is the public URL of your API. The URL can have at most 63 characters; if it is longer, certbot will give a “CSR is unacceptable” error, and if that happens you need to choose a shorter app name (step 4.5.). Place this link in your front-end or test it with Postman to make sure everything is working.

10.4. Now that we’ve verified that the Node.js API, the PostgreSQL and Redis databases are all working properly it’s time to add HTTPS to our server, so our communications with the back-end are encrypted.

10.5. Access your EC2 instance using SSH (step 9.4.).

10.6. Install an Amazon extra called epel by following the steps 9.5. and 9.6. You should also be able to simply run the command: sudo amazon-linux-extras install epel instead of following the previous steps. This will be needed to install Certbot’s dependencies.

10.7. We need to edit an NGINX configuration file. For this run: sudo vi /etc/nginx/nginx.conf, press i to go into insert mode, and inside the http block at the top paste this line: server_names_hash_bucket_size 128;. Then press Esc, type :wq and press Enter to save and quit the editor. This fixes a bug with long domain names.

10.8. Go to the NGINX configuration folder with this command: cd /etc/nginx/conf.d/.

10.9. Now create the config file we’re going to use to configure the HTTPS for our API: sudo touch http-https-proxy.conf.

10.10. Edit the file we just created: sudo vi http-https-proxy.conf, press i to go into insert mode, and paste the following text into it, replacing my server_name with yours. Your server_name is your app’s URL (from step 10.3.), without the “http://” at the beginning or the “/” at the end. In my case the file looks like this:

upstream nodejs {
    server 127.0.0.1:8080;
    keepalive 256;
}

server {
    listen 80;
    server_name nodeapi-env-1.eba-vp5btvzv.us-east-2.elasticbeanstalk.com;

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }

    access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;

    location ~ /.well-known {
        allow all;
        root /usr/share/nginx/html;
    }

    location / {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    gzip on;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}

Then press Esc, type :wq and press Enter. The if block and the two access_log lines are only necessary if you configured your Elastic Beanstalk environment to use Enhanced Health reporting, which EB enables by default; if you chose Basic Health reporting, delete those lines from the file. The last three lines enable gzip so the communications between the front-end and back-end are compressed, which makes for faster data transfers. The rest is basic server configuration plus what Let’s Encrypt’s certbot needs in order to perform its ACME challenge.

10.11. We’re going to be using Let’s Encrypt’s certbot to generate our SSL certificate and to automatically configure the rest. So, let’s create certbot’s folder first: sudo mkdir -p /opt/certbot.

10.12. Now we need to download it: sudo wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto.

10.13. Now change the file’s permissions: sudo chmod 700 /opt/certbot/certbot-auto.

10.14. Let’s rename the NGINX config file automatically created by Elastic Beanstalk into a backup file, so it doesn’t conflict with our new HTTPS config file: sudo mv /etc/nginx/conf.d/elasticbeanstalk/00_application.conf /etc/nginx/conf.d/elasticbeanstalk/00_application.conf.bak.

10.15. Now restart the NGINX server so our new configuration is applied: sudo service nginx restart.

10.16. We need to edit a certbot file so it will recognize our version of Linux, in our case Amazon Linux 2. You need to find this line in the file: elif [ -f /etc/redhat-release ]; then. In my case it’s at line 813, but this changes with updates, so it might be at another line for you. Run this command: sudo vim +813 /opt/certbot/certbot-auto (the +813 jumps to that line in the file). Press i to go into insert mode, and replace the line elif [ -f /etc/redhat-release ]; then with elif [ -f /etc/redhat-release ] || grep 'cpe:.*:amazon_linux:2' /etc/os-release > /dev/null 2>&1; then. Then press Esc, type :wq and press Enter.

10.17. Now we’re finally ready to run certbot using the following command: sudo /opt/certbot/certbot-auto run --debug --redirect --agree-tos -n -d [Your app URL (from step 10.3.), without "http://" in the beginning] -m [Your email] -i nginx -a webroot -w /usr/share/nginx/html. In my case this command looks like this: sudo /opt/certbot/certbot-auto run --debug --redirect --agree-tos -n -d nodeapi-env-1.eba-vp5btvzv.us-east-2.elasticbeanstalk.com -m email@domain.com -i nginx -a webroot -w /usr/share/nginx/html. This command will create the SSL certificate necessary for HTTPS, apply and configure the certificate on NGINX, and redirect all HTTP traffic to HTTPS. This certificate will be valid for 3 months, after that time it will expire, and you’ll need to run this command again.

10.18. We need to restart our server now to apply the changes: sudo service nginx restart.

10.19. Make sure you’ve followed step 8.7. and added the HTTPS rule to the EB security group.

10.20. And that’s it, you should now have HTTPS running on your backend, as well as HTTP to HTTPS redirection active. Run step 10.3. again, but this time with https:// to test everything is working properly. Don’t forget to copy your new HTTPS address into your front-end.

10.21. Bear in mind that, using this method, whenever you deploy a new version of your API to Elastic Beanstalk you will need to run steps: 10.5; 10.7; 10.8; 10.9; 10.10; 10.14; 10.15; 10.17 and 10.18 again.
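If that redeployment routine becomes tedious, one option worth investigating on the Amazon Linux 2 platform: Elastic Beanstalk copies any files you place under .platform/nginx/conf.d/ in your source bundle into /etc/nginx/conf.d/ on each deployment, so you could ship the proxy configuration from step 10.10. with your code instead of recreating it by hand over SSH. You would still need to redo the certbot steps, since the certificate files live on the instance. A source bundle using this layout might look like:

```
your-api/
├── .platform/
│   └── nginx/
│       └── conf.d/
│           └── http-https-proxy.conf
├── package.json
└── server.js
```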

11) Deploy a front-end React app to AWS Amplify

AWS Amplify is the simplest way to deploy a front-end to AWS. I’m assuming you have a GitHub repository with a React app. So, let’s start deploying our React app:

11.1. On the top navigation bar go to Services type Amplify and press Enter.

11.2. At the bottom right, under Deploy click GET STARTED.

11.3. On the From your existing code section select GitHub, or any of the other sources if you prefer. Click Continue.

11.4. Under Recently updated repositories select the repository you want to deploy. And the branch you want. Click Next.

11.5. On App name choose the name you want.

11.6. Under Build and test settings, Amazon will autofill this for you. It usually works, but if you have any problems deploying your app the cause is probably here, so pay attention to what is written there. For a React app it should look something like this:

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*

Since this is a YAML file, be careful with your indentation. Click Next.

11.7. Review if everything looks good and press Save and deploy.

11.8. Wait for AWS to finish its work. Your front-end is now deployed.

The End

With both your front-end and back-end deployed with HTTPS, your web app will now rank higher in search engines and you can easily transform it into a Progressive Web App. This was a long tutorial, but AWS is a world unto itself. You should now know how to deploy reasonably complex APIs to AWS, along with their respective front-ends.


Backend Software Engineer. Focused on TypeScript, Node, Express, MongoDB. Website: tiagofbsantos.com