Spinning up MERN app on EC2 with VPC

David Eliason
Nov 7, 2018 · 18 min read

In wanting to learn how to build MERN applications, I would often go deep into silos of knowledge (such as node event loops) while having this gnawing feeling of not really understanding how everything connected together in a big-picture way. This tutorial is meant to give a 10,000-foot perspective by actually building something that links the different parts together, while taking advantage of the AWS cloud platform. Hopefully this will remove some of the mystery and make these areas more approachable. Certainly there's a lot of room for improvement in any one of these areas when building an app for production (take a look at this AWS best practices article as an example), but hopefully this will be helpful in a broad-strokes way :)

The first step is to open an account with AWS. They offer a great free tier, which is good for a year and lets you play around with all sorts of goodies. Be sure to go to the IAM section and set up another user account; you don't want to use your account's root user credentials, for security reasons. Once you have that, log in under the non-root user account and head over to the VPC section.

What the heck is a VPC, you might ask? In short, it's a virtual network where we can provision servers and databases, and we get to harness the benefits of AWS being all over the globe: their physical data centers (where banks of servers are located) are grouped into "availability zones," each an isolated location within a geographic region. By putting one EC2 instance in one availability zone (or AZ) and another in a second AZ, you have a backup in case of some sort of natural disaster, like a flood. In the VPC, you create 'subnets', each of which is associated with one availability zone; we're going to keep it simple and create a single subnet within a single availability zone, but in production you'd probably want more, just in case.

So, let’s set up a custom VPC. There’s a default VPC that comes to you out of the gate, which is user-friendly by allowing you to immediately deploy instances since all the subnets have a route out to the internet, but to take away the mystery and get our hands dirty, let’s build our own, hmm? You can also follow along with Amazon’s tutorial, which is pretty good too, but feels like it’s a little more “black box” to me.

Up on the menu on the top of your AWS console, select ‘Services’ and then ‘VPC’. Ignore the ‘VPC Wizard’, we’ll do this by hand, and instead, click ‘Your VPCs’ on the left-hand side of the screen, and then click ‘Create VPC.’

Go ahead and click the ‘Create VPC’ button

Go ahead and give your VPC a spiffy name. I named mine 360enlight because that's an application I'm working on. Then enter a /16 range (for example, 10.0.0.0/16) for the IPv4 CIDR block, because a /16 gives you the widest range of values (65,536 of them). IPv6 is the latest version of IP addressing, which is good to have in hand, so click the button for 'Amazon provided IPv6 CIDR block.' Then hit 'Yes, Create'.

Here we are setting for ourselves the IP address range that will be ours in the AWS Cloud

Now, we are going to create a subnet. Click 'Subnets' on the left-hand side of your AWS console, and then click 'Create Subnet.' You'll see that there are already a few subnets there; those are the default ones that come with the default VPC (in the us-west-2 region there are three availability zones, so there are three corresponding subnets; that number will differ depending on the region you choose).

Go ahead and click 'Create subnet.' For the name tag, one strategy I use is to name it for the IPv4 CIDR block and the availability zone. Looking ahead, there are different strategies for dividing up the main IP address range you were assigned, but, using a CIDR calculator as a guide, let's plan to create a subnet that uses a /24 slice of it, such as 10.0.1.0/24 (which gives us 256 private IPv4 addresses). I'm going to use the us-west-2a availability zone because that's closest to where I live. Thus, with that knowledge, I create the subnet with a name tag along the lines of '10.0.1.0-us-west-2a', select the VPC I just made from the drop-down menu, select the 'us-west-2a' Availability Zone from the drop-down menu, and enter the /24 range (e.g. '10.0.1.0/24') in the space for IPv4 CIDR block. Leave the IPv6 CIDR block as the default ('Don't Assign IPv6'). Go ahead and click 'Create.' You should get a success page; now click 'Close.'

Create subnet #1 with a range of IP addresses and where the data will be located

Great job! At this point we've created a subnet, which has its own IP address range. It should be added to the list of subnets on the main screen under 'Subnets.'
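If the CIDR arithmetic feels opaque: the number after the slash is how many bits of the address are fixed, and the remaining bits determine how many addresses you get. You can sanity-check the counts from earlier with plain Bash arithmetic (nothing AWS-specific here):

```shell
# A /16 VPC leaves 32-16 = 16 host bits -> 65,536 addresses
echo $((2 ** (32 - 16)))
# A /24 subnet leaves 32-24 = 8 host bits -> 256 addresses
echo $((2 ** (32 - 24)))
```

(One wrinkle: AWS reserves the first four addresses and the last address in every subnet, so the usable count in a /24 is actually 251.)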

Up next, we need a way of connecting our subnet to the internet, since by default a custom VPC is completely locked-down, so (still on the VPC services page within your AWS console) look on the left-hand side of the page for where a link for Internet Gateways is located. Find it? Click that link, and then when the new page comes up, click ‘Create Internet Gateway.’ Give it a fancy name tag (I called mine ‘360insightIG’). Then click ‘Create’, and when the success page comes up, click ‘Close’. Once that is done, you’ll see a list of your internet gateways — the default one that came with your default VPC, and the newly-created internet gateway that you just made.

There’s the internet gateway that was just made

Select that newly-created internet gateway (and deselect the default one), and at the top of the page you’ll see a drop-down menu labeled ‘Actions.’ Select ‘Attach to VPC’ from that menu, and on the next page (‘Attach to VPC’), select the custom VPC that you’ve been so diligently working on (I selected ‘360insight’) from the drop-down menu. Now, click on the ‘Attach’ button. Great! You can visualize attaching an internet router to your network, though it’s not actually linked up to any of your subnets or your server or anything. That’s what we’ll be getting to next.

The way that we connect (no pun intended) the subnets to the internet gateway is by use of a route table; it describes how traffic is routed, controlling, for example, whether a subnet can send outbound traffic to the internet. In this example, we're going to create our server (or EC2 instance, in AWS terms) within the subnet (in my case, in us-west-2a), and attach that subnet to a route table that allows connectivity to the internet via the internet gateway. It's recommended to create new route tables when building a custom VPC so that you have greater control over what enters and leaves each subnet. So let's create one for our new subnet.

Again on the left-hand side of the VPC Dashboard, click on the link for 'Route Tables.' You should see a default route table that was created when you first made the VPC, but ignore that, as we want to build a new one (for reasons explained above). Now, click the 'Create Route Table' button at the top of the page, and when the 'Create Route Table' form pops up, give it a name tag (I used '360enlightRT'), select the custom VPC you've been working on from the drop-down menu, then click the 'Yes, Create' button.

Creating a Route Table that will allow us to connect to www

Now you'll see that there are two route tables associated with your custom VPC: one that says 'yes' under the 'Main' label, and one that says 'no.' We can leave the main (default) route table alone and focus on the one we just made, so select that newly created route table.

Next up, we want to point that route table at the internet gateway by adding a new route to it. So, select the route table you just made (the one that says 'no' under the Main label), and then look at the five tabs in the middle of the page, labeled 'Summary', 'Routes', 'Subnet Associations', 'Route Propagation', and 'Tags'. Go ahead and select the 'Routes' tab.

select the route table you just made, and the ‘Routes’ tab

Let's add that route pointing to the internet gateway. Click the 'Edit' button, then the 'Add another route' button. A new line will display; add '0.0.0.0/0' to the first space under 'Destination', and if you click inside the second input space (under 'Target'), your internet gateway should automatically display, so just select it, then click 'Save'. We're going to add another route for IPv6, so click 'Edit' again, then 'Add another route', and this time add '::/0' to the 'Destination' input; once again, if you click inside the second input area, your internet gateway will populate, so select that, then click 'Save'. Great! Now any subnet associated with that route table will have internet access :)

Add two routes, an IPv4 and an IPv6, pointing to internet gateway
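A quick note on those destinations: '0.0.0.0/0' is the IPv4 catch-all route. A /0 prefix fixes zero bits of the address, so it matches every one of the 2^32 possible IPv4 addresses, and '::/0' plays the same role for IPv6. Same Bash arithmetic as before:

```shell
# /0 leaves all 32 bits free, so 0.0.0.0/0 matches every IPv4 address
echo $((2 ** (32 - 0)))
```

In other words, "any traffic whose destination isn't inside the VPC goes to the internet gateway."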

So, let's associate the subnet with the route table. Go ahead and click the 'Subnet Associations' tab (it's the middle one of the five tabs we talked about a minute ago), then click 'Edit', and select the subnet that you want to connect to the internet; that's called a "public-facing" subnet. For this exercise we only created one subnet, so there should only be one choice, but in a more robust application you'd be thinking about maybe two or three subnets, for redundancy. In that scenario, you might have one subnet that is public and connected to the internet, and another that is private and reachable only from the first subnet, not from the internet. In that case, you could put your database in the second subnet and nobody could access it except your instance in the first subnet. A little diversion there, but something to think about for scaling up your application. Then click 'Save'. You should see a 'Save Successful' message pop up, briefly.

select the subnet that you want to be public-facing

Excellent! Now there's one more thing we need to do to configure our custom VPC: we need to auto-assign IP addresses for our public-facing subnet. So, head back over to the left-hand side of the VPC Dashboard, click 'Subnets' again, select the subnet that you want to be public-facing (in my case, the one created above), then click the 'Actions' dropdown menu and select 'Modify auto-assign IP settings'.

In the ‘Modify auto-assign IP settings’ screen that comes up next, select the checkbox under ‘Auto-assign IPv4’, then hit the ‘Save’ button. This will redirect you to the Subnets main page.

What we’ve just done is create a custom VPC which has an internet gateway attached to it, with a subnet located in an availability zone, and we also created a route table that explicitly allowed our subnet to route/connect to the internet via the internet gateway. One can dive really deep into the AWS aspect of any of those areas, by beefing up security, creating a private subnet where one’s database resides without access to the public, or exploring the microservices that are available. But, we’re keeping it simple :)

Okay, so woohoo! We have successfully created the VPC, with a public subnet that will host our soon-to-be-created EC2 instance, a custom route table, and an internet gateway (we'll add a security group allowing SSH, HTTP, and HTTPS shortly). Here's what that looks like:

Our VPC, Subnet, EC2 instance, Security Group, Router, Internet Gateway

Now we can create our server (EC2 instance) within the public-facing subnet within the VPC. So let’s do this!

Go to the Services dropdown menu at the top of the page, select 'EC2', click 'Launch Instance', then scroll down until you see 'Ubuntu Server 16.04 LTS' (there are a number of similarly worded instances, so pay attention that it's the 'free tier eligible' one, and not the .NET, 14.04, or 18.04 versions), and click 'Select.' Next, you'll need to select an 'Instance Type', which is where you choose the configuration for your server instance (memory, CPU, etc). Go ahead and select the 't2.micro' option; it's part of the free tier. Then click the 'Next: Configure Instance Details' button.

t2.micro is free-tier eligible

Now we have a ton of configuration options to choose from; these are all aspects of our server instance. Under 'Number of Instances', we just want one. Under 'Network', select the custom VPC that we just made (mine is 360enlight; giving things labels really helps), then select the public-facing subnet from the dropdown menu (the one we attached to the internet gateway). You should see that next to 'Auto-assign Public IP' is 'Use subnet setting (Enable)'. That wording is a bit confusing, but it means auto-assign is already enabled; that's what we set a few steps back. Leave everything else at the default values, then click 'Next: Add Storage'.

Config for our shiny-new EC2 instance

We're not going to add storage, so click 'Next: Add Tags'. You can add tags if you want (they're optional; I'm going to create one with a Key of 'Name' and a Value of '360EnlightServer'), then click 'Next: Configure Security Group'. We're going to create a new security group (so select that option) and give it a name under 'Security Group Name'; I'm going to title it 'WebDMZ'. For the description, add something like 'SSH HTTP HTTPS'. Great! Now click the 'Add Rule' button and, from the dropdown menu, select HTTP. Click 'Add Rule' again and this time add HTTPS. Do the same thing to add SSH. Now click 'Review and Launch.'

(* note, you’ll get a notification that warns you that your config settings aren’t secure because you’re opening your EC2 instance to anyone on the internet. You can and should look into locking this down more by adjusting who can access your server if that’s a concern)

This is the security group that allows access to our EC2 instance, we are green lighting SSH, HTTP, HTTPS

Now, you'll get the 'Review Instance Launch' page, giving all the details of what we've added so far: the instance type, the security group describing what can access the instance, instance details, and so on.

Go ahead and click ‘Launch.’

Now, we need a key pair that will allow us to access this instance. Select ‘Create a new key pair’, and give it a name. Then click the ‘Download Key Pair’ button. You want to move that file somewhere safe on your computer, but remember where you put it because you’ll need to navigate to that file pretty soon. Finally, click the ‘Launch Instance’ button.

Download a key pair that will allow you to access the instance

Congratulations, we have now spun up a new EC2 instance within our public-facing subnet, nestled cozily within our custom VPC. Nice job! :)

You can click the 'View Instances' button, and it will give you all the details of your instance, including the IPv4 Public IP address, under the 'Description' tab. While you're there, go ahead and copy that IP address; you'll need it in a sec. Because we allowed HTTP, HTTPS, and SSH access via our security group configuration, we'll be able to SSH into the EC2 instance and build up our code, and then we'll be able to access that server's output via HTTP or HTTPS.

Now, let's SSH into the instance (if you're on Windows, you'll need a client like PuTTY). Back on your command line interface (CLI), navigate to the folder where your downloaded key pair resides (make sure that you are "inside" the folder where the file is). Now we have to restrict the key file's permissions, since SSH refuses private keys that are readable by others, by running this command:

$ chmod 400 ./THE_NAME_OF_YOUR-KEY_PAIR.pem
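If you're curious what that does: mode 400 means read-only for the file's owner and no access at all for anyone else, which is what SSH requires of a private key. You can see the effect on a throwaway file (the .pem name here is just a stand-in, not your real key):

```shell
# Create a stand-in file and lock it down the same way as the real key
touch /tmp/example-key.pem
chmod 400 /tmp/example-key.pem
stat -c '%a' /tmp/example-key.pem   # shows the octal mode: 400
```

(That `stat -c` flag is the GNU/Linux form; on macOS, use `stat -f '%Lp'` instead.)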

Next, we'll use that 'IPv4 Public IP' address for your EC2 instance (the one you copied) with the following command, substituting your own key file name and IP address (the 'ubuntu' user is the default for Ubuntu AMIs):

$ ssh -i ./THE_NAME_OF_YOUR-KEY_PAIR.pem ubuntu@YOUR_IPV4_PUBLIC_IP
After you do that, you'll get a bunch of messages as Ubuntu welcomes you, gives you some system info, and then gives you a command prompt. We've got our instance up and running; now it's time to configure it. We'll need to install Nginx and Node.js, and then we'll import the repo that spins up an Express server and connects it to a React front-end.

Let’s start with Nginx. This was a great tutorial for installing nginx.

First, let’s update the server:

$ sudo apt-get update && sudo apt-get upgrade -y

Next, let’s install Nginx:

$ sudo apt-get install nginx -y

Third, check the status:

$ sudo systemctl status nginx

Fourth, let’s start it:

$ sudo systemctl start nginx

** Now, you might get an error of 'sudo: unable to resolve host ip-xx-x-x-xx', which I did because I opted to use an Elastic IP address, and I wound up spending too much time building new instances and VPCs and troubleshooting (which was good practice, but still..). What I finally found helpful: run $ sudo nano /etc/hosts and, within that file, add '127.0.0.1 my-ec2-machine-name' [note: you will find your machine name in the CLI prompt for your EC2 instance, right after 'ubuntu@'], then Ctrl-X to exit, 'y' to save, and 'return'. Now try it again and it should work.
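To see what that edit amounts to without touching the real file, here's the same change rehearsed on a copy (the hostname 'ip-10-0-1-23' is made up; use the one from your own prompt, and edit the real /etc/hosts with sudo as described above):

```shell
# Rehearse the /etc/hosts fix on a copy; the real edit needs sudo
cp /etc/hosts /tmp/hosts.demo
echo '127.0.0.1 ip-10-0-1-23' >> /tmp/hosts.demo
grep 'ip-10-0-1-23' /tmp/hosts.demo   # the new loopback entry is present
```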

And enable Nginx start on startup:

$ sudo systemctl enable nginx

Now to set up Node.js:

a quick update:

$ sudo apt-get update

install packages:

$ sudo apt-get install build-essential libssl-dev

(If you get a prompt asking if you want to continue, just say ‘y’ for yes)

Now, we want to install nvm, which will let us install node and npm. If you go to this website: https://github.com/creationix/nvm and scroll down a bit, you'll see an install script like this (a curl command of the form 'curl -o- https://raw.githubusercontent.com/creationix/nvm/&lt;version&gt;/install.sh | bash', with the current version tag filled in):

This is the script you’ll copy and paste into your EC2 instance command line

Copy that script and paste it into the CLI that you’re using within your EC2 instance. When you do that, the terminal will spit out a bunch of stuff, but at the end you’ll see an export line that you can copy and paste into the CLI:

Copy this whole thing and paste into the EC2 command line
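In case the screenshot doesn't come through, the lines the installer prints are the standard ones from the nvm README, along these lines (check your own terminal output for the exact text):

```shell
# Added to ~/.bashrc by the nvm installer; run once by hand (or log out
# and back in) so the current shell picks nvm up
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"   # loads the nvm function into this shell
fi
```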

So that part is all done; let's move on to installing Node.js itself. If you go to the Node.js website, you can see what the current LTS version is. That's the one you want, because it's not so cutting-edge as to be bleeding, and it's community-adopted.

Note the 10.13.0 LTS, we will want to use that version to download

Now, let's use nvm to install Node. Type this into your EC2 command line interface (CLI):

$ nvm install 10.13.0
$ nvm use 10.13.0

Next, let’s test it out:

$ node -v (should display v10.13.0)

And also test out npm (which comes bundled):

$ npm -v (should display a version)

Great! That means we have Nginx installed and Node.js ready! Now, if you wanted, you could install npm modules and build up a server and application using vim or nano, but it's easier to do your work locally on your computer and just clone your repo into the instance. We'll get to that after this next part.

Now, we’ll add a domain, so that if you have a domain registered under AWS, then you can have it resolve to this EC2 instance. Let’s start with what’s needed within Nginx. First, let’s remove the original default config file with:

$ sudo rm /etc/nginx/sites-available/default

Then, let’s add the config information that we’ll be needing:

$ sudo nano /etc/nginx/sites-available/default

Within that empty file that pops up, add the following:

server {
  listen 80;
  server_name your_domain.com www.your_domain.com;

  location / {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
  }
}

You'll notice that Nginx listens on port 80 and proxies requests to port 8080; that's the port our Express server will run on, and we'll also use it within the create-react-app when we proxy to the server.

So, if you do have a domain within Amazon, then go ahead to ‘Route 53’ under ‘Services’ within the AWS Console. Click on ‘Hosted Zone’, and then click the domain that you want associated with this EC2 instance — you’ll see ‘Go To Record Sets’ at the top of the page, that’s what you want, so go ahead and click that, too.

Two record sets should already be there: the NS and SOA records. What we want is a record set of type A that points to your EC2 IPv4 public IP address. Assuming you don't have a type A record set yet, go ahead and click 'Create Record Set'. You'll see something like this:

Creating a Record Set that will point your domain to the EC2 instance

With all those fields, you're just going to update the 'Value' field: leave the 'Name' field alone, keep 'Type' as 'A — IPv4 address', keep 'TTL (Seconds)' at 300, and then in 'Value', type or paste your EC2 IPv4 address. In case you're not sure what that is, go back to 'Services' → EC2 → Instances and select the one you've been working on; you'll see the IPv4 Public IP at the bottom of the page, under 'Description'. Leave 'Routing Policy' as Simple. Go ahead and click 'Save Record Set.'

Now, reload Nginx to pick up what we've just done (it's worth running $ sudo nginx -t first, which checks the config files for syntax errors):

$ sudo systemctl reload nginx

At this point, if you go to your domain URL (if you did have a domain that you pointed to the EC2), or the Public IPv4 IP address, then you should get something like this (because our server isn’t actually serving anything yet):

Our Nginx server is alive, which is good, but just not serving anything yet

So, the good news is that we’ve built the framework for building (and serving) our code! Nginx is configured, our EC2 instance is nestled safely in our public-facing subnet within the VPC, and our domain name is pointing to the IP address that our server is running on. So we are making good progress!

Just to keep things simpler, you can clone this simple app within your EC2 instance. If you look at this code, it basically has a server.js file where necessary modules are imported, express is instantiated, one route is set so that a GET call to “api/data” will result in the server sending JSON data (thus acting as a RESTful API endpoint), and all other requests will revert to the static files created by create-react-app.

$ git clone https://github.com/davideliason/KISS_MERN.git

Now, cd into that repo:

$ cd KISS_MERN/
Install the dependencies

$ npm install

move into the ‘client’ sub-directory:

$ cd client/

And once again install dependencies:

$ npm install

The approach I took here was to go ahead and build a production-ready set of create-react-app files, by typing this while still within the 'client' directory:

$ npm run build

Not only does this keep things relatively simple for our Express server, which hooks into the CRA static files, but (at this point in my thinking) I'm also imagining that the build could easily be pushed up to an S3 bucket, which can serve static files. If you look at the server.js file, you can see where we use that build:

app.use(express.static(path.join(__dirname, 'client/build')));

In any case, by building the production build, we can serve those files via the express server, which is the main point for this article.

Now, if you move back into the main project folder:

$ cd ..

Then you should be within the parent KISS_MERN project folder. Now, let’s spin up the express server:

$ node server.js

You should see ‘server at port 8080’ logged onto your EC2 CLI. And with that, if you refresh the page with your domain or IPv4 address, we have React rendering the JSON data served by the express server. Yay!

Our server is sending JSON data and React is capturing and rendering it

The React front-end displays the JSON-formatted data from the Express server, which is hard-coded as an array within server.js. If you wanted real data persistence, you could use a database-as-a-service such as mLab, or you could follow Keith Weaver's instructions on attaching MongoDB to your MERN instance, as found in his tutorial here. I was going to go step by step through that, but I think it's been covered pretty well already, and I wanted to focus on the bigger picture of linking things together. I hope this article was helpful towards that!
