We’re building Qwell, a better way to book medical appointments online. We’re a small team taking on a big problem. If you have any thoughts we’d love to hear from you; send us a note at “hello at qwell.com”.
To best serve our customers we need our infrastructure to be secure, reliable, and scalable. Our infrastructure is a combination of Node Web Servers, Node Background Jobs, Postgres Databases, Redis Clusters, and React Clients, all built on top of AWS.
This article provides an overview of our infrastructure but mainly focuses on our Network Access Controls, as we feel this is the most interesting piece to talk about. Future posts will go into other aspects of our application.
Our Web Servers are hosted via Elastic Beanstalk. Elastic Beanstalk makes it easy to securely configure and scale application servers. Our Web Servers are made up of EC2 instances which sit behind an Elastic Load Balancer, all of which live inside of a Virtual Private Cloud (VPC).
Using a VPC we can create fine-grained networking rules to allow/deny inbound & outbound traffic. Our VPC consists of Private & Public Subnets. Subnets aren’t inherently public or private; they earn that distinction from the network access defined by their Route Tables (which control routing) and by the Security Groups attached to the instances inside them (which control access). Our Public Subnets allow all IPv4 & IPv6 inbound and outbound traffic. Our Private Subnets allow limited inbound & outbound traffic. All of the AWS services we use live inside of our VPC.
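We configure all of this through the AWS console, but to make the shape of the network concrete, here is a rough Terraform sketch of the VPC layout (all resource names and CIDR blocks are illustrative, not our actual configuration):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Public Subnet: its route table sends internet-bound traffic to an Internet Gateway.
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

# Private Subnet: no direct route to the internet.
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Associating this route table with the subnet is what makes it "public".
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```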
As stated above, our Web Servers consist of EC2 instances sitting behind an Elastic Load Balancer (ELB). Our ELB lives in our Public Subnet, which means anything on the internet can talk to it and it can talk to anything on the internet. Our EC2 instances live in our Private Subnet and allow inbound traffic only from our Load Balancer; all other inbound traffic is restricted, and they don’t even have a public IP!
Our ELB’s Security Group looks like this:
This configuration allows TCP traffic on ports 80 and 443 (http/https) from all IPv4 & IPv6 addresses.
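Expressed as a Terraform sketch, the equivalent rules would look roughly like this (we configure ours through the console; resource names here are illustrative):

```hcl
# Public-facing load balancer: accept HTTP and HTTPS from anywhere.
resource "aws_security_group" "elb" {
  name   = "elb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```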
Our EC2 Security Group looks like this:
This configuration only allows TCP traffic on port 80 (http) from the Load Balancer (notice the Source Security Group) and TCP traffic on port 22 (ssh) from Security Group sg-019a87245f1d4bb8 (more on this below).
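The same rules in a Terraform sketch: referencing another Security Group as the source means "only traffic from instances carrying that group" (names are illustrative):

```hcl
# Web Server instances: app traffic only from the ELB, SSH only from the bastion.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.elb.id]
  }

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion.id]
  }
}
```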
Even though our EC2 instances don’t have a public IP and have limited inbound network access, we still want to be able to SSH into our servers. To do this we’ve created a single-purpose EC2 instance, aka a Bastion Host. This Bastion Host lives inside of our Public Subnet, has a public IP, and allows SSH access via a public/private key pair. Its Security Group looks like this:
This configuration allows all TCP traffic on port 22 (ssh). Using an authenticated key pair we can SSH into our Bastion Host and then, from inside our VPC, SSH into our Web Server EC2 instances as permitted by the Security Group rules shown above.
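A Terraform sketch of the bastion rules (illustrative names; access is ultimately gated by the key pair, not the port):

```hcl
# Bastion Host: SSH open to the world, authentication enforced by key pair.
resource "aws_security_group" "bastion" {
  name   = "bastion-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```

In practice the two-hop connection can be made in one command with SSH’s ProxyJump, e.g. `ssh -J user@<bastion-public-ip> user@<web-server-private-ip>` (hostnames and users here are placeholders).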
The last piece of the puzzle is outbound network traffic, which our servers need to fetch 3rd-party data or download security updates. Since our instances live in a Private Subnet without public IPs, we can’t simply allow all outbound traffic via a Security Group and call it a day; the traffic would have no route to the internet. To achieve this we use a Network Address Translation (NAT) Gateway, which has a static IP. All outbound traffic is funneled through the NAT Gateway, and all response traffic is funneled through the NAT Gateway back to our servers, all of which is completely managed by the NAT Gateway. By using a NAT Gateway we hide the origin of the request yet still allow back-and-forth communication for conversations that we start, all with limited configuration.
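A Terraform sketch of this arrangement: the NAT Gateway sits in the Public Subnet with an Elastic IP, and the Private Subnet’s route table points internet-bound traffic at it (names are illustrative):

```hcl
# Static public IP for the NAT Gateway.
resource "aws_eip" "nat" {
  domain = "vpc"
}

# The NAT Gateway itself lives in the Public Subnet.
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Private Subnet route table: internet-bound traffic goes via the NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
}
```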
Our Application Workers are Node.js-based Bull.js workers. They work by listening to and processing data on Redis queues. Like our Web Servers, they are a collection of EC2 instances configured via Elastic Beanstalk. Since nothing needs to communicate directly with the Application Workers, we don’t need a Load Balancer and we can restrict all inbound TCP traffic apart from SSH (port 22), as we still want the ability to SSH into our servers. Our Worker Security Groups look like this:
This configuration allows TCP traffic on port 22 (ssh) only from our Bastion Host. As with the Web Servers all outbound traffic is funneled through the NAT Gateway.
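In Terraform-sketch form (illustrative names), the Worker group has a single ingress rule:

```hcl
# Application Workers: no inbound traffic at all, except SSH from the bastion.
resource "aws_security_group" "worker" {
  name   = "worker-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion.id]
  }
}
```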
We use Amazon’s Relational Database Service (RDS) to host our PostgreSQL databases. RDS is a managed PostgreSQL solution that allows us to easily and securely configure and scale our databases with built-in reliability. Our databases need very limited network access, so they live inside of our Private Subnet and allow limited inbound TCP access. Our RDS Security Group looks like this:
This configuration allows TCP traffic on port 5432 (postgresql) only from our Workers and Web Servers as defined by the Source Security Groups. As with the Web Servers all outbound traffic is funneled through the NAT Gateway.
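Sketched in Terraform (illustrative names), with both application tiers as allowed sources:

```hcl
# PostgreSQL: port 5432 reachable only from Web Servers and Workers.
resource "aws_security_group" "rds" {
  name   = "rds-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [
      aws_security_group.web.id,
      aws_security_group.worker.id,
    ]
  }
}
```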
We use Amazon’s ElastiCache to host our Redis clusters. ElastiCache makes it easy to create secure and highly-available Redis clusters. We use Redis primarily as the communication/storage layer for our Application Workers and secondarily as a key/value cache. Our Redis Clusters need limited network access (we only communicate with them from our EC2 instances), so they live inside of our Private Subnet and allow limited inbound TCP access. Our Redis Security Group looks like this:
This configuration allows TCP traffic on port 6379 (redis) only from our Workers and Web Servers as defined by the Source Security Groups. As with the Web Servers all outbound traffic is funneled through the NAT Gateway.
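The Terraform sketch mirrors the RDS rules, just on Redis’s port (names are illustrative):

```hcl
# Redis: port 6379 reachable only from Web Servers and Workers.
resource "aws_security_group" "redis" {
  name   = "redis-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    security_groups = [
      aws_security_group.web.id,
      aws_security_group.worker.id,
    ]
  }
}
```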
Our React Clients are served statically via a CDN. To achieve this we use a combination of AWS S3 & CloudFront. Our React apps are built into static content and uploaded to an S3 bucket with a public GetObject Bucket Policy (see below) allowing anyone read access. Sitting in front of our S3 buckets are CloudFront Distributions, a CDN offered by AWS, which gives us high availability across the entire country.
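A public-read GetObject bucket policy generally takes this shape (the bucket name below is a placeholder, not our actual bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```

Note that it grants `s3:GetObject` only, so anyone can read objects but no one can list, write, or delete them.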
That’s all we’ve got for now. We are a small team taking on a big, hairy industry. If you’re interested in joining the team, shoot us an email at “hello at qwell.com”. If you have any thoughts or questions, please comment below!