How to launch a fast, cost-optimized serverless site with continuous deployments using Bitbucket, AWS S3, Route53, and CloudFront

Paul Kamau
Published in teamRuggedIO
Jan 21, 2019 · 12 min read

Intro

Traditional architecture for static websites involved acquiring a domain through a popular name registrar, then either signing up for a hosting package based on guesses about disk space or provisioning your own server, estimating storage and network capacity, and finally uploading your own code or installing the latest version of WordPress to run your site.

At its core, this approach could not scale or grow.

In practice, the product’s success worked against the website. If a blog skyrocketed in traffic and attracted hundreds of thousands of users, the server would likely crash or significantly degrade the user experience. Time would be spent maintaining and upgrading infrastructure rather than on the core competencies of features and site content. Moreover, maintenance costs would grow, and content distribution would be restricted to a single location.

Why Serverless architectures work

Let’s quickly start with some definitions.

Serverless architectures refer to application designs that abstract backend functionality and state management away to cloud service providers.

For our tutorial, we’re taking advantage of Amazon S3’s ability to serve the objects inside a bucket as a static site, and integrating that with the existing ecosystem of services offered by AWS.

Other models leverage different aspects of cloud services, such as AWS Lambda, Google Firebase, or Azure Functions, to run full serverless applications in the cloud.

There are plenty of reasons to adopt a serverless architecture for your application:

  • Speed to market: Less time is spent building infrastructure, and more time is dedicated to the actual features and content of the site or application.
  • Enhanced scalability: Capacity grows automatically with user traffic, so infrastructure bottlenecks never appear.
  • Accelerated performance: Latency caused by throttled computing resources and constrained network capacity is eliminated.
  • Focus on core competencies: Developers and businesses can build expertise in their product rather than in infrastructure.
  • Optimized cost: Operational expenses go directly to product research, development, and market release rather than to infrastructure.

Here’s what we’re building

We’ll create a CI/CD pipeline that begins with a single commit of your code to your master branch. That commit triggers a cascade of events, as follows:

  1. Bitbucket Pipelines will be configured to automatically start a deployment every time there is a change in the master branch.
  2. Your code and assets will be stored on Amazon S3, eliminating any guesswork about storage capacity and availability.
  3. Route53 will ensure that all requests to your domain are routed to the designated AWS services and infrastructure.
  4. Amazon CloudFront will give your site global distribution, serving it from the edge location closest to each user.

Here’s how that looks:

Let’s begin

Bitbucket | Pipeline configurations and deployments

Bitbucket Pipelines is an integrated CI/CD service that allows developers to build, test, and deploy their code based on a configuration file.

Here’s what we’ll need to do to set up the pipeline integration and deployment:

  1. Prepare our repository with site content and the .yml files
  2. Modify the global account variables

Let’s quickly set this up

  1. Prepare our repository with site content and the .yml files

To set up continuous integration and deployment with Bitbucket, we’ll create and configure two .yml files at the root folder of our repository. These files are:

1. bitbucket-pipelines.yml
2. s3_website.yml

The .yml files live at the root of the “static_site” folder.

The main code for our site lives inside the “public” folder, which contains your index.html and other site assets.

Separating the code like this ensures that with every deployment only the contents of the public folder are served when you hit the domain URL; your .yml configurations remain in your repo and are not deployed along with your site content.
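Assuming that layout, the repository structure looks something like this (the file names under “public” are illustrative):

```
static_site/
├── bitbucket-pipelines.yml
├── s3_website.yml
└── public/
    ├── index.html
    ├── 404.html
    └── assets/
```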

Here’s the bitbucket-pipelines.yml file configuration:

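The original gist isn’t reproduced here, so below is a minimal sketch that matches the behavior described in this section. It assumes Bitbucket’s atlassian/aws-s3-deploy pipe; the pipe version, region, and bucket name are placeholders to adjust for your own setup:

```yaml
# bitbucket-pipelines.yml — a minimal sketch, not the original file.
image: atlassian/default-image:2        # Docker image that runs the build
pipelines:
  branches:
    master:                             # run only on pushes to the master branch
      - step:
          script:
            # Sync only the "public" folder to the S3 bucket.
            - pipe: atlassian/aws-s3-deploy:0.3.8
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID         # from account variables
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY # from account variables
                AWS_DEFAULT_REGION: "us-east-1"               # bucket region
                S3_BUCKET: "mysite.com"                       # placeholder bucket name
                LOCAL_PATH: "public"                          # folder to deploy
```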

This is the main .yml configuration file that tells Bitbucket what to do with your code. Here’s a summary of what the keywords mean; for details, see the Bitbucket documentation.

image: the Docker image used to run our builds.
pipelines: contains all your pipeline definitions.
branches: specifies the branch that the pipeline should build from.
default: contains the steps that run on every push.
step: each step starts a new Docker container with a clone of your repository.
script: a list of commands that are executed in sequence. Here we specify that we’re deploying to AWS S3, the bucket region, the “public” folder as the only content to sync, and the actual bucket name.

Here’s the s3_website.yml file configuration:
This .yml file tells the build where to find the S3 credentials, stored in the Bitbucket global account variables, that are needed to deploy your repository content to S3.

The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY references in the s3_website.yml file must match the variable names you set in your Bitbucket account variables.

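The original gist isn’t shown here either, so here’s a hedged sketch following the s3_website tool’s documented config format (useful if your pipeline runs the s3_website gem rather than a pipe); the ERB snippets pull credentials from the environment variables Bitbucket injects, and the bucket name is a placeholder:

```yaml
# s3_website.yml — a sketch in the s3_website tool's documented format.
s3_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>          # read from Bitbucket account variables
s3_secret: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>  # read from Bitbucket account variables
s3_bucket: mysite.com                           # placeholder bucket name
site: public                                    # deploy only the "public" folder
```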

2. Modify the global account variables

To access the “Account Variables” in Bitbucket, navigate to: Bitbucket settings > scroll to the “Pipelines” section at the bottom > click on “Account Variables” > add the key names and values for your S3 access credentials from AWS.

We’ll create the S3 access key ID and secret access key in the AWS IAM section below.

pipeline configs for bitbucket

Stage your code

The pipeline pieces are now in place, but the setup isn’t complete. Go right ahead and commit your code to the master branch: the pipeline will fail, because the S3 bucket and credentials are not set up yet.

failed deployments due to missing components

After we’ve built the S3 bucket and created the access keys, the pipeline will deploy our code automatically. For now, we’ll complete the other critical pieces first.

Continuous Deployments courtesy of Bitbucket Pipelines

Once your bucket exists, S3 access is granted to Bitbucket, and the branch deployment is set to automatic, the Bitbucket pipeline will begin as soon as your code is committed to your master branch.

Bitbucket pipeline deployment status

Amazon Web Services (AWS)

AWS provides on-demand delivery of computing power, database storage, applications, and other IT resources with pay-as-you-go pricing.

To leverage these services, you’ll need to create an account on AWS. Each account comes with 12 months of Free Tier access to multiple services, such as Amazon EC2, Amazon S3, and DynamoDB. Details can be found on the AWS site.

These are the services that will be used to set up the serverless site:

  1. Identity and Access Management (IAM)
  2. S3
  3. Route 53
  4. Certificate Manager
  5. CloudFront
  6. Cost Explorer

Identity and Access Management

AWS IAM enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

To enable Bitbucket to access our S3 bucket, we need an IAM user with an access key and secret key, with the “AmazonS3FullAccess” policy attached to the user directly or through membership in a group that has it.

User creation is as simple as navigating to: services > IAM > users > add user

Once created, you’ll be provided the access key and secret key. You’ll use these keys to update the bitbucket environment variables mentioned above.

Strictly speaking, “AmazonS3FullAccess” is an AWS-managed policy rather than a role, so there is nothing to create; we simply attach it.

Policy attachment is done by navigating to: services > IAM > users > select the newly created user > select “Attach existing policies directly” > search for “AmazonS3FullAccess” > hit “Next” > then hit “Add permissions”.

Note:
Best practice for users and permissions is to create a group, add users to that group, and then attach the proper policy to that group.

AWS S3

S3 is one of the oldest and most popular services from AWS. It is an object storage service that offers industry-leading scalability, data availability, security, and performance.

S3 will store all our code that will be deployed from Bitbucket every time there is a new change in your master branch. Here’s how we’ll do it.

  1. Create an S3 bucket
  2. Configure the bucket properties to allow for “Static web hosting”
  3. Acquire the Endpoint URL that will be used in CloudFront

Let’s quickly build what we need

  1. Creating buckets on S3 is a breeze. To do this, navigate to: services > S3 > click on “Create bucket” > enter the bucket name, e.g. “mysite.com” (note: this must be unique across all of AWS) > select a region closest to your audience, for example “US East (N. Virginia)” > click “Create”.
S3 bucket for “mysite.com”

2. To configure the bucket properties to allow for “Static website hosting”, click on your bucket name, e.g. “mysite.com” > select “Properties” > select “Static website hosting” > type in “index.html” for the index document > type in “404.html” for the error document > hit “Save”.

Copy the Endpoint URL: http://mysite.com.s3-website-us-east-1.amazonaws.com. We will need this for a couple of things.
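The same bucket configuration can be sketched in CloudFormation; note that the website endpoint also needs the objects to be publicly readable, hence the bucket policy (the bucket name is a placeholder):

```yaml
# CloudFormation sketch: an S3 bucket configured for static website hosting.
Resources:
  SiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: mysite.com            # must be globally unique
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: 404.html
  SiteBucketPolicy:
    Type: AWS::S3::BucketPolicy         # allow public reads for the website endpoint
    Properties:
      Bucket: !Ref SiteBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal: "*"
            Action: "s3:GetObject"
            Resource: !Sub "arn:aws:s3:::${SiteBucket}/*"
```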

Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. Amazon Route 53 effectively connects user requests to infrastructure running in AWS — in our case — S3 buckets — and can also be used to route users to infrastructure outside of AWS.

Route 53 will enable us to take your domain, say mysite.com, and ensure that resources in the S3 bucket are served up while taking advantage of CloudFront’s global distribution.

In order to set up Route53 to work seamlessly as part of our serverless architecture, here’s what we’ll do:

  1. Create a hosted zone for our domain
  2. Create the Record set types that will work with CloudFront

Let’s begin.

  1. Create a hosted zone for our domain

A hosted zone tells Route 53 how to respond to DNS queries for a domain such as mysite.com.

There are two ways to create a hosted zone:

  1. Register a new domain through AWS Route 53, which will automatically be added to the hosted zones list
  2. Manually add your existing domain to the hosted zone list on Route53

For this tutorial, we’ll assume you already have an existing domain and you’d like to adopt AWS Route53 capabilities into your architecture. If you’re registering with AWS for your domain, then this part will technically be done for you.

To create a Route53 hosted zone for your domain, e.g. mysite.com, navigate to: services > Route53 > click on “Hosted Zones” > click “Create Hosted Zone”. Complete the form as follows:

Domain Name: mysite.com
Comment: hosted zone manually created on 1/20/2019
Type: Public Hosted Zone

2. Create the Record set types that will work with CloudFront

Once complete, you’ll see two record sets created with the “NS” and “SOA” record set types. We’ll need to create two more record sets, of types “CNAME” and “A”.

Let’s quickly run through this:

How to create the Record Set Type: CNAME

Navigate to: services > Route53 > click on “Hosted Zones” > click “Create Record Set”. Complete the form as follows:

Name: www
Type: CNAME — Canonical name
Alias: Yes
Alias Target: This will be the CloudFront distribution name, which we will build later. For now, use this placeholder: 11111abcdef8.cloudfront.net
Routing Policy: Simple
Evaluate Target Health: No

How to create the Record Set Type: A

Navigate to: services > Route53 > click on “Hosted Zones” > click “Create Record Set”. Complete the form as follows:

Name: leave blank
Type: A — IPv4 address
Alias: Yes
Alias Target: This will be the CloudFront distribution name, which we will build later. For now, use this placeholder: 11111abcdef8.cloudfront.net
Routing Policy: Simple
Evaluate Target Health: No

That is it. Remember, we’ll go back to update the “Alias Target” once we build the CloudFront distribution name.
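For reference, here is a hedged CloudFormation sketch of the two records. CloudFront alias targets always use the fixed hosted zone ID Z2FDTNDATAQYW2, and the distribution domain below is the same placeholder used above:

```yaml
# CloudFormation sketch: the "A" alias and "www" CNAME records.
Resources:
  ApexAliasRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mysite.com.               # note the trailing dot
      Name: mysite.com.
      Type: A
      AliasTarget:
        DNSName: 11111abcdef8.cloudfront.net    # placeholder distribution
        HostedZoneId: Z2FDTNDATAQYW2            # fixed zone ID for CloudFront targets
  WwwRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mysite.com.
      Name: www.mysite.com.
      Type: CNAME
      TTL: "300"
      ResourceRecords:
        - 11111abcdef8.cloudfront.net           # placeholder distribution
```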

Certificate Manager

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and your internal connected resources.

To request and install an SSL cert from AWS for your domain, navigate to: services > Certificate Manager > click on “Request a certificate” > select “Request a public certificate” > in the “Domain name” input box, type the name of the domain, e.g. mysite.com, and click “Next” > select “Email validation” and click “Review” > verify all information is correct, then hit “Confirm and request”.

Note: This email validation will be sent to the site admin, e.g. admin@mysite.com.

The site admin will receive an email similar to this:

SSL Cert Email from AWS

Complete the steps highlighted in the email and you should have the cert installed and ready to go.
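One caveat worth knowing: certificates used with CloudFront must be requested in the us-east-1 (N. Virginia) region. For completeness, an equivalent CloudFormation sketch using the email validation chosen above:

```yaml
# CloudFormation sketch: a public certificate for the site (request in us-east-1).
Resources:
  SiteCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: mysite.com
      SubjectAlternativeNames:
        - www.mysite.com              # cover the www record as well
      ValidationMethod: EMAIL         # sends the approval mail to the domain contacts
```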

CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

In order to set up CloudFront to work seamlessly as part of our serverless architecture, here’s what we’ll do:

  1. Create a CloudFront distribution
  2. Redirect your domain from your name providers to AWS route53 and CloudFront

Let's begin

  1. Create distribution
CloudFront dashboard

To create a CloudFront distribution, navigate to: services > CloudFront > click on “Create Distribution” > select the “Web” delivery method and click on “Get Started”

This configuration page contains three sections:
1. Origin settings
2. Default cache behavior settings
3. Distribution settings

As a general rule, we’ll leave many of the selected defaults as is with the exception of the settings that we explicitly modify. See the info icon “i” for details on each of these configurations.

For each section, here’s what we’ll configure:

  1. Origin settings

Origin Domain Name | mysite.com.s3.amazonaws.com
Origin Path | leave blank
Origin ID | s3-website-mysite.com.s3.amazonaws.com
Origin Custom Headers | leave blank

2. Default cache behavior settings

Path Pattern | Default (*)
Viewer Protocol Policy | Redirect HTTP to HTTPS
Allowed HTTP Methods | GET, HEAD

3. Distribution settings

Alternate Domain Names (CNAMEs) | mysite.com, www.mysite.com
SSL Certificate | Custom SSL Certificate (mysite.com)

Once complete, hit “create distribution”. This will take about 10 minutes to complete.
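The same distribution can be sketched in CloudFormation. Note this sketch points at the S3 website endpoint copied earlier (website endpoints speak HTTP only, hence the http-only origin protocol); the certificate ARN is a placeholder:

```yaml
# CloudFormation sketch: the CloudFront distribution for the site.
Resources:
  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - mysite.com
          - www.mysite.com
        Origins:
          - Id: s3-website-mysite.com
            DomainName: mysite.com.s3-website-us-east-1.amazonaws.com
            CustomOriginConfig:
              OriginProtocolPolicy: http-only   # S3 website endpoints are HTTP-only
        DefaultCacheBehavior:
          TargetOriginId: s3-website-mysite.com
          ViewerProtocolPolicy: redirect-to-https
          AllowedMethods: [GET, HEAD]
          ForwardedValues:
            QueryString: false
        ViewerCertificate:
          AcmCertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/placeholder
          SslSupportMethod: sni-only
```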

the CloudFront distribution for your content

Copy the CloudFront “Domain Name”. This is the value that looks like “d*****.cloudfront.net”; use it to update the Route53 record sets for your domain.

2. Redirecting your domain from your name providers to CloudFront

Update your DNS settings at your domain registrar to point to the CloudFront distribution name.

To get your existing domain to work with CloudFront, Route53, and AWS S3, update the DNS records by adding or modifying the record type and host, setting the values to the CloudFront domain name.

Create two “CNAME” records, setting the host to “@” for one and “www” for the other, and set each record’s value to the CloudFront domain name. That’s it.

Bringing it all together

Finally, with all these pieces in place, the full orchestration, from the initial request to the end user viewing the site content, looks like this:

Managing cost over time with Cost Explorer

spend summary for AWS, CloudFront, S3, and Route53 over months

The monthly cost of this combination of resources depends on network usage and, on average, usually falls within the AWS Free Tier.

Whether you have 30 users or 300,000 users hitting your site, the serverless approach is built to handle the traffic and scale automatically.

Conclusion

The rise in adoption of serverless architectures, with built-in scale, cost optimization, and performance acceleration, among other benefits, will have a transformational impact on businesses, developers, and architects across a wide range of systems and applications.

Thanks for reading my article.

If you’d like to take a look at my work, you can find it here: https://paulkamau.com
