How to Deploy a Static Website to AWS with GitLab CI

Codica Team · Published in Codica Journal · 8 min read · Feb 17, 2020

This article was originally published on Codica Blog.

Suppose you need to place your static website on a hosting platform like AWS. The deployment process can be automated with the help of GitLab and Amazon Web Services.

At Codica, we use GitLab CI (Continuous Integration) to deploy the project static files to Amazon S3 (Simple Storage Service) served by Amazon CloudFront. To get SSL-based encryption we use ACM (Certificate Manager).

In this article, our DevOps specialists will provide you with a better insight into AWS web hosting.

More specifically, we will share our best practices on how to deploy static sites to Amazon Web Services (store files on S3 and distribute them with CloudFront).

Glossary of terms

Before we proceed with our guide, let’s start with the definition of the key terms that you will frequently see in the article.

Simple Storage Service (S3) is a web service provided by AWS. Basically, it is cloud object storage that allows uploading, storing, and downloading almost any file or object. At Codica, we upload files of static websites using this service.

CloudFront (CF) is Amazon’s CDN (Content Delivery Network), backed by S3 or another file source. A distribution is created on top of an S3 bucket or another origin specified by the user.

Amazon Certificate Manager (ACM) is an AWS service that lets you deploy and manage free private and public SSL/TLS certificates. In our development practice, it is a helpful tool for securing CloudFront distributions that serve static files, as it encrypts all network communications.

Identity and Access Management (IAM) is an entity created in AWS that represents the person or application interacting with AWS. We create an IAM user to grant GitLab permission to upload data to our S3 bucket.

Configuring AWS account (S3, CF, ACM) and GitLab CI

We proceed from the assumption that you have already registered a GitLab account. Now you need to sign up for (or sign in to) your AWS account to take advantage of the previously mentioned tools.

When creating a new account, you automatically fall under Amazon’s free tier, which allows deploying to S3 free of charge during the first year. However, be aware of certain usage limitations during the trial period.

1. Setting up an S3 Bucket

To set up S3, go to the S3 management console, create a new bucket, and fill in a name (e.g., yourdomainname.com) and the region. Finish the creation, leaving the default settings.

After that, allow public access to the new bucket. This step is crucial for making the website files accessible to users.

When permissions are set to public, move to the Properties tab and select the Static website hosting card. Tick the “Use this bucket to host a website” box and type your root page path (index.html by default) into the “Index document” field. Also, make sure you fill in the “Error document” field.

Finally, make the website visible and accessible to users by granting read permissions to your S3 bucket. Go to the Permissions tab and click Bucket policy. Insert the following code block in the editor that appears (replacing yourdomainname.com with your bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourdomainname.com/*"
    }
  ]
}
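If you prefer to script this step rather than paste JSON into the console, the same policy can be generated programmatically. Here is a minimal Python sketch; the helper function and bucket name are our own illustration, not part of any AWS tooling:

```python
import json

def public_read_policy(bucket_name):
    """Build the public-read bucket policy for a given S3 bucket name."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }

# Serialize the policy for pasting into the Bucket policy editor
# (or for passing to the AWS CLI / an SDK call that sets bucket policies).
print(json.dumps(public_read_policy("yourdomainname.com"), indent=2))
```

Generating the document this way avoids the subtle breakage that smart quotes and stray dashes cause when policy JSON is copied from formatted text.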

2. Creating an IAM user that will upload content to the S3 bucket

At this stage, an IAM user is created to access and upload data to your bucket. For this purpose, go to the IAM management console and create a new policy with a chosen name.

After that, add the following code. Be sure to replace the ‘Resource’ field value with your own bucket’s ARN. This policy allows reading, uploading, and deleting objects in your bucket.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::yourdomainname.com/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "*"
    }
  ]
}
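A common source of failed pipelines is a policy that is missing one of the actions the uploader needs. As a sanity check, you can verify programmatically that the ‘Allow’ statements cover all required actions. The following Python sketch is our own illustration (the helper is hypothetical, not an AWS API):

```python
import json

# The uploader needs these S3 actions on the bucket's objects.
REQUIRED_ACTIONS = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}

def granted_actions(policy, resource):
    """Collect the actions that 'Allow' statements grant on a resource."""
    actions = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        resources = stmt.get("Resource")
        if isinstance(resources, str):
            resources = [resources]
        if resource in resources or "*" in resources:
            acts = stmt.get("Action")
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "VisualEditor0", "Effect": "Allow",
     "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
     "Resource": "arn:aws:s3:::yourdomainname.com/*"},
    {"Sid": "VisualEditor1", "Effect": "Allow",
     "Action": "s3:ListBucket", "Resource": "*"}
  ]
}
""")

missing = REQUIRED_ACTIONS - granted_actions(
    policy, "arn:aws:s3:::yourdomainname.com/*")
print("missing actions:", missing or "none")
```

Note that this only inspects the policy document itself; the IAM policy simulator in the AWS console performs a far more thorough evaluation.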

After that, you can create a new user. Tick Programmatic access in the access type section and attach the newly created policy to the user.

Finally, click the ‘Create user’ button. You will get two important values: the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables. If you close the page, you will no longer be able to access the AWS_SECRET_ACCESS_KEY. That is why we recommend that you write down the key or download the .csv file.

3. Setting up GitLab CI configuration

The next step of AWS web hosting is establishing the deployment process of your project to the S3 bucket. For this purpose, you need to set up GitLab CI correctly. Sign in to your GitLab account and navigate to the project. Click Settings, go to the CI/CD section, and press the ‘Variables’ button in the dropdown menu. Here, enter all the necessary variables, namely:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION
  • S3_BUCKET_NAME
  • CDN_DISTRIBUTION_ID

You do not have a CDN_DISTRIBUTION_ID value yet, but that is fine. You will get it after creating the CloudFront distribution.

At this stage, you need to tell GitLab how your website should be deployed to AWS S3. You do this by adding a .gitlab-ci.yml file to your app’s root directory. Simply put, GitLab Runner executes the scenarios described in this file.

Let’s now take a look at .gitlab-ci.yml and discuss its content step by step:

image: docker:latest

services:
  - docker:dind

An image is a read-only template containing the instructions for creating a Docker container. Here, we specify the latest Docker image as the basis for executing jobs, with the Docker-in-Docker (dind) service enabled.

stages:
  - build
  - deploy

variables:
  # Common
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME
  CDN_DISTRIBUTION_ID: $CDN_DISTRIBUTION_ID

In the code block above, we specify the stages to pass during our CI/CD process (build and deploy), along with the variables they require.

cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - node_modules/

Here we cache the contents of node_modules/ so that later jobs can fetch the necessary packages from the cache instead of downloading them again.

######################## BUILD STAGE ########################
Build:
  stage: build
  image: node:11
  script:
    - yarn install
    - yarn build
    - yarn export
  artifacts:
    paths:
      - build/
    expire_in: 1 day

At the build stage, we build the project and save the results in the build/ folder. The artifacts are kept for one day.

######################## DEPLOY STAGE ########################
Deploy:
  stage: deploy
  when: manual
  before_script:
    - apk add --no-cache curl jq python py-pip
    - pip install awscli
    - eval $(aws ecr get-login --no-include-email --region $AWS_REGION | sed 's|https://||')

In the before_script parameter, we specify the dependencies that need to be installed for the deployment process.

  script:
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
    - aws cloudfront create-invalidation --distribution-id $CDN_DISTRIBUTION_ID --paths "/*"

The script parameter deploys the project changes to your S3 bucket and invalidates the CloudFront cache so that the distribution serves the updated files.

In our case, there are two stages in our CI/CD process: build and deploy. During the first stage, we build the project code and save the results in the build/ folder. At the deployment stage, we upload the build results to the S3 bucket and invalidate the CloudFront distribution’s cache.
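Conceptually, the `aws s3 cp build/ ... --recursive` step maps every local file under build/ to an S3 object key and a content type. A rough local Python sketch of that mapping (our own illustration, not the AWS CLI’s actual implementation):

```python
import mimetypes
import tempfile
from pathlib import Path

def plan_upload(build_dir):
    """Map each file under build_dir to its S3 key and a guessed
    Content-Type, the way a recursive copy walks a directory."""
    root = Path(build_dir)
    plan = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            # The S3 key is the path relative to the build directory.
            key = path.relative_to(root).as_posix()
            ctype, _ = mimetypes.guess_type(path.name)
            plan.append((key, ctype or "binary/octet-stream"))
    return plan

# Example with a throwaway build directory:
tmp = Path(tempfile.mkdtemp())
(tmp / "assets").mkdir()
(tmp / "index.html").write_text("<h1>hello</h1>")
(tmp / "assets" / "app.js").write_text("console.log('hi')")

for key, ctype in plan_upload(tmp):
    print(f"s3://$S3_BUCKET_NAME/{key}  ({ctype})")
```

Serving correct Content-Type headers matters here: browsers rely on them to render HTML, CSS, and JavaScript fetched from S3 and CloudFront.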

4. Creating CloudFront Origin

Once the necessary changes are uploaded to S3, the final goal is to distribute the content to your website’s visitors by means of CloudFront. Let’s define how this service works.

When users visit your static website, CloudFront serves them a cached copy of the application stored in data centres all over the world.

Suppose users come to your website from the east coast of the USA. CloudFront will deliver the website copy from one of the servers located there (New York, Atlanta, etc.). This way, the service decreases the page load time and improves the overall performance.

To start with, navigate to the CloudFront dashboard and click the ‘Create Distribution’ button. Then enter your S3 bucket endpoint in the ‘Origin Domain Name’ field. The Origin ID will be filled in automatically.

After that, move to the next section and tick the ‘Redirect HTTP to HTTPS’ option under Viewer Protocol Policy. This way, you ensure the website is served over HTTPS.

Then, type your real domain name into the Alternate Domain Names (CNAMEs) field, for instance, www.yourdomainname.com.

At this stage, you get a default CloudFront SSL certificate, so your website will be served under a .cloudfront.net domain name.

If you want to get a custom SSL, click the “Request or Import a Certificate with the ACM” button.

Change your region to us-east-1 (CloudFront only accepts ACM certificates issued in this region), navigate to Amazon Certificate Manager, and request a certificate for the preferred domain name.

To confirm that you are the owner of the domain name, go to your DNS settings and add the CNAME record provided by ACM.

Once the SSL certificate has been issued, choose the “Custom SSL Certificate” option in this section and select it.

At last, leave the remaining parameters set by default and click the ‘Create Distribution’ button.

This way, you create a new CloudFront distribution, which will be propagated to all AWS edge locations within about 15 minutes. You can navigate to the dashboard page and take a look at the State field, which shows one of two states: pending or enabled.

As soon as the provisioning process is finished, you will see that the State field’s value has changed to Enabled. After that, you can visit the website by entering the created domain name in the address bar.

Final thoughts

In this article, we shared our insights into AWS web hosting. Now you have a better idea of how to deploy static sites to Amazon (storing files on S3 and distributing them with CloudFront). In this regard, GitLab CI is a very helpful tool.

We hope this guide was useful, as it offers examples from our software development practice, and that it will help you enhance your development and operations skills.

