How I Built an AWS App Without Spending a Penny — The Frontend

Abhishek Chaudhuri
17 min read · Sep 22, 2023


[Image: AWS logo with a dollar sign crossed out]

This is part 2 of a 6-part series. See Part 1, where we introduced the task at hand.

I started with the front end. I first created the infrastructure using CloudFormation. Then I used CodeBuild to automatically test, build, and deploy the app to S3. I chose React for my front-end framework.

In the CloudFormation template, we begin by defining the template version and a description of the stack:

AWSTemplateFormatVersion: "2010-09-09"
Description: >-
  A stack to host the frontend of the AWS Shopping App

Side note: Throughout the code snippets, I use YAML syntax to split strings into multiple lines. There are many ways to do this, but the two to remember are | and >. | preserves the newline characters, whereas > folds the newlines into spaces, keeping the string on a single line. A - after either one (as in >-) strips the trailing newline at the end of the string. The way I like to distinguish the two is to think of a wall blocking the string, with | acting as the barrier, forcing the string to take up multiple lines. For >, think of an arrow going in one direction: the string ultimately follows the arrow and takes up only one line. We only split the string across multiple lines for neatness in a text editor.
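
For example, here's how each operator treats the same two-line input (a standalone YAML sketch, separate from the template):

literal: |       # "line one\nline two\n" - newlines preserved
  line one
  line two
folded: >        # "line one line two\n" - newlines folded into spaces
  line one
  line two
stripped: >-     # "line one line two" - trailing newline removed
  line one
  line two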

We define all the resources in the Resources block. This is the only part of the CloudFormation template that’s required. First, we create the S3 bucket:

Resources:
  ReactAppBucket:
    Type: AWS::S3::Bucket
    # Can't delete an S3 bucket until all its objects are deleted
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      # Buckets and objects are encrypted by default
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      LoggingConfiguration:
        DestinationBucketName: !Ref S3LoggingBucket
      # Disable ACLs
      OwnershipControls:
        Rules:
          - ObjectOwnership: BucketOwnerEnforced
      # Some access settings need to be enabled to add bucket policies
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true # set to false when creating a bucket policy
        IgnorePublicAcls: true
        RestrictPublicBuckets: true # set to false when creating a bucket policy
      VersioningConfiguration:
        Status: Enabled
      # Save money by deleting older copies of objects
      LifecycleConfiguration:
        Rules:
          - Id: DeleteOldVersionsRule
            Status: Enabled
            AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 1
            ExpiredObjectDeleteMarker: true
            NoncurrentVersionExpiration:
              NoncurrentDays: 1
  ReactAppBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ReactAppBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Only allow CloudFront to access the S3 bucket
          # https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#create-oac-overview-s3
          - Sid: AllowCloudFrontServicePrincipalReadOnly
            Effect: Allow
            Principal:
              Service: cloudfront.amazonaws.com
            Action: s3:GetObject
            Resource: !Sub "${ReactAppBucket.Arn}/*"
            Condition:
              ArnEquals:
                "aws:SourceArn": !Sub "arn:${AWS::Partition}:cloudfront::${AWS::AccountId}:distribution/${CloudFrontDistribution.Id}"
          - Sid: AllowSSLRequestsOnly
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource:
              - !GetAtt ReactAppBucket.Arn
              - !Sub "${ReactAppBucket.Arn}/*"
            Condition:
              Bool:
                "aws:SecureTransport": false
  S3LoggingBucket:
    Type: AWS::S3::Bucket
    # Can't delete an S3 bucket until all its objects are deleted
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      # Disable ACLs
      OwnershipControls:
        Rules:
          - ObjectOwnership: BucketOwnerEnforced
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled
      # Temporarily store logs for 1 day
      LifecycleConfiguration:
        Rules:
          - Id: DeleteRule
            Status: Enabled
            AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 1
            ExpirationInDays: 1
            NoncurrentVersionExpiration:
              NoncurrentDays: 1
      # Enable WORM (write once, read many)
      ObjectLockEnabled: true
      ObjectLockConfiguration:
        ObjectLockEnabled: Enabled
        Rule:
          DefaultRetention:
            Mode: GOVERNANCE # laxer than compliance mode
            Days: 1
  LoggingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref S3LoggingBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Allow the frontend bucket to store server access logs in this bucket
          - Sid: AllowS3Logs
            Effect: Allow
            Principal:
              Service: logging.s3.amazonaws.com
            Action: s3:PutObject
            Resource: !Sub "${S3LoggingBucket.Arn}/*"
            Condition:
              ArnLike:
                "aws:SourceArn": !GetAtt ReactAppBucket.Arn
              StringEquals:
                "aws:SourceAccount": !Ref AWS::AccountId
          - Sid: AllowSSLRequestsOnly
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource:
              - !GetAtt S3LoggingBucket.Arn
              - !Sub "${S3LoggingBucket.Arn}/*"
            Condition:
              Bool:
                "aws:SecureTransport": false

The S3 bucket will hold the website code (HTML, CSS, JS, static assets, etc.), which will be hosted using CloudFront. AWS provides detailed docs for every resource (for example, the S3 bucket), describing each property, the attributes the resource returns (which can be referenced elsewhere in the template), and sample code snippets. (To make the docs easier to read, I recommend filtering to YAML code only.) This snippet happens to be big since we're creating four resources: the S3 bucket, its bucket policy, a logging bucket, and that bucket's policy.

When creating every resource, I try to follow the best practices recommended by AWS. Although S3 buckets are now encrypted by default, it's still good to explicitly enable encryption in the template. In this case, we're using SSE-S3 encryption (aka S3 server-side encryption), so AWS encrypts all objects in the bucket using its own managed key. You can also encrypt using KMS or your own key, but that requires more maintenance and could introduce additional costs.
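
For comparison, here's roughly what the encryption block would look like with a customer-managed KMS key (a sketch only; MyKmsKey is a hypothetical AWS::KMS::Key resource that isn't part of this template, and KMS requests are billed beyond the free tier):

BucketEncryption:
  ServerSideEncryptionConfiguration:
    - ServerSideEncryptionByDefault:
        SSEAlgorithm: aws:kms
        KMSMasterKeyID: !Ref MyKmsKey # hypothetical key resource
      BucketKeyEnabled: true # cache data keys to reduce KMS request costs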

I enabled server access logs to be written to a separate bucket, so we can keep track of requests to S3, in addition to those logged by default to CloudTrail. The logging bucket follows the same best practices as the React bucket, but we don’t enable logging to prevent an infinite loop.

All objects are versioned, so we can keep track of changes to the static assets, like git, and roll back changes as needed. (Note that once versioning is enabled, you can’t fully disable it, only suspend it.)

Finally, I added a lifecycle configuration to each bucket. Lifecycle rules can transition objects to a lower storage tier, such as moving infrequently accessed objects to Glacier. But in our case, we use them as a cost-saving measure to ensure we don't store redundant objects indefinitely in S3. When versioning is enabled and we update an object, a copy of the previous version is still stored in S3. Likewise, when objects are deleted, a delete marker is kept in S3 in case of recovery; it's like a recycle bin for S3 objects. The lifecycle rule in the React bucket states that after one day, all incomplete multipart uploads, delete markers, and noncurrent versions of all objects should be deleted. This leaves only one version of each static object in the bucket without leaving behind any delete markers. The rules may take a day or two to kick in but will save us some storage in the long term. For the logging bucket, we tweak the rule slightly to expire objects entirely after one day. Since logs are only temporary, we can delete them for a clean bucket. Also to save on costs, we don't enable bucket replication, which can make objects highly available across multiple regions or accounts.
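
For reference, a transition rule (which we don't use here, since the website assets are accessed regularly) would look something like this sketch:

LifecycleConfiguration:
  Rules:
    - Id: ArchiveRule
      Status: Enabled
      Transitions:
        - StorageClass: GLACIER # or STANDARD_IA, INTELLIGENT_TIERING, etc.
          TransitionInDays: 90 # archive objects after ~3 months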

We also make sure the bucket is private and only accessible by CloudFront. First, we block all public access to the bucket and disable ACLs, a legacy way of restricting access to objects before IAM policies came into play. The bucket is now fully private. Next, we create a bucket policy to define who has access to the bucket. This is an example of a resource-based policy. Unlike identity-based policies, which are attached to an IAM user, group, or role, these policies are attached to a resource like an S3 bucket. They also contain principals specifying who the permissions apply to. Other services, such as SNS and SQS, also support resource-based policies. For all policy documents, I include the version since IAM defaults to the older "2008-10-17" version, which lacks certain features such as policy variables.

The PolicyDocument block is a normal IAM policy translated from JSON to YAML. We define two statements. One allows the CloudFront distribution to read all objects from the React bucket. The other enforces encryption in transit by only allowing secure (HTTPS) requests to the bucket and its objects. I ordered each statement so it's easy to understand what it does by reading from top to bottom. For example, in the first statement of the React bucket policy, we're allowing CloudFront (the principal) to call the GetObject API on any object in the React bucket if the request comes from the CloudFront distribution we'll create in a little bit. For the logging bucket, instead of allowing CloudFront, we allow S3's logging service to write objects to the bucket if they come from the React bucket under our account. (Exam tip: The S3 bucket and its objects are listed separately under Resource. To apply a statement to the bucket, only reference the bucket's ARN. To apply a statement to objects, append /* to the bucket's ARN. You can specify a particular object, or use * to apply the statement to all objects in the bucket.) It's good practice to add a Sid (statement ID) to policies with multiple statements, not only for documentation but also because some services require them in their policies.
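
To illustrate that exam tip, a statement that mixes bucket-level and object-level actions has to list both ARNs under Resource (a generic sketch with a hypothetical role, not a statement from this template):

- Sid: ExampleBucketAndObjectAccess
  Effect: Allow
  Principal:
    AWS: !Sub "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/SomeRole" # hypothetical role
  Action:
    - s3:ListBucket # acts on the bucket itself
    - s3:GetObject # acts on objects
  Resource:
    - !GetAtt ReactAppBucket.Arn # matches s3:ListBucket
    - !Sub "${ReactAppBucket.Arn}/*" # matches s3:GetObject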

Another feature AWS recommends for S3 buckets is Object Lock. Object Lock makes objects immutable for a set amount of time, which is useful for auditing purposes. It also promotes a write-once-read-many (WORM) model for objects that are read far more often than they're written, reducing the chances of any race conditions. We don't enable this feature for the React bucket since the website updates frequently thanks to the pipeline, but we do enable Object Lock for the logs since they'll only be kept for one day anyway.

One key field we omit is the bucket name. Where possible, it's recommended to avoid naming a resource yourself. CloudFormation will name the resource for you and guarantee its uniqueness. This is important for S3 since the bucket name must be globally unique across ALL of AWS. So, if someone named their bucket "test-bucket-123", no one else in the world can use that same name. Typically, CloudFormation will use the stack name and append random characters to it. This creates a unique string that still identifies where the resource was created. The other downside to naming a resource yourself is that renaming it forces CloudFormation to replace it entirely. In other words, it will create a brand-new resource with the new name and then delete the old one. If the deletion fails, the stack is rolled back, and the new name isn't applied. This can happen, again, with S3 since buckets can't be deleted if they contain any objects.

Next, we add a CloudFront distribution in front of S3:

  CloudFrontDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Origins:
          - DomainName: !GetAtt ReactAppBucket.DomainName
            Id: ReactAppS3Origin
            S3OriginConfig:
              OriginAccessIdentity: ""
            OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id
        Enabled: true # required
        HttpVersion: http2and3
        ViewerCertificate:
          CloudFrontDefaultCertificate: true
        DefaultRootObject: index.html
        DefaultCacheBehavior: # required
          AllowedMethods:
            - GET
            - HEAD
            - OPTIONS
          TargetOriginId: ReactAppS3Origin
          Compress: true
          # TTL: min = 1s, default = 1d, max = 1y; brotli+gzip compression
          # No cookies, headers (except Accept-Encoding), or queries in the cache key
          CachePolicyId: !FindInMap [CloudFront, CachePolicies, CachingOptimized]
          ViewerProtocolPolicy: redirect-to-https
          ResponseHeadersPolicyId: !Ref CloudFrontResponseHeadersPolicy
        PriceClass: PriceClass_100 # only use edge locations in North America, Europe, and Israel
        CustomErrorResponses:
          - ErrorCachingMinTTL: 300 # 5 minutes
            # Redirect 403 errors (forbidden) to a 404 page (not found)
            ErrorCode: 403
            ResponseCode: 404
            ResponsePagePath: /index.html
        # Logging:
        #   Bucket: !GetAtt CloudFrontLoggingBucket.DomainName
        #   IncludeCookies: false
  CloudFrontOriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Description: AWS Shopping App Origin Access Control
        Name: ReactAppOAC
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4
  CloudFrontResponseHeadersPolicy:
    Type: AWS::CloudFront::ResponseHeadersPolicy
    Properties:
      ResponseHeadersPolicyConfig:
        Name: ReactAppHeaders
        # CustomHeadersConfig:
        #   Items:
        #     - Header: Content-Security-Policy-Report-Only
        #       Override: true
        #       Value: >-
        SecurityHeadersConfig:
          # Add the necessary security headers to pass Mozilla Observatory
          ContentSecurityPolicy:
            # Test using Content-Security-Policy-Report-Only
            ContentSecurityPolicy: >-
              default-src 'none';
              img-src 'self';
              script-src 'self';
              style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
              object-src 'none';
              font-src 'self' https://fonts.gstatic.com;
              manifest-src 'self';
              connect-src 'self' https://dovshfcety3as.cloudfront.net;
              frame-ancestors 'self';
              base-uri 'self';
              form-action 'self'
            Override: true
          # X-Content-Type-Options
          ContentTypeOptions:
            Override: true # set to nosniff
          # X-Frame-Options
          FrameOptions:
            FrameOption: SAMEORIGIN
            Override: true
          ReferrerPolicy:
            Override: true
            ReferrerPolicy: strict-origin-when-cross-origin
          # Strict-Transport-Security: max-age=31536000; includeSubDomains
          StrictTransportSecurity:
            AccessControlMaxAgeSec: 31536000 # 1 year
            IncludeSubdomains: true
            Override: true
            Preload: false # not standard
          # X-XSS-Protection is non-standard and can cause vulnerabilities

While S3 supports website hosting, it only does so through HTTP, which is insecure. CloudFront enables HTTPS by using its certificate, with additional bonuses such as caching and edge locations. This makes our website load faster for all users across the globe. We create 3 things in CloudFront: the distribution, an origin access control (OAC), and a response headers policy.

The distribution is the resource we want CloudFront to cache. Under Origins, we point to the React bucket's domain name. We let CloudFront use its default certificate to enable HTTPS on the website, with support for automatic renewal. Under DefaultRootObject, we specify index.html as the object to open by default when visiting the root path of our website. For DefaultCacheBehavior, we only allow HTTP methods that permit read access to our website, since we don't currently let users dynamically change the contents for other users. For the caching policy, we define a mapping above the Resources section:

Mappings:
  CloudFront:
    # Managed cache policies: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html
    CachePolicies:
      Amplify: 2e54312d-136d-493c-8eb9-b001f22f67d2
      CachingDisabled: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad
      CachingOptimized: 658327ea-f89d-4fab-a63d-7e88639e58f6
      CachingOptimizedForUncompressedObjects: b2884449-e4de-46a7-ac36-70bc7f1ddd6d
      ElementalMediaPackage: 08627262-05a9-4f76-9ded-b50ca2e3a84f

Mappings allow you to define constants in a CloudFormation template. They must have 3 levels of depth. A pattern I like to follow is to start with the name of the AWS service, then create a group of constants, and then define the key-value pairs. AWS provides several managed cache policies depending on your use case. For S3, they recommend the CachingOptimized policy, which caches data for 1 day, uses Brotli and Gzip compression, and excludes cookies, headers (except Accept-Encoding), and query strings from the cache key. But we need to specify the ID of the cache policy, which is a random UUID. Rather than typing that in by itself, we can use a mapping to make it clear what kind of caching policy we're trying to apply.
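
With the mapping in place, a lookup takes all three keys: the top-level key, the second-level key, and the key name. Here's how the reference in the distribution resolves, along with its long-form equivalent (a standalone sketch):

# Short form, as used in DefaultCacheBehavior above
CachePolicyId: !FindInMap [CloudFront, CachePolicies, CachingOptimized]

# Equivalent long form
CachePolicyId:
  Fn::FindInMap: [CloudFront, CachePolicies, CachingOptimized]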

The ViewerProtocolPolicy automatically redirects all HTTP requests to HTTPS to ensure all requests are made securely. (This will be important when we add health checks.) To save money, we only utilize edge locations across North America, Europe, and Israel. And under CustomErrorResponses, we redirect all 403 errors to a 404 page, since these normally occur if the user visits an invalid page on our website. It’s good practice to enable CloudFront logging, but I commented this out for now to avoid adding too many objects to the logging bucket. (I’ll elaborate once we talk about health checks.)

As of August 2022, AWS recommends creating an Origin Access Control (OAC) as opposed to an Origin Access Identity (OAI) for S3 origins. (So ChatGPT couldn’t help me in this case. 😜) OAC provides additional benefits and security over OAI and doesn’t require too much to set up. First, we specify that the OAC always signs API requests to S3 using the standard sigv4 algorithm. Then, we add the bucket policy shown earlier to allow the CloudFront distribution access to the S3 bucket.

AWS provides managed response headers policies, like the cache policies, but I created my own policy to satisfy additional security best practices. These are several security headers every website should include to protect against XSS, clickjacking, and other malicious attacks. You can run your website through Mozilla Observatory to see how it's graded on security. We add the following response headers to every response:

  • Content-Security-Policy (CSP): This is a list of rules specifying where we can load JS, CSS, fonts, images, etc. for our website. Like with IAM, we start by denying all access and add the minimum number of sources needed for our website to load correctly. This includes allowing API requests from the back end, which we'll discuss in a later section. Before updating the policy, you can test whether it will produce any errors by using the Content-Security-Policy-Report-Only header, which I left commented out above (see the sketch after this list).
  • X-Content-Type-Options: This is set to nosniff to ensure the MIME type in every request is followed.
  • X-Frame-Options: This is set to SAMEORIGIN to prevent clickjacking attacks, where attackers embed your content in a hidden frame to hijack users' clicks. The frame-ancestors directive in the CSP supersedes this header, but it's still good to add until the header is officially deprecated (as happened with X-XSS-Protection).
  • Referrer-Policy: This ensures the Referer header on cross-origin requests only contains the website's origin, not any path or query string.
  • Strict-Transport-Security (aka HSTS): This specifies how long the website should enforce HTTPS for all requests. We enforce this for all subdomains as well.
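
For completeness, here's roughly how the commented-out Report-Only block could be filled in to trial a stricter policy before enforcing it (a sketch; the directives and the report endpoint are placeholders, not values from the project):

CustomHeadersConfig:
  Items:
    - Header: Content-Security-Policy-Report-Only
      Override: true
      Value: >-
        default-src 'none';
        img-src 'self';
        report-uri https://example.com/csp-reports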

Now we need to create a CodeBuild project that will clone the GitHub repository, test the app, build the React code, and upload the artifacts to S3 so CloudFront can host the website:

  # Allow CodeBuild to upload objects to S3
  # https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html#setting-up-service-role
  CodeBuildPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: CodeBuildAccess
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: CloudWatchLogsPolicy
            Effect: Allow
            Action:
              - "logs:CreateLogGroup"
              - "logs:CreateLogStream"
              - "logs:PutLogEvents"
            Resource: "*"
          - Sid: S3ObjectPolicy
            Effect: Allow
            Action:
              - "s3:DeleteObject"
              - "s3:GetObject"
              - "s3:ListBucket"
              - "s3:PutObject"
            Resource:
              - !GetAtt ReactAppBucket.Arn
              - !Sub "${ReactAppBucket.Arn}/*"
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
      ManagedPolicyArns:
        - !Ref CodeBuildPolicy
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Description: A project that builds and deploys the AWS Shopping App
      Source:
        Auth:
          Type: OAUTH
        # Directions to let GitHub authorize CodeBuild:
        # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codebuild-project-source.html#cfn-codebuild-project-source-location
        Location: https://github.com/Abhiek187/aws-shop.git
        Type: GITHUB
        ReportBuildStatus: true
      ServiceRole: !GetAtt CodeBuildRole.Arn
      # Upload the build to the S3 bucket
      Artifacts:
        Location: !Ref ReactAppBucket
        Name: "/" # store the build in the root directory of the bucket (not zipped)
        Type: S3
        # Don't override S3's default SSE-S3 encryption with a KMS key
        # (will prevent the website from being accessible)
        EncryptionDisabled: true
      Environment:
        # Make sure the image supports the runtime in the buildspec:
        # https://docs.aws.amazon.com/codebuild/latest/userguide/available-runtimes.html
        Type: ARM_CONTAINER
        Image: aws/codebuild/amazonlinux2-aarch64-standard:3.0 # Amazon Linux 2023 ARM image
        ComputeType: BUILD_GENERAL1_SMALL # free tier eligible
        EnvironmentVariables:
          - Name: CI
            Value: "true"
          - Name: ARTIFACT_BUCKET
            Value: !Ref ReactAppBucket
      BadgeEnabled: true # show a badge of the build status on GitHub
      TimeoutInMinutes: 5 # important to limit build minutes/month (default: 60)

In the properties of the CodeBuild project, we point the source code to our GitHub repo and handle authentication using OAuth. We need to go to the CodeBuild console to authorize CodeBuild to access our GitHub repo before we can establish the OAuth connection in CloudFormation. (I linked the docs above that explain how to do this.)

The service role gives CodeBuild the access it needs to upload to S3 and publish logs to CloudWatch. (Side note: Logs can be sent to either S3 or CloudWatch. S3 is cheaper, but CloudWatch provides additional features like Logs Insights to query the logs. Some services integrate better with one or the other, but as long as we don't publish too many logs at once, we'll be fine either way within the free tier. To play it safe, you can let the logs expire after a certain period, as shown below.) We specify the artifacts to be uploaded to the React bucket created earlier. We disable artifact encryption so that all objects fall back to the default SSE-S3 encryption. Otherwise, CodeBuild will use its KMS key to encrypt the objects, which would make them inaccessible. I was getting this error: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
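
For example, to cap CloudWatch retention, you could pre-create the log group and point CodeBuild at it. This is a sketch of my own; ReactAppLogGroup and the 7-day retention aren't part of the original template:

  ReactAppLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      RetentionInDays: 7 # expire build logs after a week
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      # ...existing properties from above...
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
          GroupName: !Ref ReactAppLogGroup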

For the environment, we specify that CodeBuild should run in an ARM Linux container. ARM is cheaper than x86, and the BUILD_GENERAL1_SMALL compute type satisfies the free tier. CodeBuild supports two compute modes, EC2 and Lambda, which share the same free tier of 100 build minutes per month. Although Lambda is cheaper, I found through testing that builds can take 2-3x longer since the containers only have 1 GB of memory, whereas the EC2 instances have 3 GB. Plus, Lambda doesn't support build badges or custom timeouts, among other limitations. Since this is already a tight free tier, I opted to stick with EC2 compute types, but feel free to experiment with which compute type works best for your project and budget.
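
If you'd like to experiment with Lambda compute, the environment block would change to something like this (a sketch based on the documented Lambda compute options; double-check the image/runtime pairing before relying on it):

      Environment:
        Type: ARM_LAMBDA_CONTAINER
        Image: aws/codebuild/amazonlinux-aarch64-lambda-standard:nodejs18
        ComputeType: BUILD_LAMBDA_1GB # 1 GB of memory; larger sizes cost more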

We set two environment variables. One sets CI to true, a common variable indicating that the code is running in a CI environment. That was needed when I initially created the front end using Create React App (CRA), to prevent the tests from waiting for input. However, I migrated to Vite after learning that CRA was no longer being maintained. The other saves the name of the S3 bucket so we can delete the old objects as part of the buildspec. We enable the badge so the build status can be shown in our README on GitHub, like other GitHub Actions badges. And most importantly, we set a timeout of 5 minutes. We only have 100 build minutes per month within the free tier, and this code will be updated regularly as part of the pipeline. So, if the build hangs for whatever reason, we make sure to terminate it after a few minutes.

If we wanted CodeBuild to run every time a Pull Request (PR) is created or code is merged to main, we could add a Triggers block to the project. I omitted it in this case to conserve build minutes: CodeBuild will only be triggered on demand when code is merged to main.
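
For reference, the webhook version would look roughly like this (a sketch; the filter patterns are examples):

      Triggers:
        Webhook: true
        FilterGroups:
          # Build on every push to main
          - - Type: EVENT
              Pattern: PUSH
            - Type: HEAD_REF
              Pattern: ^refs/heads/main$
          # Build whenever a PR is opened or updated
          - - Type: EVENT
              Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED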

This is the buildspec file that CodeBuild will use to test and build our frontend:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
  pre_build:
    commands:
      - cd shop-app # the shell instance is saved every build command
      - echo Build started on `date`
      - npm ci
  build:
    commands:
      - echo Running tests
      - npm test
      - echo Building app
      - npm run build
    # Don't go to post_build if the build failed
    on-failure: ABORT
  post_build:
    commands:
      # Delete the old code in S3 before uploading the new code
      - aws s3 rm s3://$ARTIFACT_BUCKET --recursive
      - echo Build completed on `date`
artifacts:
  base-directory: "shop-app/dist"
  files:
    - "**/*"

I won’t go over the React code in depth since it’s out of the scope of this article but just know that we’re using Node 18 (make sure to verify what runtime versions are supported in your specified container), installing all dependencies, running the tests, generating the build in the “dist” directory, and removing all objects in S3 before uploading the artifacts. (post_build will always run even if the build step fails. We need to tell CodeBuild to abort on failure, otherwise, it will take the whole website down.) This is where the bucket name environment variable comes into play. The reason we delete all the objects first is to remove any files that aren’t part of the new set of artifacts. If we only publish the artifacts, it will either create new objects or update the existing ones. Remember that we want to conserve space in S3. And having too many objects may cause issues with the website. Now yes, this will cause a little bit of downtime between deleting and creating the new objects. But CloudFront’s cache should hide that downtime from users. To minimize downtime further, you could simulate a blue-green deployment by using separate prefixes or buckets in S3, but for a personal project, I don’t feel this is necessary.

Finally, we can output the following at the end of our CloudFormation template:

Outputs:
  WebsiteURL:
    Description: URL for the website hosted on S3
    Value: !GetAtt ReactAppBucket.WebsiteURL
  CloudFrontURL:
    Description: The URL of the React app hosted over HTTPS using CloudFront
    Value: !Sub "https://${CloudFrontDistribution.DomainName}"

This provides easy access to our website's URL. Although we didn't enable website hosting on the S3 bucket, we can still confirm that we get a 404 error if we try accessing the S3 URL directly. You can customize the URL using Route 53, but there's a monthly charge for domain registration. It's also recommended to protect CloudFront using WAF, but that too has a monthly charge. Since CloudFront distributions are spread globally, it can take a few minutes to deploy any changes using CloudFormation.

It took me a while to understand every single property I needed to define for each resource. But it felt very satisfying once it all started working. Here were some of the other issues I ran into along the way:

  • No permission to run the s3:PutBucketPolicy API: Blocking all public access on the S3 bucket prevented me from adding a bucket policy to allow CloudFront connections. I had to enable some of the access settings related to bucket policies (noted in the template comments) before the policy could be applied. Once the S3 bucket and policy were set up, S3 still recognized that the bucket wasn't publicly accessible.
  • SSM SecureStrings aren’t supported in CodeBuild: As mentioned previously, I couldn’t use Secrets Manager under our budget. But from reading the docs, Parameter Store SecureStrings are only supported in limited use cases, such as RDS. At least in this case, I didn’t need to store the Personal Access Token (PAT) since CodeBuild could establish an OAuth connection to GitHub.
  • The S3 bucket that you specified for CloudFront logs does not enable ACL access: Despite AWS recommending you disable ACLs for all S3 buckets, CloudFront logs seem to be an exception. CloudFront will add an external account with canonical ID c4c1ede66af53448b93c283ce9448c4ba468c9432aa01d700d3878632f77d2d0 (CloudFront’s awslogsdelivery account) to the bucket’s ACL.

In the next part, we will look at the AWS CLI and how it can help automate our tasks.

The full GitHub repo can be found here: https://github.com/Abhiek187/aws-shop
