5 Fun Projects to learn AWS

Bubu Tripathy · 19 min read · Jan 2, 2024


Introduction

Embarking on the journey to learn Amazon Web Services (AWS) is an exciting and rewarding endeavor, offering numerous possibilities for anyone eager to build cloud computing skills. To make the learning experience more engaging, I have curated five fun projects that showcase the versatility of AWS and provide hands-on experience deploying real-world solutions.

Project # 1 — Launch a static website on Amazon S3

Discover an affordable and efficient way to host your website by deploying a static site on Amazon S3 — an ideal introduction to key AWS services. Create an S3 bucket to leverage simplicity and scalability, and enhance global accessibility using Amazon CloudFront as your CDN. Seamlessly manage your domain with Amazon Route 53, and implement SSL/TLS security through AWS Certificate Manager for a secure HTTPS connection, ensuring optimal performance and security for your website.

Before deploying, make sure your static website files (HTML, CSS, JavaScript, images, etc.) are ready in a local directory.

Create an S3 Bucket

  1. Go to the AWS Management Console.
  2. Navigate to S3 and click on “Create bucket.”
  3. Enter a unique bucket name, select a region, and click “Next.”
  4. On the “Configure options” page, click “Next.”
  5. On the “Set permissions” page, you can configure bucket policies if needed. Click “Next” to review, then click “Create bucket.”

OR

aws s3api create-bucket --bucket YOUR_UNIQUE_BUCKET_NAME --region YOUR_REGION \
--create-bucket-configuration LocationConstraint=YOUR_REGION # omit this option when YOUR_REGION is us-east-1

Upload Your Website to the S3 Bucket

  1. Open your newly created S3 bucket.
  2. Click on the “Upload” button and select all your website files.
  3. Ensure that the files are set to be publicly accessible. You can do this by selecting each file, clicking on “Actions,” and choosing “Make public.”

OR

aws s3 sync YOUR_LOCAL_WEBSITE_DIRECTORY s3://YOUR_UNIQUE_BUCKET_NAME --acl public-read

Enable Static Website Hosting on the S3 Bucket

  1. In the S3 bucket, go to the “Properties” tab.
  2. Click on “Static website hosting.”
  3. Select “Use this bucket to host a website.”
  4. Set the “Index document” to your main HTML file (e.g., index.html) and optionally set the "Error document."
  5. Save the changes.

OR

aws s3 website s3://YOUR_UNIQUE_BUCKET_NAME --index-document index.html

Configure Amazon CloudFront Distribution

  1. Go to the CloudFront console.
  2. Click “Create Distribution.”
  3. Choose “Web” distribution.
  4. Set the following configurations:
  • Origin Domain Name: Select your S3 bucket from the dropdown.
  • Default Root Object: Set it to your main HTML file (e.g., index.html).
  • Leave other settings as default or adjust based on your requirements.

  5. Click “Create Distribution.”

OR

import boto3

cloudfront_client = boto3.client('cloudfront')

distribution_config = {
    'CallerReference': 'your-unique-caller-reference',
    'Origins': {
        'Quantity': 1,
        'Items': [
            {
                'Id': 'S3-origin',
                'DomainName': 'YOUR_UNIQUE_BUCKET_NAME.s3.amazonaws.com',
                'S3OriginConfig': {
                    'OriginAccessIdentity': ''
                }
            }
        ]
    },
    'DefaultCacheBehavior': {
        'TargetOriginId': 'S3-origin',
        'ForwardedValues': {
            'QueryString': False,
            'Cookies': {'Forward': 'none'},
            'Headers': {'Quantity': 0}
        },
        'TrustedSigners': {'Enabled': False, 'Quantity': 0},
        'ViewerProtocolPolicy': 'allow-all',
        'MinTTL': 0
    },
    'Comment': 'Your CloudFront Distribution Comment',
    'Enabled': True
}

response = cloudfront_client.create_distribution(DistributionConfig=distribution_config)
distribution_id = response['Distribution']['Id']

print(f"CloudFront Distribution ID: {distribution_id}")

Update Amazon Route 53 for Your Domain

  1. Go to the Route 53 console.
  2. Create a new hosted zone for your domain if you haven’t already.
  3. Note the four nameservers provided for your hosted zone.
  4. In your domain registrar’s settings, update the nameservers to the ones provided by Route 53.

Obtain an SSL/TLS Certificate using AWS Certificate Manager (ACM)

  1. Go to the ACM console.
  2. Click “Request a certificate.”
  3. Enter your domain name and follow the instructions to validate ownership.
  4. Once validated, the certificate status changes to “Issued.” Note that certificates used with CloudFront must be requested in the us-east-1 (N. Virginia) region.
  5. Attach the issued certificate to your CloudFront distribution by editing the distribution and selecting it under “Custom SSL Certificate.”

OR

import boto3

# Certificates used with CloudFront must be requested in us-east-1
acm_client = boto3.client('acm', region_name='us-east-1')

certificate_arn = acm_client.request_certificate(
    DomainName='yourdomain.com',
    ValidationMethod='DNS'
)['CertificateArn']

print(f"Certificate ARN: {certificate_arn}")

Update CloudFront Distribution to Use SSL/TLS

  1. Go to the CloudFront console.
  2. Select your distribution and click “Edit.”
  3. Under the “Behaviors” tab, edit the default behavior and set the “Viewer Protocol Policy” to “Redirect HTTP to HTTPS.”
  4. Save the changes.

OR

cloudfront_client = boto3.client('cloudfront')

# Fetch the current configuration and its ETag, modify it, then submit the update
config_response = cloudfront_client.get_distribution_config(Id='YOUR_CLOUDFRONT_DISTRIBUTION_ID')
distribution_config = config_response['DistributionConfig']
distribution_config['DefaultCacheBehavior']['ViewerProtocolPolicy'] = 'redirect-to-https'

cloudfront_client.update_distribution(
    Id='YOUR_CLOUDFRONT_DISTRIBUTION_ID',
    IfMatch=config_response['ETag'],
    DistributionConfig=distribution_config
)

Wait for Changes to Propagate

It may take some time for changes to propagate. Once complete, your static website should be accessible via the custom domain over HTTPS globally through CloudFront.

Congratulations! You’ve successfully deployed a static website on Amazon S3, configured CloudFront for global content delivery, and set up DNS with Route 53 and SSL/TLS with ACM.

Project # 2 — Use CloudFormation to Launch an Amazon EC2 Web Server

In this project, we will explore how to deploy a web server on Amazon EC2 using AWS CloudFormation. CloudFormation allows us to define our infrastructure as code, automating the process of creating and managing resources. By the end, you’ll have a running EC2 instance serving a simple web page.

Set Up Your AWS Account

Make sure you have an AWS account. If you don’t have one, you can sign up at https://aws.amazon.com.

Create a CloudFormation Template

Create a file named ec2-web-server-template.yml. This YAML file will contain the CloudFormation template.

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyEC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: 'ami-xxxxxxxxxxxxxxxxx' # Specify your desired Amazon Machine Image (AMI)
      InstanceType: 't2.micro'
      KeyName: 'your-key-pair' # Specify your key pair
      SecurityGroupIds:
        - sg-xxxxxxxxxxxxxxxxx # Specify your security group ID
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum install -y httpd
          echo "Hello from your EC2 instance!" > /var/www/html/index.html
          service httpd start
          chkconfig httpd on

Replace ami-xxxxxxxxxxxxxxxxx, your-key-pair, and sg-xxxxxxxxxxxxxxxxx with your desired values, and make sure the security group allows inbound HTTP traffic on port 80. The UserData script installs Apache, writes a simple page, and starts the web server.

The CloudFormation template provisions an Amazon EC2 instance as a web server. It defines the instance's properties, including the Amazon Machine Image (AMI), instance type, key pair, and security group. Additionally, it uses the UserData script to install Apache, start the web server, and create a basic HTML page, providing a complete configuration for launching a functioning EC2-based web server.
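CloudFormation accepts JSON as well as YAML, so the same template can be assembled and sanity-checked programmatically before you upload it. A minimal sketch (the AMI, key pair, and security group IDs are the same placeholders as above):

```python
import json

# The same stack definition as a JSON template
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-xxxxxxxxxxxxxxxxx",
                "InstanceType": "t2.micro",
                "KeyName": "your-key-pair",
                "SecurityGroupIds": ["sg-xxxxxxxxxxxxxxxxx"],
                "UserData": {
                    "Fn::Base64": "#!/bin/bash\n"
                                  "yum install -y httpd\n"
                                  "echo 'Hello from your EC2 instance!' > /var/www/html/index.html\n"
                                  "service httpd start\n"
                                  "chkconfig httpd on\n"
                },
            },
        }
    },
}

# Round-trip through the JSON encoder to confirm the document is well-formed
with open("ec2-web-server-template.json", "w") as f:
    json.dump(template, f, indent=2)

print(list(template["Resources"]))
```

You can then pass the JSON file to create-stack exactly as with the YAML version.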

Deploy the CloudFormation Stack

Option 1: AWS Management Console

  1. Log in to the AWS Management Console.
  2. Open the CloudFormation service.
  3. Click “Create Stack.”
  4. Select “Upload a template file” and upload the ec2-web-server-template.yml file.
  5. Click “Next.”
  6. Provide a stack name (e.g., EC2WebServerStack).
  7. Click “Next” through the subsequent pages, leaving the default settings.
  8. Review the settings and click “Create Stack.”

Option 2: AWS CLI

  1. Open a terminal and ensure the AWS CLI is installed.
  2. Run the following command to create the CloudFormation stack:
aws cloudformation create-stack --stack-name EC2WebServerStack --template-body file://ec2-web-server-template.yml

Access the Web Server

  1. Once the stack is created, go to the EC2 service in the AWS Management Console.
  2. Find the newly created EC2 instance.
  3. Note the Public IP or Public DNS of the instance.
  4. Open a web browser and enter the Public IP or DNS in the address bar.

You should see the “Hello from your EC2 instance!” message, indicating that your web server is running.

That’s it! You’ve successfully created an Amazon EC2 instance as a web server using AWS CloudFormation.

Project # 3 — Add a CI/CD pipeline for the S3 bucket

Let’s add a CI/CD pipeline for the S3 bucket we created in Project # 1.

Create AWS CodeBuild Project

AWS CodeBuild is a fully managed continuous integration and continuous delivery (CI/CD) service provided by Amazon Web Services (AWS). CodeBuild simplifies the process of building, testing, and deploying your applications by automating the build and release phases of your software development projects.

To create an AWS CodeBuild project, you can use either the AWS Management Console, AWS CLI, or AWS SDKs. Here’s an example using the AWS CLI:

aws codebuild create-project --name YourCodeBuildProjectName \
--source "type=NO_SOURCE" \
--artifacts "type=NO_ARTIFACTS" \
--environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL" \
--service-role YourCodeBuildServiceRoleArn \
--region YourRegion

Replace placeholders like YourCodeBuildProjectName, YourCodeBuildServiceRoleArn, and YourRegion with your actual values. This example creates a simple CodeBuild project with no source or artifact settings, using a Linux container image. Adjust the configuration according to your project's requirements, specifying source, artifacts, environment, and other parameters as needed.
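CodeBuild reads its build commands from a buildspec file at the root of the source. For this project, a minimal buildspec.yml sketch (the bucket name is the placeholder from Project # 1) could simply sync the site to the bucket:

```yaml
version: 0.2

phases:
  build:
    commands:
      - aws s3 sync . s3://YOUR_UNIQUE_BUCKET_NAME --delete
```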

Remember to ensure that the specified service role (YourCodeBuildServiceRoleArn) has the necessary permissions to access the resources required by your build process. Here is an example. Replace YourCodeBuildServiceRoleName with your desired role name:

aws iam create-role \
--role-name YourCodeBuildServiceRoleName \
--assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codebuild.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}'
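The trust policy document can be sanity-checked locally before you create the role; the snippet below just confirms it parses and trusts the CodeBuild service principal:

```python
import json

trust_policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codebuild.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
"""

policy = json.loads(trust_policy)
statement = policy["Statement"][0]
print(statement["Principal"]["Service"])  # codebuild.amazonaws.com
```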

Set Up AWS CodeStar Project

AWS CodeStar is a fully-managed service that accelerates the development and deployment of applications on AWS. It provides a unified platform for managing and automating various development tasks, including source code management, build, and deployment. With CodeStar, teams can quickly set up a continuous integration/continuous deployment (CI/CD) pipeline and collaborate on projects more efficiently.

Let’s set up the initial configuration for an AWS CodeStar project, providing essential information for the project’s source code repository and associated settings.

aws codestar create-project \
--name YourCodeStarProject \
--id YourCodeStarProjectID \
--description "Your CodeStar Project Description" \
--repository YourRepository \
--repository-url YourRepositoryURL \
--code '{
  "BranchName": "main",
  "Repository": {
    "CodeCommit": {
      "Name": "YourCodeCommitRepositoryName"
    }
  }
}' \
--region YourRegion

Note that the JSON document is wrapped in single quotes so the shell passes it to --code as a single argument.

The AWS CLI command aws codestar create-project is used to create an AWS CodeStar project. In this example, the command specifies the project name (YourCodeStarProject), project ID (YourCodeStarProjectID), a description, the source code repository name (YourRepository), repository URL (YourRepositoryURL), and code-related details such as the main branch name (main) and the CodeCommit repository name (YourCodeCommitRepositoryName). Additionally, the command specifies the AWS region where the CodeStar project will be created (YourRegion).

Connect S3 Bucket to CodeStar Project

aws codestar create-deployment-pipeline \
--pipeline-name YourPipelineName \
--pipeline-settings file://pipeline-settings.json \
--output json \
--region YourRegion

The AWS CLI command aws codestar create-deployment-pipeline creates a deployment pipeline in AWS CodeStar. It specifies the pipeline name (YourPipelineName), imports pipeline settings from a JSON file (pipeline-settings.json, shown below), sets the output format to JSON, and designates the AWS region as YourRegion.

{
  "pipeline": {
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "YourSourceProvider",
              "version": "1"
            },
            "name": "SourceAction",
            "configuration": {
              "Branch": "main",
              "OutputArtifactFormat": "CODEBUILD_CLONE_REF"
            }
          }
        ]
      },
      {
        "name": "Beta",
        "actions": [
          {
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "name": "BuildAction",
            "configuration": {
              "ProjectName": "YourCodeBuildProject"
            }
          }
        ]
      },
      {
        "name": "Prod",
        "actions": [
          {
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "S3",
              "version": "1"
            },
            "name": "DeployAction",
            "configuration": {
              "BucketName": "YourS3BucketName",
              "Extract": "true"
            }
          }
        ]
      }
    ]
  }
}

Replace placeholders like YourSourceProvider, YourCodeBuildProject, and YourS3BucketName with your actual values. This configuration represents a simple three-stage pipeline with source, build, and deployment actions. Adjust it based on your specific use case and requirements.

YourSourceProvider typically refers to the version control system or source code repository provider you are using. Common source providers include AWS CodeCommit, GitHub, Bitbucket, and others. Here’s an example using CodeCommit as the source provider:

"Source": {
  "type": "CODECOMMIT",
  "location": "YourCodeCommitRepositoryName"
}

Replace YourCodeCommitRepositoryName with the actual name of your CodeCommit repository. If you’re using GitHub, the configuration would look like:

"Source": {
  "type": "GITHUB",
  "location": "https://github.com/yourusername/yourrepository",
  "gitCloneDepth": 1
}

Replace https://github.com/yourusername/yourrepository with the URL of your GitHub repository.

Configure AWS CodePipeline

AWS CodePipeline is a fully managed continuous integration and continuous delivery (CI/CD) service that automates the build, test, and deployment phases of your release process. It allows you to define and visualize your application’s workflow, providing a streamlined path for code changes to move from source to production.

aws codepipeline create-pipeline \
--cli-input-json file://pipeline-definition.json \
--region YourRegion

Note that create-pipeline takes the full pipeline definition as a JSON document; the pipeline’s name and the IAM role ARN it should assume are fields inside that document rather than separate command-line flags.

Test the Pipeline

Make a change to your website code and commit it to the repository. The pipeline will automatically detect the code change, trigger a build, and deploy the updated code to your S3 bucket.
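For example, a change can be committed like this (shown here against a throwaway local repository; in practice you would commit in the repository wired into the pipeline and push):

```shell
set -e

# Throwaway repository standing in for the pipeline's source repo
repo=$(mktemp -d)
cd "$repo"
git init -q

# Change the website code and commit it
echo '<h1>Updated homepage</h1>' > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -qm "Update homepage"

# git push origin main   # the push is what actually triggers the pipeline
git log --oneline
```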

Project # 4 — Publish Amazon CloudWatch metrics to a CSV file using AWS Lambda

Publishing Amazon CloudWatch metrics to a CSV file using AWS Lambda involves a few steps, including creating a Lambda function, configuring permissions, and writing code to fetch CloudWatch metrics and store them in a CSV file.

Create an IAM Role for AWS Lambda

# Create a trust policy document (trust-policy.json)
echo '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}' > trust-policy.json

# Create the IAM role
aws iam create-role \
--role-name YourRoleName \
--assume-role-policy-document file://trust-policy.json

# Attach the AWSLambdaBasicExecutionRole policy to the role
aws iam attach-role-policy \
--role-name YourRoleName \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

After running these commands, you will have an IAM role named YourRoleName with the necessary permissions for Lambda execution. Ensure that you have the AWS CLI installed and configured with the necessary permissions to execute these commands.

Create a Lambda Function

Create a file named lambda_function.py and paste the following code, which retrieves CloudWatch metrics, writes them to a CSV file, and uploads the file to an S3 bucket. Replace placeholders like YourNamespace and YourMetricName with your specific CloudWatch namespace and metric name.

import boto3
import csv
import io
from datetime import datetime, timedelta

def lambda_handler(event, context):
    # Set up CloudWatch and S3 clients
    cloudwatch = boto3.client('cloudwatch')
    s3 = boto3.client('s3')

    # Define CloudWatch parameters
    namespace = 'YourNamespace'
    metric_name = 'YourMetricName'
    start_time = datetime.utcnow() - timedelta(days=1)
    end_time = datetime.utcnow()
    period = 300  # 5-minute intervals

    # Get CloudWatch metrics
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[
            {
                'Id': 'm1',
                'MetricStat': {
                    'Metric': {
                        'Namespace': namespace,
                        'MetricName': metric_name,
                    },
                    'Period': period,
                    'Stat': 'Average',
                    'Unit': 'Count',
                },
                'ReturnData': True,
            },
        ],
        StartTime=start_time,
        EndTime=end_time,
    )

    # Timestamps and values come back as parallel lists; zip them into rows
    result = response['MetricDataResults'][0]
    csv_rows = [
        {
            'MetricName': metric_name,
            'Timestamp': timestamp.strftime('%Y-%m-%d %H:%M:%S'),
            'Value': value,
        }
        for timestamp, value in zip(result['Timestamps'], result['Values'])
    ]

    # Write the CSV (with a header row) to the S3 bucket
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=['MetricName', 'Timestamp', 'Value'])
    writer.writeheader()
    writer.writerows(csv_rows)

    s3_bucket = 'your-s3-bucket-name'
    s3_key = 'cloudwatch_metrics.csv'
    s3.put_object(
        Bucket=s3_bucket,
        Key=s3_key,
        Body=buffer.getvalue(),
        ContentType='text/csv'
    )

    print(f"CSV file stored in S3 bucket: s3://{s3_bucket}/{s3_key}")

# Create a deployment package (assuming your Python code is in a file named lambda_function.py)
zip deployment-package.zip lambda_function.py

# Create the Lambda function
aws lambda create-function \
--function-name YourFunctionName \
--runtime python3.8 \
--role arn:aws:iam::YourAWSAccountID:role/YourRoleName \
--handler lambda_function.lambda_handler \
--zip-file fileb://deployment-package.zip

To add CloudWatch permissions to the IAM role associated with your Lambda function, you can use the following command:

aws iam attach-role-policy \
--role-name YourRoleName \
--policy-arn arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess

This command attaches the CloudWatchReadOnlyAccess policy to the specified IAM role, granting it the necessary permissions to read CloudWatch metrics.
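The CSV-writing step of the function can be exercised locally without touching AWS. get_metric_data returns timestamps and values as parallel lists, which zip back together into rows:

```python
import csv
import io
from datetime import datetime

# A fake MetricDataResults entry, shaped like get_metric_data's response
result = {
    'Timestamps': [datetime(2024, 1, 1, 0, 0), datetime(2024, 1, 1, 0, 5)],
    'Values': [12.0, 15.5],
}

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=['MetricName', 'Timestamp', 'Value'])
writer.writeheader()
for timestamp, value in zip(result['Timestamps'], result['Values']):
    writer.writerow({
        'MetricName': 'YourMetricName',
        'Timestamp': timestamp.strftime('%Y-%m-%d %H:%M:%S'),
        'Value': value,
    })

print(buffer.getvalue())
```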

Test the Lambda function

  1. Click on the “Test” button in the Lambda console to manually trigger your function.
  2. Check CloudWatch Logs and the Lambda console for any errors or issues.

Set up CloudWatch Events (Optional)

Amazon CloudWatch Events is a fully-managed service that allows users to respond to system events and automate workflows in their AWS environment. It enables event-driven architectures by providing a scalable and flexible way to route events from various AWS services to different targets, such as AWS Lambda functions or SNS topics.

To set up CloudWatch Events using the AWS CLI, you can use the following commands. This assumes you want to create a rule that triggers the Lambda function on a scheduled basis (e.g., daily).

# Create the rule
aws events put-rule \
--name YourRuleName \
--schedule-expression "rate(1 day)"

# Add Lambda function as a target for the rule
aws events put-targets \
--rule YourRuleName \
--targets "Id"="1","Arn"="arn:aws:lambda:your-region:your-account-id:function:YourFunctionName"

# Enable the rule
aws events enable-rule --name YourRuleName

These commands do the following:

  1. Create a CloudWatch Events rule with a schedule expression that triggers the rule every day.
  2. Add your Lambda function as a target for the rule.
  3. Enable the rule to activate it.

Make sure that your Lambda function has the necessary permissions to be triggered by CloudWatch Events.

# Allow the CloudWatch Events rule to invoke the function by adding a
# resource-based policy to the function itself
aws lambda add-permission \
--function-name YourFunctionName \
--statement-id AllowEventBridgeInvoke \
--action lambda:InvokeFunction \
--principal events.amazonaws.com \
--source-arn arn:aws:events:your-region:your-account-id:rule/YourRuleName

This command grants the events.amazonaws.com service principal permission to invoke your Lambda function, scoped to the specific rule you created.

Project # 5 — Deploy a Simple React Web Application using AWS Amplify

Prerequisites

  • AWS Account: Make sure you have an AWS account set up.
  • Node.js and npm installed on your local machine.
  • AWS CLI installed and configured with your AWS credentials.

Set up a React Application

Create a new React application using Create React App:

npx create-react-app my-react-app
cd my-react-app

AWS Amplify

AWS Amplify is a comprehensive development platform that simplifies the process of building and deploying full-stack web and mobile applications. It provides a set of tools and services to streamline tasks like authentication, API integration, storage, and hosting. With Amplify, developers can quickly create scalable and secure applications, leveraging various AWS services with minimal configuration.

Before initializing the Amplify project, make sure you have the Amplify CLI installed. You can install it globally using npm:

npm install -g @aws-amplify/cli

Now, navigate to the root directory of your React application in the terminal and run the following command:

amplify init

The amplify init command initiates the Amplify project setup and guides you through a series of prompts:

  • Enter a name for the project: Provide a unique name for your project.
  • Select the environment: Choose the default environment (usually dev).
  • Choose your default editor: Select your preferred code editor.
  • Choose the type of app that you’re building: Choose javascript for a React application.
  • What JavaScript framework are you using: Choose react.
  • Source Directory Path: Keep the default (src).
  • Distribution Directory Path: Keep the default (build).
  • Build Command: Keep the default (npm run-script build).
  • Start Command: Keep the default (npm run-script start).

After providing these details, Amplify will initialize your project and set up the necessary configuration files and directories.

Authentication with Amazon Cognito

Amazon Cognito is a fully managed service by AWS that provides secure user identity and access management. It enables developers to easily add user sign-up, sign-in, and access control to web and mobile apps. With features like multi-factor authentication and social identity federation, Amazon Cognito simplifies the implementation of robust authentication and authorization mechanisms.

Following the initialization, you can add backend services such as authentication, API, storage, and hosting using the amplify add command. For example, to add authentication, run:

amplify add auth

Follow the prompts to configure your authentication settings, including the authentication providers and advanced settings.

Once you have configured the backend services, deploy them to the cloud using the following command:

amplify push

Amplify will present a summary of the changes to be deployed. Confirm the deployment to provision the necessary AWS resources for your project.

To view and manage your Amplify project, open the Amplify Console using the following command:

amplify console

This will open the Amplify Console in your default web browser, providing a visual interface to monitor and manage your deployed services.

Set up AWS AppSync for API

AWS AppSync is a managed service that simplifies the development of scalable and secure GraphQL APIs. It allows developers to easily connect their applications to various data sources, including AWS services like DynamoDB, Lambda, or any HTTP data source. AppSync provides real-time data synchronization and offline capabilities, making it efficient for building responsive and collaborative applications.

Open your terminal and run the following command to add a new API using the Amplify CLI:

amplify add api
  • Choose GraphQL as the API type.
  • Provide a name for your API.
  • Choose an authorization type. For simplicity, you can choose API key for now.
  • Configure additional settings, such as the schema and resolvers.

The Amplify CLI will prompt you to edit the GraphQL schema. Open the generated schema.graphql file in the amplify/backend/api/{API_NAME}/schema directory and define your GraphQL schema.
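For example, a minimal schema for a hypothetical Todo app; the @model directive tells Amplify to generate a DynamoDB table and CRUD resolvers for the type:

```graphql
type Todo @model {
  id: ID!
  name: String!
  description: String
}
```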

After configuring your API, deploy it using the following command:

amplify push
  • Amplify will ask if you want to generate code for your GraphQL API. You can choose to generate code for GraphQL operations, which will create necessary files in your project for interacting with the API.
  • Confirm that you want to continue with the deployment.

Amplify will provision the necessary resources in AWS, including the AppSync API, DynamoDB tables (if needed), and the related IAM roles.

Once the deployment is complete, you can explore the newly created API in the AWS AppSync console. Run the following command to open the console:

amplify console api

This will open the AWS AppSync console in your default web browser. Here, you can view your GraphQL schema, configure additional settings, and test your API using the built-in query editor.

To use the API in your React application, install the necessary dependencies:

npm install aws-amplify @aws-amplify/ui-react

In your React app, configure Amplify with your AppSync API:

// src/index.js
import Amplify from 'aws-amplify';
import config from './aws-exports';
Amplify.configure(config);

You can now start using the generated GraphQL operations in your React components.

// Example in a React component
import { API } from 'aws-amplify';
import { listTodos } from './graphql/queries'; // one of the generated query documents

async function fetchData() {
    try {
        const result = await API.graphql({ query: listTodos });
        console.log(result);
    } catch (error) {
        console.error(error);
    }
}

You’ve successfully set up AWS AppSync for your API, deployed it, and connected your React application to interact with the API using Amplify.

Set up Amazon DynamoDB for Database

Amazon DynamoDB is a fully managed NoSQL database service provided by AWS, designed for high-performance and scalable applications. It offers seamless and automatic scaling of throughput and storage, with low-latency access to data. DynamoDB supports both document and key-value data models and is suitable for a wide range of use cases, from small applications to large-scale, globally distributed systems.

In your terminal, run the following command to add a new DynamoDB table to your Amplify project:

amplify add storage
  • Choose NoSQL Database as the storage type.
  • Select Amazon DynamoDB as the NoSQL database.
  • Provide a name for your table.
  • Define the primary key for your table. For example, you can have a partition key named id with the type String.

Amplify will prompt you to configure additional settings for your DynamoDB table:

  • Add indexes: You can choose to add global or local secondary indexes based on your application’s querying needs.
  • Specify advanced settings: Set additional configurations such as read and write capacity units, or use default values.

After configuring your DynamoDB table, deploy the changes to provision the DynamoDB table in the AWS cloud:

amplify push

Confirm the deployment when prompted.

Now that DynamoDB is set up, you can access it in your React application. Amplify automatically generates a set of GraphQL mutations and queries for your DynamoDB table. You can find these in the src/graphql directory of your project. For example, if you added a Todo table, you might have mutations like createTodo, updateTodo, and queries like listTodos.
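As a sketch of what those generated files contain: a mutation document is just a GraphQL string, and graphqlOperation pairs it with its variables (the field names below assume a hypothetical Todo model):

```javascript
// Hypothetical generated mutation, as found in src/graphql/mutations.js
const createTodo = /* GraphQL */ `
  mutation CreateTodo($input: CreateTodoInput!) {
    createTodo(input: $input) {
      id
      name
      description
    }
  }
`;

// graphqlOperation(createTodo, { input }) builds an object of this shape:
const operation = { query: createTodo, variables: { input: { name: 'New Todo' } } };
console.log(operation.variables.input.name);
```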

You can use these generated queries and mutations in your React components to interact with DynamoDB:

// Example: Create a new Todo
import { API, graphqlOperation } from 'aws-amplify';
import { createTodo } from './graphql/mutations'; // generated mutation document

async function addTodo() {
    const todoDetails = {
        name: 'New Todo',
        description: 'Description of the new todo',
    };

    try {
        const result = await API.graphql(graphqlOperation(createTodo, { input: todoDetails }));
        console.log('Todo created:', result.data.createTodo);
    } catch (error) {
        console.error('Error creating todo:', error);
    }
}

You can also explore your DynamoDB table in the AWS Management Console. Run the following command to open the DynamoDB console:

amplify console storage

This will open the DynamoDB console in your default web browser, allowing you to view and manage your DynamoDB table.

You’ve successfully set up Amazon DynamoDB for your Amplify project and integrated it into your React application. You can now use the generated GraphQL operations to perform CRUD operations on your DynamoDB table.

Set up Amazon S3 and CloudFront

In your terminal, run the following command to add hosting to your Amplify project:

amplify add hosting
  • Choose Amazon S3 and Amazon CloudFront as the hosting service.
  • Provide a name for the hosting environment.
  • Configure additional settings, such as the index and error documents.

After configuring the hosting environment, deploy it to provision the necessary S3 bucket and CloudFront distribution:

amplify publish

Confirm the deployment when prompted.

To use Amazon S3 for file storage (e.g., storing user uploads), modify your Amplify project settings. Open the aws-exports.js file in your project's src directory and add the following configurations:

// aws-exports.js

const awsmobile = {
    // ...existing configuration...
    Storage: {
        AWSS3: {
            bucket: 'your-s3-bucket-name',
            region: 'your-region',
        },
    },
};

Replace 'your-s3-bucket-name' with a unique S3 bucket name and 'your-region' with the AWS region where you want to create the S3 bucket.

With S3 and CloudFront set up, you can now use them to serve static assets, host files, and provide content delivery in your React application.

For example, to upload a file to S3:

// Example: Upload a file to S3
import { Storage } from 'aws-amplify';

async function uploadFile(file) {
    try {
        await Storage.put('path/to/file', file, {
            contentType: 'image/jpeg', // Adjust the content type based on your file type
        });
        console.log('File uploaded successfully!');
    } catch (error) {
        console.error('Error uploading file:', error);
    }
}

You can also explore your S3 bucket in the AWS Management Console. Run the following command to open the S3 console:

amplify console storage

This will open the S3 console in your default web browser, allowing you to view and manage the files in your S3 bucket.

After deploying your hosting environment, Amplify will provide you with a CloudFront distribution URL. You can find this URL in the Amplify Console or in the output after running amplify publish. Use this CloudFront URL to access your deployed React application globally.

You’ve successfully set up Amazon S3 and CloudFront for storage and content delivery in your Amplify project. Your React application is now hosted, and static assets are delivered efficiently using CloudFront.

Integrate Amplify with React Application

In your terminal, install the necessary Amplify libraries for React:

npm install aws-amplify @aws-amplify/ui-react

In your React application, configure Amplify by adding the following code to the entry point of your application (typically src/index.js or src/index.tsx):

// src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import Amplify from 'aws-amplify';
import awsConfig from './aws-exports'; // Ensure this file exists in your project
import App from './App';

Amplify.configure(awsConfig);

ReactDOM.render(
    <React.StrictMode>
        <App />
    </React.StrictMode>,
    document.getElementById('root')
);

Make sure to replace aws-exports with the actual configuration file generated by Amplify during the project initialization.

Amplify provides pre-built UI components for common authentication flows. You can integrate these components into your React application for user authentication. For example, you can use the withAuthenticator to wrap your main application component:

// src/App.js

import React from 'react';
import { withAuthenticator } from '@aws-amplify/ui-react';

function App() {
    return (
        <div>
            <h1>Your React App</h1>
            {/* Your app content */}
        </div>
    );
}

export default withAuthenticator(App, { includeGreetings: true });

This will add authentication features to your app, such as sign-up, sign-in, and sign-out.

To access information about the authenticated user, you can use the Auth module provided by Amplify:

// Example: Accessing user information
import { Auth } from 'aws-amplify';

async function getUserInfo() {
    try {
        const user = await Auth.currentAuthenticatedUser();
        console.log('Authenticated user:', user);
    } catch (error) {
        console.error('Error getting user information:', error);
    }
}

Amplify provides a wide range of features beyond authentication, such as API interactions, storage, and real-time data synchronization. You can explore these features in the Amplify documentation.

Run your React application locally to test the integration with Amplify:

npm start

Once satisfied with your application, deploy it to the cloud using the following command:

amplify publish

This will deploy your React application along with the configured Amplify services to AWS.

Congratulations! You’ve successfully integrated Amplify with your React application, enabling features like authentication and providing access to other Amplify services. Your app is now ready for deployment with a seamless integration of AWS Amplify capabilities.

Conclusion

This handpicked collection of five engaging projects is a pathway to explore and showcase the diverse capabilities of AWS. By the end of the journey, my hope is that you not only grasp the multifaceted nature of AWS but also feel empowered to leverage its capabilities effectively.
