<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Rojan Sedhai on Medium]]></title>
        <description><![CDATA[Stories by Rojan Sedhai on Medium]]></description>
        <link>https://medium.com/@rojansedhai01?source=rss-fd868de3edf1------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*gtMQgH1FTS2hZmfQhUk1Aw.jpeg</url>
            <title>Stories by Rojan Sedhai on Medium</title>
            <link>https://medium.com/@rojansedhai01?source=rss-fd868de3edf1------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 19:10:19 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@rojansedhai01/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building a Serverless Meme Engine on AWS]]></title>
            <link>https://medium.com/@rojansedhai01/building-a-serverless-meme-engine-on-aws-9ed7e28543f4?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/9ed7e28543f4</guid>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Mon, 11 May 2026 05:38:14 GMT</pubDate>
            <atom:updated>2026-05-11T05:40:05.649Z</atom:updated>
<content:encoded><![CDATA[<p><em>Drop a photo into a bold web app and get a fully composed meme back in seconds. No servers. No infrastructure babysitting. Just pure event-driven chaos powered by AWS and Gemini AI.</em></p><p>I’ve attached the GitHub repo link with the full code ;)</p><p><a href="https://github.com/rojansedhai/auto-meme-generator">GitHub - rojansedhai/auto-meme-generator</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LvDhoORRZGCNGycI5Jv5dw.png" /></figure><p>The internet runs on memes. There’s a strange kind of joy in over-engineering something ridiculous. Memes are ephemeral, low-stakes, and a cultural phenomenon.</p><p>So naturally, I decided to build a fully serverless meme-generation pipeline on AWS that can:</p><ul><li>Accept an uploaded image</li><li>Detect objects inside it using AI</li><li>Generate sarcastic meme captions using Gemini AI</li><li>Render the final meme automatically</li><li>Serve everything through a sleek Neobrutalist frontend</li></ul><p>And the best part?</p><p>The entire stack is provisioned with Terraform and runs without managing a single server.</p><p>In this article, I’ll walk you through how to build an <strong>Auto Meme Generator</strong> using:</p><ul><li>AWS Lambda</li><li>Step Functions</li><li>EventBridge</li><li>DynamoDB</li><li>CloudFront + OAC</li><li>Amazon Rekognition</li><li>Google Gemini AI</li><li>Sharp Image Processing</li><li>Terraform</li></ul><h3>What We’re Building</h3><p>Here’s the flow:</p><ol><li>User uploads an image through a web app</li><li>Browser uploads directly to S3 using a presigned URL</li><li>S3 triggers an EventBridge event</li><li>Step Functions orchestrates the workflow</li><li>Rekognition analyzes the image</li><li>Gemini AI creates meme captions</li><li>Sharp overlays the text onto the image</li><li>Final meme is stored and returned to the frontend</li></ol><p>The result is a fully automated AI meme factory.</p><h3>Final 
Architecture</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A61Pe7XaXYcaNRFVM1F3FA.png" /><figcaption>Architectural Diagram</figcaption></figure><h3>Event-Driven Design</h3><p>The workflow is triggered entirely through events:</p><ul><li>S3 upload event</li><li>EventBridge rule</li><li>Step Functions orchestration</li></ul><p>This creates a clean and loosely coupled system.</p><h3>AI-Powered Workflow</h3><p>The app combines:</p><ul><li>Amazon Rekognition for image understanding</li><li>Gemini AI for caption generation</li></ul><p>This creates surprisingly funny meme captions with almost zero logic on your side.</p><h3>Prerequisites</h3><p>Before starting, make sure you have:</p><ul><li>AWS Account with sufficient permissions</li><li>AWS CLI configured locally</li><li>Terraform v1.5+</li><li>Node.js v20 LTS or higher</li><li>Gemini API Key (the free tier works fine; you can obtain one from Google AI Studio)</li></ul><h3>Project Structure</h3><p>Here’s the recommended layout:</p><pre>auto-meme-generator/<br>├── layer/<br>│   └── nodejs/<br>│       └── package.json       ← Sharp dependency for the Lambda layer<br>├── src/<br>│   ├── analyze/index.mjs      ← Step 1: Rekognition<br>│   ├── caption/index.mjs      ← Step 2: Gemini AI (Top/Bottom text)<br>│   ├── compose/               ← Step 3: Sharp image composer<br>│   │   ├── index.mjs          <br>│   │   └── fonts/Anton.ttf    ← Embedded meme font<br>│   ├── status/index.mjs       ← Polling API Endpoint<br>│   ├── upload/index.mjs       ← Presigned URL API Endpoint<br>│   └── frontend/index.html    ← The Web UI<br>└── terraform/<br>    ├── main.tf                ← S3, DynamoDB, Secrets<br>    ├── lambda.tf              ← Lambda Provisioning<br>    ├── iam.tf                 ← Least-Privilege Roles<br>    ├── sfn.tf                 ← Step Functions<br>    ├── events.tf              ← EventBridge<br>    ├── api.tf                 ← API Gateway v2 routes<br>    ├── cloudfront.tf     
     ← OAC &amp; CloudFront Distribution<br>    ├── frontend_sync.tf       ← Auto-syncs frontend code to S3 &amp; Invalidates Cache<br>    ├── outputs.tf             ← Deployment URLs<br>    └── variables.tf           ← The AWS region to deploy the infrastructure to</pre><p>Keeping infrastructure and application logic separated makes the project significantly easier to maintain.</p><h3>Building the Sharp Lambda Layer</h3><p>The meme composition step uses the Sharp library for image processing.</p><p>Because Sharp includes native binaries, it must be compiled for the Lambda Linux runtime.</p><h3>Create package.json</h3><p>Inside layer/nodejs/package.json:</p><pre>{<br>  &quot;name&quot;: &quot;nodejs&quot;,<br>  &quot;version&quot;: &quot;1.0.0&quot;,<br>  &quot;dependencies&quot;: {<br>    &quot;sharp&quot;: &quot;^0.34.5&quot;<br>  }<br>}</pre><h3>Install Linux-Compatible Dependencies</h3><p>From inside layer/nodejs, run (Sharp v0.33+ uses npm’s --os/--cpu/--libc flags to pick the right prebuilt Linux binaries):</p><pre>npm install --os=linux --cpu=x64 --libc=glibc</pre><h3>Zip the Layer</h3><pre>cd ..<br>zip -r ../terraform/sharp_layer.zip nodejs/</pre><p>This ZIP file becomes the Lambda Layer attached to the compose function.</p><h3>Upload API Lambda</h3><p>The first Lambda handles uploads.</p><p>Its responsibilities:</p><ul><li>Generate a UUID (memeId)</li><li>Create a presigned S3 PUT URL</li><li>Insert a PENDING record into DynamoDB</li></ul><p>This allows the frontend to upload images directly to S3 without exposing AWS credentials.</p><h3>Status API Lambda</h3><p>The frontend continuously polls for meme completion.</p><p>This Lambda:</p><ul><li>Reads the meme status from DynamoDB</li><li>Returns a presigned GET URL when the meme is complete</li></ul><p>This creates a lightweight async workflow without needing WebSockets.</p><h3>Step Functions Pipeline</h3><p>This is where the magic happens.</p><h3>Step 1: Analyze the Image</h3><p>The Analyze Lambda:</p><ul><li>Extracts the memeId</li><li>Calls Amazon Rekognition</li><li>Detects labels inside the 
image</li></ul><p>Example labels:</p><pre>[<br>  &quot;Cat&quot;,<br>  &quot;Laptop&quot;,<br>  &quot;Person&quot;<br>]</pre><p>These labels become the context for Gemini AI.</p><h3>Step 2: Generate Meme Captions with Gemini</h3><p>This Lambda:</p><ul><li>Reads the Gemini API key from AWS Secrets Manager</li><li>Sends Rekognition labels to Gemini</li><li>Requests a meme caption in a strict format</li></ul><p>Example prompt:</p><pre>Create a sarcastic meme caption using these labels:<br>Cat, Laptop, Person</pre><pre>Return:<br>TOP TEXT|BOTTOM TEXT</pre><p>Example response:</p><pre>WORKING FROM HOME|SUPERVISOR DETECTED</pre><h3>Step 3: Compose the Meme</h3><p>This Lambda handles image rendering using Sharp.</p><p>It:</p><ul><li>Downloads the uploaded image</li><li>Loads the Anton.ttf meme font</li><li>Creates SVG overlays</li><li>Places text at the top and bottom</li><li>Uploads the final meme to S3</li><li>Updates DynamoDB status to COMPLETED</li></ul><p>This entire process runs inside Lambda in just a few seconds.</p><h3>Secure Frontend Hosting with CloudFront + OAC</h3><p>One of the best upgrades in this project is secure frontend hosting.</p><p>Instead of making the S3 bucket public:</p><ul><li>CloudFront sits in front</li><li>Origin Access Control (OAC) is enabled</li><li>S3 blocks all public access</li></ul><p>This means users can only access content through CloudFront.</p><p>Benefits:</p><ul><li>HTTPS by default</li><li>Better caching</li><li>Improved security</li><li>Production-grade architecture</li></ul><h3>Deploying Everything with Terraform</h3><p>After your application code is ready:</p><h3>Install Dependencies</h3><pre>cd src/upload &amp;&amp; npm install<br>cd ../status &amp;&amp; npm install</pre><h3>Deploy Infrastructure</h3><pre>cd ../../terraform<br>terraform init<br>terraform apply -auto-approve</pre><p>Terraform provisions:</p><ul><li>S3 Buckets</li><li>Lambda Functions</li><li>IAM Roles</li><li>DynamoDB</li><li>API Gateway</li><li>Step 
Functions</li><li>EventBridge</li><li>CloudFront Distribution</li><li>Secrets Manager</li></ul><h3><strong>Automating the Frontend Deployment</strong></h3><p>In a typical serverless project, you might deploy infrastructure with Terraform and then manually run aws s3 sync to upload your frontend files. Here we want a true &quot;one-command deploy&quot;.</p><p>To achieve this, we added a frontend_sync.tf configuration that:</p><ol><li>Uses the aws_s3_object resource with a for_each loop to automatically upload all files from the src/frontend directory to the S3 bucket.</li><li>Computes the MD5 hash of the frontend files.</li><li>Uses a null_resource to trigger a local AWS CLI command (aws cloudfront create-invalidation) <em>only</em> when the frontend file contents change.</li></ol><p>Now, when you run terraform apply, Terraform detects changes to your HTML/CSS/JS, uploads them, and clears the CloudFront cache automatically. Magic!</p><h3>Important Final Step</h3><p>Terraform creates the secret automatically, but with a placeholder value.</p><p>You must replace:</p><pre>REPLACE_ME_IN_AWS_CONSOLE</pre><p>with your real Gemini API key in AWS Secrets Manager.</p><p>Without this, caption generation will fail.</p><h3>The Frontend</h3><p>The frontend is intentionally over-the-top.</p><p>Features include:</p><ul><li>Thick black borders</li><li>Bright accent colors</li><li>Heavy drop shadows</li><li>Smooth hover animations</li><li>Mobile responsiveness</li><li>Dependency-free JavaScript</li></ul><p>It gives the project personality instead of feeling like another generic AWS dashboard clone.</p><h3>The Result?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uOorWTtdfp-FNjQZkuIh8g.png" /><figcaption>The meme generated in the website!</figcaption></figure><h3>Why This Project Is Great for Learning</h3><p>This single project teaches:</p><ul><li>Serverless architecture</li><li>Event-driven design</li><li>AI integration</li><li>Infrastructure as 
Code</li><li>Presigned URL workflows</li><li>CloudFront security best practices</li><li>Image processing in Lambda</li></ul><p>It’s also an excellent portfolio project because it combines:</p><ul><li>Frontend</li><li>Backend</li><li>AI</li><li>Cloud Engineering</li></ul><p>into one deployable application.</p><h3>Tearing Down the Infrastructure</h3><p>Serverless is cheap, but leaving resources lying around is bad practice. To tear down the entire project, simply run:</p><pre>terraform destroy -auto-approve</pre><p><em>Pro-Tip:</em> By default, Terraform will refuse to delete an S3 bucket that has objects inside it. To make teardown seamless, we added force_destroy = true to our S3 bucket configurations in Terraform, allowing it to wipe the buckets clean automatically during destruction.</p><h3>Potential Improvements</h3><p>Here are some ideas for taking it further:</p><h4>1. Add User Authentication</h4><h4>2. Generate Multiple Meme Styles</h4><h4>3. Add Queueing</h4><h4>4. Store Meme History</h4><h4>5. Add Social Sharing</h4><h3>Final Thoughts</h3><p>This project turned out to be one of the most fun serverless builds I’ve worked on.</p><p>It combines:</p><ul><li>AI</li><li>Serverless</li><li>Image processing</li><li>Modern frontend design</li><li>Infrastructure automation</li></ul><p>into something genuinely entertaining.</p><blockquote>However, there are rough edges. The Gemini API key setup requires a manual console step that breaks the “one command deploy” story slightly.</blockquote><blockquote>While we completely automated the frontend upload and cache invalidation via Terraform, pasting the API Gateway URL into index.html is the kind of thing you&#39;d want to automate in a more polished version (perhaps using Terraform&#39;s templatefile function).</blockquote><p>And at the end of it, you have a meme generator. 
Which is its own reward.</p><p>If you’re learning AWS serverless architecture, this is the kind of project that teaches real-world patterns while still being fun enough to actually finish.</p><p>Happy meme generating!!! :D</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I built a Serverless AI Bot to Read AWS News So I Don’t Have To]]></title>
            <link>https://medium.com/@rojansedhai01/i-built-a-serverless-ai-bot-to-read-aws-news-so-i-dont-have-to-0ccf42f8ff68?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/0ccf42f8ff68</guid>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Mon, 22 Dec 2025 15:23:12 GMT</pubDate>
            <atom:updated>2025-12-24T02:33:32.591Z</atom:updated>
<content:encoded><![CDATA[<h3>I built a Serverless AI Bot to Read AWS News So I Don’t Have To!!!</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HV31oYbpKuyFSyf55BQzpQ.png" /><figcaption>AI Cloud Curator</figcaption></figure><p>As someone who uses AWS on a daily basis, keeping up with AWS announcements feels like drinking from a firehose. The “What’s New” feed updates dozens of times a day. I didn’t want to subscribe to <em>another</em> newsletter; I wanted an executive assistant who could read the feed, filter out what I’ve already seen, and send me a concise summary of the things that matter.</p><p>In this guide, I’ll show you how to build <strong>The AI Cloud Curator</strong>, a fully automated, serverless system that scrapes AWS news, summarizes it using <strong>Amazon Bedrock (Claude 3 Haiku)</strong>, and delivers it to your inbox.</p><p>I’ll build this using <strong>Infrastructure as Code (Terraform)</strong> while keeping your credentials safe.</p><h3>The Architecture</h3><p>We are going 100% Serverless. This keeps the project low-maintenance, and it fits almost entirely within the AWS Free Tier (or costs very little if you don’t have it).</p><ol><li><strong>EventBridge Scheduler:</strong> Triggers the workflow every morning at 8:00 AM.</li><li><strong>AWS Lambda (Python):</strong> Fetches the RSS feed and orchestrates the logic.</li><li><strong>Amazon DynamoDB:</strong> Acts as the memory state. 
It stores the IDs of articles we’ve already processed so we don’t get duplicate emails.</li><li><strong>Amazon Bedrock (Claude 3 Haiku):</strong> Reads the technical announcement and rewrites it into a 2-sentence executive summary.</li><li><strong>Amazon SNS:</strong> The delivery service that pushes the email to your inbox.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/969/1*7EG2Hl9zaNQzJ5ThGMCP1A.png" /><figcaption>Architectural Diagram for the AI Cloud Curator!</figcaption></figure><h3>Part 1: Prerequisites &amp; Setup</h3><p>Before writing code, we need to set the stage.</p><h4>1. AWS CLI &amp; Credentials</h4><p>If you haven’t already, install the <a href="https://aws.amazon.com/cli/">AWS CLI</a> and configure it with a user that has Admin permissions.</p><pre>aws configure</pre><ul><li><strong>AWS Access Key ID:</strong> [Enter your key]</li><li><strong>AWS Secret Access Key:</strong> [Enter your secret]</li><li><strong>Default region name:</strong> us-east-1 (We will use us-east-1 as it has the widest Bedrock model availability).</li></ul><h4>2. Install Terraform</h4><p>We will use Terraform to deploy our infrastructure. Download and install it from <a href="https://developer.hashicorp.com/terraform/downloads">HashiCorp’s website</a>.</p><p>Verify it works:</p><pre>terraform -version</pre><h4>3. Enable Amazon Bedrock Models</h4><p>This is a critical step. By default, you do not have access to Bedrock models.</p><ul><li>Log in to the AWS Console and search for <strong>Amazon Bedrock</strong>.</li><li>In the left sidebar, scroll down to <strong>Model access</strong>.</li><li>Click the orange <strong>Modify model access</strong> button.</li><li>Check the box for <strong>Anthropic</strong> -&gt; <strong>Claude 3 Haiku</strong>.</li><li>Fill out the use case details and <strong>Submit</strong>. Access is usually granted instantly.</li></ul><h3>Part 2: The Project Structure</h3><p>Create a new folder for your project. 
Good organization is key to clean code.</p><pre>mkdir ai-cloud-curator<br>cd ai-cloud-curator<br>mkdir src</pre><p>Create three empty files to start:</p><ul><li>src/index.py (The Python application logic)</li><li>main.tf (The Infrastructure definition)</li><li>terraform.tfvars (Your secret configuration variables)</li></ul><h3>Part 3: The Application Logic (Python)</h3><p>Open src/index.py. We are using Python’s standard libraries (urllib, xml) to avoid the complexity of managing external pip dependencies and Lambda Layers.</p><pre>import boto3<br>import json<br>import os<br>import urllib.request<br>import xml.etree.ElementTree as ET<br>from datetime import datetime<br><br># Initialize clients<br>bedrock = boto3.client(service_name=&#39;bedrock-runtime&#39;, region_name=os.environ[&#39;BEDROCK_REGION&#39;])<br>dynamodb = boto3.resource(&#39;dynamodb&#39;)<br>sns = boto3.client(&#39;sns&#39;)<br>TABLE_NAME = os.environ[&#39;TABLE_NAME&#39;]<br>SNS_TOPIC_ARN = os.environ[&#39;SNS_TOPIC_ARN&#39;]<br>RSS_URL = &quot;https://aws.amazon.com/about-aws/whats-new/recent/feed/&quot;<br>def lambda_handler(event, context):<br>    print(&quot;Fetching AWS RSS Feed...&quot;)<br>    <br>    # 1. 
Fetch RSS Feed<br>    try:<br>        with urllib.request.urlopen(RSS_URL) as response:<br>            rss_data = response.read()<br>    except Exception as e:<br>        print(f&quot;Error fetching RSS: {e}&quot;)<br>        return<br>        <br>    root = ET.fromstring(rss_data)<br>    # AWS feed items are usually standard RSS item tags<br>    items = root.findall(&quot;.//item&quot;)<br>    <br>    # Process only the top 3 newest items to save cost/time per run<br>    processed_count = 0<br>    table = dynamodb.Table(TABLE_NAME)<br>    <br>    for item in items[:3]:<br>        title = item.find(&quot;title&quot;).text<br>        link = item.find(&quot;link&quot;).text<br>        guid = item.find(&quot;guid&quot;).text<br>        description = item.find(&quot;description&quot;).text<br>        <br>        # 2. Check Deduplication (DynamoDB)<br>        response = table.get_item(Key={&#39;article_id&#39;: guid})<br>        if &#39;Item&#39; in response:<br>            print(f&quot;Skipping existing article: {title}&quot;)<br>            continue<br>            <br>        print(f&quot;Processing new article: {title}&quot;)<br>        <br>        # 3. Summarize with Bedrock (Claude 3 Haiku)<br>        summary = generate_summary(title, description)<br>        <br>        # 4. Send Notification<br>        message = f&quot;☁️ **AWS New Announcement**\n\n**{title}**\n\n{summary}\n\nRead more: {link}&quot;<br>        sns.publish(<br>            TopicArn=SNS_TOPIC_ARN,<br>            Message=message,<br>            Subject=f&quot;AWS News: {title[:50]}...&quot;<br>        )<br>        <br>        # 5. 
Save to DynamoDB<br>        table.put_item(<br>            Item={<br>                &#39;article_id&#39;: guid,<br>                &#39;processed_at&#39;: str(datetime.now()),<br>                &#39;title&#39;: title<br>            }<br>        )<br>        processed_count += 1<br>        <br>    return {<br>        &#39;statusCode&#39;: 200,<br>        &#39;body&#39;: json.dumps(f&#39;Processed {processed_count} new articles.&#39;)<br>    }<br>def generate_summary(title, text):<br>    prompt = f&quot;&quot;&quot;<br>    You are a Cloud Architect assistant. Summarize this AWS announcement in 2 clear sentences. Focus on the value prop and technical benefit.<br>    <br>    Title: {title}<br>    Content: {text}<br>    &quot;&quot;&quot;<br>    <br>    body = json.dumps({<br>        &quot;anthropic_version&quot;: &quot;bedrock-2023-05-31&quot;,<br>        &quot;max_tokens&quot;: 150,<br>        &quot;messages&quot;: [<br>            {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt}<br>        ]<br>    })<br>    <br>    try:<br>        response = bedrock.invoke_model(<br>            modelId=&quot;anthropic.claude-3-haiku-20240307-v1:0&quot;,<br>            body=body<br>        )<br>        response_body = json.loads(response.get(&quot;body&quot;).read())<br>        return response_body[&quot;content&quot;][0][&quot;text&quot;]<br>    except Exception as e:<br>        print(f&quot;Bedrock Error: {e}&quot;)<br>        return &quot;Summary unavailable.&quot;</pre><h3>Part 4: Infrastructure as Code (Terraform)</h3><p>Open main.tf. This configuration sets up everything: IAM permissions, the database, and the scheduler.</p><p><strong>Note:</strong> Notice I am <em>not</em> hardcoding the email address here. 
We define it as a sensitive variable.</p><pre>provider &quot;aws&quot; {<br>  region = &quot;us-east-1&quot;<br>}<br><br># --- Variables ---<br>variable &quot;email_address&quot; {<br>  description = &quot;The email to receive notifications&quot;<br>  type        = string<br>  sensitive   = true <br>}<br># --- 1. DynamoDB for Deduplication ---<br>resource &quot;aws_dynamodb_table&quot; &quot;news_tracker&quot; {<br>  name           = &quot;aws-news-tracker&quot;<br>  billing_mode   = &quot;PAY_PER_REQUEST&quot;<br>  hash_key       = &quot;article_id&quot;<br>  attribute {<br>    name = &quot;article_id&quot;<br>    type = &quot;S&quot;<br>  }<br>}<br># --- 2. SNS Topic for Email ---<br>resource &quot;aws_sns_topic&quot; &quot;daily_briefing&quot; {<br>  name = &quot;aws-daily-briefing&quot;<br>}<br>resource &quot;aws_sns_topic_subscription&quot; &quot;email_sub&quot; {<br>  topic_arn = aws_sns_topic.daily_briefing.arn<br>  protocol  = &quot;email&quot;<br>  endpoint  = var.email_address<br>}<br># --- 3. 
IAM Role &amp; Permissions ---<br>resource &quot;aws_iam_role&quot; &quot;lambda_role&quot; {<br>  name = &quot;ai_curator_role&quot;<br>  assume_role_policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [{<br>      Action = &quot;sts:AssumeRole&quot;<br>      Effect = &quot;Allow&quot;<br>      Principal = { Service = &quot;lambda.amazonaws.com&quot; }<br>    }]<br>  })<br>}<br>resource &quot;aws_iam_role_policy&quot; &quot;lambda_policy&quot; {<br>  name = &quot;ai_curator_policy&quot;<br>  role = aws_iam_role.lambda_role.id<br>  policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Effect = &quot;Allow&quot;,<br>        Action = [&quot;logs:CreateLogGroup&quot;, &quot;logs:CreateLogStream&quot;, &quot;logs:PutLogEvents&quot;],<br>        Resource = &quot;arn:aws:logs:*:*:*&quot;<br>      },<br>      {<br>        Effect = &quot;Allow&quot;,<br>        Action = [&quot;dynamodb:GetItem&quot;, &quot;dynamodb:PutItem&quot;],<br>        Resource = aws_dynamodb_table.news_tracker.arn<br>      },<br>      {<br>        Effect = &quot;Allow&quot;,<br>        Action = &quot;sns:Publish&quot;,<br>        Resource = aws_sns_topic.daily_briefing.arn<br>      },<br>      {<br>        Effect = &quot;Allow&quot;,<br>        Action = &quot;bedrock:InvokeModel&quot;,<br>        Resource = &quot;arn:aws:bedrock:*:*:foundation-model/anthropic.claude-3-haiku-20240307-v1:0&quot;<br>      }<br>    ]<br>  })<br>}<br># --- 4. 
The Lambda Function ---<br>data &quot;archive_file&quot; &quot;lambda_zip&quot; {<br>  type        = &quot;zip&quot;<br>  source_file = &quot;${path.module}/src/index.py&quot;<br>  output_path = &quot;${path.module}/lambda_function.zip&quot;<br>}<br>resource &quot;aws_lambda_function&quot; &quot;curator_lambda&quot; {<br>  filename      = data.archive_file.lambda_zip.output_path<br>  function_name = &quot;ai-cloud-curator&quot;<br>  role          = aws_iam_role.lambda_role.arn<br>  handler       = &quot;index.lambda_handler&quot;<br>  runtime       = &quot;python3.9&quot;<br>  timeout       = 30 <br>  source_code_hash = data.archive_file.lambda_zip.output_base64sha256<br>  environment {<br>    variables = {<br>      TABLE_NAME     = aws_dynamodb_table.news_tracker.name<br>      SNS_TOPIC_ARN  = aws_sns_topic.daily_briefing.arn<br>      BEDROCK_REGION = &quot;us-east-1&quot;<br>    }<br>  }<br>}<br># --- 5. EventBridge Scheduler (Runs daily at 8 AM UTC) ---<br>resource &quot;aws_cloudwatch_event_rule&quot; &quot;daily_trigger&quot; {<br>  name                = &quot;daily-news-trigger&quot;<br>  schedule_expression = &quot;cron(0 8 * * ? *)&quot;<br>}<br>resource &quot;aws_cloudwatch_event_target&quot; &quot;lambda_target&quot; {<br>  rule      = aws_cloudwatch_event_rule.daily_trigger.name<br>  target_id = &quot;SendToLambda&quot;<br>  arn       = aws_lambda_function.curator_lambda.arn<br>}<br>resource &quot;aws_lambda_permission&quot; &quot;allow_eventbridge&quot; {<br>  statement_id  = &quot;AllowExecutionFromEventBridge&quot;<br>  action        = &quot;lambda:InvokeFunction&quot;<br>  function_name = aws_lambda_function.curator_lambda.function_name<br>  principal     = &quot;events.amazonaws.com&quot;<br>  source_arn    = aws_cloudwatch_event_rule.daily_trigger.arn<br>}</pre><h3>Part 5: Security &amp; Deployment</h3><h4>1. 
Configure Secrets</h4><p>Instead of putting your email in the code (which might end up on GitHub!), we use a terraform.tfvars file.</p><p>Open terraform.tfvars and add:</p><pre>email_address = &quot;your-actual-email@gmail.com&quot;</pre><p><strong><em>Tip:</em></strong><em> If you are pushing this to GitHub, create a </em><em>.gitignore file and add </em><em>terraform.tfvars to it. This prevents your email from being exposed publicly.</em></p><h4>2. Deploy</h4><p>Run the following commands in your terminal:</p><pre># Initialize Terraform<br>terraform init<br><br># Plan the deployment (Check for errors)<br>terraform plan<br><br># Deploy the resources<br>terraform apply</pre><p>Type yes when prompted.</p><h4>3. Confirm Subscription</h4><p>Wait a moment, then check your email inbox. You will receive an email from “AWS Notifications.” <strong>You must click the ‘Confirm subscription’ link</strong> inside that email. If you don’t, the system cannot send you summaries.</p><h3>Part 6: Testing &amp; Validation</h3><p>You don’t have to wait for 8:00 AM tomorrow to see if it works. Let’s force a run now.</p><p>You can do this via the AWS Console (Lambda -&gt; Test), or via the CLI:</p><pre>aws lambda invoke --function-name ai-cloud-curator response.json</pre><p><strong>The Result:</strong></p><ul><li>The script will fetch the RSS feed.</li><li>It will summarize the latest 3 articles.</li><li><strong>Check your inbox:</strong> You should receive 3 new emails.</li><li><strong>Idempotency Test:</strong> Run the command again. You should receive <strong>0 emails</strong>. 
This confirms that DynamoDB is successfully tracking which articles you’ve already seen.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-rZmkjVxLB0uFaZU_627NA.png" /><figcaption>Summarized Email 1</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eKuWGQxmYhI8WI4-Q3giFA.png" /><figcaption>Summarized Email 2</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Xd2SuEBKNnNEN3wyQM6OGw.png" /><figcaption>Summarized Email 3</figcaption></figure><blockquote><strong><em>Critical Note: </em></strong><em>You may get an error like this when calling the Claude model: </em>“An error occurred (AccessDeniedException) when calling the InvokeModel operation: Model access is denied due to IAM user or service role is not authorized to perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions, aws-marketplace:Subscribe) to enable access to this model.”</blockquote><blockquote><em>Confirm that you have subscribed to the model and then try testing again.</em></blockquote><blockquote><strong><em>Note: By default, AWS EventBridge schedules run in UTC (Coordinated Universal Time), not your local time, so adjust the cron expression for your timezone.</em></strong></blockquote><h3>Part 7: Cost Analysis</h3><p>Is this expensive? No, not really.</p><ul><li>The total cost should be around $0.03–$0.10 USD per month. This can vary depending on your usage, so do set up a <a href="https://medium.com/p/8b0c221ccfeb">billing alarm</a> just in case!</li></ul><h3>Part 8: Cleanup (Destroy)</h3><p>If you want to tear down the project to ensure no future costs or clutter in your AWS account, Terraform makes this easy.</p><p>Run:</p><pre>terraform destroy</pre><p>Type yes. 
This will remove the Lambda, the IAM roles, the SNS topic, and the DynamoDB table.</p><h3>Conclusion</h3><p>By combining <strong>Terraform</strong>, <strong>Serverless</strong>, and <strong>Generative AI</strong>, we’ve built a practical tool that solves a real problem (okay, maybe not that real, but still). This project demonstrates how easy it is to integrate powerful AI models like Claude 3 into standard AWS workflows using Bedrock.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Sleep Soundly: Automating AWS Cost Alarms]]></title>
            <link>https://medium.com/@rojansedhai01/sleep-soundly-automating-aws-cost-alarms-8b0c221ccfeb?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/8b0c221ccfeb</guid>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 09:46:40 GMT</pubDate>
            <atom:updated>2025-12-16T09:46:40.444Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WcxcH4YiasWrEjG2aTLaAA.png" /></figure><p>The scariest part of learning AWS isn’t the steep learning curve or the vast number of services. It’s the surprise bill at the end of the month.</p><p>By now we’ve all probably heard the horror stories, or seen the LinkedIn posts (“Unexpected AWS Costs”, “The Hidden Costs of AWS”, and so on… :D), where a user leaves a high-performance EC2 instance running over the weekend, or a loop in a Lambda function triggers millions of requests. Suddenly, a $5 hobby project turns into a $500 debt.</p><p>As cloud engineers, we often focus heavily on <strong>DevOps</strong> (deployment and operations) but neglect <strong>FinOps</strong> (financial operations). In this guide, I will show you how to use Terraform to build an automated budget watchdog that alerts you the moment your spending trends in the wrong direction.</p><h3>The Goal</h3><p>We are going to build a reusable Terraform module that deploys:</p><ol><li>An <strong>AWS Budget</strong> to monitor monthly spending.</li><li>An <strong>SNS Topic</strong> to handle alert notifications.</li><li><strong>Dual-Trigger Alerts</strong>: One for <em>actual</em> spend and one for <em>forecasted</em> spend.</li></ol><h3>Step 0: Prerequisites</h3><p>Before writing a single line of Terraform, we need to set up a secure environment. Running Terraform with your root account or “AdministratorAccess” is a security anti-pattern. We will follow the <strong>Principle of Least Privilege</strong>.</p><h4>1. 
Create a “Terraform User”</h4><p>Instead of giving our bot the keys to the kingdom, we will create a dedicated IAM user (terraform-finops-bot) with permission <em>only</em> to manage the services of Budgets and SNS.</p><ol><li>Log in to the AWS Console and go to <strong>IAM</strong> -&gt; <strong>Users</strong> -&gt; <strong>Create user</strong>.</li><li>Name: terraform-finops-bot</li><li>Select <strong>Attach policies directly</strong> -&gt; <strong>Create policy</strong> -&gt; <strong>JSON</strong>.</li><li>Paste the following policy. This JSON grants only the exact permissions needed to deploy and manage our specific resources (including reading tags, which Terraform requires):</li></ol><pre>{<br>    &quot;Version&quot;: &quot;2012-10-17&quot;,<br>    &quot;Statement&quot;: [<br>        {<br>            &quot;Sid&quot;: &quot;ManageBudgets&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;budgets:ViewBudget&quot;,<br>                &quot;budgets:ModifyBudget&quot;,<br>                &quot;budgets:ListTagsForResource&quot;,<br>                &quot;budgets:TagResource&quot;,<br>                &quot;budgets:UntagResource&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        },<br>        {<br>            &quot;Sid&quot;: &quot;ManageSNS&quot;,<br>            &quot;Effect&quot;: &quot;Allow&quot;,<br>            &quot;Action&quot;: [<br>                &quot;sns:CreateTopic&quot;,<br>                &quot;sns:SetTopicAttributes&quot;,<br>                &quot;sns:GetTopicAttributes&quot;,<br>                &quot;sns:ListTopics&quot;,<br>                &quot;sns:DeleteTopic&quot;,<br>                &quot;sns:Subscribe&quot;,<br>                &quot;sns:Unsubscribe&quot;,<br>                &quot;sns:GetSubscriptionAttributes&quot;,<br>                &quot;sns:ListTagsForResource&quot;,<br>                &quot;sns:TagResource&quot;,<br>                
&quot;sns:UntagResource&quot;<br>            ],<br>            &quot;Resource&quot;: &quot;*&quot;<br>        }<br>    ]<br>}</pre><h4>2. Configure AWS CLI (No Hardcoded Keys!)</h4><p><strong>Never</strong> paste your access keys into your Terraform files. If you commit them to GitHub, bots will find them in seconds. Instead, use the AWS CLI to store them locally.</p><p>Run this in your terminal:</p><pre>aws configure --profile finops-project</pre><p>Paste your new user’s Access Key ID and Secret Access Key when prompted.</p><h3>Step 1: Project Structure</h3><p>Organizing your Infrastructure as Code is just as important as writing it. Here is the folder structure we will use:</p><pre>aws-finops-budget/<br>├── main.tf           # Provider configuration<br>├── variables.tf      # Input variables (email, budget limit)<br>├── sns.tf            # Notification infrastructure<br>├── budget.tf         # The budget logic and thresholds<br>├── .gitignore        # Exclude .terraform and .tfstate</pre><blockquote><strong>Version Control Note:</strong> Even though this guide focuses on running Terraform locally and doesn’t explicitly require pushing to GitHub, I have included a .gitignore file.</blockquote><blockquote>This is a critical safety habit. It ensures that if you <em>do</em> decide to initialize a Git repository later, you won&#39;t accidentally commit sensitive local files (like terraform.tfstate or .tfvars) to the public internet.</blockquote><h3>Step 2: The Infrastructure as Code</h3><h4>1. Provider Configuration (main.tf)</h4><p>We configure Terraform to use the secure profile we created in Step 0.</p><pre>terraform {<br>  required_providers {<br>    aws = {<br>      source  = &quot;hashicorp/aws&quot;<br>      version = &quot;~&gt; 5.0&quot;<br>    }<br>  }<br>}<br><br>provider &quot;aws&quot; {<br>  region  = var.aws_region<br>  profile = &quot;finops-project&quot;<br>}</pre><h4>2. 
Variables (variables.tf)</h4><p>Here, the email is marked as sensitive so it doesn&#39;t show up in plain text in our logs.</p><pre>variable &quot;aws_region&quot; {<br>  description = &quot;AWS Region to deploy resources&quot;<br>  type        = string<br>  default     = &quot;us-east-1&quot;<br>}<br>variable &quot;billing_email&quot; {<br>  description = &quot;The email address to receive budget alerts&quot;<br>  type        = string<br>  sensitive   = true <br>}<br>variable &quot;budget_limit&quot; {<br>  description = &quot;The monthly budget limit in USD&quot;<br>  type        = string<br>  default     = &quot;10&quot;<br>}</pre><h4>3. The Communication Channel (sns.tf)</h4><p>We create an SNS topic and an access policy that allows the AWS Budgets service to publish to it.</p><pre>resource &quot;aws_sns_topic&quot; &quot;billing_alerts&quot; {<br>  name = &quot;aws-billing-alerts-topic&quot;<br>}<br>resource &quot;aws_sns_topic_policy&quot; &quot;default&quot; {<br>  arn = aws_sns_topic.billing_alerts.arn<br>  policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Sid    = &quot;AWSBudgets-notification-permissions&quot;<br>        Effect = &quot;Allow&quot;<br>        Principal = {<br>          Service = &quot;budgets.amazonaws.com&quot;<br>        }<br>        Action   = &quot;SNS:Publish&quot;<br>        Resource = aws_sns_topic.billing_alerts.arn<br>      }<br>    ]<br>  })<br>}<br>resource &quot;aws_sns_topic_subscription&quot; &quot;email_target&quot; {<br>  topic_arn = aws_sns_topic.billing_alerts.arn<br>  protocol  = &quot;email&quot;<br>  endpoint  = var.billing_email<br>}</pre><h4>4. The Budget Logic (budget.tf)</h4><p>This is the core of our FinOps solution. 
We set up two alerts:</p><ul><li><strong>Actual Spend:</strong> Warns when we hit 80% of the budget.</li><li><strong>Forecasted Spend:</strong> Warns if AWS <em>predicts</em> we will exceed the budget based on current usage trends.</li></ul><pre>resource &quot;aws_budgets_budget&quot; &quot;monthly_cost&quot; {<br>  name              = &quot;monthly-cloud-budget&quot;<br>  budget_type       = &quot;COST&quot;<br>  limit_amount      = var.budget_limit<br>  limit_unit        = &quot;USD&quot;<br>  time_unit         = &quot;MONTHLY&quot;<br>  time_period_start = &quot;2024-01-01_00:00&quot;<br><br># Alert 1: Actual Spend &gt; 80%<br>  notification {<br>    comparison_operator        = &quot;GREATER_THAN&quot;<br>    threshold                  = 80<br>    threshold_type             = &quot;PERCENTAGE&quot;<br>    notification_type          = &quot;ACTUAL&quot;<br>    subscriber_sns_topic_arns  = [aws_sns_topic.billing_alerts.arn]<br>  }<br>  # Alert 2: Forecasted Spend &gt; 100%<br>  notification {<br>    comparison_operator        = &quot;GREATER_THAN&quot;<br>    threshold                  = 100<br>    threshold_type             = &quot;PERCENTAGE&quot;<br>    notification_type          = &quot;FORECASTED&quot;<br>    subscriber_sns_topic_arns  = [aws_sns_topic.billing_alerts.arn]<br>  }<br>}</pre><h3>Step 3: Deployment</h3><p>To deploy this, run the standard Terraform lifecycle commands in your terminal. 
Here, the email variable is passed via the command line to keep it out of our code.</p><pre>terraform init<br>terraform plan -var=&quot;billing_email=<strong>your_email@example.com</strong>&quot;<br>terraform apply -var=&quot;billing_email=<strong>your_email@example.com</strong>&quot;</pre><h3>The “Don’t Skip This!” Step</h3><p>After terraform apply completes, Terraform has created the subscription request, but AWS will not send emails without permission.</p><ul><li><strong>Go to your email inbox.</strong></li><li>Find the email from “AWS Notifications.”</li><li>Click the <strong>Confirm subscription</strong> link.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ywgo0ljlEdjf8NbtAdE_4A.png" /><figcaption>SNS subscription email</figcaption></figure><p>If you skip this, the budget will trigger alerts, but you will never receive them.</p><h3>Step 4: Verification</h3><p>AWS billing data updates every 8–24 hours, so you likely won’t see a budget alert immediately. However, you should test the system to ensure the SNS topic is working.</p><ul><li>Go to the AWS Console -&gt; <strong>Simple Notification Service (SNS)</strong>.</li><li>Select aws-billing-alerts-topic -&gt; <strong>Publish message</strong>.</li><li>Enter a test subject and body, then click <strong>Publish</strong>.</li><li>Check your inbox. If the email arrives, your alert pipeline is live!</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GMhjrB4HaImNCNhfr2apDA.png" /><figcaption>Test Alert Email from SNS!</figcaption></figure><p><strong>Tip:</strong> To verify the budget logic, you can update your budget_limit variable to 0.01 and run terraform apply again. 
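For example, you can override both variables in one shot from the command line (this sketch uses the variable names defined in variables.tf above):</p><pre>terraform apply -var=&quot;billing_email=your_email@example.com&quot; -var=&quot;budget_limit=0.01&quot;</pre><p>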
By the next day, you should receive a genuine alarm from AWS stating you have exceeded your $0.01 budget.</p><h3>Step 5: Clean Up (Destroy)</h3><p>If this was just a lab experiment and you want to remove the resources to ensure zero costs, Terraform makes cleanup easy.</p><p>Run the destroy command:</p><pre>terraform destroy -var=&quot;billing_email=your_email@example.com&quot;</pre><p>Terraform will list the resources it is about to delete (the Budget, the SNS Topic, and the Policy). Type yes to confirm.</p><blockquote><strong>Note:</strong> This command removes the infrastructure resources. It <strong>does not delete</strong> the IAM user (terraform-finops-bot) we created in Step 0. Since that user doesn&#39;t cost money, you can keep it for your next Terraform project, or manually delete it in the IAM Console if you want a completely clean slate.</blockquote><h3>Conclusion</h3><p>By automating this setup with Terraform, you can include this module in every new environment you spin up. It takes about a minute to deploy, follows security best practices (least privilege, no hardcoded secrets), and provides lasting peace of mind.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8b0c221ccfeb" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Ultimate Guide to Secure Static Site Hosting (AWS + Terraform + GitHub Actions)]]></title>
            <link>https://medium.com/@rojansedhai01/the-ultimate-guide-to-secure-static-site-hosting-aws-terraform-github-actions-bbd426bb9b5a?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/bbd426bb9b5a</guid>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Tue, 02 Dec 2025 07:53:42 GMT</pubDate>
            <atom:updated>2025-12-02T07:53:42.583Z</atom:updated>
            <content:encoded><![CDATA[<p>When you think of deploying your website using GitHub Actions, the classic way is to generate an <strong>AWS Access Key ID </strong>and a <strong>Secret Access Key</strong> and then paste them into the GitHub Secrets.</p><blockquote>Also pray you never accidentally commit them or that they don’t leak! :D</blockquote><p>So, we are going to build a static site pipeline with <strong>security in mind</strong>.</p><p>We are going to use:</p><ol><li><strong>Terraform</strong> for Infrastructure as Code (IaC).</li><li><strong>AWS OIDC (OpenID Connect)</strong> so GitHub can deploy <em>without</em> us ever creating a long-term Access Key.</li><li><strong>CloudFront Origin Access Control (OAC)</strong> to ensure our S3 bucket is totally private (no public access needed!).</li></ol><p>Let’s build this right.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sUwPVq6QiuCXnR3f" /><figcaption>Architectural diagram of the workflow</figcaption></figure><h3>The Prerequisites</h3><p>Before we dive in, make sure you have:</p><ul><li>An <strong>AWS Account (Use IAM user, not the root account)</strong>.</li><li><strong>Terraform</strong> and <strong>AWS CLI</strong> installed on your machine.</li><li>A <strong>GitHub Account</strong>.</li></ul><h3>Step 1: The Project Structure</h3><p>Organization is key. 
We’re going to keep our infrastructure logic separate from our application code.</p><p>Create a folder named my-secure-site and set up the following structure:</p><pre>my-secure-site/<br>├── src/                        # Your website goes here<br>│   └── index.html<br>├── .github/<br>│   └── workflows/<br>│       └── deploy.yml          # The pipeline configuration<br>├── terraform/                  # Infrastructure configuration<br>│   ├── main.tf<br>│   ├── variables.tf<br>│   ├── s3.tf<br>│   ├── cloudfront.tf<br>│   ├── iam.tf<br>│   └── outputs.tf<br>└── .gitignore</pre><p><strong>Crucial:</strong> Create the .gitignore file immediately to prevent committing secrets or state files.</p><p><strong>Add the below to the </strong><strong>.gitignore file:</strong></p><pre>.terraform/<br>*.tfstate<br>*.tfstate.*<br>*.tfvars<br>.DS_Store</pre><h3>Step 2: Initialize Git &amp; Set Up the Repository</h3><p>Before we write the infrastructure code, let’s get our version control ready.</p><ul><li><strong>Create a Repository on GitHub:</strong></li><li>Go to GitHub.com and create a new repository (e.g., my-secure-site).</li><li>Do <em>not</em> initialize it with a README or gitignore (we have those locally).</li><li>Initialize Locally: Open your terminal in your project folder and run:</li></ul><pre># Initialize Git<br>git init<br><br># Create the main branch<br>git branch -M main<br><br># Add your files (Create a dummy index.html first if you haven&#39;t)<br>echo &quot;&lt;h1&gt;Hello World&lt;/h1&gt;&quot; &gt; src/index.html<br>git add .<br><br># First commit<br>git commit -m &quot;initial commit&quot;<br><br># Link to your GitHub repo (Replace USERNAME and REPO with your own)<br>git remote add origin https://github.com/USERNAME/my-secure-site.git<br><br># Push to GitHub<br>git push -u origin main</pre><p>Now your code is safe in GitHub, and we are ready to build the infrastructure that will deploy it.</p><h3>Step 3: The Infrastructure (Terraform)</h3><p>Let’s write the 
IaC. You can copy-paste the code, but do take a moment to understand the security logic explained below.</p><p>a. terraform/main.tf</p><p>This sets up the provider. We also fetch the GitHub certificate thumbprint dynamically here, so we don’t have to hardcode the numbers.</p><p>Terraform:</p><pre>provider &quot;aws&quot; {<br>  region = var.aws_region<br>}<br><br># Dynamically fetch the GitHub Actions certificate<br>data &quot;tls_certificate&quot; &quot;github&quot; {<br>  url = &quot;https://token.actions.githubusercontent.com/.well-known/openid-configuration&quot;<br>}<br>terraform {<br>  required_providers {<br>    aws = {<br>      source  = &quot;hashicorp/aws&quot;<br>      version = &quot;~&gt; 5.0&quot;<br>    }<br>  }<br>}</pre><p>b. Let’s now create terraform/s3.tf. Here we are <strong>blocking all public access</strong>. We only want CloudFront to see our files, not the whole internet directly.</p><p>Terraform:</p><pre>resource &quot;aws_s3_bucket&quot; &quot;site_bucket&quot; {<br>  bucket_prefix = &quot;${var.project_name}-&quot;<br>  force_destroy = true # Good for learning, remove for prod!<br>}<br><br># 1. Block ALL public access<br>resource &quot;aws_s3_bucket_public_access_block&quot; &quot;site_bucket_block&quot; {<br>  bucket = aws_s3_bucket.site_bucket.id<br>  block_public_acls       = true<br>  block_public_policy     = true<br>  ignore_public_acls      = true<br>  restrict_public_buckets = true<br>}<br># 2. Encrypt it<br>resource &quot;aws_s3_bucket_server_side_encryption_configuration&quot; &quot;enc&quot; {<br>  bucket = aws_s3_bucket.site_bucket.id<br>  rule {<br>    apply_server_side_encryption_by_default {<br>      sse_algorithm = &quot;AES256&quot;<br>    }<br>  }<br>}<br># 3. 
Policy: Allow ONLY CloudFront<br>resource &quot;aws_s3_bucket_policy&quot; &quot;site_bucket_policy&quot; {<br>  bucket = aws_s3_bucket.site_bucket.id<br>  policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Sid       = &quot;AllowCloudFront&quot;<br>        Effect    = &quot;Allow&quot;<br>        Principal = { Service = &quot;cloudfront.amazonaws.com&quot; }<br>        Action    = &quot;s3:GetObject&quot;<br>        Resource  = &quot;${aws_s3_bucket.site_bucket.arn}/*&quot;<br>        Condition = {<br>          StringEquals = {<br>            &quot;AWS:SourceArn&quot; = aws_cloudfront_distribution.site_distribution.arn<br>          }<br>        }<br>      }<br>    ]<br>  })<br>}</pre><p>c. terraform/cloudfront.tf<em>. </em>We use <strong>Origin Access Control (OAC)</strong> here. It’s the modern, secure replacement for the old OAI method.</p><p>Terraform:</p><pre>resource &quot;aws_cloudfront_origin_access_control&quot; &quot;site_oac&quot; {<br>  name                              = &quot;${var.project_name}-oac&quot;<br>  description                       = &quot;OAC for static site&quot;<br>  origin_access_control_origin_type = &quot;s3&quot;<br>  signing_behavior                  = &quot;always&quot;<br>  signing_protocol                  = &quot;sigv4&quot;<br>}<br>resource &quot;aws_cloudfront_distribution&quot; &quot;site_distribution&quot; {<br>  enabled             = true<br>  is_ipv6_enabled     = true<br>  default_root_object = &quot;index.html&quot;<br>  origin {<br>    domain_name              = aws_s3_bucket.site_bucket.bucket_regional_domain_name<br>    origin_id                = &quot;S3-${aws_s3_bucket.site_bucket.id}&quot;<br>    origin_access_control_id = aws_cloudfront_origin_access_control.site_oac.id<br>  }<br>  default_cache_behavior {<br>    allowed_methods  = [&quot;GET&quot;, &quot;HEAD&quot;]<br>    cached_methods   = [&quot;GET&quot;, &quot;HEAD&quot;]<br>    target_origin_id = 
&quot;S3-${aws_s3_bucket.site_bucket.id}&quot;<br>    cache_policy_id  = &quot;658327ea-f89d-4fab-a63d-7e88639e58f6&quot; # Managed-CachingOptimized<br>    viewer_protocol_policy = &quot;redirect-to-https&quot;<br>  }<br>  restrictions {<br>    geo_restriction {<br>      restriction_type = &quot;none&quot;<br>    }<br>  }<br>  viewer_certificate {<br>    cloudfront_default_certificate = true<br>  }<br>}</pre><p>d. terraform/iam.tf. Here we create an OIDC Provider that trusts GitHub. Then, we create a role that trusts <em>your specific repository</em>.</p><p>Terraform:</p><pre># Create the link between AWS and GitHub<br>resource &quot;aws_iam_openid_connect_provider&quot; &quot;github&quot; {<br>  url             = &quot;https://token.actions.githubusercontent.com&quot;<br>  client_id_list  = [&quot;sts.amazonaws.com&quot;]<br>  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]<br>}<br># The Role GitHub will &quot;assume&quot;<br>resource &quot;aws_iam_role&quot; &quot;github_actions_role&quot; {<br>  name = &quot;${var.project_name}-github-role&quot;<br>  assume_role_policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Action = &quot;sts:AssumeRoleWithWebIdentity&quot;<br>        Effect = &quot;Allow&quot;<br>        Principal = {<br>          Federated = aws_iam_openid_connect_provider.github.arn<br>        }<br>        Condition = {<br>          StringEquals = {<br>            # Only allow YOUR repository to assume this role<br>            &quot;token.actions.githubusercontent.com:aud&quot; = &quot;sts.amazonaws.com&quot;,<br>            &quot;token.actions.githubusercontent.com:sub&quot; = &quot;repo:${var.github_repo}:ref:refs/heads/main&quot;<br>          }<br>        }<br>      }<br>    ]<br>  })<br>}<br># Permissions: Upload to S3 and Invalidate CloudFront<br>resource &quot;aws_iam_role_policy&quot; &quot;github_actions_policy&quot; {<br>  name = 
&quot;${var.project_name}-policy&quot;<br>  role = aws_iam_role.github_actions_role.id<br>  policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Action   = [&quot;s3:PutObject&quot;, &quot;s3:ListBucket&quot;, &quot;s3:DeleteObject&quot;]<br>        Effect   = &quot;Allow&quot;<br>        Resource = [aws_s3_bucket.site_bucket.arn, &quot;${aws_s3_bucket.site_bucket.arn}/*&quot;]<br>      },<br>      {<br>        Action   = &quot;cloudfront:CreateInvalidation&quot;<br>        Effect   = &quot;Allow&quot;<br>        Resource = aws_cloudfront_distribution.site_distribution.arn<br>      }<br>    ]<br>  })<br>}</pre><h3>The Supporting Files</h3><p>a. Now let’s create terraform/variables.tf</p><p>Terraform:</p><pre>variable &quot;aws_region&quot; { default = &quot;us-east-1&quot; }<br>variable &quot;project_name&quot; { default = &quot;my-secure-site&quot; }<br>variable &quot;github_repo&quot; { <br>  description = &quot;Format: organization/repo&quot; <br>  type = string <br>}</pre><p>b. Then create terraform/outputs.tf</p><p>These are the values we will need for GitHub.</p><p>Terraform:</p><pre>output &quot;role_arn&quot; { value = aws_iam_role.github_actions_role.arn }<br>output &quot;s3_bucket&quot; { value = aws_s3_bucket.site_bucket.id }<br>output &quot;cloudfront_id&quot; { value = aws_cloudfront_distribution.site_distribution.id }<br>output &quot;website_url&quot; { value = aws_cloudfront_distribution.site_distribution.domain_name }</pre><h3>Step 4: Apply the Infrastructure</h3><p>Head to your terminal. 
It’s time to build.</p><ul><li><strong>Initialize Terraform:</strong></li></ul><pre>cd terraform <br>terraform init</pre><ul><li>Apply: Replace the value below with your actual GitHub username and repo name.</li></ul><pre>terraform apply -var=&quot;github_repo=USERNAME/my-secure-site&quot;</pre><blockquote><em>Note: If you get an “EntityAlreadyExists” error for the OIDC provider, it means you’ve connected GitHub to this AWS account before.</em></blockquote><blockquote><em>Just run</em> terraform import aws_iam_openid_connect_provider.github arn:aws:iam::&lt;YOUR_ACCOUNT_ID&gt;:oidc-provider/token.actions.githubusercontent.com <em>and apply again.</em></blockquote><ul><li><strong>Wait </strong>as CloudFront takes a few minutes (around 5 mins) to create.</li><li><strong>Outputs </strong>will be generated<strong>.</strong> When it finishes, Terraform will print the role_arn, s3_bucket, and cloudfront_id. Copy those values as we will need them later.</li></ul><h3>Step 5: GitHub Actions</h3><p>Go to your <strong>GitHub Repository</strong> &gt; <strong>Settings</strong> &gt; <strong>Secrets and variables</strong> &gt; <strong>Actions &gt; Repository secrets &gt; New repository secret</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TwFur-LyS7jewbr1hLQHTg.png" /><figcaption>Using Repository Secrets to add the secret values.</figcaption></figure><p>Add these Repository Secrets:</p><ul><li>AWS_ROLE_ARN: (Paste the role_arn from Terraform output)</li><li>AWS_S3_BUCKET: (Paste the s3_bucket)</li><li>CLOUDFRONT_DISTRIBUTION_ID: (Paste the cloudfront distribution id, example: EDFDVBD632BHDS5)</li><li>AWS_REGION: us-east-1</li></ul><p>Now, create the workflow file .github/workflows/deploy.yml:</p><pre>name: Deploy Static Site<br><br>on:<br>  push:<br>    branches:<br>      - main<br>permissions:<br>  id-token: write   <br>  contents: read<br>jobs:<br>  deploy:<br>    runs-on: ubuntu-latest<br>    steps:<br>      - name: Checkout Code<br>        
uses: actions/checkout@v4<br>     <br>      - name: Configure AWS Credentials<br>        uses: aws-actions/configure-aws-credentials@v4<br>        with:<br>          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}<br>          aws-region: ${{ secrets.AWS_REGION }}<br>      - name: Sync files to S3<br>        run: |<br>          aws s3 sync ./src s3://${{ secrets.AWS_S3_BUCKET }} --delete<br>      - name: Invalidate CloudFront Cache<br>        run: |<br>          aws cloudfront create-invalidation \<br>            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \<br>            --paths &quot;/*&quot;</pre><h3>The Moment of Truth</h3><ul><li>Add the new Terraform files and Workflow file to git:</li></ul><pre>cd .. # Go back to root<br>git add .<br>git commit -m &quot;Add infrastructure and CI/CD&quot;<br>git push origin main</pre><ul><li>Head over to the <strong>Actions</strong> tab in GitHub. You’ll see your workflow start.</li><li>GitHub asks AWS: “Hey, I’m USERNAME/my-secure-site. Can I be the github-role?”</li><li>AWS checks the OIDC trust policy.</li><li>AWS hands back a temporary token.</li><li>GitHub syncs your HTML to S3 and invalidates the cache.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/754/1*agTHVsIYETitvH9M_Bz3rw.png" /><figcaption>Live website</figcaption></figure><p><strong>Boom!</strong> Your site is live!</p><p>Make some changes to your index.html file, like adding another line. Then:</p><pre>git add .<br>git commit -m &quot;updated site&quot;<br>git push origin main</pre><p>The changes should be deployed through GitHub Actions and reflected in the browser soon.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*-EN42_VZsH8nxCNozrSf7g.png" /><figcaption>Changed index.html</figcaption></figure><h3>Step 6: Teardown</h3><p>When you are done testing or showcasing your project, you should destroy the infrastructure to avoid unexpected AWS bills. 
Because we used Terraform, we can nuke everything with a single command.</p><h3>a. Destroying your infra</h3><p>Go to your terraform/ folder in the terminal and run:</p><pre>terraform destroy -var=&quot;github_repo=USERNAME/my-secure-site&quot;</pre><p><em>Replace the repo username with your own!</em></p><p>Terraform will list every resource it is about to delete (The S3 bucket, the CloudFront Distribution, the IAM Roles, etc.).</p><p>Type yes when prompted.</p><blockquote><em>Note:</em></blockquote><blockquote><em>Usually, Terraform </em><strong><em>fails</em></strong><em> to delete an S3 bucket if it contains files (to prevent you from accidentally deleting data).</em></blockquote><blockquote><em>However, in our s3.tf file, we included this line:</em></blockquote><blockquote>force_destroy = true</blockquote><blockquote><em>This tells Terraform to </em><strong><em>delete the bucket even if it has files in it</em></strong><em>. This is perfect for test projects, but be careful using this setting in real production environments!</em></blockquote><h3>b. Cleanup GitHub (Optional)</h3><p>The infrastructure is gone, but your GitHub repo settings remain. 
If you want a clean slate:</p><ul><li>Go to <strong>Settings</strong> &gt; <strong>Secrets and variables</strong> &gt; <strong>Actions</strong>.</li><li>Delete the 4 repository secrets (AWS_ROLE_ARN, etc.).</li><li>Delete the .terraform folder locally if you plan to start over completely.</li></ul><blockquote><strong>Tip:</strong> If you ever want to bring the site back online, just run terraform apply again.</blockquote><blockquote>The new Output values will be generated, you update the GitHub Secrets, and your site is live again in minutes!</blockquote><h3>Conclusion</h3><p>We just built a pipeline that is:</p><ul><li><strong>Secure:</strong> No long-lived credentials to steal.</li><li><strong>Private:</strong> The S3 bucket is locked down; only CloudFront can access it.</li><li><strong>Automated:</strong> Just push code to deploy.</li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bbd426bb9b5a" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Cloud Resume Challenge (Using Terraform)]]></title>
            <link>https://medium.com/@rojansedhai01/the-cloud-resume-challenge-using-terraform-965fc28c5dbc?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/965fc28c5dbc</guid>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Fri, 28 Nov 2025 12:51:57 GMT</pubDate>
            <atom:updated>2025-12-17T10:49:34.899Z</atom:updated>
<content:encoded><![CDATA[<p>As more and more people in the IT/Tech sector move towards learning the AWS Cloud, one of the more popular ways to demonstrate what we have learned is by doing the <a href="https://cloudresumechallenge.dev/"><strong>Cloud Resume Challenge</strong></a>, created by <a href="https://forrestbrazeal.com/">Forrest Brazeal</a>.</p><p>In the challenge, people showcase their site/portfolio on a popular cloud service provider (e.g. AWS, Azure, or GCP), using various core services of that provider. In this step-by-step guide, we are going to do the challenge in the <strong>AWS Cloud</strong>.</p><p>But we aren’t just going to click around in the AWS Console until it works. We are going to build this using <strong>Terraform</strong>.</p><figure><img alt="The diagram shows the AWS Cloud -&gt; Production Environment -&gt; Frontend/Backend Zones architectural diagram." src="https://cdn-images-1.medium.com/max/955/1*o044zDOPVkkbNc6tmXdg1w.png" /><figcaption><strong>Structured Architecture Diagram</strong></figcaption></figure><blockquote><strong>Why Terraform?</strong></blockquote><blockquote><em>Writing </em><strong><em>Infrastructure as Code (IaC)</em></strong><em> is how you prove you know the </em>how<em>.</em></blockquote><blockquote>IaC allows you to provision and manage infrastructure through code rather than manual processes, making your build reproducible, versionable, and less prone to human error. <a href="https://developer.hashicorp.com/terraform/intro">Learn more about Terraform here</a>.</blockquote><p>(Plus clicking buttons in the console can get tiring pretty fast) :D</p><p>We are sticking to two rules:</p><ol><li><strong>Security:</strong> No public S3 buckets. 
We lock everything down.</li><li><strong>Budget Friendly:</strong> We are using the cheapest (or free) options available.</li></ol><blockquote><strong>Disclaimer: </strong>Even though I am doing this as cheaply as possible, there may still be unexpected costs, so keep an eye on your AWS Billing Dashboard. Also, always set up an <a href="https://medium.com/@rojansedhai01/sleep-soundly-automating-aws-cost-alarms-8b0c221ccfeb">AWS Budget</a> alert before you begin.</blockquote><h3>Prerequisites</h3><ul><li>An AWS Account (<em>Root user secured with MFA!</em>).</li><li>An IAM Admin User (<em>Don’t use Root for this!</em>).</li><li>Terraform installed on your laptop.</li><li>AWS CLI installed and configured (aws configure).</li></ul><h4>📂<strong>Project Directory Tree:</strong></h4><pre>cloud-resume-challenge/<br>├── 00-bootstrap/<br>│   └── main.tf               # Creates the S3 Bucket &amp; DynamoDB for State Locking<br>│<br>└── 01-resume-infra/<br>    ├── backend.tf            # API Gateway, DynamoDB, Lambda, IAM Roles<br>    ├── main.tf               # S3 Website Bucket, CloudFront, OAC, Bucket Policy<br>    ├── providers.tf          # AWS Provider &amp; Backend Configuration<br>    ├── index.html            # Your Resume HTML<br>    └── lambda/<br>        └── func.py           # Python script to update visitor count</pre><h3>Part 1: Bootstrap</h3><p>Terraform keeps a record of everything it creates in a file called terraform.tfstate.</p><blockquote><strong>Bad practice:</strong> Keeping this file on your laptop (if you lose your laptop, you lose the record of your infrastructure).</blockquote><blockquote><strong>Good practice:</strong> Storing this file in the cloud (AWS S3) and locking it so two people can’t edit it at once (DynamoDB).</blockquote><p>We need to create that S3 bucket before we can store the state in it. 
This is the “Bootstrap” phase.</p><ol><li><strong>Create the folder structure</strong>: Create a new folder for your project (let’s call it <em>cloud-resume-challenge</em>). Inside it, create two folders: 00-bootstrap and 01-resume-infra.</li></ol><p>2. <strong>The Bootstrap Code:</strong> Create 00-bootstrap/main.tf and paste the code below.</p><ul><li><strong>Critical:</strong> Change the bucket name to something unique (e.g., yourname-state-bucket-2025). Bucket names must be globally unique!</li></ul><p>Below is the Terraform code:</p><pre>provider &quot;aws&quot; {<br>  region = &quot;us-east-1&quot;<br>}<br><br>resource &quot;aws_s3_bucket&quot; &quot;terraform_state&quot; {<br>  bucket        = &quot;my-terraform-state-bucket-nepal-34324951&quot; #Change this to something unique!<br>  force_destroy = true <br>}<br>resource &quot;aws_s3_bucket_versioning&quot; &quot;enabled&quot; {<br>  bucket = aws_s3_bucket.terraform_state.id<br>  versioning_configuration { status = &quot;Enabled&quot; }<br>}<br>resource &quot;aws_s3_bucket_server_side_encryption_configuration&quot; &quot;default&quot; {<br>  bucket = aws_s3_bucket.terraform_state.id<br>  rule {<br>    apply_server_side_encryption_by_default { sse_algorithm = &quot;AES256&quot; }<br>  }<br>}<br>resource &quot;aws_s3_bucket_public_access_block&quot; &quot;public_access&quot; {<br>  bucket                  = aws_s3_bucket.terraform_state.id<br>  block_public_acls       = true<br>  block_public_policy     = true<br>  ignore_public_acls      = true<br>  restrict_public_buckets = true<br>}<br>resource &quot;aws_dynamodb_table&quot; &quot;terraform_locks&quot; {<br>  name         = &quot;terraform-locks&quot;<br>  billing_mode = &quot;PAY_PER_REQUEST&quot;<br>  hash_key     = &quot;LockID&quot;<br>  attribute {<br>    name = &quot;LockID&quot;<br>    type = &quot;S&quot;<br>  }<br>}<br>output &quot;s3_bucket_name&quot; { value = aws_s3_bucket.terraform_state.id }<br>output &quot;dynamodb_table_name&quot; { value = 
aws_dynamodb_table.terraform_locks.name }</pre><p>3. Deploy the Safe: Open your terminal in 00-bootstrap:</p><pre>terraform init<br>terraform apply</pre><ul><li>Type yes when asked.</li><li><strong>Write down the S3 bucket name</strong> from the output and keep it handy. You will need it in the next part.</li></ul><h3>Part 2: The Infrastructure</h3><p>Now that we have a safe place to store our state, let’s build the resume/portfolio site. Move into the 01-resume-infra folder.</p><p>1. Configure the Provider:</p><p>Create 01-resume-infra/providers.tf. This tells Terraform “Hey, don’t save the state on my laptop; save it in that bucket we just made.”</p><p>Terraform:</p><pre>terraform {<br>  required_providers {<br>    aws = { source = &quot;hashicorp/aws&quot;, version = &quot;~&gt; 5.0&quot; }<br>  }<br>  backend &quot;s3&quot; {<br>    bucket         = &quot;my-terraform-state-bucket-nepal-34324951&quot; #Same Bucket name from Part 1<br>    key            = &quot;resume-project/terraform.tfstate&quot;<br>    region         = &quot;us-east-1&quot;<br>    dynamodb_table = &quot;terraform-locks&quot;<br>    encrypt        = true<br>  }<br>}<br><br>provider &quot;aws&quot; { region = &quot;us-east-1&quot; }</pre><p>2. The Frontend (S3 + CloudFront):</p><p>Create 01-resume-infra/main.tf.</p><p>We are not using “S3 Static Website Hosting” because that’s insecure (HTTP only). We are using CloudFront to serve it securely via HTTPS.</p><blockquote><em>Note:</em> Replace my-awesome-resume-site-2025 with your own unique name.</blockquote><p>Terraform:</p><pre># 1. The Bucket for the Website<br>resource &quot;aws_s3_bucket&quot; &quot;resume_bucket&quot; {<br>  bucket = &quot;my-awesome-resume-site-2025&quot;<br>  force_destroy = true<br>}</pre><pre># 2. 
Block Public Access<br>resource &quot;aws_s3_bucket_public_access_block&quot; &quot;resume_bucket_public&quot; {<br>  bucket = aws_s3_bucket.resume_bucket.id</pre><pre>block_public_acls       = true<br>  block_public_policy     = true<br>  ignore_public_acls      = true<br>  restrict_public_buckets = true<br>}</pre><pre># 3. Add an index.html file<br>resource &quot;aws_s3_object&quot; &quot;index&quot; {<br>  bucket       = aws_s3_bucket.resume_bucket.id<br>  key          = &quot;index.html&quot;<br>  content      = templatefile(&quot;${path.module}/index.html&quot;, {<br>    api_url = &quot;${aws_apigatewayv2_api.http_api.api_endpoint}/count&quot;<br>  })</pre><pre>content_type = &quot;text/html&quot;<br>  <br>etag         = md5(templatefile(&quot;${path.module}/index.html&quot;, {<br>    api_url = &quot;${aws_apigatewayv2_api.http_api.api_endpoint}/count&quot;<br>  }))<br>}</pre><pre># 4. Origin Access Control (OAC)<br><br>resource &quot;aws_cloudfront_origin_access_control&quot; &quot;resume_oac&quot; {<br>  name                              = &quot;resume-oac&quot;<br>  description                       = &quot;OAC for Resume Website&quot;<br>  origin_access_control_origin_type = &quot;s3&quot;<br>  signing_behavior                  = &quot;always&quot;<br>  signing_protocol                  = &quot;sigv4&quot;<br>}</pre><pre># ---------------------------------------------------------<br># 5. 
CLOUDFRONT DISTRIBUTION<br># ---------------------------------------------------------</pre><pre>resource &quot;aws_cloudfront_distribution&quot; &quot;s3_distribution&quot; {<br>  origin {<br>    domain_name              = aws_s3_bucket.resume_bucket.bucket_regional_domain_name<br>    origin_id                = &quot;my-s3-origin&quot;<br>    origin_access_control_id = aws_cloudfront_origin_access_control.resume_oac.id<br>  }</pre><pre>enabled             = true<br>  is_ipv6_enabled     = true<br>  default_root_object = &quot;index.html&quot;</pre><pre># SECURITY<br>  default_cache_behavior {<br>    allowed_methods  = [&quot;GET&quot;, &quot;HEAD&quot;]<br>    cached_methods   = [&quot;GET&quot;, &quot;HEAD&quot;]<br>    target_origin_id = &quot;my-s3-origin&quot;</pre><pre>forwarded_values {<br>      query_string = false<br>      cookies {<br>        forward = &quot;none&quot;<br>      }<br>    }</pre><pre>viewer_protocol_policy = &quot;redirect-to-https&quot; <br>    min_ttl                = 0<br>    default_ttl            = 3600<br>    max_ttl                = 86400<br>  }</pre><pre><br>price_class = &quot;PriceClass_100&quot;</pre><pre>restrictions {<br>    geo_restriction {<br>      restriction_type = &quot;none&quot;<br>    }<br>  }</pre><pre>viewer_certificate {<br>    cloudfront_default_certificate = true<br>  }<br>}</pre><pre># ---------------------------------------------------------<br># 6. 
S3 BUCKET POLICY (THE BOUNCER)<br># ---------------------------------------------------------</pre><pre>resource &quot;aws_s3_bucket_policy&quot; &quot;allow_cloudfront&quot; {<br>  bucket = aws_s3_bucket.resume_bucket.id<br>  policy = data.aws_iam_policy_document.allow_cloudfront.json<br>}</pre><pre>data &quot;aws_iam_policy_document&quot; &quot;allow_cloudfront&quot; {<br>  statement {<br>    sid    = &quot;AllowCloudFrontServicePrincipal&quot;<br>    effect = &quot;Allow&quot;<br>    actions = [<br>      &quot;s3:GetObject&quot;<br>    ]<br>    resources = [<br>      &quot;${aws_s3_bucket.resume_bucket.arn}/*&quot;<br>    ]</pre><pre>principals {<br>      type        = &quot;Service&quot;<br>      identifiers = [&quot;cloudfront.amazonaws.com&quot;]<br>    }</pre><pre>condition {<br>      test     = &quot;StringEquals&quot;<br>      variable = &quot;AWS:SourceArn&quot;<br>      values   = [aws_cloudfront_distribution.s3_distribution.arn]<br>    }<br>  }<br>}</pre><pre># ---------------------------------------------------------<br># 7. OUTPUTS<br># ---------------------------------------------------------</pre><pre>output &quot;website_url&quot; {<br>  description = &quot;The CloudFront URL to access your website&quot;<br>  value       = &quot;<a href="https://${aws_cloudfront_distribution.s3_distribution.domain_name">https://${aws_cloudfront_distribution.s3_distribution.domain_name</a>}&quot;<br>}</pre><p>3. 
The Backend Logic (Python):</p><p>We need a script to count visitors.</p><ul><li>Create a folder 01-resume-infra/lambda.</li><li>Inside it, create func.py.</li></ul><p>Python</p><pre>import json<br>import boto3<br>import os<br><br># Initialize DynamoDB client<br>dynamodb = boto3.resource(&#39;dynamodb&#39;)<br># &quot;visitor_count&quot; is the table name we will create in Terraform below<br>table = dynamodb.Table(os.environ[&#39;TABLE_NAME&#39;])<br><br>def lambda_handler(event, context):<br>    try:<br>        response = table.update_item(<br>            Key={&#39;id&#39;: &#39;count&#39;},<br>            UpdateExpression=&quot;ADD visitors :inc&quot;,<br>            ExpressionAttributeValues={&#39;:inc&#39;: 1},<br>            ReturnValues=&quot;UPDATED_NEW&quot;<br>        )<br>        <br>        visit_count = int(response[&#39;Attributes&#39;][&#39;visitors&#39;])<br>        <br>        return {<br>            &#39;statusCode&#39;: 200,<br>            &#39;headers&#39;: {<br>                &#39;Content-Type&#39;: &#39;application/json&#39;,<br>                &#39;Access-Control-Allow-Origin&#39;: os.environ[&#39;ALLOWED_ORIGIN&#39;]<br>            },<br>            &#39;body&#39;: json.dumps({&#39;count&#39;: visit_count})<br>        }<br>    except Exception as e:<br>        print(e)<br>        return {<br>            &#39;statusCode&#39;: 500,<br>            &#39;body&#39;: json.dumps(&#39;Error updating count&#39;)<br>        }</pre><p>4. The Backend Infrastructure (API Gateway + DynamoDB):</p><p>Create 01-resume-infra/backend.tf. 
This sets up the database, the Lambda function, and the API Gateway URL.</p><p>Terraform:</p><pre><br>resource &quot;aws_dynamodb_table&quot; &quot;counter_table&quot; {<br>  name         = &quot;visitor_count&quot;<br>  billing_mode = &quot;PAY_PER_REQUEST&quot;<br>  hash_key     = &quot;id&quot;<br><br>  attribute {<br>    name = &quot;id&quot;<br>    type = &quot;S&quot;<br>  }<br>}<br><br>data &quot;archive_file&quot; &quot;lambda_zip&quot; {<br>  type        = &quot;zip&quot;<br>  source_file = &quot;${path.module}/lambda/func.py&quot;<br>  output_path = &quot;${path.module}/lambda/func.zip&quot;<br>}<br><br>resource &quot;aws_lambda_function&quot; &quot;visitor_counter&quot; {<br>  filename         = data.archive_file.lambda_zip.output_path<br>  function_name    = &quot;visitor_counter_func&quot;<br>  role             = aws_iam_role.lambda_role.arn<br>  handler          = &quot;func.lambda_handler&quot;<br>  runtime          = &quot;python3.9&quot;<br>  source_code_hash = data.archive_file.lambda_zip.output_base64sha256<br><br>  environment {<br>    variables = {<br>      TABLE_NAME     = aws_dynamodb_table.counter_table.name<br>      ALLOWED_ORIGIN = &quot;https://${aws_cloudfront_distribution.s3_distribution.domain_name}&quot;<br>    }<br>  }<br>}<br><br>resource &quot;aws_iam_role&quot; &quot;lambda_role&quot; {<br>  name = &quot;visitor_counter_role&quot;<br><br>  assume_role_policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [{<br>      Action = &quot;sts:AssumeRole&quot;<br>      Effect = &quot;Allow&quot;<br>      Principal = { Service = &quot;lambda.amazonaws.com&quot; }<br>    }]<br>  })<br>}<br><br>resource &quot;aws_iam_policy&quot; &quot;lambda_policy&quot; {<br>  name = &quot;visitor_counter_policy&quot;<br><br>  policy = jsonencode({<br>    Version = &quot;2012-10-17&quot;<br>    Statement = [<br>      {<br>        Effect = &quot;Allow&quot;<br>        Action = [<br>          &quot;dynamodb:GetItem&quot;,<br>        
  &quot;dynamodb:UpdateItem&quot;,<br>          &quot;dynamodb:PutItem&quot;<br>        ]<br>        Resource = aws_dynamodb_table.counter_table.arn<br>      },<br>      {<br>        Effect = &quot;Allow&quot;<br>        Action = [<br>          &quot;logs:CreateLogGroup&quot;,<br>          &quot;logs:CreateLogStream&quot;,<br>          &quot;logs:PutLogEvents&quot;<br>        ]<br>        Resource = &quot;arn:aws:logs:*:*:*&quot;<br>      }<br>    ]<br>  })<br>}<br><br># Attach the policy to the role<br>resource &quot;aws_iam_role_policy_attachment&quot; &quot;lambda_attach&quot; {<br>  role       = aws_iam_role.lambda_role.name<br>  policy_arn = aws_iam_policy.lambda_policy.arn<br>}<br><br>resource &quot;aws_apigatewayv2_api&quot; &quot;http_api&quot; {<br>  name          = &quot;visitor_counter_api&quot;<br>  protocol_type = &quot;HTTP&quot;<br>  <br>  #Only allow your CloudFront domain to call this API<br>  cors_configuration {<br>    allow_origins = [&quot;https://${aws_cloudfront_distribution.s3_distribution.domain_name}&quot;] <br>    allow_methods = [&quot;POST&quot;, &quot;GET&quot;]<br>    allow_headers = [&quot;content-type&quot;]<br>    max_age       = 300<br>  }<br>}<br><br>resource &quot;aws_apigatewayv2_stage&quot; &quot;default&quot; {<br>  api_id      = aws_apigatewayv2_api.http_api.id<br>  name        = &quot;$default&quot;<br>  auto_deploy = true<br><br>  default_route_settings {<br>    throttling_burst_limit = 5<br>    throttling_rate_limit  = 10<br>  }<br>}<br><br># Connect API to Lambda<br>resource &quot;aws_apigatewayv2_integration&quot; &quot;lambda_integration&quot; {<br>  api_id           = aws_apigatewayv2_api.http_api.id<br>  integration_type = &quot;AWS_PROXY&quot;<br>  integration_uri  = aws_lambda_function.visitor_counter.invoke_arn<br>}<br><br>resource &quot;aws_apigatewayv2_route&quot; &quot;default_route&quot; {<br>  api_id    = aws_apigatewayv2_api.http_api.id<br>  route_key = &quot;POST /count&quot; <br>  target    = 
&quot;integrations/${aws_apigatewayv2_integration.lambda_integration.id}&quot;<br>}<br><br># Permission for API Gateway to invoke Lambda<br>resource &quot;aws_lambda_permission&quot; &quot;api_gw&quot; {<br>  statement_id  = &quot;AllowExecutionFromAPIGateway&quot;<br>  action        = &quot;lambda:InvokeFunction&quot;<br>  function_name = aws_lambda_function.visitor_counter.function_name<br>  principal     = &quot;apigateway.amazonaws.com&quot;<br>  source_arn    = &quot;${aws_apigatewayv2_api.http_api.execution_arn}/*/*&quot;<br>}<br><br># Output the API URL<br>output &quot;api_endpoint&quot; {<br>  value = &quot;${aws_apigatewayv2_api.http_api.api_endpoint}/count&quot;<br>}</pre><h3>Part 3: The Code &amp; Deploy</h3><p>1. Create the HTML file</p><p>Create 01-resume-infra/index.html. Copy the code below.</p><pre>&lt;!DOCTYPE html&gt;<br>&lt;html lang=&quot;en&quot;&gt;<br>&lt;head&gt;<br>    &lt;meta charset=&quot;UTF-8&quot; /&gt;<br>    &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt;<br>    &lt;title&gt;Cloud Engineer Portfolio&lt;/title&gt;<br>    &lt;link rel=&quot;stylesheet&quot; href=&quot;https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css&quot;&gt;<br>    &lt;style&gt;<br>        :root {<br>            --primary-color: #2c3e50; <br>            --secondary-color: #3498db;<br>            --bg-light: #f4f7f6;<br>            --text-dark: #333;<br>            --text-light: #ecf0f1;<br>            --sidebar-width: 300px;<br>        }<br><br>        * {<br>            margin: 0;<br>            padding: 0;<br>            box-sizing: border-box;<br>        }<br><br>        body {<br>            font-family: &#39;Segoe UI&#39;, Tahoma, Geneva, Verdana, sans-serif;<br>            background-color: var(--bg-light);<br>            color: var(--text-dark);<br>            line-height: 1.6;<br>        }<br><br>        /* Main Layout */<br>        .container {<br>            display: flex;<br>       
     min-height: 100vh;<br>            max-width: 1200px;<br>            margin: 0 auto;<br>            background: #fff;<br>            box-shadow: 0 0 20px rgba(0,0,0,0.1);<br>        }<br><br>        /* Sidebar Styles */<br>        .sidebar {<br>            background-color: var(--primary-color);<br>            color: var(--text-light);<br>            width: var(--sidebar-width);<br>            padding: 30px;<br>            flex-shrink: 0;<br>            text-align: center;<br>        }<br><br>        .profile-img {<br>            width: 150px;<br>            height: 150px;<br>            background-color: #ddd; /* Placeholder gray */<br>            border-radius: 50%;<br>            border: 5px solid var(--secondary-color);<br>            margin: 0 auto 20px;<br>        }<br><br>        .sidebar h1 {<br>            font-size: 1.8rem;<br>            margin-bottom: 10px;<br>        }<br><br>        .sidebar h2 {<br>            font-size: 1.1rem;<br>            font-weight: 400;<br>            color: var(--secondary-color);<br>            margin-bottom: 30px;<br>        }<br><br>        .contact-info, .skills-list, .education-list {<br>            text-align: left;<br>            margin-bottom: 30px;<br>            list-style: none;<br>        }<br><br>        .sidebar-section-title {<br>            text-transform: uppercase;<br>            letter-spacing: 1px;<br>            margin-bottom: 15px;<br>            padding-bottom: 5px;<br>            border-bottom: 2px solid var(--secondary-color);<br>        }<br><br>        .contact-info li, .education-list li {<br>            margin-bottom: 15px;<br>            display: flex;<br>            align-items: center;<br>        }<br><br>        .contact-info i {<br>            margin-right: 10px;<br>            color: var(--secondary-color);<br>            width: 20px;<br>            text-align: center;<br>        }<br><br>        .contact-info a {<br>            color: var(--text-light);<br>            text-decoration: 
none;<br>            transition: color 0.3s;<br>        }<br><br>        .contact-info a:hover {<br>            color: var(--secondary-color);<br>        }<br><br>        /* Skill tags styles */<br>        .skill-tags {<br>            display: flex;<br>            flex-wrap: wrap;<br>            gap: 10px;<br>        }<br><br>        .skill-tag {<br>            background: rgba(255,255,255,0.1);<br>            padding: 5px 10px;<br>            border-radius: 5px;<br>            font-size: 0.9rem;<br>        }<br><br>        /* Main Content Styles */<br>        .main-content {<br>            flex-grow: 1;<br>            padding: 40px;<br>        }<br><br>        section {<br>            margin-bottom: 40px;<br>        }<br><br>        h3.section-title {<br>            color: var(--primary-color);<br>            font-size: 1.5rem;<br>            text-transform: uppercase;<br>            margin-bottom: 20px;<br>            padding-bottom: 10px;<br>            border-bottom: 2px solid #eee;<br>        }<br><br>        .resume-item {<br>            margin-bottom: 25px;<br>        }<br><br>        .resume-header {<br>            display: flex;<br>            justify-content: space-between;<br>            margin-bottom: 10px;<br>        }<br><br>        .resume-header h4 {<br>            font-size: 1.2rem;<br>            color: var(--primary-color);<br>        }<br><br>        .resume-header .date {<br>            color: var(--secondary-color);<br>            font-weight: bold;<br>            font-size: 0.9rem;<br>        }<br><br>        .resume-item ul {<br>            margin-left: 20px;<br>            list-style-type: square;<br>        }<br>        <br>        .project-tech-stack {<br>            font-size: 0.9em;<br>            color: #666;<br>            font-style: italic;<br>            margin-top: 5px;<br>        }<br><br>        /* Footer &amp; Counter Styles */<br>        .footer {<br>            text-align: center;<br>            padding: 20px;<br>            
background: #f9f9f9;<br>            border-top: 1px solid #eee;<br>            font-size: 0.9rem;<br>            color: #777;<br>        }<br><br>        .counter-container {<br>            display: inline-block;<br>            background: var(--primary-color);<br>            color: white;<br>            padding: 10px 20px;<br>            border-radius: 50px;<br>            margin-top: 10px;<br>            box-shadow: 0 4px 6px rgba(0,0,0,0.1);<br>        }<br><br>        .counter {<br>            font-weight: bold;<br>            font-size: 1.2em;<br>            color: var(--secondary-color);<br>            margin-left: 5px;<br>        }<br><br>        /* Responsive Design */<br>        @media (max-width: 768px) {<br>            .container {<br>                flex-direction: column;<br>                margin: 0;<br>            }<br>            .sidebar {<br>                width: 100%;<br>                padding: 40px 20px;<br>            }<br>            .main-content {<br>                padding: 30px 20px;<br>            }<br>            .resume-header {<br>                flex-direction: column;<br>            }<br>        }<br>    &lt;/style&gt;<br>&lt;/head&gt;<br>&lt;body&gt;<br><br>&lt;div class=&quot;container&quot;&gt;<br>    &lt;aside class=&quot;sidebar&quot;&gt;<br>        &lt;div class=&quot;profile-img&quot; title=&quot;Put your photo here&quot;&gt;&lt;/div&gt;<br>        &lt;h1&gt;[Your Name]&lt;/h1&gt;<br>        &lt;h2&gt;Cloud/DevOps Engineer&lt;/h2&gt;<br><br>        &lt;h3 class=&quot;sidebar-section-title&quot;&gt;Contact&lt;/h3&gt;<br>        &lt;ul class=&quot;contact-info&quot;&gt;<br>            &lt;li&gt;&lt;i class=&quot;fas fa-envelope&quot;&gt;&lt;/i&gt; &lt;a href=&quot;youremail@example.com&quot;&gt;email@example.com&lt;/a&gt;&lt;/li&gt;<br>            &lt;li&gt;&lt;i class=&quot;fab fa-linkedin&quot;&gt;&lt;/i&gt; &lt;a href=&quot;#&quot; target=&quot;_blank&quot;&gt;LinkedIn Profile&lt;/a&gt;&lt;/li&gt;<br>            
&lt;li&gt;&lt;i class=&quot;fab fa-github&quot;&gt;&lt;/i&gt; &lt;a href=&quot;#&quot; target=&quot;_blank&quot;&gt;GitHub Profile&lt;/a&gt;&lt;/li&gt;<br>            &lt;li&gt;&lt;i class=&quot;fas fa-map-marker-alt&quot;&gt;&lt;/i&gt; City, Country&lt;/li&gt;<br>        &lt;/ul&gt;<br><br>        &lt;h3 class=&quot;sidebar-section-title&quot;&gt;Skills&lt;/h3&gt;<br>        &lt;div class=&quot;skill-tags&quot;&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;AWS&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;Terraform (IaC)&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;Python&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;CI/CD Pipelines&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;Docker&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;Linux Admin&lt;/span&gt;<br>            &lt;span class=&quot;skill-tag&quot;&gt;Git&lt;/span&gt;<br>        &lt;/div&gt;<br><br>        &lt;br&gt;<br>        &lt;h3 class=&quot;sidebar-section-title&quot;&gt;Education &amp; Certs&lt;/h3&gt;<br>        &lt;ul class=&quot;education-list&quot;&gt;<br>            &lt;li&gt;<br>                &lt;strong&gt;AWS Certified Solutions Architect - Associate&lt;/strong&gt;&lt;br&gt;<br>                &lt;small&gt;2023&lt;/small&gt;<br>            &lt;/li&gt;<br>            &lt;li&gt;<br>                &lt;strong&gt;B.S. Computer Science&lt;/strong&gt;&lt;br&gt;<br>                &lt;small&gt;University Name, 2020&lt;/small&gt;<br>            &lt;/li&gt;<br>        &lt;/ul&gt;<br>    &lt;/aside&gt;<br><br>    &lt;main class=&quot;main-content&quot;&gt;<br>        &lt;section&gt;<br>            &lt;h3 class=&quot;section-title&quot;&gt;Summary&lt;/h3&gt;<br>            &lt;p&gt;Highly motivated Cloud Engineer with a passion for automating infrastructure and building scalable serverless applications. 
Proven ability to design secure AWS environments using Infrastructure as Code (Terraform). Eager to leverage skills in a challenging DevOps role.&lt;/p&gt;<br>        &lt;/section&gt;<br><br>        &lt;section&gt;<br>            &lt;h3 class=&quot;section-title&quot;&gt;Projects&lt;/h3&gt;<br>            <br>            &lt;div class=&quot;resume-item&quot;&gt;<br>                &lt;div class=&quot;resume-header&quot;&gt;<br>                    &lt;h4&gt;Serverless Cloud Resume Challenge&lt;/h4&gt;<br>                    &lt;span class=&quot;date&quot;&gt;Current Month, 202X&lt;/span&gt;<br>                &lt;/div&gt;<br>                &lt;ul&gt;<br>                    &lt;li&gt;Designed and deployed a secure, serverless portfolio website on AWS.&lt;/li&gt;<br>                    &lt;li&gt;Implemented 100% of the infrastructure using Terraform for reproducibility.&lt;/li&gt;<br>                    &lt;li&gt;Utilized S3 for static hosting, CloudFront with OAC for secure HTTPS delivery, and Route53 for DNS.&lt;/li&gt;<br>                    &lt;li&gt;Created a visitor counter backend using API Gateway, Lambda (Python), and DynamoDB with transactional updates.&lt;/li&gt;<br>                    &lt;li&gt;Set up Github Actions for CI/CD automated deployment (Optional add-on).&lt;/li&gt;<br>                &lt;/ul&gt;<br>                &lt;p class=&quot;project-tech-stack&quot;&gt;Tech: AWS (S3, CloudFront, APIGW, Lambda, DynamoDB), Terraform, Python, HTML/CSS.&lt;/p&gt;<br>            &lt;/div&gt;<br><br>            &lt;div class=&quot;resume-item&quot;&gt;<br>                &lt;div class=&quot;resume-header&quot;&gt;<br>                    &lt;h4&gt;Another Sample Project&lt;/h4&gt;<br>                    &lt;span class=&quot;date&quot;&gt;Jan 2023 - Mar 2023&lt;/span&gt;<br>                &lt;/div&gt;<br>                &lt;ul&gt;<br>                    &lt;li&gt;Developed a highly available three-tier web application architecture.&lt;/li&gt;<br>                
    &lt;li&gt;Automated EC2 instance provisioning using auto-scaling groups and application load balancers.&lt;/li&gt;<br>                &lt;/ul&gt;<br>            &lt;/div&gt;<br>        &lt;/section&gt;<br><br>        &lt;section&gt;<br>            &lt;h3 class=&quot;section-title&quot;&gt;Professional Experience&lt;/h3&gt;<br>            <br>            &lt;div class=&quot;resume-item&quot;&gt;<br>                &lt;div class=&quot;resume-header&quot;&gt;<br>                    &lt;h4&gt;Junior Cloud Admin | Tech Company Inc.&lt;/h4&gt;<br>                    &lt;span class=&quot;date&quot;&gt;2021 - Present&lt;/span&gt;<br>                &lt;/div&gt;<br>                &lt;ul&gt;<br>                    &lt;li&gt;Managed daily operations of AWS cloud infrastructure, ensuring 99.9% uptime.&lt;/li&gt;<br>                    &lt;li&gt;Migrated on-premise legacy applications to AWS EC2 instances.&lt;/li&gt;<br>                    &lt;li&gt;Assisted in implementing security best practices using IAM policies and security groups.&lt;/li&gt;<br>                &lt;/ul&gt;<br>            &lt;/div&gt;<br>        &lt;/section&gt;<br>    &lt;/main&gt;<br>&lt;/div&gt;<br><br>&lt;footer class=&quot;footer&quot;&gt;<br>    &lt;p&gt;Designed &amp; Built by [Your Name] using AWS Serverless services.&lt;/p&gt;<br>    &lt;div class=&quot;counter-container&quot;&gt;<br>        Visitor Count: &lt;span id=&quot;counter&quot; class=&quot;counter&quot;&gt;Loading...&lt;/span&gt;<br>    &lt;/div&gt;<br>&lt;/footer&gt;<br><br>&lt;script&gt;<br>    const apiUrl = &quot;${api_url}&quot;; <br>    <br>    fetch(apiUrl, { method: &#39;POST&#39; })<br>        .then(response =&gt; response.json())<br>        .then(data =&gt; {<br>            document.getElementById(&#39;counter&#39;).innerText = data.count;<br>        })<br>        .catch(error =&gt; {<br>            console.error(&#39;Error:&#39;, error);<br>            document.getElementById(&#39;counter&#39;).innerText = 
&quot;Error&quot;;<br>        });<br>&lt;/script&gt;<br><br>&lt;/body&gt;<br>&lt;/html&gt;</pre><p>2. Deploy Everything</p><p>Open your terminal in 01-resume-infra and run:</p><pre>terraform init<br>terraform apply</pre><p>Type yes. Now, wait (CloudFront takes about 3-5 minutes to create).</p><p><em>What just happened?</em></p><blockquote>Terraform calculated the dependency graph. It realized it needed to create the API Gateway first to get the URL. Then, it used the templatefile function to inject that URL into your HTML. Finally, it uploaded the rendered HTML file to S3.</blockquote><p>3. Test it!</p><p>Terraform will output a website_url. Click it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XGLJw0fkaiKrCqfG9PxnxA.png" /><figcaption>Deployed Resume Site Template!</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T1Tj2PYSrVKiLv2QfhCAJA.png" /><figcaption>The website has a dynamic visitor counter!</figcaption></figure><p>You should see your site, and the visitor count (<strong>scroll to the end of the site to see it</strong>) should automatically update.</p><p>Now customize the site with your own details; the version above is just a template.</p><h3>Part 4: How to Tear It Down!</h3><p>When you are done, or if you mess up and want to start over, you <strong>must</strong> destroy resources to avoid costs.</p><p><strong>Follow This Order:</strong></p><ol><li>Go to 01-resume-infra and run terraform destroy. (Wait for CloudFront to delete).</li></ol><p>2. Go to 00-bootstrap and run terraform destroy.</p><p>🎉Congratulations! You just built a serverless application using Infrastructure as Code. 👏</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=965fc28c5dbc" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Static Hosting using Amazon CloudFront and S3]]></title>
            <link>https://medium.com/@rojansedhai01/static-hosting-using-amazon-cloudfront-and-s3-18a680272c38?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/18a680272c38</guid>
            <category><![CDATA[s3]]></category>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[cloudfront]]></category>
            <category><![CDATA[static-websites]]></category>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Fri, 07 Feb 2025 07:48:46 GMT</pubDate>
            <atom:updated>2025-02-07T12:59:51.865Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/781/1*YMzUTFI7s--GFLQmUUS7oQ.png" /></figure><p>Are you looking for ways to host that static website but you don’t have much experience in AWS Cloud?</p><p>Well look no further, you have reached your destination! Here we will learn how to securely host your static website content in AWS (Amazon Web Services) using two services — <strong>CloudFront </strong>and <strong>S3</strong>.</p><p>First of all, you need access to the <strong>AWS Management Console</strong>, which you can get by signing up for your own AWS account or through paid sandbox access from third parties (like Whizlab). It looks something like this below!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IK8MMgR-_n9i17Av3-1Dww.png" /><figcaption>AWS Management Console</figcaption></figure><p>You will also need some <strong>static website files</strong> to upload (you can create your own or find templates online). Now that’s out of the way, let us look at simple steps to host a static website using S3 and CloudFront.</p><ol><li>After logging in to the AWS Management Console, search for <strong>S3 </strong>(<strong>S3 </strong>or <strong>Simple Storage Service</strong> is an object storage service in AWS) and select it.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TeEubahTE1xRgoejEUwT7Q.png" /><figcaption>Amazon S3</figcaption></figure><p>2. This takes you to the S3 dashboard, where you need to select “<strong>Create bucket</strong>”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7PaJUvmp41AyEPa6ILYmdQ.png" /><figcaption>S3 Dashboard</figcaption></figure><p>3. That will bring you to the next step, i.e. creating an S3 bucket, as shown below. Enter a unique name in the “<strong>Bucket name</strong>” field (bucket names must be globally unique). 
Leave the rest of the settings as they are and hit “<strong>Create bucket</strong>” at the end.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kpuupuKxXFu4ajcLCfP9Lw.png" /><figcaption>Creating an S3 Bucket</figcaption></figure><p>4. After creating the bucket, you will see it in the dashboard as shown below. Select your newly created bucket to enter it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Bnpev9u4dKAPX1WvV7LApw.png" /><figcaption>View your S3 Buckets</figcaption></figure><p>5. Inside the bucket you will see a bunch of different settings like <strong>Objects</strong>, <strong>Metadata</strong>, <strong>Properties</strong>, <strong>Permissions </strong>and so on. This is also where you can upload content into the bucket.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nTe-P7lvx6bXuSZA4ryFpg.png" /><figcaption>Inside S3 Bucket</figcaption></figure><p>6. Click on “<strong>Upload</strong>”, which will bring you to another page where you can “<strong>drag and drop</strong>” your files and folders, or use the “<strong>Add Files</strong>” and “<strong>Add Folder</strong>” buttons to select files/folders from your local system.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4aJ3Q6cawzFeiX0O1B9Q3A.png" /><figcaption>Upload page</figcaption></figure><p>7. After selecting or dragging and dropping your files, click the “<strong>Upload</strong>” button in the bottom-right corner.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EukfCzHKA_ZenwviYe_JNQ.png" /><figcaption>Uploading your files</figcaption></figure><p>8. Once the upload is completed, you can go back to the bucket and see your files inside.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cSQ-WMpmPGV9yORL8nun-A.png" /><figcaption>Uploaded files and folders in S3</figcaption></figure><p>9. 
Now leave the <strong>S3</strong> screen as it is and, in the search bar at the top, search for “<strong>CloudFront</strong>” and select it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mgfDnp5DtEZBMdydf_5M0g.png" /><figcaption>CloudFront</figcaption></figure><p>10. This will take you to the <strong>CloudFront</strong> dashboard, where you click the “<strong>Create distribution</strong>” button.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*V7tfuvOuBY5Zk-pieJYrNA.png" /><figcaption>CloudFront Distribution Dashboard</figcaption></figure><p>11. This will take you to the <strong>configuration</strong> page for the <strong>CloudFront distribution</strong>. Now, in the <strong>origin domain</strong> section, select the S3 bucket that we created earlier in the above steps. The configuration page will look as shown in the image below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*255h2PqFC50u_XZxyDIm-g.png" /><figcaption>Creating a CloudFront Distribution</figcaption></figure><p>12. On the same settings page, go to “<strong>Origin access</strong>” and select “<strong>Origin access control settings (recommended)</strong>”. When you select that, a new setting will open as highlighted in the image above. You now need to click on “<strong>Create new OAC</strong>”, which will bring up a pop-up where you can simply click “<strong>Create</strong>” without changing anything.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/598/1*V4FcS-K5eDFLcQH4sggzDA.png" /><figcaption>Creating an Origin Access Control (OAC)</figcaption></figure><p>13. After that, you will be back on the distribution page, where you need to scroll to the “<strong>Web Application Firewall (WAF)</strong>” section. In the WAF settings, select “<strong>Do not enable security protections</strong>” if you don’t need the extra security from WAF. 
In this demo, we will select this option.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*C_iW7iT1yhXdqZJrlj9CLA.png" /><figcaption>Configuration of CloudFront Distribution</figcaption></figure><p>14. Now leave the other settings at their defaults and create the distribution. This will take you to a new page as shown below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T5dYtOf5rvpsu1d6kks-bw.png" /><figcaption>CloudFront Distribution Created</figcaption></figure><p>15. As you can see in the image above, you will now see the “<strong>The S3 bucket policy needs to be updated</strong>” message with a “<strong>Copy policy</strong>” button beside it. Hit “<strong>Copy policy</strong>”. This copies the policy, which you will paste into your S3 bucket policy.</p><p>16. In the S3 bucket dashboard that you left open earlier, go to the “<strong>Permissions</strong>” tab and scroll to the “<strong>Bucket policy</strong>” setting.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EspHiN13xWsfoiMF47L9XA.png" /><figcaption>S3 Bucket Policy</figcaption></figure><p>17. Hit <strong>Edit</strong> in the <strong>Bucket policy</strong> section, paste the policy you copied earlier into the empty area below “<strong>Policy</strong>”, and save it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Oqufnmz6cwkUjCheKIP5qQ.png" /><figcaption>Editing S3 Bucket Policy</figcaption></figure><p>18. Now go back to <strong>CloudFront</strong> and watch your distribution being deployed. There you will see its domain name.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vS7MbRxJKrz8_i1DMPgGeg.png" /><figcaption>CloudFront Distributions</figcaption></figure><p>19. Copy the <strong>domain name</strong>, paste it into a new tab in your browser, and hit Enter. 
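</p><p>For reference, the policy you copied in step 15 typically looks something like the sketch below. This is only an illustrative example: the bucket name, account ID, and distribution ID here are placeholders, so always paste the exact policy that CloudFront generated for you.</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/YOUR_DISTRIBUTION_ID"
        }
      }
    }
  ]
}
```

<p>It allows the CloudFront service principal to read objects from your bucket, but only on behalf of your specific distribution. This is what keeps the bucket itself private while still serving its content through CloudFront.</p><p>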
You can now see your website, hosted on S3, right in your browser.</p><p>(Note: in case you see some kind of error or message, wait a couple more minutes and refresh the page.)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PlwCJ4MlCovODv27YhcmTw.png" /><figcaption>Static Website Hosted</figcaption></figure><p>So that’s it! You have securely hosted a static website using Amazon CloudFront and S3.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=18a680272c38" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to get free AWS Credits! (Updated!!!)]]></title>
            <link>https://medium.com/@rojansedhai01/how-to-get-free-aws-credits-ae39a45a1185?source=rss-fd868de3edf1------2</link>
            <guid isPermaLink="false">https://medium.com/p/ae39a45a1185</guid>
            <category><![CDATA[aws-credit]]></category>
            <category><![CDATA[aws]]></category>
            <category><![CDATA[credit]]></category>
            <category><![CDATA[amazon-web-services]]></category>
            <category><![CDATA[free-credit]]></category>
            <dc:creator><![CDATA[Rojan Sedhai]]></dc:creator>
            <pubDate>Thu, 30 Jan 2025 16:28:25 GMT</pubDate>
            <atom:updated>2025-12-01T05:00:23.777Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="An AI generated image of AWS with Free Credits!" src="https://cdn-images-1.medium.com/max/1024/1*zMoT0IVCd9bdhG-FEn-tfg.png" /></figure><p>There are very few words people like more than the word “<strong>Free</strong>”. The same holds true in the realm of <strong>cloud computing</strong>, especially since working in the cloud can get expensive quickly.</p><p>Be it running that <strong>Amazon EC2 instance</strong> or hopping on the generative <strong>AI</strong> train with <strong>Amazon Bedrock</strong>, all of that can empty your pockets, and quite quickly, might I add. So when you hear about <strong>Free Credits</strong>, you might as well take advantage of them!</p><p>And <strong>AWS</strong> is quite generous with its <strong>free credits</strong>! From a <strong>few dollars</strong> to <strong>thousands of dollars</strong>, there are quite a lot of credits that they provide for free!</p><p>Of course, there’s no such thing as a free meal in this world, so what’s the catch, you might ask? The catch is that the free credits probably won’t sustain you in the long run, as they are generally geared towards letting users gain experience with the AWS cloud and its services. So basically it&#39;s an incentive to attract people and businesses in the hope that they will become customers of AWS in the long run!</p><p>So yeah, don’t depend on those credits to fully sustain you for the long run!</p><p>Now that these pesky details are out of the way, let’s get to the good parts! There are quite a few different ways to get credits for free depending on the use case, and of course, each has its own <strong>Terms and Conditions</strong>, so be sure to check them out.</p><ol><li>Firstly, you have <a href="https://aws.amazon.com/partners/programs/arrc/"><strong>AWS Rapid Ramp Credits</strong></a>, where you can apply for $300 worth of AWS credits! 
This credit helps small businesses quickly get started testing AWS against their specific IT and business requirements by subsidizing a proof of concept. The instructions to apply are on the site, so it’s pretty straightforward!</li><li>Then, for those of you in the startup scene, you can apply for <a href="https://aws.amazon.com/startups/credits#packages"><strong>AWS Activate Credits</strong></a>. There are two credit packages: Activate Founders ($1,000 in AWS Activate credits) and Activate Portfolio ($100,000 in AWS Activate credits). Both of these can be a substantial help for those who want to build their startup on AWS!</li><li>Now, for those of you in the research field, there’s <a href="https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/"><strong>AWS Cloud Credit for Research</strong></a>. Be it a student, faculty member, or research staff, there’s something for everyone!</li><li>And for those of you in the non-profit sector feeling left behind, no need to fret too much! AWS has you covered with <a href="https://aws.amazon.com/government-education/nonprofits/nonprofit-credit-program/"><strong>The AWS Nonprofit Credit Program</strong></a>.</li><li>For those of you who want to work with a Content Delivery Network on AWS, you can use the <a href="https://pages.awscloud.com/GLOBAL-ln-GC-CloudFront-POC-Program-2021-interest.html?trk=ub"><strong>Amazon CloudFront Proof of Concept Program</strong></a>, where you can get $300 worth of AWS credits.</li><li>Lastly, you can also find some free credits, usually worth <strong>$25</strong>–<strong>$50</strong>, by being active at various <strong>AWS events</strong> and filling out the <strong>surveys</strong> provided there. So that means you gotta be active in the <strong>AWS Community</strong>! I have gotten around three of these credits over the past year or so. 
So always be on the lookout for such events!</li></ol><p>As I have said already, these different ways to get <strong>free credits</strong> each have their own terms and conditions, so don’t be too sad if you don’t meet their <strong>T&amp;C</strong>. Also, there may be more free credit programs like these run by AWS, so always be on the lookout for them on the internet!</p><p>Hopefully, some of these methods will come in handy and you will get those free credits to help you build in the AWS cloud!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ae39a45a1185" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>