Data-Driven Decisions: Crafting and Deploying an Ad Sales Prediction Model on AWS with Terraform and FastAPI (Part 2: Deployment on AWS with EC2, Docker, and ECR)

Andres Lopez
7 min read · Mar 2, 2024


In Part One, we tackled the ever-present challenge of optimizing advertising spend.

We all know the importance of advertising — it’s the lifeblood of brand awareness and customer acquisition. But how do you navigate the complexities of allocating budgets across different channels like TV, radio, and newspapers? How do you truly measure the effectiveness of your campaigns and maximize your return on investment (ROI)?

To address these critical questions, we turned to the power of machine learning. We leveraged a classic dataset encompassing sales figures from 200 markets, alongside their corresponding advertising expenditures for TV, radio, and newspapers. By meticulously training and evaluating several machine learning models — from ridge regression to gradient boosting — we unearthed valuable insights into the relationship between advertising spend and sales performance.

Now, in Part Two, we’ll take our powerful model and transform it into a real-world tool! We’ll embark on a deployment journey to production on AWS, leveraging the combined forces of Docker, ECR (Elastic Container Registry), and Terraform.

Here’s what awaits us:

  • Docker: We’ll package our application and its dependencies into a lightweight, portable container for seamless deployment across environments.
  • ECR (Elastic Container Registry): We’ll create a secure repository in AWS ECR to store our Docker image, ensuring easy access and management.
  • Terraform: This infrastructure-as-code tool will streamline the provisioning of our AWS resources, automating the creation and configuration of the necessary components to run our application in production.

By the end of Part Two, you’ll have a robust and scalable ad sales prediction tool deployed in AWS, ready to generate real-time predictions based on new advertising data. Let’s dive into the deployment process!

Prerequisites:

Before we embark on the deployment journey, ensure you have the following in place:

  • Terraform: This infrastructure-as-code tool will be our secret weapon. Make sure you have it installed on your machine.
  • AWS CLI: We’ll interact with AWS services through the AWS CLI. Install it with admin access for full control over resource creation.
  • Basic Bash Scripting Knowledge: Familiarity with Bash scripting will be helpful for navigating the command line during deployment.
  • Understanding of AWS Architecture: Possessing a basic understanding of AWS components like EC2 instances, security groups, and IAM will be beneficial.

Deployment Stage Begins!

  1. Open Your Terminal: You can do this on your local machine or on any system where you have AWS CLI installed and configured with the necessary AWS credentials.
  2. Navigate to Your Project Directory: Use the cd command to reach the directory containing your Dockerfile. Here's an example:
cd /path/to/your/project/directory

3. Create an ECR Repository:

Time to establish a secure home for your Docker image within AWS. Run the following AWS CLI command, replacing your-region with your actual AWS region (e.g., us-east-1):

aws ecr create-repository --repository-name fastapibuilderml --region your-region

4. Login to Your AWS Docker Registry:

This step authenticates Docker to interact with your ECR repository. Execute the following command:

aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-aws-account-id.dkr.ecr.your-region.amazonaws.com

5. Build and Push Docker Image (M1 Mac with Ubuntu Deployment):

Since I am working on a MacBook with an M1 chip, we’ll build for both AMD64 and ARM64 architectures for compatibility with the Ubuntu deployment instance. Run these commands:

docker buildx create --name fastapibuilderml --use
docker buildx build --builder fastapibuilderml --platform linux/amd64,linux/arm64 -t your-aws-account-id.dkr.ecr.your-region.amazonaws.com/your-repo-name:your-tag --push .
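The fully qualified image name used in the -t flag above (and again later when the EC2 instance pulls the image) follows a fixed pattern. As a quick sanity check, here is a small Python sketch that assembles it; the account ID, region, repository, and tag values are placeholders:

```python
# Assemble the fully qualified ECR image URI used in the buildx -t flag
# and in the later docker pull. All argument values below are placeholders.
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str) -> str:
    """Return <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>."""
    registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
    return f"{registry}/{repo}:{tag}"

print(ecr_image_uri("123456789012", "us-east-1", "fastapibuilderml", "latest"))
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com/fastapibuilderml:latest
```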

6. Verify Docker Image in ECR:

  • Head over to your AWS Management Console and navigate to the ECR service. You should see your newly pushed Docker image listed within the repository you created.

7. Create the Terraform Directory:

Now, let’s set up the Terraform configuration to provision our infrastructure. Navigate back to your project directory, create a new directory named terraform, and create the main.tf file inside it:

mkdir terraform
cd terraform
nano main.tf
provider "aws" {
  region = "your-aws-region"
}

resource "aws_cloudwatch_log_group" "my_log_group" {
  name = "/aws/ec2/my-fastapi-app" # Name of the log group

  # Optionally, specify how long logs are retained (in days). If omitted,
  # logs are retained indefinitely. Common values are 30, 90, 180, and 365.
  retention_in_days = 90

  # Optionally, add tags to the log group
  tags = {
    Environment = "production"
    Project     = "My FastAPI App"
  }
}


resource "aws_iam_policy" "ec2_policy" {
  name        = "ec2_policy"
  path        = "/"
  description = "EC2 policy for accessing ECR and CloudWatch"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetAuthorizationToken"
        ]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogStream",
          "logs:CreateLogGroup",
          "logs:PutLogEvents",
          "logs:DescribeLogStreams"
        ]
        Resource = "${aws_cloudwatch_log_group.my_log_group.arn}:*" # Uses the ARN of the log group created above
      }
    ]
  })
}

resource "aws_iam_role" "ec2_role" {
  name = "ec2_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "ec2_policy_attach" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = aws_iam_policy.ec2_policy.arn
}

resource "aws_iam_instance_profile" "ec2_instance_profile" {
  name = "EC2InstanceProfile"
  role = aws_iam_role.ec2_role.name
}


resource "aws_instance" "app_instance" {
  ami           = "ami-00381a880aa48c6c6" # Update this with the correct AMI for your region, typically an Ubuntu Server AMI
  instance_type = "t3.micro"
  key_name      = "name-of-your-key" # Replace with your SSH key name; create one beforehand if you have not already

  security_groups = [aws_security_group.app_sg.name]

  # Associate the instance profile
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name

  user_data = <<-EOF
    #!/bin/bash
    # Update and install necessary packages
    sudo apt update
    sudo apt install -y docker.io
    sudo systemctl start docker
    sudo systemctl enable docker

    # Install AWS CLI
    cd /tmp
    sudo apt install -y unzip
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install

    # Pull and run your Docker container
    aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-aws-account-id.dkr.ecr.your-region.amazonaws.com
    sudo docker pull your-aws-account-id.dkr.ecr.your-region.amazonaws.com/your-repo-name:your-tag
    sudo docker run -d -p 8000:8000 your-aws-account-id.dkr.ecr.your-region.amazonaws.com/your-repo-name:your-tag
  EOF

  tags = {
    Name = "FastAPI-App-Server"
  }
}

resource "aws_security_group" "app_sg" {
  name        = "fastapi_app_sg"
  description = "Allow web traffic to FastAPI app"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow SSH access
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["your-ip/32"]
  }

  # Allow access to FastAPI on port 8000
  ingress {
    from_port   = 8000
    to_port     = 8000
    protocol    = "tcp"
    cidr_blocks = ["your-ip/32"] # Adjust this to restrict access if necessary
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
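One optional addition to main.tf, if you’d rather not hunt for the instance’s IP in the console later: an output block that prints the public IP after terraform apply. This is a sketch based on the resource names defined above:

```hcl
output "app_public_ip" {
  description = "Public IP of the FastAPI app server"
  value       = aws_instance.app_instance.public_ip
}
```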

This script creates the CloudWatch log group, the security group, the IAM policy and role (with the policy attached), and the EC2 instance, whose user data script installs Docker and runs the container on port 8000. Remember to adjust the placeholders to match your own resources and to create the SSH key beforehand.

8. Initialize Terraform, plan your deployment, and apply the configuration:

terraform init
terraform plan
terraform apply

Wait for the configuration to be applied (approximately 5 minutes).

Access Your Deployed Application:
Go to your AWS instances dashboard to find the external IP of your deployed instance.
Access your FastAPI application by navigating to http://[external-ip]:8000/docs in your browser.
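Once the Swagger UI is up, you can also hit the prediction endpoint programmatically. The sketch below only builds the request URL and JSON payload; the /predict path and the tv/radio/newspaper field names are assumptions, so match them to the schema your FastAPI app actually defines, then send the payload with requests.post or curl:

```python
import json

# Build the target URL and JSON body for the deployed model.
# NOTE: the /predict path and the field names below are assumptions;
# adjust them to the request schema defined in your FastAPI app.
def build_prediction_request(host: str, tv: float, radio: float, newspaper: float):
    url = f"http://{host}:8000/predict"
    payload = json.dumps({"tv": tv, "radio": radio, "newspaper": newspaper})
    return url, payload

url, payload = build_prediction_request("203.0.113.10", 230.1, 37.8, 69.2)
print(url)      # http://203.0.113.10:8000/predict
print(payload)
```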

It works!

For cleanup purposes, you can easily remove the entire deployment using the following Terraform command in your terminal:

terraform destroy 

In this two-part series, we embarked on a journey to transform your machine learning model for ad sales prediction into a real-world tool.

Part One focused on the heart of the application: the model itself. We explored a classic advertising dataset and meticulously trained and evaluated various machine learning models. By comparing their performance, we identified the most effective model for predicting sales based on advertising spend.

Part Two took that winning model and prepped it for deployment on AWS. We leveraged the power of Docker to package our application, ensuring portability across environments. We then created a secure repository in ECR (Elastic Container Registry) for storing our Docker image. Finally, Terraform, our infrastructure-as-code tool, streamlined the provisioning of AWS resources to run the application in production.

Now you have a robust and scalable ad sales prediction tool deployed on AWS, ready to churn out real-time predictions with each new slice of advertising data that comes its way! But before you wrap up and click away, remember the sacrosanct rule of production applications: usually, we rock the boat with port 443 (HTTPS) and gird our digital loins with a more solid security setup.

Now, while this guide has laid the groundwork, it’s really just your launch pad. Here’s where I drop the DWYATT bombshell — Do What You Ain’t Taught. That’s right, this playful twist isn’t just for kicks; it’s an invitation to tread beyond the beaten path, to mix things up, to sprinkle a bit of your own unconventional magic onto what we’ve started here. Who knows? You might just concoct something brilliantly out-of-box that takes your deployment from standard to stellar.

Stay tuned for our future escapades where we might just dive deeper into the rabbit hole, optimizing your deployment or knitting your nifty tool with other applications. Until then, keep experimenting, keep learning, and most importantly, keep doing what you ain’t taught!


Andres Lopez

I wear many hats, but at their core, they all share a common thread: a love for logic, problem-solving, and building. Connect with me on X @Hindsightech