Build Your Own Highly Scalable Vercel (Part 1/3)

Jash Agrawal
7 min read · Mar 17, 2024

In our quest to create a Vercel-like platform tailored specifically for React projects, we’ve outlined a comprehensive system design consisting of three key components.

You can find the system design for this platform here: Here

As per our plan, our platform will have three important parts:

  1. Build Service: This is like a worker inside a special box (Docker container). Its job is to put together all the parts of a React project and store everything neatly in a big storage called AWS S3. It listens to instructions called environment variables to know what to do for each project.
  2. Main Node.js Server: Think of this like the boss of our platform. It’s where we create new projects, start putting them online, and keep track of what’s happening. It’s also where we look at the records of what’s been happening, like who visited our sites and what they did.
  3. Reverse Proxy Server: This server is like a helpful guide that directs people’s requests to the right place. Imagine you’re in a big library, and you ask for a specific book. The reverse proxy server is like the librarian who knows exactly where that book is and gets it for you quickly.

These three parts work together to make sure our platform runs smoothly and React projects are hosted online for everyone to see.

Let’s dive into the first component: the Build Service.

Start by initializing a Node.js project and installing the dependencies:

npm init -y
npm i kafkajs @aws-sdk/client-s3 mime-types

Here, we’ll use three important tools to help us with different tasks:

  1. KafkaJS: This tool is like a special messenger that helps us send and receive logs. It’s perfect for streaming logs in real-time, so we can keep track of what’s happening with our projects.
  2. AWS S3: Think of this as a giant storage locker in the cloud. We’ll use it to store all the important pieces of our project builds, like files and folders.
  3. Mime-types: This is a handy tool that helps us figure out what type of file something is without having to guess. It makes things easier for us by doing the hard work of identifying file types automatically.
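For instance, mime-types can map a file name or path to its HTTP Content-Type, which we'll need when uploading build artifacts to S3. A quick sketch of how it behaves:

const mime = require("mime-types");

// look up the Content-Type from a file name or path
console.log(mime.lookup("index.html")); // "text/html"
console.log(mime.lookup("assets/styles.css")); // "text/css"
console.log(mime.lookup("unknown.xyz")); // false when the type can't be determined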

Now, let's grab the CA certificate (ca.pem) from our Kafka provider and save it as kafka.pem in the main folder of our project (the Dockerfile below copies it under that name). While we're at it, we'll also create three important files: main.sh, script.js, and Dockerfile. These files will help us build and run our project smoothly.

Your folder structure should look something like this:
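Roughly, the project root should contain the following files (these are the same files the Dockerfile below copies into the image):

.
├── Dockerfile
├── main.sh
├── script.js
├── kafka.pem
├── package.json
├── package-lock.json
└── node_modules/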

Now, let’s lay the groundwork for our Docker container by crafting a Dockerfile. This file serves as the blueprint guiding the container on what actions to undertake.
Here’s a concise representation of our Dockerfile:

# fetch ubuntu image from docker hub
FROM ubuntu:focal

# update packages, install curl, and add the NodeSource repository for Node.js 20
RUN apt-get update
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_20.x | bash -
RUN apt-get upgrade -y

# install git and node.js on the image
RUN apt-get install -y nodejs
RUN apt-get install -y git

# set working directory (where our app will be)
WORKDIR /home/app

# copy our files into /home/app (e.g. main.sh will be at /home/app/main.sh)
COPY main.sh main.sh
COPY script.js script.js
COPY package.json package.json
COPY package-lock.json package-lock.json
COPY kafka.pem kafka.pem

# install dependencies
RUN npm install

# set permissions so these files are executable
RUN chmod +x main.sh
RUN chmod +x script.js

# run the file
ENTRYPOINT [ "/home/app/main.sh" ]

Let's create a bash script called main.sh to handle cloning our project's repository from GitHub and then hand off to script.js:

#!/bin/bash
# GIT_REPO_URL and GIT_BRANCH are passed in as environment variables when the container starts
export GIT_REPO_URL="$GIT_REPO_URL"
export GIT_BRANCH="$GIT_BRANCH"
# clone the requested branch of the project into /home/app/output
git clone -b "$GIT_BRANCH" "$GIT_REPO_URL" /home/app/output
# hand off to the Node.js build script
exec node script.js

Once the script executes, it clones the project repository from GitHub into the /home/app/output directory within our Docker container. This directory is significant because it aligns with the working directory we've designated in our Dockerfile (/home/app).

So inside the Docker container, our app looks something like this:
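In other words, once main.sh has run, /home/app should look roughly like this (output/ is the freshly cloned React project):

/home/app
├── main.sh
├── script.js
├── package.json
├── package-lock.json
├── kafka.pem
├── node_modules/
└── output/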

Preparing to Upload Build Files to AWS S3

Now that we've gathered all our project files within our working directory, let's shift our focus to script.js, where the magic happens.

Our first task is to create a function responsible for uploading our project's build files to AWS S3. But before we dive into the code, we need to set up an S3 bucket and ensure it's accessible for our upload process.

Step 1: Create an S3 Bucket

To do this, follow these steps:

  1. Log in to your AWS Management Console.
  2. Navigate to the S3 service.
  3. Click on "Create bucket" to initiate the bucket creation process.
  4. Enter a unique name for your bucket and choose the AWS region.
  5. For simplicity, you can keep the default settings for the remaining configurations.
  6. Since we'll be serving the uploaded build files publicly, uncheck the "Block all public access" boxes and acknowledge AWS's warning; otherwise the bucket policy we add in Step 2 will be rejected.
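If you prefer the command line, the same bucket can be created with the AWS CLI. A minimal sketch, where the bucket name and region are placeholders:

# create the bucket (bucket names must be globally unique)
aws s3 mb s3://your-bucket-name --region us-east-1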

Step 2: Configuring Bucket Policies

It's important to configure bucket policies properly to ensure secure access control. For simplicity, however, we'll allow public read access so the uploaded build files can be served directly.

Your newly created bucket isn't publicly readable yet. Go inside the bucket, open the Permissions tab, click Edit under Bucket policy, paste the policy below (replacing your-bucket-arn with your bucket's ARN), and save the changes.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "your-bucket-arn/*"
    }
  ]
}
With the bucket configured, we're ready to head back to script.js and write the upload helper:

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const fs = require("fs");
const path = require("path");
const mime = require("mime-types");

// An S3Client instance is created for interacting with S3.
// The client is configured with:
// region: The AWS region where the S3 bucket resides.
// credentials: Access credentials for the S3 service (access key and secret).

const s3Client = new S3Client({
  region: "awsregion",
  credentials: {
    accessKeyId: "accessKeyId",
    secretAccessKey: "secretAccessKey",
  },
});

// function to upload a specified file to an S3 bucket
async function uploadFileToS3(PROJECT_ID, fileName, filePath) {
  try {
    // Log the start of the upload process (don't worry, we'll create publishLog in a sec)
    await publishLog(`Uploading ${fileName}`);

    // Create a PutObjectCommand object for S3 interaction
    const command = new PutObjectCommand({
      Bucket: "your-bucket-name", // Name of the S3 bucket to upload to
      Key: `__output/${PROJECT_ID}/${fileName}`, // Path and filename within the bucket
      Body: fs.createReadStream(filePath), // Readable stream of the file's contents
      ContentType: mime.lookup(filePath), // Content type of the file
    });

    // Send the command to S3 to initiate the upload
    await s3Client.send(command);

    // Log the completion of the upload
    await publishLog(`Uploaded ${fileName}`);

    // Indicate successful upload
    return true;
  } catch (error) {
    // Handle any errors during the upload process
    console.error("Error uploading file:", error);
    throw error; // Re-throw the error for further handling
  }
}
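Once uploaded under this key scheme, and with the public-read bucket policy from earlier in place, each build file becomes reachable at a virtual-hosted-style S3 URL. As an illustration (bucket name, region, and project ID are placeholders):

https://your-bucket-name.s3.your-region.amazonaws.com/__output/your-project-id/index.html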

Creating the publishLog Function

Our next step involves setting up a function called publishLog, which will be responsible for publishing logs to Kafka. This function ensures that we have a streamlined process for monitoring and tracking the progress of our deployment tasks.

Let’s dive into the implementation of this function:

const { Kafka } = require("kafkajs");

// PROJECT_ID and DEPLOYMENT_ID are passed to the container as environment variables
const PROJECT_ID = process.env.PROJECT_ID;
const DEPLOYMENT_ID = process.env.DEPLOYMENT_ID;

// make the kafka connection
const kafka = new Kafka({
  clientId: `docker-build-server-${DEPLOYMENT_ID}`,
  brokers: ["broker"],
  ssl: {
    ca: [fs.readFileSync(path.join(__dirname, "kafka.pem"), "utf-8")],
  },
  sasl: {
    username: "username",
    password: "password",
    mechanism: "plain",
  },
});

// initialize the kafka producer
const kafkaProducer = kafka.producer();

// function to publish logs, along with some extra details, to kafka
const publishLog = async (log) => {
  console.log(log);
  await kafkaProducer.send({
    topic: "logs",
    messages: [
      { key: "log", value: JSON.stringify({ PROJECT_ID, DEPLOYMENT_ID, log }) },
    ],
  });
};

Finally, let's get started building our React project.


// Your kafka code

// Your s3 code

const { exec } = require("child_process");

async function init() {
  // Connect the kafka producer
  await kafkaProducer.connect();

  await publishLog("Starting build");

  // Get the output directory's absolute path
  const outDirPath = path.join(__dirname, "output");

  // Go to the output directory and execute the build command
  const p = exec(`cd ${outDirPath} && npm i && npm run build`);

  // On receiving data from stdout, log it (to stream build logs)
  p.stdout.on("data", async (data) => {
    await publishLog(data.toString());
  });

  // On receiving data from stderr, log it (to stream build error logs)
  p.stderr.on("data", async (data) => {
    await publishLog(`Error: ${data.toString()}`);
  });

  // When the build process exits, log it and upload the build files to S3
  p.on("close", async () => {
    await publishLog("Build Completed");

    // Get the dist directory path (where our build output lives)
    const distDirPath = path.join(__dirname, "output", "dist");

    // Get the contents of the dist directory
    const distFolderContents = fs.readdirSync(distDirPath, {
      recursive: true,
    });

    // Log the start of the file upload
    await publishLog("Starting to upload build files");

    // Upload files to S3, skipping directories
    for (const file of distFolderContents) {
      const filePath = path.join(distDirPath, file);
      if (fs.lstatSync(filePath).isDirectory()) {
        continue;
      }
      await uploadFileToS3(PROJECT_ID, file, filePath);
    }

    // Log completion of the deployment
    await publishLog("Deployment Completed");

    // Exit the process
    process.exit(0);
  });
}

init();

With the build server configuration finalized, we’ve laid the groundwork for a robust deployment pipeline tailored to our React projects.
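Before pushing anything to the cloud, you can sanity-check the image locally. A minimal sketch, assuming the image is tagged build-server and the Kafka/S3 credentials above have been filled in; the repository URL, branch, and IDs are placeholders:

# build the image from the Dockerfile in the project root
docker build -t build-server .

# run a one-off build, passing the environment variables the container expects
docker run --rm \
  -e GIT_REPO_URL=https://github.com/your-user/your-react-app.git \
  -e GIT_BRANCH=main \
  -e PROJECT_ID=my-project \
  -e DEPLOYMENT_ID=deploy-123 \
  build-server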

The culmination of our efforts brings us to a pivotal step: uploading our newly created image to AWS ECR (Elastic Container Registry).
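If you haven't created an ECR repository for this image yet, you can do it from the console, or with a quick AWS CLI sketch (the repository name and region are placeholders):

# create a private ECR repository to hold the build-server image
aws ecr create-repository --repository-name build-server --region us-east-1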

Now, inside the repository, click "View push commands".

Then copy and paste commands 1 to 4 in the same order.
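For reference, those four commands typically look something like this; the account ID, region, and repository name below are placeholders, so use the exact commands AWS shows you:

# 1. authenticate docker with your ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# 2. build the image
docker build -t build-server .

# 3. tag the image for your ECR repository
docker tag build-server:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/build-server:latest

# 4. push the image to ECR
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/build-server:latest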

Your image should now appear in the repository, something like this:

We will build the API server in the next part: Here

(Actively looking for full-stack developer roles. Hit me up!)

ME | LinkedIn | Github | Contact Me
