Résumé Chatbot

Aaron Philip
7 min read · Jul 2, 2024


I have had a portfolio website for a while now; I built it from a Next.js template. But for the longest time I've felt that something is missing from developer portfolios. I want recruiters to learn about my work in an interactive way, and just linking my résumé on my website was not going to cut it. So I took inspiration from a LinkedIn post I saw of someone building a chatbot that answers questions about them, and thought of a way to integrate one into my portfolio.

My portfolio website

My Portfolio : https://nextjs-blog-sand-rho-45.vercel.app/

Now that the idea is clear, let's move on to the implementation.

  1. Convert the résumé from PDF to a plain-text (.txt) file
  2. Remove unnecessary whitespace (a small cleanup sketch follows below)
Example of the résumé in text format
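If you prefer to script the cleanup instead of doing it by hand, a short Node.js pass over the exported text is enough. A minimal sketch, assuming the raw export is saved as resume.txt; the backend shown later reads one chunk per line from embeddings.txt, and both file names are placeholders you can change:

// cleanResume.js - collapse runs of whitespace in the exported résumé text
// "resume.txt" is an assumed input name; the backend below reads embeddings.txt
import { readFileSync, writeFileSync } from 'fs';

const raw = readFileSync('resume.txt', 'utf-8');

const cleaned = raw
  .split('\n')
  .map((line) => line.replace(/\s+/g, ' ').trim()) // collapse spaces and tabs
  .filter((line) => line !== '')                   // drop empty lines
  .join('\n');

writeFileSync('embeddings.txt', cleaned);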

Backend System Architecture:

System Architecture

Backend Code:

import express from 'express';
import bodyParser from 'body-parser';
import cors from 'cors'; // Import cors middleware
import { GoogleGenerativeAI, TaskType } from "@google/generative-ai";
import { readFileSync } from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

// Initialize Google Generative AI (the API key is read from an environment variable)
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const model = genAI.getGenerativeModel({ model: "embedding-001" });
const model2 = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

// Get the directory name of the current module
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Initialize Express server
const app = express();
app.use(bodyParser.json());

// Use cors middleware to allow requests from all origins
app.use(cors());

// Embed a user query for retrieval
async function embedRetrievalQuery(queryText) {
  const result = await model.embedContent({
    content: { parts: [{ text: queryText }] },
    taskType: TaskType.RETRIEVAL_QUERY,
  });
  return result.embedding.values;
}

// Embed a batch of document chunks for retrieval
async function embedRetrievalDocuments(docTexts) {
  const result = await model.batchEmbedContents({
    requests: docTexts.map((t) => ({
      content: { parts: [{ text: t }] },
      taskType: TaskType.RETRIEVAL_DOCUMENT,
    })),
  });
  const embeddings = result.embeddings;
  return embeddings.map((e, i) => ({ text: docTexts[i], values: e.values }));
}

// Returns the Euclidean distance between 2 vectors
function euclideanDistance(a, b) {
  let sum = 0;
  for (let n = 0; n < a.length; n++) {
    sum += Math.pow(a[n] - b[n], 2);
  }
  return Math.sqrt(sum);
}

// Performs a relevance search for queryText against the precomputed document embeddings
async function performQuery(queryText, docs) {
  const queryValues = await embedRetrievalQuery(queryText);

  // Calculate distances
  const distances = docs.map((doc) => ({
    distance: euclideanDistance(doc.values, queryValues),
    text: doc.text,
  }));

  // Sort by distance (closest, i.e. most relevant, first)
  const sortedDocs = distances.sort((a, b) => a.distance - b.distance);

  return sortedDocs.map((doc) => doc.text);
}

// Generates a final answer using all the relevant documents
async function generateFinalAnswer(queryText, docs) {
  const context = docs.join("\n\n");
  const result = await model2.generateContent(
    `Question: ${queryText}\n\nContext:\n${context}\n\nAnswer:`
  );
  const response = await result.response;
  const text = await response.text();

  // Remove ** and \n from the final answer
  const cleanedText = text.replace(/\*\*/g, '').replace(/\n/g, ' ');
  return cleanedText;
}

// Load the document texts from embeddings.txt
const txtPath = path.resolve(__dirname, 'embeddings.txt');
const loadEmbeddingsTxt = () => {
  const fileContent = readFileSync(txtPath, 'utf-8');
  const docs = fileContent.split('\n').filter((line) => line.trim() !== '');
  return docs;
};
const docTexts = loadEmbeddingsTxt();

// Precompute embeddings for our documents
let docs = [];
embedRetrievalDocuments(docTexts).then((precomputedDocs) => {
  docs = precomputedDocs;
});

// Define the POST endpoint
app.post('/ask', async (req, res) => {
  const { question } = req.body;
  console.log("Received question:", question);
  if (!question) {
    return res.status(400).json({ error: 'Question is required' });
  }

  try {
    // Use retrieval query embeddings to find the most relevant documents
    const sortedDocs = await performQuery(question, docs);

    // Generate a final answer using all the relevant documents
    const finalAnswer = await generateFinalAnswer(question, sortedDocs);
    res.json({ answer: finalAnswer });
  } catch (error) {
    console.error("Error processing request:", error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Start the server
const PORT = process.env.PORT || 8000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

This code sets up a backend system to create a chatbot for answering questions based on a user’s résumé, using Google Generative AI models for embedding and text generation. Here’s a brief overview of what the code does:

1. Initialize Environment:
— Import necessary libraries and modules (`express`, `body-parser`, `cors`, `fs`, `path`).
— Initialize Google Generative AI models (`embedding-001` for embeddings and `gemini-1.5-flash` for text generation)

2. Setup Express Server:
— Create an Express server and configure it to use `body-parser` for parsing JSON requests and `cors` for handling cross-origin requests.

3. Embedding Functions:
— Define functions to generate embeddings for query texts and document texts using the Google Generative AI model.
— Calculate Euclidean distance between vectors for relevance search.

4. Perform Query:
— Implement a function to perform a relevance search by comparing the query text embedding with precomputed document embeddings.
— Sort documents based on their relevance (distance) to the query.

5. Generate Final Answer:
— Use the second AI model to generate a final answer by providing it with the query and the most relevant document contexts.

6. Load Document Texts:
— Load and preprocess document texts (résumé content) from a file (`embeddings.txt`).

7. Precompute Document Embeddings:
— Compute embeddings for the document texts and store them for quick access during query processing.

8. Define POST Endpoint:
— Create a `/ask` endpoint to handle POST requests. It processes the incoming question, finds relevant documents, generates an answer, and returns it as a response.

In summary, the code provides a RESTful API endpoint to receive questions, find relevant sections of a résumé, and generate an appropriate response using AI models.
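Once the server is running locally, the /ask endpoint can be sanity-checked before building any UI. A minimal sketch of a client call; the localhost URL and the sample question are placeholders, and it assumes Node 18+ for the built-in fetch and an ES module context for top-level await:

// testAsk.js - quick check of the /ask endpoint (URL and question are placeholders)
const res = await fetch('http://localhost:8000/ask', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ question: 'What projects have you worked on?' }),
});
const { answer } = await res.json();
console.log(answer);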

Backend Deployment:

Push the code to GitHub and make sure the entry file is called index.js.
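Because the server uses ES module imports, the repository also needs a package.json that sets "type": "module", lists the dependencies, and exposes a start script you can point Render at. A minimal sketch; the package name and version ranges are assumptions:

{
  "name": "resume-chatbot-backend",
  "version": "1.0.0",
  "type": "module",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "@google/generative-ai": "^0.14.0",
    "body-parser": "^1.20.2",
    "cors": "^2.8.5",
    "express": "^4.19.2"
  }
}

Also add GOOGLE_API_KEY as an environment variable in the Render dashboard so the server can read it at startup.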

For deployment, we’ll use Render

Connect your GitHub account and choose the repository you pushed the backend code to.

If there are no errors in your code, the deploy logs should show the server running on the configured port.

Frontend Code:

import React, { useState } from 'react';
import { ChatIcon, PaperAirplaneIcon } from '@heroicons/react/solid';

const ChatComponent = () => {
  // Chat window visibility, the text currently typed, and the conversation history
  const [isOpen, setIsOpen] = useState(false);
  const [message, setMessage] = useState('');
  const [messages, setMessages] = useState([]);

  const toggleChatWindow = () => setIsOpen(!isOpen);

  // Send the user's question to the backend /ask endpoint and append the answer
  const sendMessage = async () => {
    if (message.trim() !== '') {
      const userMessage = { content: message, sender: 'user' };
      setMessages([...messages, userMessage]);
      setMessage('');

      try {
        const response = await fetch('RENDER_URL/ask', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ question: message }),
        });
        if (!response.ok) {
          throw new Error(`Error: ${response.statusText}`);
        }
        const data = await response.json();
        const { answer } = data;

        const apiMessage = { content: answer, sender: 'api' };
        setMessages((messages) => [...messages, apiMessage]);
      } catch (error) {
        console.error("Failed to get answer from the API", error);
        const errorMessage = { content: "Sorry, couldn't fetch the answer.", sender: 'api' };
        setMessages((messages) => [...messages, errorMessage]);
      }
    }
  };

  // Send on Enter
  const handleSendMessage = (event) => {
    if (event.key === 'Enter') {
      event.preventDefault();
      sendMessage();
    }
  };

  const handleClickSendMessage = () => {
    sendMessage();
  };

  const handleInputChange = (event) => {
    setMessage(event.target.value);
  };

  return (
    <div className="fixed bottom-4 right-4 flex flex-col items-end z-50">
      {isOpen && (
        <div className="w-96 h-[600px] bg-white shadow-lg rounded-lg flex flex-col">
          <div className="p-4 border-b border-gray-200 flex justify-between items-center">
            <h2 className="text-lg font-semibold">Chat</h2>
            <button onClick={toggleChatWindow} className="focus:outline-none">Close</button>
          </div>
          <div className="flex-1 p-4 overflow-auto">
            {messages.map((msg, index) => (
              <div key={index} className={`my-2 p-2 rounded ${msg.sender === 'user' ? 'bg-blue-100' : 'bg-green-100'}`}>
                {msg.content}
              </div>
            ))}
          </div>
          <div className="border-t border-gray-200 p-4 flex">
            <input
              type="text"
              className="flex-1 border rounded-md p-2 mr-2"
              placeholder="Type a message..."
              value={message}
              onChange={handleInputChange}
              onKeyPress={handleSendMessage}
            />
            <button
              onClick={handleClickSendMessage}
              className="bg-blue-500 text-white rounded-full p-2 focus:outline-none"
              aria-label="Send message"
            >
              <PaperAirplaneIcon className="h-5 w-5 transform rotate-90"/>
            </button>
          </div>
        </div>
      )}

      {!isOpen && (
        <button
          onClick={toggleChatWindow}
          className="bg-blue-500 text-white p-3 rounded-full shadow-lg focus:outline-none"
          aria-label="Open chat"
        >
          <ChatIcon className="h-6 w-6" />
        </button>
      )}
    </div>
  );
};

export default ChatComponent;

Dropping this React component into your site makes a chatbot icon pop up in the corner of the page.
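In a Next.js portfolio, one easy way to show the icon on every page is to render the component from the custom App. A small sketch, assuming the component is saved at components/ChatComponent.js and the site uses the pages router:

// pages/_app.js - render the chat widget alongside every page
// (the components/ChatComponent.js path is an assumption)
import ChatComponent from '../components/ChatComponent';

export default function MyApp({ Component, pageProps }) {
  return (
    <>
      <Component {...pageProps} />
      <ChatComponent />
    </>
  );
}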

Things to keep in mind:

  • Replace RENDER_URL in the fetch call with your Render backend URL
  • Remember to install the relevant libraries (@heroicons/react) and Tailwind CSS, since the component relies on Tailwind utility classes

Thank you! Hope you enjoy making your own Resume Bots!
