No Budget for Tokens? No Problem! Building Chatbots with Falcon LLM
Building a Chatbot with Falcon LLM: A Step-by-Step Tutorial
In this tutorial, we will explore how to build a chatbot using the free Falcon LLM. Falcon, developed by TII and hosted on Hugging Face, is an advanced language model capable of generating high-quality responses. By the end of this tutorial, you’ll have a fully functional chatbot that can hold natural, interactive conversations with a custom prompt.
Prerequisites: To follow this tutorial, you should have the following prerequisites:
- Basic knowledge of Python programming
- Familiarity with Flask web framework
- Hugging Face API token
Step 1: Setting up the Environment: Before we start building the chatbot, let’s set up our development environment. Create a new Python virtual environment and install the required libraries by running the following commands:
$ python -m venv chatbot-env
$ source chatbot-env/bin/activate
$ pip install langchain Flask python-dotenv huggingface_hub
Step 2: Obtaining the Hugging Face API Token: To access the Falcon model, we need an API token from Hugging Face. Visit the Hugging Face website, sign up for an account, and generate an API token under your account settings. This token will allow us to make API requests to the hosted Falcon model.
Step 3: Setting up your .env file:
To securely store and access your environment variables, we’ll use a .env file. Follow these steps to set it up:
1. Create a new file in the root directory of your project and name it .env
2. Open the .env file in a text editor.
3. Add the following line to the file:
HUGGINGFACEHUB_API_TOKEN=your_api_token_here
Replace your_api_token_here with your actual Hugging Face Hub API token.
4. Save the .env file.
By setting the HUGGINGFACEHUB_API_TOKEN variable in the .env file, we ensure that our application can securely access the API token without exposing it directly in our code.
Remember not to share or upload your .env file to any public repositories, as it contains sensitive information. It's best to add the .env file to your project's .gitignore file to avoid accidental sharing.
With the .env file set up, we can now proceed to the next steps of the tutorial.
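For clarity, here is what load_dotenv does for us: it reads KEY=VALUE pairs from the file into os.environ. Below is a minimal, stdlib-only sketch of that behavior; the load_env_file helper is purely illustrative (in the actual app we use the real python-dotenv library):

```python
import os
import tempfile

def load_env_file(path):
    """Minimal stand-in for python-dotenv's load_dotenv: read KEY=VALUE
    lines into os.environ, skipping blanks and comments, without
    overwriting variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a throwaway .env file and load it
os.environ.pop("HUGGINGFACEHUB_API_TOKEN", None)  # start clean for the demo
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# Hugging Face credentials\n")
    f.write("HUGGINGFACEHUB_API_TOKEN=your_api_token_here\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["HUGGINGFACEHUB_API_TOKEN"])  # your_api_token_here
os.unlink(env_path)
```

This is also why the token must be read from os.environ only after load_dotenv has run.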
Step 4: Implementing the Chatbot: Now, let’s dive into the implementation of the chatbot. Create a new Python file called app.py and add the following code:
from langchain import HuggingFaceHub, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
from dotenv import find_dotenv, load_dotenv
import os
from flask import Flask, render_template, request

# Load the .env file before reading the token from the environment
load_dotenv(find_dotenv())
huggingfacehub_api_token = os.environ["HUGGINGFACEHUB_API_TOKEN"]

repo_id = "tiiuae/falcon-7b-instruct"
llm = HuggingFaceHub(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=repo_id,
    model_kwargs={"temperature": 0.6, "max_new_tokens": 2000},
)

template = """
Your custom prompt
{history}
Me: {human_input}
Jack:
"""

prompt = PromptTemplate(
    input_variables=["history", "human_input"],
    template=template,
)

# Build the chain once at module level so the conversation
# memory persists across requests instead of resetting each call
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=ConversationBufferWindowMemory(k=2),
)

def factory(human_input):
    return llm_chain.predict(human_input=human_input)

# Web GUI
app = Flask(__name__)

@app.route("/")
def home():
    return render_template("index.html")

@app.route("/send_message", methods=["POST"])
def send_message():
    human_input = request.form["human_input"]
    message = factory(human_input)
    return message or ""

if __name__ == "__main__":
    app.run(debug=True)
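Before wiring up the HTML page, you can sanity-check the /send_message route on its own. A minimal sketch using Flask's built-in test client, with the LLM call replaced by a hypothetical fake_factory stub so no token or network access is needed:

```python
from flask import Flask, request

app = Flask(__name__)

def fake_factory(human_input: str) -> str:
    # Stand-in for the LLM-backed factory() so the route can be
    # exercised without an API token or network access.
    return f"Echo: {human_input}"

@app.route("/send_message", methods=["POST"])
def send_message():
    human_input = request.form["human_input"]
    return fake_factory(human_input) or ""

# Flask's test client lets us POST form data without starting a server
client = app.test_client()
resp = client.post("/send_message", data={"human_input": "Hello"})
print(resp.get_data(as_text=True))  # Echo: Hello
```

The same pattern works against the real app: the form field name (human_input) and the route must match what the HTML template submits.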
Step 5: Creating an HTML Template for Use with Flask:
Create the index.html file inside a folder named templates:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>English Tutor</title>
<style>
body {
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
height: 100vh;
background-color: #1f1f1f;
color: #ffffff;
font-family: "Arial", sans-serif;
}
form {
display: flex;
flex-direction: column;
align-items: center;
margin-bottom: 20px;
}
input[type="text"] {
padding: 10px;
margin-bottom: 10px;
border: none;
border-radius: 5px;
font-size: 16px;
}
button[type="submit"] {
padding: 10px 20px;
border: none;
border-radius: 5px;
background-color: #26a69a;
color: #ffffff;
font-size: 16px;
cursor: pointer;
}
#response_message {
text-align: center;
font-size: 18px;
width: 80%;
margin-top: 20px;
overflow-y: auto;
max-height: 200px;
}
</style>
</head>
<body>
<form method="POST" action="/send_message">
<input
type="text"
name="human_input"
placeholder="Enter your question"
/>
<button type="submit">Send</button>
</form>
<div id="response_message"></div>
<script>
const form = document.querySelector("form");
const responseMessage = document.getElementById("response_message");
form.addEventListener("submit", function (e) {
e.preventDefault();
const formData = new FormData(form);
fetch("/send_message", {
method: "POST",
body: formData,
}).then((response) =>
response.text().then((data) => {
responseMessage.innerHTML = data;
})
);
form.reset();
});
</script>
</body>
</html>
And voilà, it’s done!
Here is the final project structure (Flask looks for the templates folder next to app.py; you can generate requirements.txt with pip freeze > requirements.txt):
my_project/
├── templates/
│   └── index.html
├── .env
├── .gitignore
├── app.py
└── requirements.txt
Finally
In this tutorial, we explored how to build a chatbot using the Falcon LLM with Flask, leveraging the power of language models and the Hugging Face Hub to create an interactive conversational agent.
Along the way, we covered the essential steps: setting up the project, storing the API token securely, integrating the Falcon model through LangChain, handling user input, and building a simple web interface with Flask so users can chat with the bot.