Build Custom Chatbot in Python (2)

BridgeRiver AI
4 min read · Sep 30, 2023


This article is part of the custom chatbot series.

Quick Overview

In this article, we’ll show you how to build the knowledge base feature of Botpress and Voiceflow from scratch, using the Llama Index library in Python and a JavaScript-powered UI. After this tutorial, you can easily build your own custom chatbot instead of paying Botpress per message.

Github repo: https://github.com/michelle-w-br/chatbot_ml

Note: this blog assumes a basic understanding of Python and programming as a prerequisite.

What is Knowledge Base in Botpress and Voiceflow

A knowledge base is a repository of information or data that a chatbot or voice assistant can access and use to provide answers or responses to user queries. Here’s a brief overview of knowledge bases in Botpress and Voiceflow. Key features of the Botpress knowledge base include:

  • File Upload: You can upload files containing information, such as FAQs, product details, or any other relevant data.
  • Question-Answer Mapping: You can associate questions with their corresponding answers within the uploaded documents.
  • Bot Integration: The chatbot can be programmed to query the knowledge base to find answers to user questions during conversations.
  • Dynamic Updates: You can update the knowledge base as needed to keep the chatbot’s information up to date.

The knowledge base in Botpress is useful for creating chatbots that can provide information or answer questions based on a predefined set of documents or data.
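To make the question-answer mapping and dynamic-update ideas concrete, here is a minimal sketch of an in-memory knowledge base in plain Python. The class name and the naive keyword-overlap matching rule are purely illustrative, not Botpress internals:

```python
class SimpleKnowledgeBase:
    """A toy knowledge base: maps questions to answers via naive keyword overlap."""

    def __init__(self):
        self.entries = {}  # question -> answer

    def add(self, question, answer):
        # dynamic updates: new entries can be added at any time
        self.entries[question] = answer

    def query(self, user_question):
        # pick the stored question sharing the most words with the user's query
        words = set(user_question.lower().split())
        best, best_overlap = None, 0
        for question, answer in self.entries.items():
            overlap = len(words & set(question.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = answer, overlap
        return best

kb = SimpleKnowledgeBase()
kb.add("What is your refund policy?", "Refunds are available within 30 days.")
kb.add("How do I reset my password?", "Use the 'Forgot password' link on the login page.")
print(kb.query("Tell me about your refund policy"))
```

Real products replace this word-overlap matching with embedding-based semantic search, which is exactly what Llama Index provides below.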

The knowledge base in Botpress allows users to upload documents, web pages, web searches, and plain text

What is Llama Index

For those unfamiliar, Llama Index is a library designed to enhance the retrieval augmentation pipeline for language models (LMs). This pipeline becomes crucial when we want to equip our LMs with external knowledge from various sources, including databases or real-world data, reducing the risk of generating inaccurate or hallucinated information. Llama Index steps in to support these functions.

Llama Index boasts a range of capabilities, but we won’t cover them all in this session. The library offers data loaders that make it effortless to extract data from diverse sources like APIs, PDFs, databases, and CSV files. On Llama Hub, you can find all the data formats supported by this library.

Llama Hub has plugins that support many different data types

Additionally, it provides advanced ways to structure data, allowing for connections between different data sources. Imagine having chunks of text from PDFs and establishing connections between these chunks to maintain context. Llama Index can also assist with post-retrieval re-ranking. However, our primary focus here is on getting started with a basic introduction to the library.

How does Llama Index enable queries over a document? It creates embeddings for the document’s contents. To generate these embeddings, we’ll use OpenAI, so you’ll need an API key from OpenAI’s platform. Then we’ll set up the indexing pipeline, which takes only three steps:

  1. Load the documents, embed them, and store them in an index.
  2. Create a query/chat engine based on the index.
  3. Send the query and return the response.
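Under the hood, step 1 turns each chunk of text into an embedding vector, and step 2 retrieves the chunks whose vectors are closest to the query’s vector. Here is a toy illustration of that retrieval idea with hand-made 3-dimensional vectors (real OpenAI embeddings have 1,536 dimensions, and the chunk names and values here are made up):

```python
import math

def cosine_similarity(a, b):
    # cosine similarity: dot product divided by the product of vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# pretend embeddings for three document chunks
index = {
    "chunk about refunds":   [0.9, 0.1, 0.0],
    "chunk about passwords": [0.1, 0.8, 0.2],
    "chunk about shipping":  [0.0, 0.2, 0.9],
}

query_embedding = [0.85, 0.15, 0.05]  # pretend embedding of a refund question
best = max(index, key=lambda k: cosine_similarity(index[k], query_embedding))
print(best)  # the chunk whose vector is closest to the query
```

Llama Index hides all of this behind the three lines of setup shown next.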

This code lives in the “v3.0_kb/qa_pdf.py” file in the GitHub repo. The function get_response will be called later in the chatbot server program.

import os
from llama_index import SimpleDirectoryReader, VectorStoreIndex

os.environ['OPENAI_API_KEY'] = "sk-xxxxxxx"

# step 1: load the data and build the index
docs = SimpleDirectoryReader('./dataset').load_data()
index = VectorStoreIndex.from_documents(docs)

# step 2: set up the chat engine
chat_engine = index.as_chat_engine(response_mode="compact")

# step 3: send the query
def get_response(msg):
    response = chat_engine.chat(msg)
    return response

Chatbot User Interface

Building a chatbot user interface is fairly straightforward. We only need the files below to build the chatbot UI shown in the image:

  • templates/home.html: the HTML landing page for the server
  • static/app.js: JavaScript that defines the chatbot behavior
  • static/style.css: defines how the UI looks

home.html and the chatbot UI

Chatbot Server with Flask

We use Python Flask to set up the chatbot server, in the “chatpdf.py” file. It only needs two functions:

  1. index_get(): renders the HTML home page
  2. predict(): accepts the user message, queries the user’s custom knowledge base established by Llama Index, and returns the response in JSON format so the answer can be displayed in the chat box

from flask import Flask, render_template, request, jsonify
from qa_pdf import get_response

app = Flask(__name__)

@app.get("/")
def index_get():
    return render_template("home.html")

@app.post("/predict")
def predict():
    text = request.get_json().get("message")
    response = get_response(text)
    message = {"answer": response.response}
    return jsonify(message)

if __name__ == "__main__":
    app.run(debug=True)
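Once the server is running, static/app.js posts JSON to /predict and reads back the "answer" field. You can exercise the same request/response contract from Python with the standard library. This is a sketch: the URL assumes Flask’s default development port 5000, and the helper names are ours, not part of the repo:

```python
import json
from urllib import request

PREDICT_URL = "http://127.0.0.1:5000/predict"  # Flask dev-server default port

def build_payload(message):
    # same JSON body the chat UI sends from static/app.js
    return json.dumps({"message": message}).encode("utf-8")

def ask(message, url=PREDICT_URL):
    req = request.Request(url, data=build_payload(message),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        # the server wraps the llama-index response as {"answer": ...}
        return json.loads(resp.read())["answer"]

# example (requires the server to be running):
# print(ask("What is this PDF about?"))
```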

What if my custom data is a YouTube video

As mentioned earlier, on Llama Hub you can find all the data formats supported by this library. For example, the YouTube transcript loader lets you load YouTube data by providing the video URL.

In our Git repo, we also include a use case with YouTube data as the custom knowledge base, shown in the files “qa_video.py” and “chatvideo.py”. The logic is the same as querying PDF data; the only difference is the loader function, as mentioned here.

To Summarize

In this blog, we covered how to build the chatbot knowledge base feature from scratch, using the Python library Llama Index, Python Flask, and JavaScript to support the chatbot UI. However, the best way to learn is always to get hands-on. The full code is accessible on GitHub. If you have any questions, please feel free to leave your comments here.
