Building a Chatbot with Local LLM: A Hands-On Python Tutorial

Everton Meyer da Silva
6 min read · Apr 4, 2024


Introduction

In this hands-on tutorial, we’ll embark on a journey to build a chatbot using Local Large Language Models (LLMs) with the Python programming language. This guide empowers you to build a chatbot that’s not only efficient but also fully under your control.

Why Local Large Language Models (LLMs)?

Local LLMs put you in the driver’s seat, allowing you to tailor your chatbot’s behavior and responses to your preferences. Because everything runs on your own machine, you break free from external API dependencies, and the chatbot can reflect your unique vision.

What You’ll Learn

In the upcoming sections, we’re going to walk you through the entire process:

  1. Installing the necessary tools.
  2. Downloading a model from LM Studio.
  3. Coding the chatbot with Python and local LLMs.
  4. Looking ahead to enhancing the bot’s capabilities by adding memory.

Join us on this journey to craft a chatbot that aligns perfectly with your goals and creative aspirations. Let’s dive in!

1. Installing the Necessary Tools

Before we begin the chatbot development process, it’s essential to set up the required tools on your machine. Ensure a smooth coding experience by following these straightforward installation steps:

Python

Python is the foundation of our chatbot development. If you don’t have Python installed, proceed as follows:

  • Visit the official Python website.
  • Go to the “Downloads” section.
  • Select the appropriate Python version for your operating system (Windows, macOS, or Linux).
  • Download the installer and complete the installation steps.

To confirm your Python installation, open a terminal or command prompt and type:

python --version

Visual Studio Code (VS Code)

  • Visit the VS Code website.
  • Click on the “Download” button for your operating system.
  • Run the installer and follow the setup instructions.

LM Studio

  • Visit the LM Studio website.
  • Click on the “Download” button for your operating system.
  • Run the installer and follow the setup instructions.

With these tools ready, you’re prepared to start building your chatbot. In the next sections, we’ll download a model and then drive it from Python through LM Studio’s OpenAI-compatible local API.

2. Downloading a Model from LM Studio

To enhance the capabilities of your chatbot, we’ll leverage a pre-trained language model available on LM Studio. Follow these steps to download one of the available models:

Navigate to the ‘Search for Model’ section, where you can explore a variety of available language models. In this tutorial, we’ll be using a specific version, “mistral-7b-instruct-v0.1.Q5_0.gguf”.

Now that you’ve successfully downloaded the mistral-7b-instruct-v0.1.Q5_0.gguf model from LM Studio, let’s integrate it into our AI Chat environment.

In the LM Studio platform, navigate to the “AI Chat” section and select the downloaded model, as shown in the image below.

With the model selected, you’re now ready to test its capabilities. Feel free to try it with different prompts to explore the versatility of the model.

Finally, navigate to ‘Local Server’ and start the server to establish an inference point for this model.
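Before writing any chatbot code, you can confirm the server is reachable. The sketch below assumes LM Studio’s default port 1234 and its OpenAI-compatible `/v1/models` endpoint; it uses only the standard library to list the models the server exposes:

```python
import json
import urllib.request


def list_local_models(base_url="http://localhost:1234/v1"):
    """Query the server's /models endpoint; return model ids, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=3) as resp:
            payload = json.load(resp)
        return [item["id"] for item in payload.get("data", [])]
    except OSError:
        # Connection refused or timeout: the Local Server is not running.
        return None


models = list_local_models()
if models is None:
    print("Local Server is not reachable - start it in LM Studio first.")
else:
    print("Models available:", models)
```

If the server is up, you should see the identifier of the model you loaded; if not, go back to the ‘Local Server’ tab and start it.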

3. Coding the Chatbot with Python

Now that we have our tools set up, shall we dive into the coding process? Follow these initial steps to create a dedicated workspace for your chatbot project:

Create and Open Project Folder in Visual Studio Code

Begin by creating a new folder on your computer to house all the files related to your chatbot project. For consistency, let’s name the folder ‘ChatBot_Local_LLMs’.

Initialize a Python Virtual Environment

To keep your project dependencies isolated, it’s a good practice to use a virtual environment. In the VS Code terminal, within your ‘ChatBot_Local_LLMs’ folder, run:

python -m venv venv

This creates a virtual environment named ‘venv’ in your ‘ChatBot_Local_LLMs’ folder.

Next, activate the virtual environment using the appropriate command for your operating system.

On Windows:

.\venv\Scripts\activate

On macOS or Linux:

source venv/bin/activate

You should see the virtual environment’s name in your terminal prompt, indicating that it’s active, as shown in the image below.
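If you’d rather confirm from Python itself that the environment is active, this small optional check compares the interpreter’s prefixes, which differ only inside a virtual environment:

```python
import sys


def in_virtualenv() -> bool:
    """Inside a venv, sys.prefix points at the environment, not the base install."""
    return sys.prefix != sys.base_prefix


print("Virtual environment active:", in_virtualenv())
```

Run it with the venv activated and it should print True.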

Installing OpenAI Library

Before we start coding, let’s install the OpenAI Python library. The code in this tutorial uses the pre-1.0 `openai.ChatCompletion` interface, which was removed in version 1.0 of the library, so pin an older release:

pip install "openai==0.28"

With these steps completed, you now have a clean project setup in Visual Studio Code with a Python virtual environment ready. In the next section, we’re going to start coding our chatbot.

The Chatbot Code

With the project setup completed, it’s time to delve into coding our chatbot. In the code snippet below, we utilize the OpenAI function ChatCompletion to generate a response from the local Mistral model served by LM Studio. This example showcases the chatbot’s proficiency in providing insights on financial topics, specifically in response to a user query about Apple stock.

import openai

# Point the OpenAI client at LM Studio's local server
openai.api_type = "open_ai"
openai.api_base = "http://localhost:1234/v1"
openai.api_key = "not-needed"  # The local server does not validate the key

# Generate a response using the Chat Completion API
response = openai.ChatCompletion.create(
    model='gpt-4',  # LM Studio serves whichever model is loaded, regardless of this name
    messages=[
        {'role': 'system', 'content': 'You are a financial analyst.'},
        {'role': 'user', 'content': 'What do you think about Apple stock?'}
    ],
    temperature=0.5,  # Controls randomness in response generation
    max_tokens=1024   # Limits the maximum number of tokens in the generated response
)

# Print the complete response from the API
print(response)

# Print the content of the first choice (the model-generated reply)
print(response.choices[0].message.content)

An important part of the code is the section for defining the message roles. This part specifies the messages that will be provided to the ChatCompletion API to generate a response from the local model. Each message consists of two components:

messages=[
    {'role': 'system', 'content': 'You are a financial analyst.'},
    {'role': 'user', 'content': 'What do you think about Apple stock?'}
]

Role: Specifies who the message comes from and how the model should treat it.

  • 'role': 'system': A system message sets the assistant’s behavior or persona; it is an instruction to the model, not text typed by the user.
  • 'role': 'user': Indicates that the message comes from the user.

Content: Contains the actual text content of the message.

  • 'content': 'You are a financial analyst.': This system message assigns the chatbot its role or identity; here, the model is instructed to act as a financial analyst.
  • 'content': 'What do you think about Apple stock?': This user message asks a question about Apple stock. The chatbot uses this input to generate a response.
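This same messages list is also the key to giving the chatbot memory: if you append each user question and each model reply to it and resend the whole list on the next call, the model sees the earlier turns. A minimal sketch of that bookkeeping follows; the assistant reply here is a hypothetical placeholder, not real model output:

```python
# System prompt stays at the top of the history for every call.
history = [{'role': 'system', 'content': 'You are a financial analyst.'}]


def add_turn(history, role, content):
    """Append one message; resending the full list lets the model see prior turns."""
    history.append({'role': role, 'content': content})
    return history


# First exchange: record the user's question and the model's (placeholder) reply.
add_turn(history, 'user', 'What do you think about Apple stock?')
add_turn(history, 'assistant', 'Apple has shown strong fundamentals...')

# Follow-up question: "it" is resolvable because the earlier turns are in the list.
add_turn(history, 'user', 'And how volatile is it compared to the sector?')

# Pass the accumulated history as messages=history in ChatCompletion.create(...)
print(len(history))  # 4 messages so far
```

In a real chat loop you would append the model’s actual reply after each call before asking the next question.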

When you run the code, the full API response is printed first, followed by the model-generated content: a financial analysis of Apple Inc. stock produced by the chatbot.

Conclusion

In this hands-on Python tutorial, we’ve explored the process of building a chatbot using local Large Language Models (LLMs). We’ve created a functional chatbot that demonstrates the power of local AI models.

Whether you’re new to chatbot development or looking to expand your skills, I hope this tutorial has provided valuable insights and guidance.

Lastly, I intend to add memory to the chatbot, enhancing its ability to retain context and provide more personalized responses in future interactions. Stay tuned for my next post, where I’ll delve into this topic further.

Happy coding!
