Running Chatbot UI Locally with a LeapfrogAI Backend and API

Learn how to use the open-source tool LeapfrogAI to run a local instance of Chatbot UI

Brandi McCall
Defense Unicorns
7 min read · Nov 5, 2023


Note: The repository used in this tutorial has been archived and is now read-only. Specifically, the ctransformers backend is no longer available. See LeapfrogAI’s GitHub site for currently available backends. If you have any questions or would like more info on LeapfrogAI, feel free to open an issue on GitHub. We love demoing this tool!

Prerequisites:

  • Git installed
  • Python3 installed
  • Go installed
  • npm installed

Objectives:

  • Discuss LeapfrogAI and the benefits of using it
  • Set up a local LeapfrogAI backend
  • Run the LeapfrogAI API
  • Start and interact with a Chatbot UI

Disclaimer: LeapfrogAI and its repos are rapidly evolving. This tutorial was written on November 4, 2023 and was working with the repos at that time. Depending on repo updates, the steps below may vary as LeapfrogAI evolves.

What is LeapfrogAI?

When you first hear “LeapfrogAI,” what comes to mind? The LeapFrog kids’ toys, right? That was my first thought too. In fact, my kiddo had many of these gadgets at a young age, and they were all incredibly fun and educational. Let me introduce you to another frog.

LeapfrogAI is an open-source artificial intelligence tool designed by Defense Unicorns whose primary goal is to get AI tools into resource-constrained environments, like air-gapped ones. This means that environments such as classified Department of Defense (DoD) networks can have access to incredible AI tools, even without internet connectivity. This is a game changer for the DoD and allows somewhat ancient government systems to have access to cutting-edge technology, whether they’re on a submarine or in a SCIF. Or, in the case of this tutorial, LeapfrogAI allows you to run the open-source tool Chatbot UI locally.

Why use LeapfrogAI?

You’ve probably heard of or even used ChatGPT, which uses the OpenAI API. OpenAI’s models are trained on large data sets that may include confidential and sensitive information, raising privacy and security concerns (i.e., data leaks). In fact, some companies have banned their employees from using OpenAI tools for fear that proprietary or sensitive information may get ingested and subsequently leaked.

Enter LeapfrogAI. LeapfrogAI allows you to host your own large language model (LLM) and retain control over your data. LeapfrogAI’s API closely matches OpenAI’s API, so tools built around OpenAI can use LeapfrogAI as the backend instead. This allows you to specify the language model you want to use and fine-tune it to your needs. With the freedom to choose your own model, you can easily train a model, switch it out for a different one, or build your own user interface around it, all while maintaining control over your data.

In the next few steps, we will go over how to use LeapfrogAI to set up a local backend with the SynthIA-7B-v2.0-GGUF language model. We will then set up the LeapfrogAI API, and start up a Chatbot UI to show how we can use OpenAI-like tools to point to our own endpoint.

Set Up a Local LeapfrogAI Backend

For this tutorial, I wanted to set up everything locally as a proof of concept for using your own endpoint with OpenAI-style tooling. I am running this tutorial on a MacBook Pro with an M2 processor and macOS Sonoma 14.0. I have not tested the following steps on a Linux or Windows machine, so deployment there may differ slightly.

To set up the LeapfrogAI backend, we start with the leapfrogai-backend-ctransformers repo. The C Transformers library provides Python bindings for transformer models implemented in C/C++ using the GGML library. Move into the directory you want to run the project in, and clone the repo to your local machine.

git clone https://github.com/defenseunicorns/leapfrogai-backend-ctransformers

Change into the repo directory and run the following commands:

mkdir .model/
wget https://huggingface.co/TheBloke/Synthia-7B-v2.0-GGUF/resolve/main/synthia-7b-v2.0.Q4_K_M.gguf
mv synthia-7b-v2.0.Q4_K_M.gguf .model/synthia-7b-v2.0.Q4_K_M.gguf

The above commands download the SynthIA-7B-v2.0-GGUF model and move it into a local .model directory. Next, we want to set up a Python virtual environment for our project dependencies. If you have Python3 but get a “command not found” error when running python commands, simply set the following alias or change the python commands to python3.

alias python=python3

Now run the following commands:

python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements-dev.txt

The above commands create and activate a Python virtual environment, allowing you to install package requirements without affecting your system-wide Python installation. This ensures the correct versions of libraries are installed only within the virtual environment, avoiding interference with other Python projects you may have on the same computer. To start the model backend, run the following command:

python main.py

If you get an error, check whether it says “Address already in use.” I got this error on my first attempt at running python main.py and used the commands below to discover that my Multipass application was already using port 50051 (the default LeapfrogAI port). After I killed Multipass and ran the command again, I was able to get the backend working.
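If you hit the same error, you can find and stop whatever is holding the port. A minimal sketch (the PID shown is a placeholder; use the one lsof actually reports):

# Find the process listening on the default LeapfrogAI port
lsof -i :50051

# Stop it using the PID from the lsof output (12345 is a placeholder)
kill 12345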


If you did not get an error and the backend is up and running in your terminal, you are ready to move on!

Run LeapfrogAI API

LeapfrogAI allows us to deploy an API that closely matches the OpenAI API. For this tutorial we will use the leapfrogai_api_server repo. Open a new terminal window and clone the repo to your local machine.

git clone https://github.com/defenseunicorns/leapfrogai_api_server

Change into the repo directory and run the following command:

go run main.go

Once this is running, you should see the API server’s startup logs in your terminal.
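Before wiring up the UI, you can sanity-check the API from another terminal. This is a sketch based on the OPENAI_API_HOST value we set below (http://localhost:8080/openai); Chatbot UI calls OpenAI-style routes such as /v1/models under that host, and the exact routes may change as the repos evolve:

# List available models through the OpenAI-compatible route
curl http://localhost:8080/openai/v1/models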

Start the Chatbot UI

To make LeapfrogAI easier to interact with, we want to connect it to a user interface. Chatbot UI is an open-source chat user interface for AI models. While LeapfrogAI is working on its own user interface, it is still a work in progress, so we will use the more mature Chatbot UI.

To get started, clone the Chatbot UI repo to your local directory and change into it. Note that this is the Defense Unicorns version of Chatbot UI.

git clone https://github.com/defenseunicorns/chatbot-ui

Create a .env.local file and add the following environment variables to it. The API key is a placeholder (our local API does not require a real OpenAI key), the host points at the LeapfrogAI API server we just started, and the default model matches the ctransformers backend:

OPENAI_API_KEY=fakekey
OPENAI_API_HOST=http://localhost:8080/openai
DEFAULT_MODEL=ctransformers

Install dependencies by running:

npm i

Run the Chatbot UI with the following command:

npm run dev

Open your web browser and navigate to localhost:3000. You should be met with the Chatbot UI. Use the drop-down arrow under Model to select ctransformers.

Now you’re ready to ask Chatbot a question. The time to respond is dependent on your local machine’s performance. As you send requests in the Chatbot UI, you should see the LeapfrogAI API and the ctransformers backend responding in their terminal windows.
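You can also reproduce the same round trip without the UI by sending a chat completion request straight to the API. This is a sketch that assumes an OpenAI-style /v1/chat/completions route under the /openai prefix; adjust if the repos have changed:

curl http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ctransformers",
    "messages": [{"role": "user", "content": "Tell me about unicorn novels."}]
  }'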


Now pick a question to ask the chatbot. I asked Chatbot a silly question about a unicorn novel and watched the response come back in the UI.

Conclusion and Clean Up

Hopefully you found this tutorial useful and can see how using LeapfrogAI allows you to run large language models locally. Depending on your local computer’s resources, you could replace the SynthIA-7B-v2.0-GGUF model with a different model that may be more accurate or have a different personality. The beauty of LeapfrogAI is in its flexibility, so feel free to experiment!
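For example, trying a different model is just a matter of dropping another GGUF file into the backend’s .model/ directory and restarting the backend. The model below is purely illustrative; substitute any GGUF model from Hugging Face that fits your hardware:

# Download an alternative GGUF model (illustrative example)
wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf
mv mistral-7b-instruct-v0.1.Q4_K_M.gguf .model/

# Restart the backend to load the new model
python main.py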

If you want to clean up everything we’ve done here, simply delete the three repos you cloned to your local machine. Thank you so much for following along, and keep watching for more DevOps tutorials!
