Hosting Your Own AI Chatbot

David Vasquez
8 min read · Oct 21, 2024


Wouldn’t it be cool to have your own ChatGPT-like AI? One that’s free, runs on your own computer, and doesn’t require you to train the AI yourself?

And wouldn’t it be great if you didn’t need an expensive computer to do it?

Well, guess what? You can!

I recently built a home server with some cool functions that help me with my day-to-day tasks. I wanted to document everything I set up, so I decided to split the write-up into separate articles, one for each piece of functionality.

So, this is the first part of that project, and it was one of the most fun things I did for that server.

ChatGPT is an AI language model that learns, understands, and generates human language, designed to provide helpful responses to its users.

What I have

Now, the idea of this project was to make the server as budget friendly as possible. I tried not to spend anything extra and went with old equipment that I already had and had almost decided to get rid of.

I encourage you to try the same, as it’s super fun, but if you want to use your own computer instead of a separate server, that works too.

I did this on an old Dell PC with a 1TB spindle drive, 8GB of RAM, and no GPU (I know, crazy).

The great thing was that I was able to use my own equipment and not buy anything extra, and that included the software. If you do have a high-end computer with a great graphics card, you will see a performance difference, but don’t worry if you don’t. I had a decent experience with just a CPU and no dedicated GPU.

OS

This guide is not meant for macOS. There’s definitely a way to do this on a Mac, but I didn’t, so my documentation here covers Linux and Windows with WSL. If you’re not sure what WSL is, take a look at my article about it here: Running Linux on Your Windows Computer

From what I’ve read, this works really well on newer Macs. So if I get my hands on one, maybe I’ll try it out and document that too.

I set up my server with Ubuntu and it works great! Now, enough talk. Let’s get this going!

Large Language Model (LLM)

The first thing we’re going to do is install something called Ollama, which will allow us to run LLMs. Then, once we have that installed, we’re going to use a model called Llama.

What is a large language model, you ask? Well, that’s essentially the AI that learns, understands, and generates human language. It’s what you talk to when you’re using a chatbot like ChatGPT.

The model we’re going to use for this project is called Llama, developed and trained by Meta. There’s Llama, Llama2, and most recently Llama3.

This is the really cool thing. Meta trained these models with millions of dollars’ worth of equipment. They used powerful computers to build, train, and make these models available to the public… for free! (For now; who knows what’ll happen in the future.)

Yes, you can have a model, trained by a massive company like Meta on very powerful and expensive computers, at your fingertips, on your own computer. Cool, right? Let me show you how!

Llama (Large Language Model Meta AI) is a large language model developed by Meta.

Let’s Get Started

OK, let’s go. As I mentioned, this was set up on my Ubuntu server, so we’re going to do this from the command line. If you’re on Windows, go ahead and set up WSL.
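
If you haven’t set up WSL yet, on recent versions of Windows you can usually install it from an administrator PowerShell window with a single command (this installs Ubuntu by default; check Microsoft’s documentation if your setup differs):

wsl --install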

Ollama recently made an installer available for Windows. I haven’t tried that out yet, so for now let’s continue with Linux/WSL.

Ollama

The first thing we’re going to get is Ollama. Let’s head over to their site and grab the install command. Go to ollama.ai, click Download, then Linux.

Next, let’s open up a terminal and follow good practices by updating our packages.

sudo apt update

Now, go back to Ollama’s download page, copy and paste the command to your terminal and voila! Ollama will begin installing.
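
At the time of writing, the install command on their download page looks something like the one below, but grab the current one from the site in case it has changed:

curl -fsSL https://ollama.com/install.sh | sh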

Great, that’s it for Ollama. Now let’s get Llama set up. The next command we’re going to enter is this one:

ollama pull llama3

Let it do its thing…
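
If you want to double-check that the model downloaded, you can list everything Ollama has pulled so far:

ollama list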

OK, now that this is done, let’s start using Llama3. Enter the command below:

ollama run llama3

A poem about cats, generated by Llama3 on my computer
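
By the way, you don’t have to use the interactive prompt every time. You can also pass a prompt directly on the command line and get a one-off answer (the prompt here is just an example):

ollama run llama3 "Write a short poem about cats"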

Now, Llama3 is running! You’re ready to interact with an AI right on your computer. And this works whether or not you have an Internet connection, so you can be completely offline, which is pretty neat.

Play around for a bit and have some fun.
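
When you’re done playing around, you can leave the interactive prompt with the /bye command (or Ctrl+D):

/bye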

Web UI

Alright, so you have Llama3 set up on your computer, it’s running and working, and you’ve played around with it and tested some prompts.

But… doing this in the terminal doesn’t look so pretty, does it?

Of course not! As fascinating as this is, it still doesn’t have quite the same feel as ChatGPT, which is what I mentioned at the beginning.

So, let’s get this Web UI going! We’re going to be using something called Open WebUI. It’s a fantastic web UI that works with Ollama.

It will run in a Docker container, so to get this going, we first need to set up Docker.

Docker

Now, if you’ve never used Docker before, don’t worry! You don’t need to be an expert for these next few steps, and I’ll guide you through which commands to enter.

Everything I’m about to do here comes straight from Docker’s website. They have great documentation; you can check it out here.

Anyway, let’s continue. First, we are going to set up Docker’s apt repository. To do that, use the commands below:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

OK, now we’re going to install the latest version of Docker with this command:

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

If you get prompted with a message asking if you want to continue, just select Yes.

Finally, let’s make sure the installation was successful by running this command:

sudo docker run hello-world
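
One optional tweak: if you’d rather not type sudo in front of every Docker command, you can add your user to the docker group and then log out and back in. This is purely a convenience, not a requirement for the rest of this guide:

sudo usermod -aG docker $USER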

Open WebUI Setup

Excellent, now that we have Docker installed, let’s go ahead and set up our web UI.

We’re going to set up a Docker container for Open WebUI, which will integrate with Ollama. To do that, let’s run this command:

sudo docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
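
In case you’re curious, here’s roughly what those options do (a quick summary, not an exhaustive explanation):

# -d                      run the container in the background
# --network=host          share the host's network so the container can reach Ollama at 127.0.0.1:11434
# -v open-webui:...       store chat history and settings in a named Docker volume
# -e OLLAMA_BASE_URL=...  tell Open WebUI where to find Ollama
# --restart always        start the container again automatically after a reboot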

Now it will start setting up Open WebUI. To confirm it worked correctly, you can run this command:

sudo docker ps

In the output, under NAMES you should see open-webui, and IMAGE will show that it successfully pulled the Open WebUI image. Now that we know it’s running… let’s take a look and see how it looks!
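
If open-webui doesn’t show up in that list, the container’s logs are usually the quickest way to see what went wrong:

sudo docker logs open-webui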

Trying Out the Interface

Alright, I know it’s a long one, but stay with me! We’re almost there.

Open up a browser and, in the address bar, enter 127.0.0.1:8080 to access your new interface.
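
If your server is headless and you’re browsing from another machine on your network, swap 127.0.0.1 for the server’s IP address. The address below is just an example; use your own server’s IP:

http://192.168.1.50:8080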

First, you should see a sign up screen like this:

The login/sign up page for Open WebUI

You’re going to want to sign up for an account. This is only for this particular instance, not an online account. By default, the first account you set up will automatically be the admin account.

Once you’ve done that, you should be taken to your web interface. At the top left, make sure you select your model. In this case, it’s Llama3.

And there you go! Take a look at how the interface looks when you start a new chat.

A short poem about cats, but this time with Open Web UI

Pretty cool, right? This looks much better than using the terminal, it has a ChatGPT-like feel, and better yet, it’s private! Like I said, this will work whether you are offline or online.
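
If you’d like more than one model to pick from in that dropdown, just pull additional models with Ollama and they’ll show up in Open WebUI. Mistral is one example; check Ollama’s model library for the current names, since they change over time:

ollama pull mistral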

This might be a good option for people who work at companies that aren’t allowed to use ChatGPT for data privacy reasons.

This is getting long, but I’ll write another article about the cool features available to the admin user. In the meantime, have fun with your new chatbot!
