Why you need your own ChatGPT and how to install PrivateGPT

Jack Reeve
Published in Version 1
Mar 30, 2024 · 5 min read

April is AI month at Version 1, so I decided to embark on an adventure to install a completely local and open-source chatbot, in the hope of replacing my dependence on ChatGPT, Bing AI and the others.

First, I want to touch on why I want to run a local LLM (and why you should too!). In no particular order, here are a few problems with online chatbots:

Data Privacy

On both a personal and an enterprise level, we can end up sharing a lot of data with the likes of OpenAI and Microsoft, and we're trusting them to handle it securely and privately. That trust will inevitably fail, and such data will end up being sold. We've already seen employees get in trouble for leaking trade secrets to chatbots, and leaks of personal data have happened too. The catch is that it's really hard to get help from an AI chatbot if we can't give it the context of our problem.

Enshittification of training data

There have been reports that ChatGPT's responses are getting worse compared to its original launch. One theory is that as these LLMs are continually retrained, they're learning from previous AI-generated responses and otherwise lower-quality data than what was available at launch. There have also been groups of people purposefully misleading and tricking the LLMs, resulting in less accurate responses.

Slower response times

As more and more people use these LLMs, the resources needed to serve everyone grow and response generation slows down. Instead of asking a question and getting an answer immediately, we might be waiting up to a minute before anything useful comes back. We could eliminate this entirely if our own bot only serves us.

Premium subscription costs

Nothing is free (or can remain free forever), and companies need to subsidize the enormous cost of running these LLMs at scale. It's only fair to pay for something that isn't free to host, but since we've already paid for our own hardware… why not use it?

Overly restrictive guard rails

Alignment is becoming a big issue with LLMs. Hosted chatbots need to be inoffensive and politically correct so as not to upset or bias their userbase, which leaves us dependent on the values and morals of the developers behind these guard rails. This often goes wrong; one example is Google's Gemini refusing to generate images of white people and the controversy that followed. These guard rails are not infallible and can often be bypassed with clever prompting and rewording of requests, but it's an issue that shouldn't exist in the first place.

Installation on WSL

NOTE: I'm using the beta version of WSL2 (2.0.14.0), which seems to have fixed issues with Ollama finding your Nvidia GPU. I've personally had zero issues on this beta; however, be warned that this is pre-release software, and using it on a work machine might produce unexpected issues.
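
If you're not sure which WSL build you're running, you can check from the Windows side first (wsl --version is available on Store-distributed WSL releases):

# Run from Windows (PowerShell or cmd.exe), not from inside the Linux distro
wsl --version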

The whole process took just a couple of minutes to set up:

Install dependencies (Python 3.11, pipx, Poetry)

Note: You may need to restart the terminal after installing pipx if it can’t run immediately

# Add the deadsnakes PPA so apt can find Python 3.11
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.11
# Give Python 3.11 its own pip
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.11
# Install pipx, then use it to install Poetry in an isolated environment
sudo apt install pipx --fix-missing
pipx ensurepath  # put pipx apps on PATH (hence the possible terminal restart)
pipx install poetry

Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Hopefully at this point you see a success message:

Ollama install successful

PrivateGPT will still run without an Nvidia GPU, but it's much faster with one.

Pull models to be used by Ollama

ollama pull mistral
ollama pull nomic-embed-text
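
Assuming the Ollama service is already up (the install script usually starts it), you can give the chat model a quick one-shot sanity test before going any further. This optional check isn't part of the original steps:

# Optional: send a single prompt straight to the model from the terminal
ollama run mistral "Reply with one short sentence if you can read this"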

Run Ollama

Ollama will try to run automatically, so check first with ollama list. If that command errors out, then run:

ollama serve

You should see something like the following

Ollama running
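
To double-check from another terminal that the server is really listening, you can hit Ollama's default port (11434, unless you've changed it):

# The root endpoint replies with a short status message
curl http://localhost:11434
# Expected output: Ollama is running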

Install PrivateGPT

git clone https://github.com/imartinez/privateGPT
cd privateGPT
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"

Poetry installing dependencies

Run PrivateGPT

PGPT_PROFILES=ollama make run

PrivateGPT running

Open PrivateGPT in your browser

Go to localhost:8001 and start chatting!

PrivateGPT web interface
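
The web UI isn't the only way in: PrivateGPT also exposes an HTTP API on the same port. Here's a hedged example; the endpoint and field names are taken from the project's API docs, so verify them against your version:

# Quick test of the completions endpoint (assumes the default port 8001)
curl http://localhost:8001/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is PrivateGPT?"}'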

Uploading your own context

We'll cover this in more detail in a later post, but I wanted to touch on this powerful feature: the ability to upload files and ask specific questions about their content. I'm particularly interested in uploading my Daylio entries so that I can query my past activities and thoughts without having to read through days' worth of journal entries. Obviously this is private information that I wouldn't want uploaded anywhere, but since this is local I have fewer reservations.

PrivateGPT answering a question from personal context
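
If you'd rather script the upload than click through the UI, the API has an ingest endpoint as well. A hedged one-liner follows; the endpoint name comes from the project's docs, and daylio-export.txt is just a stand-in filename:

# Upload a file for ingestion via the API instead of the web UI
curl -F "file=@daylio-export.txt" http://localhost:8001/v1/ingest/file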

Re-running after reboot

For convenience, here's how to restart PrivateGPT after a system reboot. First, start Ollama:

ollama serve

In a new tab, navigate back to your PrivateGPT folder and run:

PGPT_PROFILES=ollama make run
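
If you'd rather not juggle two tabs, you can wrap both steps in a small helper script. This is a minimal sketch, assuming the repository lives at ~/privateGPT and that Ollama isn't already running as a service:

#!/usr/bin/env bash
# start-privategpt.sh: hypothetical convenience wrapper for the two steps above
ollama serve &                  # start Ollama in the background
sleep 2                         # give the server a moment to come up
cd ~/privateGPT                 # adjust if you cloned the repo elsewhere
PGPT_PROFILES=ollama make run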

Conclusion

We've downloaded and set up PrivateGPT using default settings, and we now have a basic chatbot that's completely local and does not send any data out of our network (solving our data privacy concern).

By default we're using the mistral LLM (which we pulled with ollama pull mistral above), but there are tons of available models to choose from (including uncensored ones). Join me in the next post, where we'll dive into customizing our models and creating our own context. In the meantime, play around with your local mistral AI chatbot!
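
As a preview of that customization: which models PrivateGPT requests from Ollama is controlled by the settings file behind the ollama profile. The excerpt below is a hedged sketch based on the repository's settings-ollama.yaml at the time of writing; verify the exact keys against your checkout.

# settings-ollama.yaml (excerpt; check the file in your own checkout)
ollama:
  llm_model: mistral                  # chat model, swap for any model you've pulled
  embedding_model: nomic-embed-text   # used when ingesting documents
  api_base: http://localhost:11434    # default Ollama endpoint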

About the Author:
Jack Reeve is a full stack software developer at Version 1.
