Your AI Assistant Awaits: A Guide to Setting Up Your Own Private GPT and Other AI Models

IMBRO
3 min readMar 23, 2024


New AI models are emerging every day, and many of them are pre-trained, open source, and readily available for download. Why not take advantage and set up your own private AI assistant? Embark on your AI security journey by testing out these models.

Set up your Machine

  • Updating Ubuntu and installing build tools:
sudo apt update
sudo apt upgrade
sudo apt install build-essential curl
  • Setting up a Python environment with pyenv:
sudo apt install git gcc make openssl libssl-dev libbz2-dev libreadline-dev libsqlite3-dev zlib1g-dev libncursesw5-dev libgdbm-dev libc6-dev tk-dev libffi-dev liblzma-dev
curl -fsSL https://pyenv.run | bash
export PYENV_ROOT="$HOME/.pyenv"
[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

Add the last three lines above to your ~/.bashrc as well, so pyenv is available in every new shell, then reload it:

source ~/.bashrc
  • Installing Python 3.11 and setting it as the global version:
pyenv install 3.11
pyenv global 3.11
pip install --upgrade pip
pyenv local 3.11   # optional: pins 3.11 for the current directory only
  • Installing Poetry:
curl -sSL https://install.python-poetry.org | python3 -
export PATH="$HOME/.local/bin:$PATH"   # add this line to ~/.bashrc to make it permanent
poetry --version

Method 1: PrivateGPT with llama.cpp

  • Cloning the PrivateGPT repository:
git clone https://github.com/imartinez/privateGPT
  • Installing PrivateGPT:
cd privateGPT
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
poetry run python scripts/setup
  • Once installed, you can run PrivateGPT with the following command:
PGPT_PROFILES=local make run

PrivateGPT will load the existing settings-local.yaml file, which is configured to use a llama.cpp LLM, Hugging Face embeddings, and Qdrant as the vector store.
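For reference, the relevant sections of settings-local.yaml look roughly like this. This is an illustrative excerpt only; the exact keys and values depend on your PrivateGPT version, so check the file shipped in the repo:

```yaml
# Illustrative excerpt of settings-local.yaml; exact keys vary by version.
llm:
  mode: llamacpp        # use the bundled llama.cpp backend
embedding:
  mode: huggingface     # compute embeddings locally via Hugging Face
vectorstore:
  database: qdrant      # store vectors in an embedded Qdrant instance
```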

The UI will be available at http://localhost:8001

You can now ingest your documents through the UI and start chatting with your own PrivateGPT.
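Besides the UI, PrivateGPT also exposes an HTTP API. As a sketch, assuming the server is running on port 8001 and that your version exposes a /v1/ingest/file route (the path and file name here are examples; check http://localhost:8001/docs for the exact endpoints of your version), you could ingest a document from the command line:

```shell
# Hypothetical example: upload a local file for ingestion.
# Verify the exact route in your version's API docs at /docs.
curl -X POST http://localhost:8001/v1/ingest/file \
  -F "file=@./mydocument.pdf"
```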

Method 2: PrivateGPT with Ollama

  • Installing Ollama:
curl -fsSL https://ollama.com/install.sh | sh
  • Cloning the PrivateGPT repository:
git clone https://github.com/imartinez/privateGPT
cd privateGPT
  • Pulling the LLM and embedding models, then starting the Ollama server:
ollama pull mistral
ollama pull nomic-embed-text
sudo systemctl stop ollama   # stop the background service so we can run it in the foreground
ollama serve
  • Once done, in a different terminal, you can install and run PrivateGPT with the following commands:

poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
PGPT_PROFILES=ollama make run

The UI will be available at http://localhost:8001

Running other AI models

You can discover a lot of AI models on Hugging Face and in the Ollama model library.
Most of them are pre-trained and free to use. Details from the training dataset to data freshness can be found in each model's description. Go through it and have fun.
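Ollama also lets you customize a model with a Modelfile. As a small sketch (the base model, parameter value, and system prompt here are just example choices), you could build a personalized assistant on top of mistral:

```
# Modelfile: a customized assistant built on mistral (example values).
FROM mistral
PARAMETER temperature 0.7
SYSTEM "You are a concise security research assistant."
```

Build and run it with `ollama create my-assistant -f Modelfile`, then `ollama run my-assistant`.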

We can run most of these AI models with a single command:

ollama run <model-name>

Here, I'm choosing the gemma (7B) model from Google.
The command would be:

ollama run gemma

And there you go. You have your own Private AI of your choice. Good luck.
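Once a model is running, Ollama also serves a REST API on port 11434, which is handy for scripting. A minimal sketch, assuming the gemma model pulled above and a running `ollama serve`:

```shell
# Query the local Ollama API; requires the ollama server to be running.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma",
  "prompt": "Explain prompt injection in one sentence.",
  "stream": false
}'
```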

Note: You can run these models on a CPU, but it will be slow. Hence, a machine with a GPU is recommended.

The next part will cover how to hack an AI: finding vulnerabilities, exploiting them, and so on. Additionally, we'll explore how to automate your attacks and other tasks with these open-source AI models. See ya.
