Vicuna on Your CPU & GPU: Best Free Chatbot According to GPT-4

Martin Thissen
3 min read · Apr 4, 2023

In this article, I will show you how to run the Vicuna model on your local computer using either your GPU or just your CPU.

At the moment, this article contains only the commands used to install the Vicuna model; I will add more details soon. If you can't wait for that, feel free to watch my YouTube video in the meantime.

Foundation: Install Conda

This step is recommended whether you run the Vicuna model on your GPU or your CPU. Using virtual environments helps avoid version mismatches when working on multiple projects.

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
sha256sum Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
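
To quickly verify that the installation succeeded (a small sanity check, assuming the installer added Conda to your shell's PATH), you can print the Conda version and list the available environments:

conda --version
conda env list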

GPU Installation (GPTQ Quantised)

First, let’s create a virtual environment:

conda create -n vicuna python=3.9
conda activate vicuna
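
As a quick sanity check (not part of the original steps), the Python version reported inside the activated environment should match the one we just requested:

python --version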

Next, we will install the web interface that will allow us to interact with the Vicuna model in a visually appealing way:

git clone https://github.com/oobabooga/text-generation-webui.git…
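
As a rough sketch of how the web UI's dependencies are usually installed (these exact steps are not spelled out above; they assume the requirements.txt file that ships with the repository), the next commands would look roughly like this, run inside the vicuna environment:

cd text-generation-webui
pip install -r requirements.txt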
