Resolving challenges while setting up PrivateGPT on local (ubuntu)

Bennison J
Published in YavarTechWorks
6 min read · Oct 23, 2023

Hello, everyone! šŸ‘‹ Iā€™m Bennison. In this article, Iā€™m going to explain how to resolve the challenges you may hit when setting up (and running) PrivateGPT with a real LLM in local mode. šŸš€šŸ’»

Prerequisites

PrivateGPT requires Python 3.11. If you need to manage multiple Python versions on your system, install pyenv, a tool for doing exactly that.

You can set up pyenv on your machine by using the following reference.
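For example, assuming pyenv is already installed and initialised in your shell, pinning Python 3.11 for the project can look like this (a sketch, not the only way):

```shell
# Build and install a Python 3.11 interpreter under pyenv
pyenv install 3.11

# Pin the current directory (your PrivateGPT checkout) to that version;
# this writes a .python-version file that pyenv picks up automatically
pyenv local 3.11

# Confirm the active interpreter reports 3.11.x
python --version
```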

Cloning the Repository

The PrivateGPT setup begins with cloning the PrivateGPT repository. Use the following link to clone it.
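For instance, assuming the repository still lives at the URL it had at the time of writing (check the official project page if it has since moved):

```shell
# Clone PrivateGPT and move into the project directory
git clone https://github.com/imartinez/privateGPT.git
cd privateGPT
```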

Installation and Settings

Once the repository is cloned, we can go straight to installation and settings.

Use the official PrivateGPT repository for this installation process.
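As a rough sketch of what that process looked like at the time of writing (the extras names and the setup script come from the repositoryā€™s README and may have changed, so treat them as assumptions and defer to the current README):

```shell
# Install PrivateGPT's dependencies, including the Gradio UI
# and the local-LLM extras (group names as per the README at the time of writing)
poetry install --with ui,local

# Download the default models used by the local profile
poetry run python scripts/setup
```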

Once the installation is done, verify that everything is working by running make run and opening http://localhost:8001 in your favorite browser. You should see a Gradio UI configured with a mock LLM that echoes back your input.

I faced problems configuring a real LLM; I cover them in the following topics.

To run privateGPT locally with a real LLM, use the following command.

Before running this command, make sure you are in the privateGPT directory.

PGPT_PROFILES=local make run

If the above command fails with an unhandled error like one of those below, use the matching topic to resolve the problem.

Troubleshooting ā€œlibcublas.so.*[0ā€“9] not foundā€ Exception

When I ran the command PGPT_PROFILES=local make run to start privateGPT with a local LLM, I got the exception libcublas.so.*[0ā€“9] not found.

Regarding this, I asked a question on Stack Overflow and opened a discussion on GitHub.

From these, I learned that the exception is related to the NVIDIA CUDA Toolkit, which is used for GPU-accelerated computing.

So, to run PrivateGPT fully locally, GPU acceleration is required.

Just go to the CUDA Toolkit download page, choose your platform (Linux, Windows, or macOS), then choose your operating system, distribution, and the version of the CUDA Toolkit you want to download, and follow the installation instructions on the same page.
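Before installing the toolkit, it is worth confirming that an NVIDIA driver is already present; nvidia-smi is the standard check (if the command is missing, install the driver first):

```shell
# Check for a working NVIDIA driver; nvidia-smi ships with the driver,
# so "command not found" here means the driver itself is missing
nvidia-smi || echo "No NVIDIA driver detected - install the driver before the CUDA Toolkit"
```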

Once the installation is completed, verify that the CUDA Toolkit was installed successfully by running the following command in your terminal.

nvcc --version

If the toolkit is installed, it should show output like the following:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda compilation tools, release 12.3, V12.3.52
Build cuda_12.3.r12.3/compiler.33281558_0

If the above command does not work, we need to set the environment variables in the .bashrc file manually. Generally, CUDA is installed under /usr/local. We can locate the directory with the following find command.

sudo find /usr/local -maxdepth 1 -type d \( -name "cuda" -o -name "cuda-[0-9]*" \)

It shows results like the following:

/usr/local/cuda-12.3
/usr/local/cuda-12

Once you find the CUDA directory, add it to your .bashrc file as follows:

export PATH=/your/path/to/cuda/bin:$PATH
export LD_LIBRARY_PATH=/your/path/to/cuda/lib64:$LD_LIBRARY_PATH

Note: When you paste the above to your .bashrc, don't forget to update your CUDA path.

Finally, run the following command to reload the environment variables:

source ~/.bashrc

Now you can verify the CUDA Toolkit installation with the command nvcc --version; it should print the toolkit version.

Once this process is done, try running PrivateGPT on your local machine again with the command PGPT_PROFILES=local make run.

Troubleshooting ā€œlibcudnn.so.8: cannot open shared object fileā€

This error message indicates that your system is missing the libcudnn.so.8 shared library, which is a part of NVIDIAā€™s cuDNN (CUDA Deep Neural Network) library. cuDNN is a GPU-accelerated library used by deep learning frameworks like TensorFlow and PyTorch.

You need to install cuDNN from the official website; signing in is required to download it. Just open the website, sign in, and click the Download cuDNN Library button.

Please make sure to download the cuDNN library file compatible with your CUDA version. Among the installation options, I would recommend Local Installer for Ubuntu<version> <architecture> (Deb).

Once the download is done, install the package with the following command:

sudo dpkg -i cudnn-local-repo-ubuntu<distro>-<version>-1_amd64.deb

While running the command, make sure to replace the string ā€œcudnn-local-repo-ubuntu<distro>-<version>-1_amd64.debā€ with the name of the package you downloaded.

This sets up a local package repository on our machine; next, install the cuDNN packages from it with the following commands.

sudo apt update
sudo apt install libcudnn8 libcudnn8-dev

Once this installation is done, we need to update the environment variable

sudo find /usr -name libcudnn.so.8

Run the above command to get the installed fileā€™s path, then add the following line at the end of the .bashrc file. Donā€™t forget to replace the file path with yours.

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path/to/libcudnn.so.8

Once this is done, reload the environment variables with the command source ~/.bashrc.
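As a quick sanity check (assuming a glibc-based system), you can ask the dynamic linker whether it can now see the library:

```shell
# List the libraries the dynamic linker knows about and look for cuDNN;
# the fallback message means the loader still cannot see libcudnn.so.8
ldconfig -p | grep libcudnn || echo "libcudnn.so.8 is still not visible to the loader"
```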

Now, letā€™s run privateGPT using the command PGPT_PROFILES=local make run. If you are still facing problems, look at the following topics.

Troubleshooting ā€œlibnccl.so.2: cannot open shared object fileā€

The error youā€™re encountering is due to a missing shared library file, specifically libnccl.so.2. This library is required by PyTorch for GPU acceleration using NVIDIAā€™s NCCL (NVIDIA Collective Communications Library).

To check whether the file libnccl.so.2 already exists on your system, run the following find command.

sudo find /usr -name libnccl.so.2

If the file exists, add the following environment variable at the end of the .bashrc file. Donā€™t forget to change the file path to yours.

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/path/to/libnccl.so.2

If the file does not exist, follow the documentation below to download NCCL onto your machine: navigate to the website and click the Download NCCL button.

Please make sure to download the NCCL library file compatible with your CUDA version. Among the installation options, I would again recommend Local Installer for Ubuntu<version> <architecture> (Deb).

Once the download process is done, you can follow the official document for NCCL installation, or just use the commands I have provided below.

Replace the nccl-repo-<version>.deb with your downloaded file name.

sudo dpkg -i nccl-repo-<version>.deb

Note: The local repository installation will prompt you to install the local key it embeds, with which the packages are signed. Make sure to follow the instructions to install the local key, or the install phase will fail later. The prompt shows a cp command; just copy and paste it into your terminal.

sudo apt update
sudo apt install libnccl2 libnccl-dev

Once this installation step is done, we have to add the file path of libnccl.so.2 to an environment variable in the .bashrc file.

Find the file path using the command sudo find /usr -name libnccl.so.2 .

Once you get the file path, just add it at the end of the .bashrc file as I showed at the start of this topic.

Now you can run the following command to start privateGPT with a real LLM on your local machine.

PGPT_PROFILES=local make run

These are the challenges I encountered while configuring privateGPT with a real LLM in a local environment šŸ¤Æ. Iā€™m sharing this information in the hope that it may assist others who have encountered similar issues when setting up privateGPT with a real LLM. šŸ¤ If you continue to experience any difficulties in this regard, please feel free to reach out to me via LinkedIn or Twitter. šŸ“«šŸ˜Š
