Chat with your Local Documents | PrivateGPT + LM Studio

100% Local: PrivateGPT + 2-bit Mistral via LM Studio on Apple Silicon

Ingrid Stevens
6 min read · Feb 24, 2024
Dall-E 3: PrivateGPT Local Chat with Your Docs

Note: a more up-to-date version of this article is available here.

Introduction

Welcome to a straightforward tutorial on getting PrivateGPT running on your Apple Silicon Mac (I used my M1), using 2-bit quantized Mistral Instruct as the LLM, served via LM Studio.

Note: I ran into a lot of issues getting PrivateGPT running, so know that the steps below are what worked for me on my M1 Mac. If you run into trouble, refer to the official PrivateGPT documentation, and if you find a bug, open an issue in the official PrivateGPT GitHub repo.

Note: This guide mirrors the process of deploying Ollama with PrivateGPT, for which I’ve also crafted a comprehensive walkthrough.

PrivateGPT running in “openailike” mode via LM Studio, using 2-bit quantized models

PrivateGPT is a robust tool offering an API for building private, context-aware AI applications. It’s fully compatible…
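To make the “openailike” wiring concrete: PrivateGPT talks to LM Studio’s local server using the standard OpenAI chat-completions request format. Here is a minimal sketch of what such a request body looks like; the base URL reflects LM Studio’s default local port (1234), and the model name and prompt wording are illustrative assumptions, not PrivateGPT’s actual internals.

```python
import json

# LM Studio's local OpenAI-compatible server (assumption: default port 1234).
LM_STUDIO_BASE_URL = "http://localhost:1234/v1"

def build_chat_request(question: str, context: str) -> dict:
    """Build an OpenAI-style chat-completions payload, of the kind a
    client like PrivateGPT sends when answering a question over
    retrieved document context. (Illustrative sketch, not PrivateGPT's
    exact prompt.)"""
    return {
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [
            {
                "role": "system",
                "content": f"Answer using only this context:\n{context}",
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.1,
    }

payload = build_chat_request(
    "What does the report conclude?", "(retrieved document chunks go here)"
)
print(json.dumps(payload, indent=2))
```

Because the request shape is plain OpenAI-compatible JSON, LM Studio can stand in for the OpenAI API without PrivateGPT knowing the difference.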
