Jan.ai — Run LLMs Locally & Build Apps on Them

Robby Boney
Published in Short Bits
6 min read · Mar 7, 2024

ideogram + Affinity Photo — https://ideogram.ai/g/J6rSgHk7QkaPPknJBTEMUA/0

Run LLMs locally on your own computer

You might be familiar with LLMs such as ChatGPT, Claude, Gemini, or Mistral. These models all run in the cloud, though; sometimes it would be nice to chat with a language model that lives locally on our device. Whether you want to keep your chat-thread data for your own analysis or you simply like the idea of keeping your conversations private, running LLMs locally is an inviting idea. Jan.ai is a product that lets us do just that! Although many large language models require more RAM than most personal devices have, there is a growing list of models that are both small on disk and usable on devices with less memory to work with. These models are often specialized for a specific use case to improve performance while staying smaller than a large foundation model like Llama 2’s 70-billion-parameter variant.
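To get a rough sense of why model size matters here, a model’s memory footprint can be estimated as parameter count times bits per parameter. This is a back-of-the-envelope rule (it ignores activation and cache overhead), sketched below:

```python
def estimated_size_gb(n_params: float, bits_per_param: int) -> float:
    """Rough memory estimate: parameters x bits per parameter, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# Llama 2 70B at 16-bit precision: far beyond most personal devices.
print(estimated_size_gb(70e9, 16))  # 140.0 (GB)

# A 7B model quantized to 4 bits fits comfortably in 8 GB of RAM.
print(estimated_size_gb(7e9, 4))    # 3.5 (GB)
```

This is why the small, often quantized models in Jan’s hub are practical on a laptop while full-precision foundation models are not.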

Get a Model

Before you can start chatting with an AI on your local machine, you need to download a model from Jan’s Model Hub or import your own. You can also simply use GPT-3.5 or GPT-4 if you have an API key, as we will show below. Navigate to the Model Hub on the left and find a model that you want to use. Download or set it up!
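Once a model is downloaded, Jan can also expose it through a local, OpenAI-compatible API server (by default on port 1337; check your Jan settings, as the port and whether the server is enabled may differ). A minimal sketch of calling it with only the standard library — the model name `mistral-ins-7b-q4` is an example and depends on what you actually downloaded:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(payload: dict, base_url: str = "http://localhost:1337/v1") -> str:
    """POST the payload to the local endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

payload = build_chat_request("mistral-ins-7b-q4", "Say hello in one sentence.")
# Uncomment once Jan's local API server is running:
# print(chat(payload))
```

Because the endpoint follows the OpenAI request shape, the same payload works against OpenAI’s hosted GPT-3.5/GPT-4 by swapping the base URL and adding your API key as a bearer token.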

There are lots of models available to download and start chatting with right away in Jan. All models are open source and…

Director of Product Development @ Interject Data Systems. I write Software for Science, Data & Enterprise… https://robbyboney.notion.site/