Using Copilot for Obsidian with the new Llama 3 local LLM

PKM Explorer · 3 min read · Apr 22, 2024

In a previous post I wrote about my experiment using Copilot for Obsidian with a local Ollama LLM. This time I decided to use LM Studio as the LLM backend.

A local LM Studio server is much easier to set up than a local Ollama server because everything is done from the LM Studio interface. No fussing around with Windows PowerShell, setting the maximum context window, or configuring an Origins variable.

A. Setting up the LM Studio backend

  1. Download and install LM Studio from lmstudio.ai.
  2. Run LM Studio. The LM Studio home screen appears, with a number of preselected LLMs that you can download to your local hard disk:
LM Studio home screen

You can download any of these preselected LLMs (or other ones) using LM Studio, or you can use any model file downloaded from Hugging Face, provided it has the .gguf extension. I downloaded the much talked-about Llama 3 8B Instruct model recently made available by Meta.
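If you prefer to script the download rather than use the in-app search screen, a minimal Python sketch with the huggingface_hub package might look like the one below. The repo and file names are illustrative assumptions; check the model card on Hugging Face for the exact GGUF quantization you want, and note that LM Studio expects models under a publisher/model subfolder of its models directory.

from pathlib import Path
from huggingface_hub import hf_hub_download

# Illustrative repo and file names -- substitute the GGUF build you actually want.
repo_id = "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"
filename = "Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"

# Place the file where LM Studio looks for local models (Windows default shown below).
target_dir = Path.home() / ".cache" / "lm-studio" / "models" / "lmstudio-community" / "Meta-Llama-3-8B-Instruct-GGUF"
target_dir.mkdir(parents=True, exist_ok=True)

path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target_dir)
print(f"Model saved to {path}")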

LM Studio downloads and looks for local LLMs under
C:\Users\<username>\.cache\lm-studio\models\
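To confirm which models LM Studio can see, a quick Python sketch like the following walks that default models folder and lists every GGUF file it finds, with its approximate size. The path assumes the Windows default location mentioned above; adjust it for macOS or Linux.

from pathlib import Path

# Default LM Studio models directory on Windows (adjust for other platforms).
models_dir = Path.home() / ".cache" / "lm-studio" / "models"

# Print every GGUF file LM Studio can load, with its size in GB.
for gguf in sorted(models_dir.rglob("*.gguf")):
    size_gb = gguf.stat().st_size / 1e9
    print(f"{gguf.relative_to(models_dir)}  ({size_gb:.1f} GB)")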
