
Run Cursor AI for Free with Open-Source LLM

3 min read · Feb 8, 2025


Ever wanted to harness the power of Cursor AI without breaking the bank? You’re in luck! This guide will walk you through setting up Cursor AI locally using open-source Large Language Models (LLMs). Let’s dive in!

Requirements

  1. Install Cursor AI and log in: Head over to Cursor AI and create an account.
  2. Install LM Studio: Download LM Studio from lmstudio.ai. Its GUI makes it easy to download LLM models. Alternatively, you can use Ollama directly from the terminal.
  3. Download Ngrok: Grab Ngrok from ngrok.com. It will tunnel your local LLM server's port to a public URL that Cursor AI can reach.

Step-by-Step Guide

1. Install Cursor AI

Visit Cursor AI and follow the installation instructions. Once installed, log in to your account.

2. Set Up LM Studio

Download and install LM Studio from lmstudio.ai. Use the GUI to download the model that best fits your computer’s configuration. For this guide, we’ll use Qwen2.5 7B Instruct 1M. Alternatively, you can use Ollama with the command:

ollama run qwen2.5-coder

Check out the model details here.
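If you go the Ollama route, you can sanity-check the model before wiring anything up. Ollama serves an OpenAI-compatible API on port 11434 by default, so a quick curl (the prompt is just an example) confirms the model responds:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'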

downloading ‘Qwen2.5 7B Instruct 1M’ using LM Studio

3. Load the Model

Once the model is downloaded, load it into LM Studio. You’re now ready to roll!
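If you prefer the terminal, LM Studio also ships a companion CLI called lms. Assuming it’s installed and on your PATH, something along these lines should load the model and start the local server (a sketch; exact model keys and subcommands vary by version, so check lms --help):

lms load qwen2.5-7b-instruct-1m
lms server start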

4. Configure LM Studio

  • Enable CORS: Turn on CORS in the server settings so requests from Cursor aren’t blocked.
  • Set your port: By default, the server listens on port 1234, but you can change it as needed.
  • Check endpoints: Verify that the endpoints for your chosen port are enabled.
Configure LM Studio
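Before involving Ngrok, confirm the server actually answers on the port you chose. LM Studio speaks the OpenAI API, so listing the available models is a quick smoke test (swap 1234 for your port if you changed it):

curl http://localhost:1234/v1/models

You should get back a JSON list that includes the model you loaded.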

5. Install and Configure Ngrok

Download Ngrok from ngrok.com. Sign up, get your auth token from the dashboard, and register it locally per the installation guide.
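On Ngrok v3, registering the token is a one-liner:

ngrok config add-authtoken <your_auth_token>

Then start your Ngrok tunnel, pointing it at your LM Studio port (1234 unless you changed it):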

ngrok http <port_of_llm>
Running Ngrok
Ngrok Preview

6. Configure Cursor AI

  • Add Model: Go to Cursor AI settings, click on “Models,” and add the model name you downloaded.
  • Override OpenAI API: In the OpenAI API key section, override the base URL with your Ngrok forwarding HTTPS URL. Append /v1 to the overridden URL so requests hit the local LLM’s OpenAI-compatible endpoint.
  • Save and Verify: Click “Save” and verify to ensure your local LLM is working.

Note

Make sure to add /v1 to the overridden URL so requests correctly hit the local LLM endpoint. If you get a pop-up asking to enable the override, go ahead and enable it.
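A quick way to verify the whole chain is to hit the forwarded URL from the terminal before asking Cursor to use it. The subdomain below is a placeholder; substitute whatever Ngrok printed on its “Forwarding” line:

curl https://<your-subdomain>.ngrok-free.app/v1/models

If this returns the same model list as the localhost check, Cursor’s requests will get through too.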

Configure Cursor AI

And that’s it! You’re now running Cursor AI for free with your own local LLM. Enjoy the power of AI without the hefty price tag. Happy coding!

Example: how the Cursor IDE works with a local LLM
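Under the hood, Cursor simply issues OpenAI-style chat-completion requests against your overridden base URL. Here is the equivalent request by hand, as a rough sketch; the model name must match an identifier from your /v1/models check (the one below is an assumed LM Studio key for the Qwen download):

curl https://<your-subdomain>.ngrok-free.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-7b-instruct-1m",
    "messages": [{"role": "user", "content": "Write a hello world function in Python."}]
  }'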

Written by HyperFox

Rome wasn't built in a day. Take your time, experiment, and iterate.
