Level up your coding with a local LLM and CodeGPT

George Wen
2 min read · Mar 31, 2024


In our previous exploration of locally deployed Large Language Models (LLMs), we saw their potential. Today, we’ll look at another exciting use case: using a local LLM to supercharge code generation with the CodeGPT extension for Visual Studio Code.

Setting Up Your Local Code Copilot

1. Install Ollama: Ollama is a user-friendly tool for hosting local LLMs. Installation is straightforward! Head over to https://ollama.com/ for detailed instructions.

Here’s how I installed it on Ubuntu (refer to the official website for macOS and Windows):

curl -fsSL https://ollama.com/install.sh | sh

ollama pull codellama

Note: I’m using the codellama model here. To use another model such as mistral, pull it the same way (ollama pull mistral) and select it in CodeGPT in the next step.
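
Before wiring up the editor, it’s worth a quick sanity check that the model actually runs. The commands below assume a default Ollama install; the prompt is only an example:

ollama list

ollama run codellama "Write a Python function that reverses a string."

If the second command streams back code, the model is ready to serve requests.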

2. Install the CodeGPT extension: Search for “CodeGPT” in the VS Code extension marketplace and install it. Once installed, an icon appears in the left sidebar. Click it, choose Ollama as the provider, and select codellama as the model. That’s it!
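
If CodeGPT can’t reach the model, you can confirm that Ollama’s local API is responding. A minimal check, assuming Ollama is listening on its default port 11434:

curl http://localhost:11434/api/generate -d '{"model": "codellama", "prompt": "Say hello in Python", "stream": false}'

A JSON response here means the server side is fine, and any remaining issue is in the extension settings.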

Figure 1: Configure CodeGPT
Figure 2: Sample Output from CodeGPT

For more on using CodeGPT, refer to the official documentation: https://docs.codegpt.co/docs/intro

Happy Coding!
