🕹 Fun Project: Treat Llama-3 as a Coding Agent 🤖

Run Llama-3 locally and make it a coding agent.

Vincent Le
4 min read · May 2, 2024

Keywords: Llama-3, Coding Agent, Local LLM.

In this project, our goal is to create a coding agent with Llama-3 as its soul. First, we will look at how to install Llama-3 and run it locally. After that, we make the magic happen by implementing the coding agent.

The article is structured as follows:

  1. How to run Llama-3 locally.
  2. Create a coding agent with Llama-3.
  3. Conclusion.

How to run Llama-3 locally?

I downloaded Llama-3 via Ollama. It is a framework that enables users to execute open-source large language models (LLMs) locally on their PCs, eliminating the need for a cloud service. It provides a straightforward API for creating, running, and managing models, as well as a library of pre-built models that can be readily incorporated into applications.

To be honest, the process is easy and takes only a matter of seconds: go to Ollama's download page and pick the option that matches your OS.

For me, I use Ubuntu, so I went with the Linux installer.
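At the time of writing, the download page gives Linux users a one-line install script (check the page itself for the current command before running it):

curl -fsSL https://ollama.com/install.sh | sh

Once it finishes, the ollama command is available in the terminal.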

After that, we pull the model from the Ollama library by running a single command. But before putting our fingers on the keyboard, we might want to check that Ollama actually offers the model we need.

Note: we will use the 8B version.

So, the command that we need is `ollama run llama3`.

If the model is not present locally, this command first downloads the weights (around 4.7 GB for the 8B version) and then drops us into an interactive chat prompt in the terminal.

Let’s ask the model for some ideas for writing about LLMs.

I asked for some ideas, but it gave me a book. Great!
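If we would rather script this than type into the terminal, Ollama also exposes a local REST API (on port 11434 by default). Here is a minimal sketch in Python, assuming the Ollama server is running and llama3 has already been pulled:

import requests

# Ollama serves a local HTTP API on port 11434 by default.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give me three ideas for writing about LLMs.",
        "stream": False,  # return the full completion as one JSON object
    },
)
print(response.json()["response"])

With "stream": False the whole answer arrives in a single JSON object; set it to True to receive tokens as they are generated.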

There are also some built-in commands that we can use to interact with Llama-3 and manage our models.
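For reference, here are the ones I reach for most often (type /? inside the interactive session for the full list; these are standard Ollama commands, not specific to Llama-3):

/?                  show the available in-session commands
/bye                exit the interactive session
/clear              clear the session context
/show info          print details about the current model
ollama list         (from the shell) list locally downloaded models
ollama rm llama3    (from the shell) delete a downloaded model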

Create a coding agent with Llama-3

We create our coding agent in VSCode. We start by installing an extension, namely CodeGPT. After that, we connect the extension to our local Llama-3.

CodeGPT lets us connect the editor to different AI providers through their APIs.

Since we already installed Ollama, after installing CodeGPT we open the extension settings to make a few changes: select "Ollama" as the provider from the drop-down menu, activate the "CodeGPT Co-pilot" feature, and choose "llama3:instruct" from the auto-complete options.

Now we are ready to go. However, if no models show up in VSCode, we have to pull them first:

ollama pull llama3:8b
ollama pull llama3:instruct

Let’s start with an easy example: we ask our coding agent to generate a quicksort algorithm in Python.
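The exact output varies from run to run; for reference, a typical quicksort of the kind the model tends to produce looks like this (my illustration, not a verbatim capture of the agent's output):

def quicksort(arr):
    # Base case: a list of zero or one elements is already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]               # pick the middle element as the pivot
    left = [x for x in arr if x < pivot]     # elements smaller than the pivot
    middle = [x for x in arr if x == pivot]  # elements equal to the pivot
    right = [x for x in arr if x > pivot]    # elements larger than the pivot
    # Recursively sort the partitions and concatenate them.
    return quicksort(left) + quicksort(middle) + quicksort(right)

print(quicksort([3, 6, 8, 10, 1, 2, 1]))  # [1, 1, 2, 3, 6, 8, 10]

This version trades memory for clarity; an in-place partitioning scheme would be more efficient, and the agent will usually produce one if you ask for it.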

Conclusion

We started this post by learning about Ollama, a framework that enables users to run open-source large language models (LLMs) locally on their computers. Then, via Ollama, we set up the 8B version of Llama-3 and interacted with it through the terminal. Finally, we followed a step-by-step guide to creating a coding agent in VSCode using CodeGPT, which speeds up the coding process, makes it more intuitive, and reduces error rates.

Thank you for reading this article; I hope it added something to your knowledge bank! Just before you leave:

👉 Be sure to clap and follow me. It would be a great motivation for me.

👉 Follow me: LinkedIn | Github

