🕹 Fun Project: Treat Llama-3 as a Coding Agent 🤖
Run Llama-3 locally and make it a coding agent.
Keywords: Llama-3, Coding Agent, Local LLM.
In this project, our goal is to create a coding agent with Llama-3 as its soul. First, we will look at how to install Llama-3 and run it locally. After that, we make the magic happen by implementing a coding agent.
This post is organized as follows:
- How to run Llama-3 locally.
- Create a coding agent with Llama-3.
- Conclusion.
How to run Llama-3 locally?
I downloaded Llama-3 via Ollama, a framework that lets users run open-source large language models (LLMs) locally on their own machines, eliminating the need for a cloud service. It provides a straightforward API for creating, running, and managing models, as well as a library of pre-built models that can be readily incorporated into applications.
To be honest, the process is easy and takes only a matter of seconds: go to Ollama's download page and pick the option that matches your OS.
I use Ubuntu. Here is the result:
After that, we pull the model from Ollama's hub by running a single command. But before putting our fingers on the keyboard, we might want to check whether Ollama actually supports the model we need.
Note: we will use the 8B version.
This is the result after I downloaded it.
Let’s ask the model for some ideas for writing about LLMs.
These are some commands that we can use to interact with Llama-3:
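Besides the interactive terminal, Ollama also exposes a local REST API (by default on port 11434), so we can query the model from Python. Below is a minimal sketch; the `ask_llama` helper is a name of our own, and actually calling it assumes the Ollama server is running locally with the `llama3:8b` model pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama to return the full reply in one JSON object
    return {"model": model, "prompt": prompt, "stream": False}


def ask_llama(prompt: str, model: str = "llama3:8b") -> str:
    """Send a prompt to the local model and return its reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Requires a running Ollama server, so we leave the call commented out:
# print(ask_llama("Give me some ideas for writing about LLMs."))
```

This mirrors what we did in the terminal, just programmatically, which becomes handy once we want to build tooling on top of the local model.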
Create a coding agent with Llama-3
We create the coding agent in VSCode. We start by installing an extension in VSCode, namely CodeGPT. After that, we integrate the extension with our local Llama-3.
We have already installed Ollama, so after installing CodeGPT we go to the extension settings to make some changes: select “Ollama” as the provider from the drop-down menu, activate the “CodeGPT Co-pilot” feature, and select “llama3:Instruct” from the auto-complete options.
Now, we are ready to go. However, if no models appear in VSCode, we have to pull them:

```shell
ollama pull llama3:8b
ollama pull llama3:instruct
```
Let’s start with an easy example. We ask our coding agent to generate a quicksort algorithm in Python.
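For reference, a clean quicksort along the lines of what we would expect the agent to produce looks roughly like this (our own hand-written sketch, not the agent's actual output):

```python
def quicksort(items: list) -> list:
    """Sort a list with quicksort, returning a new sorted list."""
    if len(items) <= 1:
        return items  # a list of 0 or 1 elements is already sorted
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]     # elements smaller than the pivot
    middle = [x for x in items if x == pivot]  # elements equal to the pivot
    right = [x for x in items if x > pivot]    # elements larger than the pivot
    return quicksort(left) + middle + quicksort(right)


print(quicksort([33, 10, 59, 26, 41, 58]))  # -> [10, 26, 33, 41, 58, 59]
```

Comparing the agent's suggestion against a version like this is a quick sanity check that the local model is wired up correctly.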
Conclusion
We started this post by learning about Ollama, a framework that enables users to run open-source large language models (LLMs) locally on their computers. Then, via Ollama, we set up the 8B version of Llama-3 and interacted with it through the terminal. Finally, we walked through a step-by-step guide to creating a coding agent in VSCode using CodeGPT, which speeds up the coding process, makes it more intuitive, and reduces error rates.
Reference
- Ollama — Llama 3 — URL: https://ollama.com/blog/llama3