
How to use Hugging Face to run LLama-2 on your own machine

It was not hard, just tricky.

Rahul Agarwal
5 min read · Jul 19, 2023


Meta’s newly open-sourced LLama 2 Chat model has been making waves on the Open LLM Leaderboard. This powerful language model is now available to anyone, including for commercial use. Intrigued, I decided to try running LLama 2 myself. While the process was straightforward, it did require a few steps I had to dig around to figure out.

In this post, I’ll explain how I got LLama 2 up and running. With Meta open-sourcing more of their AI capabilities, it’s an exciting time to experiment with cutting-edge language models!

So without further ado, let's dig into the steps you'll need to get the LLama-2 Chat model running.

Get Access

To use LLama 2, you’ll need to request access from Meta. You can sign up at https://ai.meta.com/resources/models-and-libraries/llama-downloads/ to get approval to download the model.

Once granted access, you have two options for getting the LLama 2 files. You can download them directly from Meta’s GitHub repository. However, I found using Hugging Face’s copy of the model more convenient, so in addition to the Meta access, I requested approval to download from Hugging Face’s repo here…
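For reference, here is a minimal sketch of what the Hugging Face route can look like once both approvals come through. The repo id `meta-llama/Llama-2-7b-chat-hf` and the token placeholder are my assumptions; swap in whichever model variant you were granted access to.

```python
# A minimal sketch, assuming access to the gated meta-llama/Llama-2-7b-chat-hf
# repo has been approved and you have a Hugging Face access token.
import torch
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Authenticate with the same Hugging Face account that was granted access.
login(token="hf_...")  # replace with your own token

model_name = "meta-llama/Llama-2-7b-chat-hf"  # gated repo; adjust to your approved variant

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # let accelerate spread the weights across available devices
)
```

The first call downloads the weights into your local Hugging Face cache, so subsequent loads are read from disk rather than re-downloaded.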
