Ollama + HuggingFace ✅🔥
Create Custom Models From Huggingface with Ollama
👨🏾💻 GitHub ⭐️| 🐦 Twitter | 📹 YouTube | 👔LinkedIn | ☕️Ko-fi
Ollama helps you get up and running with large language models locally in a few easy and simple steps. Hugging Face, by comparison, hosts more than half a million models. Wouldn't it be cool if we could create custom models from Hugging Face with Ollama? If your answer is yes, you have landed on the right post 😎
If you are new to Ollama, I have created a playlist; take your time to go through it, no pressure!
Here are the steps to create a custom model:
- Make sure you have Ollama installed and running ( no walking 😄 )
- Go to the Hugging Face website and download the model ( I have downloaded the GGUF model )
- Create a Modelfile and fill in the necessary details.
- Create a model from this Modelfile and run it locally in the terminal.
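Before moving on, it is worth sanity-checking the download from step two. Every valid GGUF file begins with the four ASCII bytes "GGUF", so a quick header check catches truncated or mislabeled downloads. A minimal sketch (the filename is the one used later in this post; adjust it to whichever quantization you downloaded):

```shell
# Sanity check: every GGUF file starts with the 4-byte ASCII magic "GGUF".
check_gguf() {
  if [ "$(head -c 4 "$1")" = "GGUF" ]; then
    echo "ok: $1 looks like a GGUF file"
  else
    echo "error: $1 is missing the GGUF magic bytes" >&2
    return 1
  fi
}

# Usage, once the model is downloaded:
#   check_gguf ./capybarahermes-2.5-mistral-7b.Q4_K_M.gguf
```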
Now, you might be wondering what the necessary details in the Modelfile are. I know what you're thinking, so here it is 🤗 In this example, I am using the TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF model.
# Modelfile
FROM "./capybarahermes-2.5-mistral-7b.Q4_K_M.gguf"
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
TEMPLATE """
<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
You might also be wondering how to create and run the custom model; for that too, here it is ✌️
ollama create my-own-model -f Modelfile
ollama run my-own-model
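Besides the interactive terminal, the model can also be queried over Ollama's REST API, which listens on localhost:11434 by default. A minimal sketch using the `my-own-model` name created above (the prompt is just an example, and the fallback message is only printed when the server is not up):

```shell
# Query the custom model through Ollama's HTTP API.
# "stream": false returns a single JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate -d '{
  "model": "my-own-model",
  "prompt": "Why is the sky blue?",
  "stream": false
}' || echo "Ollama is not running; start it with 'ollama serve' first."
```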
Now you know how to create a custom model from a model hosted on Hugging Face with Ollama. Give it a try, and good luck! Still, if you prefer a video walkthrough, here is the link.
Thank you for taking the time to read this post!
Make sure to leave your feedback and comments. See you in the next blog, stay tuned 📢