Creating a chat model using HuggingFace & langchain

11th August 2024 | DAY 4

freshlimesofa

langchain and hugging face have a collaboration library called langchain-huggingface
installations that are required:

pip install langchain-huggingface
pip install huggingface_hub
pip install transformers
pip install langchain

we need an API token, which can be generated from your account settings on hugging face; this token lets your code access the hf hub
create a token with a name, say: hf_token

to access the key in your code we read it from the environment variables
since it's in os we import os and then use os.getenv

import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "your generated api key"
HUGGINGFACEHUB_API_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")

go to hf and find the model that you need to use for your application
get the model id
there are two ways to call the model:
a) we use endpoints to call model
b) we download model on local
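
for reference, option (b) would look roughly like this. this is a hedged sketch, not the exact method from this post: the model id "gpt2" is just an example, and it assumes transformers and langchain-huggingface are installed:

```python
def load_local_llm(model_id="gpt2"):
    # downloads the model weights to your machine and wraps the
    # transformers pipeline so langchain can use it like any other llm
    from transformers import pipeline
    from langchain_huggingface import HuggingFacePipeline

    pipe = pipeline("text-generation", model=model_id, max_new_tokens=64)
    return HuggingFacePipeline(pipeline=pipe)

# llm = load_local_llm()
# llm.invoke("what is machine learning?")
```

downloading locally avoids per-request api latency but needs enough disk and RAM for the weights, which is why the endpoint method below is easier to start with.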

we will be using endpoint method

#this is already done above btw
from langchain_huggingface import HuggingFaceEndpoint
#setting up hf token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "your key"

now we set up the repo id which is basically the id of the model that we want to call for our application

repo_id = "model id"
#now we create a variable which is our endpoint connection to the api
llm = HuggingFaceEndpoint(
    repo_id=repo_id,
    max_new_tokens=128,
    temperature=0.5,
    huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
)
# executing this may print a "token not added to git credential" warning; safe to ignore for api calls
llm.invoke("what is machine learning?")
#hopefully get an answer
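
since the title of this post is about chat models: langchain-huggingface also ships a ChatHuggingFace wrapper that turns the endpoint above into a chat model with message-based input. a rough sketch (the parameter values are just examples, and the helper function name is mine, not from any library):

```python
def build_chat_model(repo_id, hf_token):
    # wraps a text-generation endpoint in a chat interface that applies
    # the model's chat template to the messages you send it
    from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

    llm = HuggingFaceEndpoint(
        repo_id=repo_id,
        max_new_tokens=128,
        temperature=0.5,
        huggingfacehub_api_token=hf_token,
    )
    return ChatHuggingFace(llm=llm)

# chat = build_chat_model("model id", HUGGINGFACEHUB_API_TOKEN)
# chat.invoke("what is machine learning?")
```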

now we’ll try working with a prompt template

from langchain_core.prompts import PromptTemplate

question = "who won the uefa champions league 2011?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
print(prompt)
llm_chain = prompt | llm

to execute this entire thing we chain the prompt template and the model with the | (pipe) operator, which is the modern replacement for LLMChain and runs everything as a chain, including our prompt template

print(llm_chain.invoke({"question": question}))
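
conceptually, prompt | llm just formats the template with your variables and then sends the result to the model. here is a stand-in sketch with a fake llm so you can see the flow without an api call (all names here are hypothetical, not langchain internals):

```python
template = "Question: {question}\nAnswer: Let's think step by step."

def fake_llm(text):
    # stand-in for the real endpoint call, so the flow is visible offline
    return "model output for: " + text

def run_chain(inputs):
    # step 1: the prompt template fills in the variables
    filled = template.format(**inputs)
    # step 2: the filled prompt is passed to the llm
    return fake_llm(filled)

print(run_chain({"question": "who won the uefa champions league 2011?"}))
```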

Error log:

  1. token not added to git credential: don’t worry about this, because at the moment we just want to make api calls
  2. make sure that in your hugging face profile the token permissions allow api inference calls, else the application will not run.

NOTE:
this is more like a personal note-taking journal kinda thing so it won’t have exact, to-the-point knowledge

i’m writing all this so that i can refer to it later
