Enhancing LLMs with Prompt Engineering Using LangChain and HuggingFace Library

Abhishek Jaiswal
4 min read · Sep 25, 2023


In the previous article, we explored how LangChain can make LLMs more accessible, walking through a simple tutorial that called OpenAI's LLM API from a Streamlit app. In this article, we will delve further into prompt engineering using LangChain.

A “prompt” is a sequence of tokens that directs an LLM to generate a specific response. Prompts play a crucial role in guiding the model’s behavior and can significantly impact the quality of its output. Prompt engineering is thus the process of designing effective prompts to elicit the desired output from a language model, which involves tuning input phrases to account for the model’s training and structural biases. We will walk through this approach in the sections below.

In this article we will use the HuggingFace Hub as the LLM provider in place of OpenAI's models. The first step is to generate an access token for HuggingFace models through their portal.

You need a verified account to generate access tokens, and you can navigate to Settings -> Access Tokens from your profile to create one.

Once you've generated the access token, set the HuggingFace API key environment variable to it, as shown below.
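A minimal sketch of this step: the variable name HUGGINGFACEHUB_API_TOKEN is what LangChain's HuggingFace integration reads by default, and the token value here is a placeholder to replace with your own.

```python
import os

# Replace the placeholder with the access token generated from your
# HuggingFace profile. LangChain's HuggingFaceHub wrapper reads this
# environment variable by default.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"
```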

To verify that everything works with your API key, you can run a quick test like the one below.
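One way to do this, assuming the langchain package is installed, is to initialize a hosted model and send it a test query; the repo_id below is just an example model, and any hosted text-generation model you have access to will do.

```python
from langchain.llms import HuggingFaceHub

# Initialize an LLM backed by a model hosted on the HuggingFace Hub.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",
    model_kwargs={"temperature": 0.7, "max_length": 128},
)

# If the token is set correctly, this should print a generated answer.
print(llm("What is the capital of France?"))
```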

Great, we are all set to begin prompt engineering with LangChain. LangChain provides Prompt Templates that help us orchestrate and organize prompts for our LLM application in a sequenced, systematic manner. These are pre-defined recipes for generating prompts that can encode instructions and examples specific to a context.

LangChain provides a variety of tools to work with these templates, and we will explore a few of them in this article. As mentioned, this is a continuation of my previous article, so I'll pick up where that tutorial ended.

To begin, we will import PromptTemplate and LLMChain from LangChain and define our input prompt. We initialize the prompt using PromptTemplate with input_variables and template as parameters, and then initialize an LLMChain object to run our prompt query against the LLM model we initialized earlier.
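A sketch of the imports and the prompt definition, with illustrative variable names and template text; the chain itself is constructed in the next snippet, once memory has been introduced.

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Prompt with a single input variable to be filled in at run time.
first_prompt = PromptTemplate(
    input_variables=["name"],
    template="Who is {name}? Give a short description of this football player.",
)
```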

In the LLMChain object we define our llm model, the prompt (the query to run), verbose=True, an output_key (to reference the output from this prompt elsewhere in the app), and a memory that stores the conversation under a label. Initializing memory helps the application retain information about the subjects the conversation has covered. We define it with the help of ConversationBufferMemory.
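Putting that together, here is a sketch of the chain with memory attached; the key names person and chat_history are illustrative choices, not requirements.

```python
from langchain.memory import ConversationBufferMemory

# Memory keyed on the input variable; "chat_history" is the label under
# which the conversation is stored.
person_memory = ConversationBufferMemory(input_key="name", memory_key="chat_history")

first_chain = LLMChain(
    llm=llm,               # the HuggingFaceHub model initialized earlier
    prompt=first_prompt,
    verbose=True,          # log the formatted prompt when the chain runs
    output_key="person",   # lets us reference this output later in the app
    memory=person_memory,
)
```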

Now let's run our Streamlit app and check that we get output for this input prompt, using the command streamlit run followed by your script's filename (e.g., streamlit run app.py).
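For reference, a minimal Streamlit wiring for the single chain might look like this, assuming the snippets above live in the same script:

```python
import streamlit as st

st.title("Football Player Search")
name = st.text_input("Enter a player's name")

if name:
    # Run the chain with the user's input and display the model's answer.
    st.write(first_chain.run(name))
```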

As we can see, just typing the name “ronaldinho” makes the LLM generate output for our input prompt: who is ronaldinho.

That's it, we have easily implemented a prompt template in our app! But wait, we want to implement multiple prompt templates to make the app more useful.

We are going to use a tool called SequentialChain to implement this. We simply define another chain following our first input prompt chain, as sketched below.
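Here is a sketch of a second chain that consumes the first chain's output; the prompt text and key names are again illustrative.

```python
# The second prompt takes the first chain's output ("person") as its input.
second_prompt = PromptTemplate(
    input_variables=["person"],
    template="List three major career achievements of this player: {person}",
)

second_chain = LLMChain(
    llm=llm,
    prompt=second_prompt,
    verbose=True,
    output_key="achievements",
    memory=ConversationBufferMemory(input_key="person", memory_key="achievements_history"),
)
```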

We define a parent_chain that executes all the input chains in sequence using SequentialChain. After this, we define the call for our app and display the conversation history saved by ConversationBufferMemory, as shown below.
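A minimal sketch of the sequential call and the updated Streamlit wiring, assuming the variable names from the earlier snippets:

```python
from langchain.chains import SequentialChain

# Execute the chains in order; the first chain's output_key ("person")
# feeds the second chain's input variable of the same name.
parent_chain = SequentialChain(
    chains=[first_chain, second_chain],
    input_variables=["name"],
    output_variables=["person", "achievements"],
    verbose=True,
)

# Replace the single-chain call in the Streamlit app with the parent chain.
if name:
    result = parent_chain({"name": name})
    st.write(result["person"])
    st.write(result["achievements"])

    # The conversation saved by ConversationBufferMemory can be inspected too.
    with st.expander("Player description history"):
        st.info(person_memory.buffer)
```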

As we can see, we get output for both of our input prompts in sequence. By implementing multiple prompt templates, we have created a multi-prompt LLM application that tracks information about football players.

LangChain and the HuggingFace libraries provide powerful tools for prompt engineering and for making language models more accessible. Prompt templates let us organize and sequence LLM applications systematically, and tuning input phrases to the model's training and structural biases can significantly improve the quality of its output. Composing multiple prompt templates opens the door to highly functional and useful LLM applications.
