5 Levels of Building Chatbot Apps with Haystack — Level 1

Arman Tunga
5 min read · Aug 28, 2024


[Cover image: Canva AI Image Generator]

Hi 👋
In this article, we’ll walk through Level 1 of our five-level series on building chatbot apps with the Haystack framework. Over the series we’ll start from an introductory bot and progress to more complex, context-aware conversational systems. Whether you’re new to chatbot development or looking to refine your expertise, this guide offers practical insights into leveraging Haystack to create chatbots that deliver engaging, dynamic interactions through well-designed systems.

Haystack Framework

Haystack is a highly customizable framework designed for building chatbot and AI applications tailored to your specific needs. Its flexible components and pipelines architecture allow you to construct anything from simple retrieval-augmented generation (RAG) apps to complex, multi-faceted systems.

Haystack's two main concepts are components and pipelines:

  1. Components: Modular building blocks like Generators and Retrievers that handle specific tasks within a pipeline. They are customizable and designed to work seamlessly with LLMs, third-party APIs, and databases.
  2. Pipelines: Configurable workflows that combine components to create powerful, end-to-end systems. Pipelines are flexible, allowing for the integration of various processing steps, and can be easily saved and reused.

Haystack supports everything from quick prototyping to full-scale deployment of chatbot applications. Having worked with the most popular GenAI frameworks, I find that Haystack offers the flexibility needed to build production-grade applications.

Now that we have an overview of the framework, let’s start with our Level 1 Chatbot App.

Level 1: Not Even a Chatbot!

Simple illustration of a Level 1 chatbot… not even a chatbot

In this example we won’t even implement chatting; we’ll only get to know Haystack’s two core concepts. Our app will be a simple while loop that takes input from the user and sends it to the OpenAI API (Haystack will handle that, along with everything else). It won’t know about the previous conversation; it’s only designed to answer one-off questions.

I’ll be using Poetry for dependency management. If you don’t have Poetry installed, refer to this page. After that, create a directory called not-even-a-chatbot and cd into it. Then simply run poetry init, hit Enter for every question and voilà 🎉 we’ve initialized our project! To install packages we use poetry add <package_name_1> <package_name_2>. For now we’ll only install haystack-ai, plus the python-dotenv package for reading our secrets from a .env file. In the terminal, simply run:
poetry add haystack-ai python-dotenv

Now that we’ve installed necessary packages, let’s dive into coding.

Import Statements

For now, don’t worry about the .env file; we’ll handle it later.

from dotenv import load_dotenv

load_dotenv() # load environment variables from .env file
from haystack.components.generators import OpenAIGenerator
from haystack.components.builders import PromptBuilder
from haystack import Pipeline

We don’t even need the Pipeline or the PromptBuilder for this example, but using them is good practice before moving on to the next levels.

Preparing the Components

# Prepare the prompt_builder component
prompt_template = """
You are a kind assistant and you are here to help people to find the information they need.
If you don't know the answer, simply say, "I don't know".

Question: {{question}}
Answer:
"""
prompt_builder = PromptBuilder(template=prompt_template)

# Prepare the llm component
model_name = "gpt-4o-mini"
llm = OpenAIGenerator(model=model_name)

We only have two components: prompt_builder and llm. At first it might look like we don’t need a PromptBuilder, but it renders a prompt by filling in any variables so that the result can be sent to a Generator. It also supports Jinja2 template syntax, letting us build flexible prompts. For those unfamiliar with it, Jinja2 is a popular templating engine for Python, widely used for generating dynamic content: it lets you create templates that are filled with data at runtime, which makes it useful for many things, including building complex prompts for language models.
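To make the idea concrete, here is a tiny stdlib-only sketch. This is not Jinja2 itself, just a mimic of its most basic {{ var }} substitution, to show conceptually what happens when our template is rendered (PromptBuilder uses real Jinja2 under the hood, which also supports loops, conditionals, and filters):

```python
import re

def render(template: str, **variables) -> str:
    """Mimic Jinja2's basic {{ var }} substitution (no loops or filters).

    Unknown variables are left untouched, purely for illustration.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = render("Question: {{ question }}\nAnswer:", question="What is Haystack?")
```

Running this fills {{ question }} with the provided value, which is exactly the job prompt_builder does for us inside the pipeline.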

Building the Pipeline

# Prepare the Pipeline
chat_pipeline = Pipeline()

# Add the components to the pipeline
chat_pipeline.add_component("prompt_builder", prompt_builder)
chat_pipeline.add_component("llm", llm)

# Make the connections between components in the pipeline
# chat_pipeline.connect("prompt_builder.prompt", "llm.prompt")
chat_pipeline.connect("prompt_builder", "llm")

Our pipeline is READY!
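To see what that connection actually means, here is a toy, Haystack-free sketch of the dataflow: prompt_builder’s "prompt" output becomes llm’s "prompt" input, so running the pipeline is conceptually just function composition. The names below are hypothetical stand-ins, not Haystack APIs:

```python
def toy_prompt_builder(question: str) -> str:
    # Stands in for PromptBuilder: fills the template with the question
    return f"Question: {question}\nAnswer:"

def toy_llm(prompt: str) -> str:
    # Stands in for OpenAIGenerator: a real one would call the OpenAI API
    return f"<model reply to {prompt!r}>"

def toy_pipeline(question: str) -> str:
    # connect("prompt_builder", "llm") wires one component's output
    # into the next component's input, like composing two functions
    return toy_llm(toy_prompt_builder(question))
```

The commented-out connect("prompt_builder.prompt", "llm.prompt") line in the real code spells out those output and input socket names explicitly; the short form lets Haystack match them for us.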

Let’s test it

Before writing our final code and testing it, we need to create a file called .env in the project root folder (next to main.py) and store our OPENAI_API_KEY inside it. Otherwise we won’t be able to use any OpenAI model. Don’t know how to get your API key? You can create one on this page. Please do not share this key with anybody. The whole point of using .env (okay, not the whole point, but mostly) is to keep the key hidden and use it securely.

Content of the .env file should look like this:
OPENAI_API_KEY=sk-proj-123...
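If you’d like to fail fast with a clearer message when the key is missing, a small helper like the sketch below can check os.environ after load_dotenv() has run. This helper is hypothetical, not part of Haystack; OpenAIGenerator will read OPENAI_API_KEY on its own, so this is purely an optional convenience:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value or raise a clear error.

    Hypothetical helper: call after load_dotenv() so .env values are loaded.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```

With that in place, require_env("OPENAI_API_KEY") either returns the key or tells you exactly what to fix.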

If you’ve done that, then you are good to go with the code below:

print("Hi, I am a simple chatbot that doesn't remember anything. "
      "You can ask me anything, I won't even remember it again!\nType 'q' to quit.\n")
while True:
    question = input("User: ")  # Get the question from the user
    if question == "q":
        break
    result = chat_pipeline.run({"prompt_builder": {"question": question}})  # Run the pipeline
    print("Assistant: ", result["llm"]["replies"][0])  # Print the answer
print("Goodbye!")

Here we created a simple while loop that runs forever unless it receives q as input from the user. We get the input and call the .run() method of our chat_pipeline to get a response from the llm. As an argument we pass {"prompt_builder": {"question": question}}. This is because we used {{ question }} inside our prompt_template variable; if you used a different variable name, use that same name here as well.

Output of our test

Hi, I am a simple chatbot that doesn’t remember anything. You can ask me anything, I won’t even remember it again!
Type ‘q’ to quit.

User: Mark lives in Berlin
Assistant: That’s great! If you have any specific questions about Mark or Berlin, feel free to ask!
User: Where does Mark live?
Assistant: I don’t know.
User: q
Goodbye!

As seen from the example above, our not-quite-chatbot cannot remember any information and can only answer when everything is given in one shot.

Outro

Congratulations on reaching the end of Level 1! 🎉

In this first step, we learnt a couple of core pieces of the Haystack framework, focusing solely on understanding the fundamental components and how they interact. We created a simple loop that communicates with the OpenAI API through Haystack, demonstrating how easy it is to get started with chatbot development.

This foundational knowledge sets the stage for more advanced levels, where we’ll delve into more complex features and capabilities, including context management and dynamic interactions.

Stay tuned for the next levels, where we’ll build on this knowledge to develop increasingly sophisticated and context-aware chatbot systems. As always, feel free to experiment and explore further with Haystack, and happy coding! 🚀

If you have any questions or need further clarification, don’t hesitate to reach out. Until next time, keep smiling!
