LangChain Is Nice, But Have You Tried EmbedChain?
I recently discovered EmbedChain, a library for building LLM-powered chatbots that chat with your own data (YouTube videos, PDF files, web pages, .docx files, documentation sites, Notion pages). What struck me was the remarkably simple interface EmbedChain offers compared with LangChain or LlamaIndex.
Below, I will walk through some use cases for EmbedChain. First, you will need to install it:
pip install embedchain
Use-Case 1: Chatting with Wikipedia Articles
EmbedChain can be used to chat with Wikipedia articles. It is as simple as setting your OpenAI API key, adding the articles you want to chat with, and asking your questions. EmbedChain takes care of creating the embeddings and the index, and manages the entire RAG (Retrieval-Augmented Generation) pipeline in the background.
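To give an intuition for what that RAG pipeline does under the hood, here is a minimal, self-contained sketch of the retrieval step. It uses toy bag-of-words vectors in place of real embeddings, and the function names (`embed`, `retrieve`) are illustrative, not EmbedChain's actual internals:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words frequency vector.
    # A real RAG system would use a neural embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, chunks, k=1):
    # Rank stored chunks by similarity to the question, return the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Barack Obama served as the 44th president of the United States.",
    "Donald Trump served as the 45th president of the United States.",
]
top = retrieve("Who was the 44th president?", chunks)
print(top[0])
```

The retrieved chunks are then passed to the LLM alongside your question, which is how the bot answers from your documents rather than from its training data alone.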
Below is the code to do it:
import os
from mytoken import apikey  # apikey is defined in a local mytoken.py file
from embedchain import App
# EmbedChain reads the OpenAI key from the environment
os.environ["OPENAI_API_KEY"] = apikey
wikipedia_bot = App()
# Embed Online Resources
wikipedia_bot.add("https://en.wikipedia.org/wiki/Donald_Trump")
wikipedia_bot.add("https://en.wikipedia.org/wiki/Barack_Obama")
while True:
    question = input("Enter your question, or 'quit' to stop the program.\n>> ")
    if question.strip().lower() == "quit":
        break
    # EmbedChain retrieves the relevant chunks and queries the LLM
    print(wikipedia_bot.query(question))