image by Lexica AI

Build Your Own ChatGPT Bot with Internet Access and Memory using LangChain and Gradio.

Iva @ Tesla Institute · Published in Artificialis · Apr 14, 2023 · 4 min read

As a researcher in the field of conversational AI, I have observed the progression of chatbots and their integration into our daily lives. Artificial Intelligence and Natural Language Processing technologies have enabled the development of more sophisticated conversational agents. In this blog post, I will demonstrate the process of constructing a ChatGPT bot with internet access and memory capabilities using LangChain and Gradio.

LANGCHAIN

LangChain is a library that offers a systematic and flexible framework for building conversational agents. It facilitates the integration of various tools and components, including search engines, databases, and AI models, into the chatbot system. Gradio, on the other hand, is a library for deploying machine learning models as web applications, giving end users a friendly interface for interacting with them.

By combining the capabilities of LangChain and Gradio, a ChatGPT bot with internet access and memory retention can be developed. This allows the chatbot to provide more informed and context-aware responses to user queries. In this post, I will provide a comprehensive guide on the steps involved in setting up and launching a ChatGPT bot with internet access and memory.

Installation

To get started, let’s make sure all the necessary dependencies are installed. It’s as easy as running a few commands in a notebook cell (drop the leading ! if you run them in a terminal):

!pip install google-api-python-client
!pip install langchain
!pip install openai
!pip install python-dotenv
!pip install gradio

The langchain library provides a flexible framework for building conversational agents, with easy integration of tools and components. It also has memory management capabilities, allowing the chatbot to retain previous conversation history for better responses. The library ships integrations for AI models, such as OpenAI’s chat models, and an initialize_agent function for defining how tools are wired into the response flow:

import os
from langchain.agents import Tool
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.agents import initialize_agent
import gradio as gr

Environment variables are used to store sensitive information, such as API keys, that should not be stored directly in the code.

The load_dotenv function loads the environment variables defined in a file named .env. This file should live in the same directory as your Python script and should not be committed to version control.
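As a sketch, the .env file only needs the three keys used in this post (the values below are placeholders, not real credentials):

OPENAI_API_KEY=your-openai-api-key
GOOGLE_API_KEY=your-google-api-key
GOOGLE_CSE_ID=your-google-cse-id

With the file in place, load it and read the keys into variables: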

from dotenv import load_dotenv
load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
GOOGLE_CSE_ID = os.getenv("GOOGLE_CSE_ID")

By using this approach, you can safely store sensitive information, such as API keys, in a separate file and avoid accidentally exposing it in your code. This is particularly important when working with open-source projects or sharing your code with others, as it ensures that the sensitive information remains secure. Next:

search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful when you need to answer questions about current events",
    ),
]

This creates a list of tools that can be used to generate chatbot responses. The search variable is an instance of the GoogleSearchAPIWrapper class, a wrapper for the Google Custom Search API (which is why GOOGLE_API_KEY and GOOGLE_CSE_ID are needed). It allows the chatbot to perform web searches and retrieve information from the internet.
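As a quick, optional sanity check (the query below is just an example), you can call the wrapper directly and print part of the summary string it returns:

# Optional: call the search wrapper directly to confirm the Google credentials work.
# The query is an arbitrary example.
print(search.run("latest developments in conversational AI")[:500])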

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

By using the ConversationBufferMemory class, you can build a chatbot that keeps a memory of the previous conversation and can provide more informed and personalized responses to user queries. This leads to a more natural and engaging conversation between the chatbot and the user.
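To see what the memory hands to the agent, here is a small illustration using a throwaway buffer (the two messages and the demo_memory name are made up for the example; method names follow the langchain version used here):

# Illustration only: fill a separate buffer with two hand-written turns and
# inspect what would be returned under the "chat_history" key.
demo_memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
demo_memory.chat_memory.add_user_message("Hi, my name is Iva.")
demo_memory.chat_memory.add_ai_message("Nice to meet you, Iva! How can I help?")
print(demo_memory.load_memory_variables({})["chat_history"])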

llm = ChatOpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent="chat-conversational-react-description",
                               verbose=True, memory=memory)

This sets up the chatbot to use an OpenAI chat model (ChatOpenAI defaults to gpt-3.5-turbo) for generating responses and hands it the list of tools it can use to answer questions. The chatbot also keeps a memory of previous conversations, so it can give more personalized answers. The initialize_agent function ties everything together and leaves the chatbot ready to use.

def chat_response(input_text):
    response = agent_chain.run(input=input_text)
    return response

The function runs the chatbot’s agent chain, which was previously set up with the OpenAI chat model, the search tool, and the conversation memory, to generate a response to the input text.
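Called on its own (the question below is just an example), the function returns the agent’s answer as a plain string:

# Example call; with verbose=True you will also see the agent's reasoning and
# any Search tool calls it decides to make.
print(chat_response("What is the weather like in Belgrade today?"))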

Finally, we can integrate our chatbot functions into a user-friendly interface using Gradio. This will allow for a smooth and intuitive interaction with the chatbot, enabling users to easily input their questions and receive well-informed responses. By integrating our chatbot with Gradio, we have created a powerful and accessible tool for generating human-like responses to a wide range of inquiries.

interface = gr.Interface(fn=chat_response, inputs="text", outputs="text", description="Chat with a conversational agent")

interface.launch(share=True)

Voila!

CONCLUSION

In conclusion, building a chatbot with internet access and memory has never been easier. By leveraging the power of LangChain and Gradio, we have created a flexible and intuitive conversational agent that can provide well-informed responses to user queries. The integration of the Google Search API and an OpenAI chat model, along with the ability to retain previous conversation history, makes this chatbot a valuable tool for generating human-like responses.

By following the steps outlined in this blog post, you too can build your own chatbot with internet access and memory. Whether you’re looking to create a chatbot for personal use or for a larger project, the combination of LangChain and Gradio offers a powerful and accessible solution. So why wait? Start building your chatbot today and experience the benefits of a conversational agent with internet access and memory for yourself!

