Simple Chatbot with Gradio + Google Gemini API 🚀

Alejandro NĂșñez Arroyo · Published in LatinXinAI · 5 min read · Apr 15, 2024

In this guide, we will create a chatbot that receives our questions via Gradio and responds via the Gemini API, taking into account the entire conversation history instead of just answering single questions. The full code is available on GitHub and can also be accessed via Google Colab.

Gemini

Gemini is Google’s cutting-edge artificial intelligence that allows us to develop a wide variety of applications, including chatbots. With Gemini, we can send instructions in text format to create our own solutions.

As of this writing (April 10th), the Gemini API offers the Gemini 1.5 Pro and Gemini 1.0 Pro text models. To find out which models are currently available, check the models list in the official documentation.
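You can also query the available models programmatically with genai.list_models(). Below is a minimal sketch, assuming the API key has already been configured as shown in the next section:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, see the configuration section below

# Print the models that support text generation
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)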

A notable feature of this API is that it is free, although with certain limitations. For example, the free tier of Gemini 1.0 Pro is limited to 15 requests per minute (RPM).
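If you plan to send many requests in a loop, a simple way to stay under that free-tier limit is to pause between calls. The snippet below is only a sketch; the model name and the list of questions are placeholders:

import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel('gemini-pro')

questions = ['Question 1', 'Question 2']  # placeholder questions
for q in questions:
    response = model.generate_content(q)
    print(response.text)
    time.sleep(60 / 15)  # pause to stay under 15 requests per minute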

Gemini 1.0 Pro Pricing Information

Gemini API Configuration

To use Gemini, a private API key must be generated in Google AI Studio. It is important to note that we should not send sensitive, confidential, or personal information. Please read the terms and conditions for more information.

1. Accept the terms and conditions.
2. Obtain the API key and copy it (a sketch of how to store it safely follows below).
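Rather than hard-coding the key in the notebook, it is safer to load it from outside the source code. The following is a minimal sketch, assuming the key was saved as a Colab secret named GOOGLE_API_KEY (the name is an example); outside Colab it falls back to an environment variable:

import os
import google.generativeai as genai

try:
    # In Google Colab, keys can be stored in the "Secrets" panel
    from google.colab import userdata
    GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
except ImportError:
    # Outside Colab, read the key from an environment variable
    GOOGLE_API_KEY = os.environ['GOOGLE_API_KEY']

genai.configure(api_key=GOOGLE_API_KEY)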

Hello world with Gemini API

First, we generate an answer to a single question sent to Gemini.

# Install the Python SDK
!pip install -q -U google-generativeai

# Import the library and configure the API key
import google.generativeai as genai

GOOGLE_API_KEY = "api"  # Replace with your API key
genai.configure(api_key=GOOGLE_API_KEY)

# Model configuration
model = genai.GenerativeModel('gemini-pro')

# Generate text from a text input
question = 'What is the capital of Argentina?'
response = model.generate_content(question)
response.resolve()  # Ensure the response is complete (mainly relevant for streamed responses)
response.text       # In Colab, the last expression is displayed automatically

Chat conversations with Gemini

Gemini also enables multi-turn conversations. The ChatSession class simplifies the process by managing the conversation state, so unlike with generate_content you do not have to keep the history yourself as a list of messages (a sketch of that manual approach is shown below). With start_chat, the history of the whole conversation is stored for you.
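For comparison, this is roughly what the manual approach with generate_content could look like; it is only a sketch to illustrate the bookkeeping that ChatSession handles for us:

# Manually keeping the conversation as a list of messages (sketch)
history = [
    {"role": "user", "parts": ["What is the capital of Chile?"]},
]
response = model.generate_content(history)
# Append the model's answer and the next question before the next call
history.append({"role": "model", "parts": [response.text]})
history.append({"role": "user", "parts": ["And the capital of Bolivia?"]})
response = model.generate_content(history)

With start_chat, the same conversation looks like this: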

chat = model.start_chat(history=[])
countries = ['Chile', 'Bolivia']

for country in countries:
    message = f'What is the capital of {country}?'
    chat.send_message(message)

chat.history
Conversation history in chat format
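The raw history object can be verbose. Here is a small sketch to print it in a more readable "role: text" form:

# Print each turn of the conversation as "role: text"
for content in chat.history:
    print(f'{content.role}: {content.parts[0].text}')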

You can also ask questions about the history of the conversation.

response = chat.send_message('From which countries did I ask you the capital city?')
response.resolve()
response.text
# Response: Chile and Bolivia

Streaming Gradio Chatbots

Gradio is the quickest way to showcase your machine learning model through a user-friendly web interface, making it accessible to everyone, everywhere!

# Install Gradio
!pip install --upgrade gradio

Gradio offers several ways to build a conversational interface. In this tutorial, we will use gr.ChatInterface().

import time
import gradio as gr

# Echo the user's message back, one character at a time
def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(slow_echo).launch()

With that code, the system's response is simply the text the user typed, streamed back character by character in the interface.

Finally, we merge the Gemini and Gradio code to build a chatbot that answers the input text. The code is adjusted so that the interface's Undo and Clear buttons work correctly.

import time
import gradio as gr

chat = model.start_chat(history=[])

# Transform Gradio history to the Gemini format
def transform_history(history):
    new_history = []
    for user_msg, model_msg in history:
        new_history.append({"parts": [{"text": user_msg}], "role": "user"})
        new_history.append({"parts": [{"text": model_msg}], "role": "model"})
    return new_history

def response(message, history):
    global chat
    # The history is kept in sync with Gradio, so the 'Undo' and 'Clear' buttons work correctly
    chat.history = transform_history(history)
    response = chat.send_message(message)
    response.resolve()

    # Display the answer character by character
    for i in range(len(response.text)):
        time.sleep(0.05)
        yield response.text[: i+1]

gr.ChatInterface(response,
                 title='Gemini Chat',
                 textbox=gr.Textbox(placeholder="Question to Gemini"),
                 retry_btn=None).launch(debug=True)
Final demo of Gemini + Gradio
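As a closing note, the character-by-character loop above only simulates streaming. The Gemini API can also stream the answer itself via send_message(stream=True); the following is only a sketch of how that variant of the response function could look:

def response_streamed(message, history):
    global chat
    chat.history = transform_history(history)
    # Ask Gemini to stream the answer in chunks instead of waiting for the full text
    stream = chat.send_message(message, stream=True)
    partial = ""
    for chunk in stream:
        partial += chunk.text
        yield partial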

Conclusions

Gemini offers the possibility of developing different applications that work with text, audio, images, and video. In this context, we can use Gemini for free to create a chatbot capable of answering our questions through text.

Gradio, in combination with Gemini, can be a great ally in the development of AI chatbot systems. Thanks to its components, it lets you create a simple but functional interface for interacting with the model and for building more advanced products, without requiring a great deal of prior programming experience.

Connect, clap and follow me for more ideas

If you found the content interesting, I invite you to clap 👏 and leave your comments 💬. Your feedback is very important to me. If you have the desire to delve deeper into the world of artificial intelligence, feel free to connect with me on 🔗 LinkedIn or follow me on Medium | Alejandro NĂșñez Arroyo to receive more articles like this one.
