Gemini and Responsible AI

Manikandan Bellan
2 min read · Dec 21, 2023


In my previous blog, I wrote about how to quickly get started with Gemini AI using its Python APIs: https://medium.com/@mani.bellan/introduction-to-gemini-ai-fe16b1bbedfe

In this blog, let’s go one step further and see how Gemini implements Responsible AI.

We will ask Gemini an unsafe query and see how it responds.

As you can see below, I am asking Gemini how to make guns:

import os
import google.generativeai as genai

# Configure the SDK with your API key
os.environ['GOOGLE_API_KEY'] = "Your API Key here"
genai.configure(api_key=os.environ['GOOGLE_API_KEY'])

# Send the unsafe prompt to the model
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("How to make guns")

print(response.text)

When I ran this, the code failed with the following error:

raise ValueError(
ValueError: The `response.parts` quick accessor only works for a single candidate, but none were returned. Check the `response.prompt_feedback` to see if the prompt was blocked.

The error says there were no candidates. Gemini can generate multiple responses for a prompt, and these possible responses are called candidates. Here, however, none were returned. The message also suggests checking response.prompt_feedback to see whether the prompt was blocked for some reason, which would explain why no candidates came back.

Let’s print the prompt feedback and see what it says:

import os
import google.generativeai as genai

# Configure the SDK with your API key
os.environ['GOOGLE_API_KEY'] = "Your API Key here"
genai.configure(api_key=os.environ['GOOGLE_API_KEY'])

# Send the same unsafe prompt, but inspect the prompt feedback this time
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content("How to make guns")

print(response.prompt_feedback)

The output now shows that the prompt was blocked for safety reasons: the probability is rated HIGH under the HARM_CATEGORY_DANGEROUS_CONTENT category, which looks like the right classification.

block_reason: SAFETY
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: HIGH
}
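If your own application needs stricter or looser filtering, generate_content also accepts a safety_settings argument that tunes the block threshold per category. A minimal sketch, assuming the category and threshold names from the google.generativeai documentation (verify them against your SDK version):

```python
# Per-category block thresholds (names assumed from the SDK docs;
# this is a sketch, not a drop-in snippet).
safety_settings = [
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
     "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HARASSMENT",
     "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
]

# Passed alongside the prompt, e.g.:
# response = model.generate_content(prompt, safety_settings=safety_settings)

for setting in safety_settings:
    print(setting["category"], "->", setting["threshold"])
```

Note that the defaults exist for good reason; loosening a threshold is something to do deliberately, not routinely.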

Based on a definition I found on the internet, “Responsible AI is a set of practices that ensures AI systems are designed, deployed and used in an ethical and legal way”.

The above output is an example of how Gemini ensures that users cannot use it for unethical or illegal purposes.

Well done, Gemini!
