Using Google Gemini with LiteLLM

Dagang Wei
Google Cloud - Community
3 min read · Apr 8, 2024

This article is part of the series Hello LLM.

Introduction

Large Language Models (LLMs) are redefining how we interact with computers, but the explosion of providers and their unique APIs can lead to integration headaches. That’s where LiteLLM shines. This open-source library transforms the way you work with LLMs.

What is LiteLLM?

LiteLLM is a Python library that offers a unified interface for interacting with various LLM providers. Think of it as a universal translator for LLMs, allowing you to communicate with models from OpenAI, Cohere, Anthropic, Hugging Face, and others using a consistent format.

Why Use LiteLLM?

Effortless Swapping: Experiment with different LLMs without overhauling your code. LiteLLM lets you easily change your LLM provider with minimal adjustments.

Simplified Development: Gone are the days of juggling multiple, provider-specific API calls. LiteLLM streamlines your development process with uniform input and output structures.

Reliability: Say goodbye to inconsistencies and errors. LiteLLM is built with robust error handling and fallback mechanisms, ensuring your applications stay operational.

Cost Optimization: Manage LLM usage and set budgets or rate limits based on project needs, API keys, or specific models.
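The "effortless swapping" point can be sketched offline. In the snippet below, litellm.completion is replaced by a stub (so the example runs without API keys) that returns the OpenAI-style response shape LiteLLM standardizes on; the point is that only the model string changes between providers, while the message format and response parsing stay identical:

```python
# Offline sketch of provider swapping. In real code you would
# `from litellm import completion`; this stub only mimics the
# OpenAI-style response shape LiteLLM returns for every provider.
def completion(model, messages):
    reply = f"[{model}] echo: {messages[-1]['content']}"
    return {"choices": [{"message": {"role": "assistant", "content": reply}}]}

for model in ["gpt-3.5-turbo", "command-nightly", "gemini/gemini-pro"]:
    response = completion(model=model, messages=[{"role": "user", "content": "Hi"}])
    # The parsing code is identical regardless of provider:
    print(response["choices"][0]["message"]["content"])
```

With the real library, swapping providers is exactly this: change the model string, keep everything else.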

How Does LiteLLM Work?

LiteLLM works its magic in a few simple steps:

  1. Translation: It converts your requests into the specific format required by the chosen LLM provider.
  2. Communication: It sends the translated request to the provider’s API.
  3. Standardization: It formats the response from the LLM into a standardized output structure.
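Step 1 (translation) can be illustrated with a small sketch. The payload shapes below are simplified assumptions for illustration, not LiteLLM's actual wire formats:

```python
# Illustrative sketch of the translation step: one unified message list
# is converted into different provider-specific request bodies.
# Both payload shapes are simplified assumptions, not real wire formats.
def translate(model: str, messages: list) -> dict:
    if model.startswith("gemini/"):
        # Gemini-style request body (simplified)
        return {"contents": [{"role": m["role"], "parts": [{"text": m["content"]}]}
                             for m in messages]}
    # OpenAI-style chat payload (the default shape)
    return {"model": model, "messages": messages}

msgs = [{"role": "user", "content": "Hello"}]
print(translate("gpt-3.5-turbo", msgs))
print(translate("gemini/gemini-pro", msgs))
```

Steps 2 and 3 are the mirror image: the translated payload goes out over the provider's API, and the provider's response is mapped back into one standardized structure.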

Getting Started with LiteLLM

Installation:

pip install litellm

Authentication: Obtain API keys from your desired LLM providers.
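Keys are typically supplied through per-provider environment variables; the two names shown here are the ones used in the examples below:

```shell
export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"
```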

Examples

Basic Usage

from litellm import completion
import os

# set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
print(response)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Async

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2 call
response = completion(model="claude-2", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

Using Gemini as the LLM

Get an API key from Google AI Studio (https://ai.google.dev/), then use it to call Gemini through LiteLLM:

from litellm import completion
import os
import sys


def check_api_key():
    """
    Check if the API key is available in the environment.
    """
    api_key = os.environ.get('GEMINI_API_KEY')
    if not api_key:
        print("Please set the GEMINI_API_KEY environment variable with the API key.")
        sys.exit(1)


def chat(prompt: str) -> str:
    """
    Generate a response from the model based on a given prompt.
    """
    response = completion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": prompt}],
    )
    if response and response.choices:
        answer = response.choices[0].message.content
        return answer
    else:
        return "No response from the model"


check_api_key()

# Example usage
prompt = "Explain union types in TypeScript"
answer = chat(prompt)
print(answer)

Summary

LiteLLM makes working with LLMs a breeze. Whether you’re building innovative applications, exploring cutting-edge research, or simply want to compare different LLM providers, LiteLLM is your essential toolkit. Want to learn more or contribute? Visit the project’s GitHub repository: https://github.com/BerriAI/litellm
