Google Palm 2 API With LangChain 🦜️🔗

Ahmed Haytham
5 min read · Jul 5, 2023

Greetings, fellow adventurers! I hope you are doing well.

This is my first article on Medium, and I would be grateful for your support if you enjoyed it. I am passionate about adventure, and I hope to share my experiences and insights with you through my writing.

The Map 🗺️

  1. How to get the PaLM API and MakerSuite
  2. How to use the PaLM API and MakerSuite
  3. How to use it with LangChain
  4. Build simple apps {Q/A, Chat} from {websites & documents}
  5. The Code

How to get the API?

Simply go to the Build generative AI applications with Google website and join the waitlist.

Wait! After a few days you will get an acceptance email.

1. How to use the API key?

  1. Visit MakerSuite. (You can play with the models there, but come back and finish the article first; go play later.)
  2. Create an API key for a new project, then copy it.


  3. For me, I use a `.env` file and put the key in it, like:
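The `.env` file is just a single line. (The variable name `GOOGLE_API_KEY` is my choice here; any name works, as long as you read the same one back in your code.)

```
GOOGLE_API_KEY=your-api-key-here
```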

Then I load it wherever I want:

import os
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())  # read the local .env file

# configure PaLM
import google.generativeai as palm

api_key = os.getenv('GOOGLE_API_KEY')  # never hardcode the key itself
palm.configure(api_key=api_key)

Just like that, or in any other way you prefer.

2. Let’s generate some text

First, install the google-generativeai package (or install everything from the requirements.txt in the repo).

First, we should pick a model:

# choose a model that supports text generation
models = [m for m in palm.list_models() if 'generateText' in m.supported_generation_methods]
model = models[0].name  # at the time of writing, only one such model is available

# generate text
prompt = 'why is the sky green?'
text = palm.generate_text(
    prompt=prompt,
    model=model,
    temperature=0.1,
    max_output_tokens=64,
    top_p=0.9,
    top_k=40,
    stop_sequences=['\n'],
)
text.result

We choose the only available model.

Prompt engineering will definitely be a helpful technique here for designing better prompts.

Check the other parameters on the documentation page… okay, here they are, dear adventurer, but don’t be lazy.
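To build some intuition for what `temperature`, `top_k`, and `top_p` actually control, here is a framework-free sketch of the token-filtering step over a toy next-token distribution. The tokens and logit values below are made up for illustration; the real model works over a vocabulary of thousands of tokens.

```python
import math

def filter_logits(logits, temperature=0.1, top_k=40, top_p=0.9):
    """Return the token probabilities that survive temperature scaling,
    top-k truncation, and top-p (nucleus) truncation."""
    # temperature scaling: lower temperature sharpens the distribution
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # softmax to turn logits into probabilities
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # top-k: keep only the k most probable tokens
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # top-p: keep the smallest prefix whose cumulative probability >= top_p
    kept, cum = {}, 0.0
    for tok, p in ranked:
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    return kept

toy = {'blue': 5.0, 'clear': 3.0, 'green': 1.0, 'plaid': -2.0}
print(filter_logits(toy, temperature=1.0, top_k=2, top_p=0.99))
```

The model then samples the next token from whatever survives, so a low temperature plus tight `top_k`/`top_p` makes output more deterministic.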

Discover how to embed and chat using the documentation

3. How to use it with LangChain?

Do this:

from langchain.embeddings import GooglePalmEmbeddings
from langchain.llms import GooglePalm

llm = GooglePalm()
llm.temperature = 0.1

prompts = ["The opposite of hot is", "The opposite of cold is"]  # the class expects prompts as a list
llm_result = llm._generate(prompts)

llm_result.generations[0][0].text
llm_result.generations[1][0].text

> Put the prompts in an array

Why are the results stored in a multidimensional array?

Imagine you are working with a bard.

  • You provide 1 input to the bard.
  • The bard processes it and returns 3 outputs.

Similarly, in this case, we can obtain a maximum of 8 outputs.

  • The first dimension represents each prompt in the prompts list.
  • The second dimension represents the answers for each input.
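To make the indexing concrete: `llm_result.generations` is a grid where `generations[i][j]` is candidate answer `j` for prompt `i`. A plain-Python stand-in for that structure (the answer texts here are invented stand-ins, not real model output):

```python
# shape: [num_prompts][num_candidates_per_prompt]
generations = [
    ['cold'],  # candidates for "The opposite of hot is"
    ['hot'],   # candidates for "The opposite of cold is"
]

# walk the grid: first index = prompt, second index = candidate
for i, candidates in enumerate(generations):
    for j, text in enumerate(candidates):
        print(f"prompt {i}, candidate {j}: {text}")
```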

Now that you have both the LLM and the embedding model, you can do whatever you want.
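What the embedding model buys you is vector search: you embed documents and a query, then rank documents by cosine similarity. The three-dimensional vectors below are tiny made-up stand-ins for real PaLM embeddings (which have hundreds of dimensions), just to show the mechanics:

```python
import math

def cosine(a, b):
    # cosine similarity: dot product over the product of vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    'doc about transformers': [0.9, 0.1, 0.0],
    'doc about cooking':      [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "what is attention?"

best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)
```

This is essentially what the vector store in the next section does for you at scale.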

4. Build simple apps {Q/A, Chat} from {websites & Documents}

from langchain.document_loaders import UnstructuredURLLoader  # load URLs into documents
from langchain.chains.question_answering import load_qa_chain
from langchain.indexes import VectorstoreIndexCreator  # vector DB index with Chroma
from langchain.text_splitter import CharacterTextSplitter  # text splitter
from langchain.chains import RetrievalQA
from langchain.document_loaders import UnstructuredPDFLoader  # load PDFs



urls = ['https://www.linkedin.com/pulse/transformers-without-pain-ibrahim-sobh-phd/']
loader = [UnstructuredURLLoader(urls=urls)]
index = VectorstoreIndexCreator(
    embedding=GooglePalmEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
).from_loaders(loader)

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(),
    input_key="question",
)

answer = chain.run('What is machine translation?')
answer

Yes, absolutely! It’s as simple as populating the array with links, and voila! The answers appear like magic.
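A word on `CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)`: it roughly means "cut the text into ~1000-character pieces, with no characters shared between neighboring pieces." A crude stdlib approximation of the idea (the real splitter also prefers to break on separators rather than mid-word):

```python
def naive_split(text, chunk_size=1000, chunk_overlap=0):
    # step forward by chunk_size minus the overlap, slicing as we go
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = naive_split('a' * 2500, chunk_size=1000)
print([len(c) for c in chunks])  # [1000, 1000, 500]
```

Each chunk is embedded separately, so chunk size controls the trade-off between retrieval precision and how much context each retrieved piece carries.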

pdf_folder_path = 'Books'
pdf_loaders = [UnstructuredPDFLoader(os.path.join(pdf_folder_path, fn))
               for fn in os.listdir(pdf_folder_path)]
pdf_index = VectorstoreIndexCreator(
    embedding=GooglePalmEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
).from_loaders(pdf_loaders)

pdf_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=pdf_index.vectorstore.as_retriever(),
    input_key="question",
)

pdf_answer = pdf_chain.run('What are GANs?')
pdf_answer

Chatting with your PDFs is easy too.

template = """
You are an artificial intelligence assistant working in academia. You are asked to answer questions. The assistant gives helpful, detailed, and polite answers to the user's questions.

{question}

"""

from langchain import PromptTemplate, LLMChain

prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)

print(llm_chain.run('Who are you?'))

Making a template is very easy too.
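Under the hood, a `PromptTemplate` is little more than `str.format`, and turning single-turn Q/A into a chat is mostly a matter of threading earlier turns back into the prompt. A framework-free sketch of both ideas (the wording mirrors the template above; the history-rendering scheme is my own illustration, not LangChain's):

```python
TEMPLATE = (
    "You are an artificial intelligence assistant working in academia. "
    "Give helpful, detailed, and polite answers.\n\n"
    "Conversation so far:\n{history}\n\n"
    "Question: {question}\n"
)

def build_prompt(history, question):
    # render prior turns as "speaker: text" lines, then fill the template
    rendered = '\n'.join(f"{who}: {text}" for who, text in history)
    return TEMPLATE.format(history=rendered, question=question)

history = [('user', 'Who are you?'), ('assistant', 'An AI assistant.')]
prompt_text = build_prompt(history, 'What can you help with?')
print(prompt_text)
```

After each model reply, you append the new (question, answer) pair to `history` and rebuild the prompt, which is the essence of what chat chains automate for you.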

5. The treasure

Follow me, and star the repo.

Don’t forget to connect with me on LinkedIn. I’d love to hear your feedback:

https://www.linkedin.com/in/ahmed-haytham/


Ahmed Haytham

Data Scientist @ NADSOFT | Creating AI solutions | And Yes, I have a life beyond AI, maybe