Influencer Analytics with Vertex AI PaLM APIs & LangChain

Ravi Manjunatha
Google Cloud - Community
3 min read · Jul 9, 2023


(Views expressed are personal; they do not represent those of my organization.)

With the advent of LLMs, tapping insights from a wide range of sources such as balance sheets, views on social media platforms, and influencers' opinions has become much easier.

Social media influencers, especially in the tech and finance world, are increasingly seen as key proponents of an organization's (or its competitors') products and policies.

Influencer Analytics with Vertex AI & LangChain

With PaLM 2 models in Vertex AI and LangChain, getting a pulse on what the various social media platforms are saying about a product has become much easier.

In this series of articles, we will explore how this solution comes together.

Import the libraries for the solution:

from langchain.document_loaders import YoutubeLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import VertexAI
from langchain.embeddings import VertexAIEmbeddings

Initialize the Vertex AI LLM and the embeddings model:

llm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=256,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
)

# Embeddings model, used later to index the transcript chunks in Chroma
embeddings = VertexAIEmbeddings()
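Assuming your environment is already authenticated to Google Cloud (for example via gcloud auth application-default login), a one-line smoke test confirms the model is reachable; the prompt here is just an illustration:

# Sanity check: a direct call to the PaLM text model (illustrative prompt)
print(llm.predict("In one sentence, what does a fin-influencer do?"))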

Load the video that we would like to summarize or ask questions about:

loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/<your video>", add_video_info=True)
result = loader.load()
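The loader returns a list of Document objects holding the video transcript; with add_video_info=True it also attaches metadata such as the title and author. A quick way to inspect what was loaded (a sketch, not part of the core walkthrough):

# Peek at the loaded transcript and its metadata
print(result[0].metadata)            # e.g. title, author, publish_date
print(result[0].page_content[:500])  # first 500 characters of the transcript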

Split the transcript into multiple chunks using the RecursiveCharacterTextSplitter technique:


text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=0)
docs = text_splitter.split_documents(result)
print(f"# of documents = {len(docs)}")

Store your documents in a vector store and index them:

db = Chroma.from_documents(docs, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
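Before wiring up the full chain, you can exercise the retriever on its own; the sample question below is hypothetical:

# Fetch the 2 most similar transcript chunks for a sample question
matches = retriever.get_relevant_documents("What does the influencer say about the company's earnings?")
for doc in matches:
    print(doc.page_content[:200])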

Create a chain to answer questions:

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
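Invoking the chain returns a dictionary with the generated answer under "result" and the retrieved chunks under "source_documents"; again, the question is only a placeholder:

# Try the chain end to end
output = qa({"query": "Which stocks are discussed in the video?"})
print(output["result"])
print(f"# of source chunks = {len(output['source_documents'])}")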

Define your function with the prompt template:


def sm_ask(question, print_results=True):
    # Retrieve the most relevant transcript chunks for the question
    video_subset = qa({"query": question})
    context = video_subset
    prompt = f"""
Answer the following question in a detailed manner, using information from the text below. If the answer is not in the text, say "I don't know" and do not generate your own response.

Question:
{question}

Text:
{context}

Question:
{question}

Answer:
"""
    parameters = {
        "temperature": 0.1,
        "max_output_tokens": 256,
        "top_p": 0.8,
        "top_k": 40,
    }
    response = llm.predict(prompt, **parameters)
    return {
        "answer": response
    }
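A quick test of the function, with a hypothetical question:

# Ask a question against the indexed video
answer = sm_ask("What is the influencer's view on the company's latest results?")
print(answer["answer"])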

Integrate the LLM application with Gradio:

import gradio as gr

def get_response(input_text):
    response = sm_ask(input_text)
    return response["answer"]  # Gradio's Textbox expects a string, so return just the answer

Query = gr.inputs.Textbox(label="Enter question")
Response = gr.outputs.Textbox(label="Response")

grapp = gr.Interface(fn=get_response, inputs=Query, outputs=Response, title="Fininfluencer Analyzer")
grapp.launch(debug=False, share=True, auth=("<your_id>", "<your_password>"))

I used one of a fin-influencer's videos to test this solution.

In the next articles in this series, we will explore integrating social media reviews and balance sheets to get a 360° view of organizations and their products, in both summarized and Q&A form.
