Meta AI: What Is Llama 2 And How To Use It In WhatsApp

Raghuveer Awankar
15 min read · Apr 19, 2024


Meta’s WhatsApp recently introduced new features powered by the advanced large language model Llama 2. This article is your go-to, easy-to-follow guide for using ‘Meta AI’ on WhatsApp. We’ll also explore the technical details of Llama 2, Meta AI Research’s large language model.

⚠ NOTE: You can go directly to section 3, “How to use Meta AI in WhatsApp?”, if you don’t want the technical details.

Content :
1) What is Meta AI?
2) What is Llama 2?
3) How to use Meta AI in WhatsApp?
4) Prompt Examples
5) Conclusion
6) FAQs
7) Reference

Structure of this article
Please go through this structure so that you can jump straight to the sections you need.

Structure of the “Meta AI: What Is Llama 2 And How To Use It In WhatsApp” article

#1 WHAT IS META AI?

Meta AI is a new interactive assistant available on WhatsApp, Messenger, Instagram, and soon on Ray-Ban Meta smart glasses and Quest 3. It utilizes a custom model incorporating technology from Llama 2 and the latest large language model (LLM) research. With access to real-time information through a Bing search partnership, Meta AI offers text-based chat capabilities and image generation tools.

#2 WHAT IS LLAMA 2?

Here are the technical details regarding Llama 2.

⚠ NOTE: You can access the Llama 2 model here (Link)

Llama 2 is a collection of large language models (LLMs) ranging from 7 billion to 70 billion parameters, optimized specifically for dialogue use cases. These fine-tuned LLMs, known as Llama 2-Chat, have shown superior performance compared to open-source chat models across various benchmarks. Human evaluations indicate that they could be viable alternatives to closed-source models in terms of helpfulness and safety. The paper provides a detailed description of the fine-tuning approach and safety enhancements for Llama 2-Chat, facilitating further development by the community and promoting responsible advancement of LLMs.

Paper: “Llama 2: Open Foundation and Fine-Tuned Chat Models” (Link)

Pretraining of Llama 2:

The Llama 2 models build upon the pretraining methodology outlined by Touvron et al. (2023), leveraging an optimized auto-regressive transformer architecture. Significant improvements were made, including enhanced data cleaning procedures, updated data mixes, a 40% increase in total training tokens, and a doubling of the context length. Additionally, grouped-query attention (GQA) was introduced to bolster inference scalability for larger models. The pretraining data, drawn from publicly available sources and excluding data from Meta’s products and services, amounted to 2 trillion tokens, with efforts made to exclude sites rich in personal information and to prioritize factual sources to mitigate hallucinations. Pretraining investigations were conducted to offer insights into the models’ capabilities and limitations.

The training approach largely mirrors that of Llama 1, utilizing a standard transformer architecture with notable modifications such as pre-normalization using RMSNorm, SwiGLU activation function, and rotary positional embeddings (RoPE). Notable architectural enhancements from Llama 1 include an increased context length and the implementation of grouped-query attention (GQA). Hyperparameters, including the AdamW optimizer settings and a cosine learning rate schedule with warmup and decay, were specified. Tokenization remains consistent with Llama 1, employing a byte pair encoding (BPE) algorithm with a vocabulary size of 32k tokens, with specific treatments for numbers and unknown UTF-8 characters.
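
To make the grouped-query attention (GQA) idea concrete, here is a minimal PyTorch sketch, assuming illustrative head counts and random weights rather than Llama 2’s actual configuration: several query heads share each key/value head, which shrinks the key/value cache that must be kept around during inference.

```python
import torch

# Minimal GQA sketch (illustrative shapes, not Llama 2's real configuration):
# n_q_heads query heads share n_kv_heads key/value heads.
def grouped_query_attention(x, wq, wk, wv, n_q_heads=8, n_kv_heads=2):
    B, T, C = x.shape                                # batch, sequence length, model dim
    head_dim = C // n_q_heads
    q = (x @ wq).view(B, T, n_q_heads, head_dim).transpose(1, 2)    # (B, Hq,  T, d)
    k = (x @ wk).view(B, T, n_kv_heads, head_dim).transpose(1, 2)   # (B, Hkv, T, d)
    v = (x @ wv).view(B, T, n_kv_heads, head_dim).transpose(1, 2)
    group = n_q_heads // n_kv_heads                  # query heads served by each KV head
    k = k.repeat_interleave(group, dim=1)            # (B, Hq, T, d)
    v = v.repeat_interleave(group, dim=1)
    att = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
    causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    att = att.masked_fill(causal_mask, float("-inf")).softmax(dim=-1)
    out = att @ v                                    # (B, Hq, T, d)
    return out.transpose(1, 2).reshape(B, T, C)

# Example usage with random weights: model dim 64, 8 query heads, 2 KV heads.
x = torch.randn(1, 16, 64)
wq, wk, wv = torch.randn(64, 64), torch.randn(64, 16), torch.randn(64, 16)
print(grouped_query_attention(x, wq, wk, wv).shape)  # torch.Size([1, 16, 64])
```

With standard multi-head attention, the key/value projections would be as wide as the query projection; sharing them across groups of query heads is what improves inference scalability for the larger models.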

Fine-Tuning Llama 2:

Llama 2-Chat underwent extensive refinement through supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF), both requiring substantial computational and annotation resources. SFT was initiated with publicly available instruction tuning data, later augmented with a meticulously curated dataset of high-quality SFT examples. Notably, the model’s performance was observed to benefit from prioritizing quality over quantity in dataset curation. Fine-tuning details included employing a cosine learning rate schedule and an autoregressive objective, ensuring the model’s alignment with user prompts while enhancing its response quality over two epochs.
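
As a rough illustration of the autoregressive objective used in SFT, here is a short PyTorch sketch with made-up tensor names and sizes; masking the loss on prompt tokens so that only the response tokens are optimized is a common-practice assumption here, not a detail taken from this article.

```python
import torch
import torch.nn.functional as F

# Hedged SFT sketch: next-token cross-entropy over a prompt+response sequence,
# with the prompt positions ignored (assumed here, as is common practice).
def sft_loss(logits, input_ids, prompt_len):
    # logits: (T, vocab_size), input_ids: (T,); position t predicts token t+1
    shift_logits = logits[:-1]
    shift_labels = input_ids[1:].clone()
    shift_labels[: prompt_len - 1] = -100            # ignore targets inside the prompt
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)

# Example with random logits: a 10-token sequence, 4-token prompt, 32k vocabulary.
T, V = 10, 32000
loss = sft_loss(torch.randn(T, V), torch.randint(0, V, (T,)), prompt_len=4)
print(loss.item())
```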

RLHF introduced a human preference data collection process to train reward models, facilitating further alignment of Llama 2-Chat with human preferences and safety considerations. Through a binary comparison protocol, human annotators selected preferred responses, enabling the training of separate reward models optimized for helpfulness and safety. The reward models, initialized from pretrained chat model checkpoints, were crucial in optimizing Llama 2-Chat’s behavior based on collected human preference data. Training objectives involved converting pairwise preference data into a binary ranking format and incorporating a margin component to account for discrepancies in preference ratings, ultimately enhancing the accuracy of the reward models in guiding model behavior.
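
The binary ranking objective with a margin can be written in a few lines. The sketch below is an illustrative PyTorch version, assuming scalar rewards per response and example margin values: the reward model is pushed to score the annotator-preferred response above the rejected one by at least the margin, which is larger when the preference was rated more strongly.

```python
import torch
import torch.nn.functional as F

# Ranking loss sketch: L = -log(sigmoid(r_chosen - r_rejected - margin)).
# Tensor names and margin values are illustrative assumptions.
def reward_ranking_loss(r_chosen, r_rejected, margin):
    return -F.logsigmoid(r_chosen - r_rejected - margin).mean()

# Example: rewards for a batch of three preference pairs.
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.5, 0.4, -1.0])
margin = torch.tensor([1.0, 0.0, 1.0])   # stronger preference -> larger margin
print(reward_ranking_loss(r_chosen, r_rejected, margin).item())
```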

Data Composition in Fine-Tuning Llama 2:

Table 1: Llama 2 family of models. Paper: “Llama 2: Open Foundation and Fine-Tuned Chat Models” (Link)

The data composition strategy for training the reward models of Llama 2-Chat involved combining newly collected data with existing open-source preference datasets to create a comprehensive training dataset. Despite the primary focus on learning human preferences for Llama 2-Chat outputs, open-source datasets were utilized to bootstrap reward models and enhance generalization without negative transfer observed in experiments. Different mixing recipes were explored for the Helpfulness and Safety reward models, leading to the decision to train the Helpfulness model on Meta Helpfulness data along with a balanced mix of Meta Safety and open-source datasets, while the Meta Safety model was trained on a combination of Meta Safety and Anthropic Harmless data, supplemented with a small portion of Meta Helpfulness and open-source helpfulness data. This strategy, particularly incorporating 10% helpfulness data, proved beneficial in enhancing accuracy, especially for responses deemed safe in evaluation samples.

Llama 2 Model Size Table (Link)

Training Details:

Reinforcement learning from human feedback (Link)

During reward model training, the models undergo one epoch over the training data to avoid overfitting, using the same optimizer parameters as the base model. The maximum learning rate varies by model size: 5 × 10⁻⁶ for the model initialized from the 70B parameter Llama 2-Chat and 1 × 10⁻⁵ for the rest, gradually decreasing on a cosine learning rate schedule to 10% of the maximum. Warm-up constitutes 3% of total steps, with a minimum of 5, and the effective batch size is fixed at 512 pairs, or 1024 rows, per batch.
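
For intuition, here is a small, self-contained Python sketch of that schedule, with an illustrative peak learning rate and step count: a linear warm-up over max(5, 3% of total steps), followed by cosine decay down to 10% of the peak.

```python
import math

# Cosine schedule with warm-up, decaying to 10% of the peak learning rate.
# The peak LR and total step count below are illustrative, not prescribed values.
def lr_at_step(step, total_steps, peak_lr=5e-6, min_ratio=0.10, warmup_frac=0.03):
    warmup_steps = max(5, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps            # linear warm-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))         # 1 -> 0
    return peak_lr * (min_ratio + (1 - min_ratio) * cosine)   # peak -> 10% of peak

# Example: sample the schedule over 1,000 steps.
for s in (0, 30, 250, 500, 999):
    print(s, f"{lr_at_step(s, 1000):.2e}")
```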

Safety in Llama 2:

The pretraining process for Llama 2 involved several steps aimed at ensuring responsible and transparent model development. Privacy and legal reviews were conducted for each dataset, with Meta’s standard procedures followed to exclude any user data. Efforts were made to reduce the carbon footprint of pretraining, and no additional filtering was applied to the datasets to maintain broader usability across tasks while avoiding inadvertent demographic erasure. However, careful consideration and significant safety tuning are advised before deploying Llama 2 models due to potential safety concerns.

An analysis of the pretraining data revealed certain demographic biases and trends. Pronoun frequencies indicated an overrepresentation of “He” compared to “She,” potentially influencing the model’s generation tendencies. Similarly, the representation of different demographic identity terms exhibited variations across axes like gender, nationality, and sexual orientation, with a notable Western skew in certain categories. Additionally, while the pretraining data was predominantly English, it included text from other languages, potentially limiting the model’s suitability for multilingual applications.

Evaluation of Llama 2’s safety capabilities on popular benchmarks revealed improvements compared to previous versions in truthfulness, informativeness, and toxicity. However, the increase in toxicity observed in larger pretraining models suggests a nuanced relationship between dataset size and downstream model behavior, warranting further empirical investigation. Notably, Llama 2 did not consistently outperform other models on toxicity metrics, reflecting the deliberate choice to refrain from aggressive data filtering during pretraining to maintain model robustness and inclusivity.

Llama 2 performance on various benchmarks (Link)

While benchmarks provide insights into model capabilities, they offer only a partial view of real-world impacts. Further testing and mitigation efforts are essential to understand biases and social issues comprehensively, particularly in specific deployment contexts. As Llama 2 models are integrated and deployed, ongoing research is necessary to enhance their positive impact on important social issues beyond the scope of existing benchmarks.

#3 HOW TO USE META AI IN WHATSAPP?

Meta AI is available in WhatsApp for both personal chats and group chats. Let’s take a deep dive into how WhatsApp’s generative AI feature can be used for both types of chats.

How to Check Your WhatsApp Version

Meta AI is available on WhatsApp version 2.24.7.81.

Step 1: Go to Settings

Settings section in WhatsApp

Step 2: Go to ‘Help’ in Settings

‘Help’ section in WhatsApp Settings

Step 3: Go to ‘App info’ in ‘Help’ Section

WhatsApp ‘App info’ section

Check whether your WhatsApp is updated to version 2.24.7.81. If it isn’t, visit the Google Play Store or Apple App Store to update it.

Let’s see how we can use Meta AI in
1) Personal Chats
2) Group Chats

Meta AI in dedicated chat section

Step 1: Go to the Chats section of WhatsApp and check whether the Meta AI logo is available.

WhatsApp Chats section with the Meta AI chat logo

Step 2: Start Chat with Meta AI

You can chat with Meta AI just like you do with OpenAI’s ChatGPT and Google Gemini.

We will see various prompts in the next steps of this article.

Meta AI in dedicated chat section

Meta AI in personal chats

Step 1: Navigate to Personal chat window and initiate ‘Meta AI’

Navigate to any personal chat and tap the ‘@’ symbol in the chat box. This action will bring up the option to interact with ‘Meta AI’. From there, you can pose your query to ‘Meta AI’ within the personal chat, and it will generate a response.

Initiating Meta AI in WhatsApp personal chats

Step 2: Response Generation

Send the message and assess the response from ‘Meta AI’. If the response doesn’t meet your needs, provide additional details about your query; you can modify the prompt to get a satisfactory response.

Meta AI’s generated response in personal chat

Meta AI in personal chats for status replies

Step 1: Open the reply box under the WhatsApp status of a friend, peer, or colleague.

WhatsApp Status section

Step 2: In the chat box, type ‘@’ to initiate ‘Meta AI’ and then enter your reply.

WhatsApp status chatbox for a reply

Step 3: Send your reply and await the response generated by ‘Meta AI’. Rest assured, ‘Meta AI’ will generate a response only to your specific interaction and won’t access your entire chat.

Meta AI’s generated response to your Status Reply

Meta AI in group chats

Step 1: Open any WhatsApp group chat.

Group chat section in WhatsApp

Step 2: Initiate ‘Meta AI’ in the group chat window by tapping the ‘@’ symbol in the chat box, which brings up the ‘Meta AI’ option. You can then ask your query to ‘Meta AI’ in the group chat, and it will generate a response.

Initiating Meta AI in WhatsApp group chats

Step 3: Send your message and review the response from ‘Meta AI’. If the response doesn’t meet your needs, provide further details about your query. You can use ‘Prompt Engineering’ to refine your query and get a satisfactory response.

Meta AI’s generated response in WhatsApp group chats

Here’s a demonstration of how you can utilize ‘Meta AI’ on WhatsApp to enhance your creative communication.

#4 PROMPT EXAMPLES

In this segment, we’ll explore how we can enhance productivity using various prompts through ‘Prompt Engineering’.

With ‘Meta AI’ on WhatsApp, you can engage in the following tasks:
1) Text to Text
2) Text to Image
3) Text + Image to Image
4) Text + Image to Text

1) Text to Text Prompts

Zero Shot Prompt

Prompt:
“@ Meta AI
Role : Scientist
Task : Explain why the sky is blue”

Zero Shot Prompt

Few-Shot Prompt

Prompt:
“@ Meta AI
Task : antonyms generation
Hot : Cold
Dark : Bright
Up : Down
Good : Bad
On : Off

Left :
Forward :
Full :”

Few-Shot prompt

Prompt for poetry

Prompt:
“@ Meta AI
Task: Poem Generation
Create a poem on Summer”

Prompt for poetry

Prompt for solving MCQ

Prompt:
“@ Meta AI
Task : MCQ solving
Which woman astronaut has set the record for the longest single spaceflight by a woman?

A Peggy Whitson
B Jessica Meir
C Christina Koch
D Sunita Williams

Answer:”

Prompt for solving MCQ

Prompt for Question-Answer Pair Generation

Prompt:
“@ Meta AI
Task : QA generation

Text : “Phantom Vibration Syndrome is a neurological phenomenon where people mistakenly think their phone is vibrating. It’s often linked to excessive mobile phone use and is described as a tactile hallucination. Research suggests it’s related to smartphone dependence, and vibrations typically start occurring after carrying a phone for a few months to a year.”

QA pairs :”

Prompt for Question-Answer Pair Generation

Prompt for Text Summarization

Prompt:
“@ Meta AI
Task : Summarisation

Text : “There’s a neurological explanation for why you thought your phone was vibrating because there’s no warning called Phantom Vibration Syndrome. Most often associated with excessive mobile phone use, it has been described as a tactile hallucination as the brain perceives the vibration that is not present. Preliminary research suggests it is related to over-involvement with one’s cell phone, as smartphone dependence is associated with occurrence of phantom phone signals. Vibrations typically begin occurring after carrying a phone for between one month and one year.”

Summary:”

Prompt for Text Summarization

2) Text to Image Prompts

Prompt 1:
“@ Meta AI
Create an image of a Lamborghini on road going towards evening sun and sky with shaded of red and purple”

Text to Image prompt 1

Prompt 2:
“@ Meta AI
Create a wallpaper saying this quote
‘your future is created by what you do today not tomorrow’”

Text to Image prompt 2

Prompt 3:
“@ Meta AI
Create a flow diagram image of the ML process. Schematic diagram”

Text to Image prompt 3

Prompt 4:
“@ Meta AI
create an image containing schematic diagram of a payment gateway platform. Mention each component precisely. I want start to end process flow in the diagram. And mention all integrated technologies.”

3) Text + Image to Image Prompts

Prompt 1.1:
“@ Meta AI create an apple image”

Text + Image to Image Prompt 1.1

Prompt 1.2:

“@ Meta AI can you put some strawberry in the side of this apple”

Text + Image to Image Prompt 1.2

Prompt 1.3:

“@ Meta AI now create a fruit shake using these 2 foods”

Text + Image to Image Prompt 1.3

4) Text + Image to Text

Prompt 1:
“@ Meta AI describe this image in beautiful way”

Text + Image to Text Prompt 1

Prompt 2.1:
“@ Meta AI
Create a realistic image of the solar system.”

Text + Image to Text Prompt 2.1

Prompt 2.2:
“@ Meta AI create a poem based on this image.”

Text + Image to Text Prompt 2.2

Prompt 3:
Just kidding! There are no prompts or further steps now. That’s it: this is how you can perform various kinds of ‘Prompt Engineering’ with ‘Meta AI’ on WhatsApp to increase productivity and creativity and enhance your abilities.

If you have any queries, feel free to contact me here (Link)

#5 CONCLUSION

In conclusion, utilizing ‘Meta AI’ on WhatsApp opens up new avenues for creative communication and productivity enhancement. Here are six key takeaways from our exploration:

1) Versatile Accessibility: ‘Meta AI’ is readily accessible across various platforms including WhatsApp, Messenger, Instagram, and soon on Ray-Ban Meta smart glasses and Quest 3, offering users a wide array of options for interaction.
2) Innovative Technology: Powered by the advanced large language model Llama 2, ‘Meta AI’ leverages cutting-edge technology to provide users with efficient and intuitive assistance.
3) Privacy Assurance: WhatsApp ensures user privacy by limiting ‘Meta AI’ access to only messages directed to it, safeguarding personal conversations and ensuring confidentiality.
4) Language Compatibility: While currently available in English, ‘Meta AI’ aims to enhance its multilingual capabilities in the future, catering to a broader user base.
5) Safety and Responsibility: Extensive testing and safety measures are in place to ensure that ‘Meta AI’ generates appropriate and safe content, particularly in response to sensitive or potentially harmful prompts.
6) Continuous Improvement: As ‘Meta AI’ evolves, ongoing research and development efforts are essential to further enhance its capabilities, address user needs, and mitigate potential risks.

Through continuous innovation and user-centric design, ‘Meta AI’ continues to redefine digital communication, empowering users with unprecedented convenience and efficiency.

#6 FAQs

Q.1 Can Meta AI read our chats with other people?

Upon initial usage, WhatsApp notifies users that only messages addressed with ‘@’ to ‘Meta AI’ [@Meta AI] are sent to Meta. It clarifies that Meta AI cannot read any other messages within the chat, ensuring privacy. However, this does not confirm whether Meta uses or will use user data on the backend to train Meta AI.

Q.2 Can “Meta AI” work with different languages?

At present, ‘Meta AI’ is only compatible with the English language. We conducted tests with Marathi and Hindi alongside English. When we requested ‘Meta AI’ to create a poem in Marathi, translate it into Hindi, and explain it in English, it initially generated a response. However, it later changed the response to “Sorry, I can’t help you with this request right now. Is there anything else I can assist you with?”

Initially generated response:

Immediately changed response:

Q.3 Which version of WhatsApp gives access to ‘Meta AI’?

As of mid-April 2024, users can access “Meta AI” with WhatsApp version 2.24.7.81. If your WhatsApp is not yet updated, you can update it from the Google Play Store or Apple App Store, depending on your operating system (Android or iOS).

Q.4 How reliable is the safety of ‘Meta AI’ with respect to sexual or vulgar text?

During extensive testing of ‘Meta AI’ to evaluate its response to sexual or vulgar content, we employed various prompt manipulation techniques. Despite its effective performance in swiftly modifying responses to inappropriate content, there were instances in text-to-image tasks where it generated sexually suggestive content. However, it notably refrained from producing explicit or sexually harmful images.

Q.5 Can you delete messages generated by ‘Meta AI’ for you in “Delete for All” mode in any chats?

Yes! You can delete any messages generated by ‘Meta AI’ for your queries in “Delete for All” mode in any chat. When you delete messages in “Delete for All” mode, the ‘Meta AI’ messages will be deleted and will show “This message was deleted”.

Q.6 Can ‘Meta AI’ access media content like images or audio?

No! ‘Meta AI’ can’t work on audio files for now. We attempted to initiate ‘Meta AI’ by attaching an image in chats using ‘@’, but found that it was not available for such interactions. ‘Meta AI’ is unable to access any media within chats; it can only access text messages that include ‘Meta AI’ or its own generated messages.

Q.7 Can ‘Meta AI’ generate images in chats?

Certainly! ‘Meta AI’ does have the capability to generate images within chats, provided that the description related to the image is sufficiently detailed.

Q.8 Can ‘Meta AI’ generate audio files in chats?

No, currently ‘Meta AI’ can’t generate audio files. It can only work with text and image data.

Q.9 Is ‘Meta AI’ available for Broadcast Lists in What’s App?

No, currently ‘Meta AI’ is not available for generating text or images in Broadcast List messages.

#7 REFERENCE

  1. Generative artificial intelligence (Link)
  2. Meta WhatsApp (Link)
  3. Discover the power of Llama (Link)
  4. Meta: Introducing New AI Experiences Across Our Family of Apps and Devices (Link)
  5. Llama 2: Open Foundation and Fine-Tuned Chat Models [Paper] (Link)
  6. Prompt engineering (Link)
