Gorilla AI Creator Unveils How It Equips LLMs With Real-World Skills and Knowledge … 🦍 🧠

Raphael Mansuy
6 min read · Aug 22, 2023

I recently watched an insightful talk by Gorilla AI founder Shishir Patil at UC Berkeley’s Simons Institute for the Theory of Computing.

He examined how to impart external knowledge and capabilities to large language models (LLMs) to make them more useful and safe for real-world deployment.

Specifically, Patil discussed two strategies for augmenting LLMs — fine-tuning and retrieval — and when each approach is preferable.

He also demonstrated in depth Gorilla, an approach that connects LLMs to external APIs to add new skills while mitigating risks like hallucination.
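
To make that concrete, here is a minimal sketch of the retrieval-grounded API-call pattern, written as I understand it rather than taken from Gorilla's code: the tiny documentation corpus, the keyword retriever, and the prompt wording are all illustrative assumptions.

```python
# Minimal sketch of retrieval-grounded API-call generation, in the
# spirit of Gorilla. The corpus, retriever, and prompt below are
# illustrative assumptions, not Gorilla's actual implementation.

# Tiny stand-in corpus of API documentation snippets.
API_DOCS = [
    "torchvision.models.resnet50(...): classify images with a ResNet-50 model",
    "transformers.pipeline('translation_en_to_fr'): translate English text to French",
    "transformers.pipeline('summarization'): summarize long text passages",
]

def retrieve_api_docs(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retriever; a real system would embed the
    full API documentation and run a vector search."""
    words = set(query.lower().split())
    ranked = sorted(API_DOCS, key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(user_request: str) -> str:
    """Ground the model in retrieved docs so it emits a call that
    actually exists instead of hallucinating one."""
    docs = retrieve_api_docs(user_request)
    return (
        "You may only use the APIs documented below.\n\n"
        + "\n".join(docs)
        + f"\n\nTask: {user_request}\nRespond with a single API call."
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to any instruction-tuned LLM.
    print(build_prompt("translate this paragraph into french"))
```

Constraining the model to documented APIs gives its output a concrete, checkable target, which is exactly where hallucinated calls get caught.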

The key questions examined were:

  • When to fine-tune an LLM versus use retrieval?
  • How to enable LLMs to take actions that change the world’s state and receive feedback? (See the sketch after this list.)
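
The second question is, at heart, a loop: the model proposes an action, the system executes it against the world, and the observed result is fed back into the next step. The sketch below is my own schematic of such a loop; `call_llm` and `execute` are hypothetical stubs, not code from the talk.

```python
# Schematic act-observe loop: the model proposes an action, the
# system executes it, and the result is fed back as context.
# call_llm() and execute() are hypothetical stubs introduced here.

def call_llm(prompt: str) -> str:
    """Stand-in for any instruction-tuned LLM."""
    raise NotImplementedError

def execute(action: str) -> str:
    """Stand-in for actually running the proposed call (REST request,
    shell command, ...) and capturing its output or error."""
    raise NotImplementedError

def act_with_feedback(task: str, max_steps: int = 5) -> str:
    """Alternate between proposing an action and observing its result."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        action = call_llm(history + "Propose the next API call, or say DONE.")
        if action.strip() == "DONE":
            break
        result = execute(action)  # this step changes the world's state
        history += f"Call: {action}\nResult: {result}\n"  # feedback signal
    return history
```

Because the loop sees real execution results, a hallucinated endpoint surfaces immediately as an error the model can react to, rather than persisting silently.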

The talk highlighted important developments in making LLMs more knowledgeable, truthful, and practically beneficial. In this post, I’ll summarize Patil’s main points and why they matter for the future of AI.

Gorilla

Why Teaching LLMs is Hard

LLMs like GPT-3 have shown impressive natural language generation abilities. However, their knowledge comes…

