Autonomous Agents — #AI
Notes on Artificial Intelligence and Machine Learning
RAG Does Not Reduce Hallucinations in LLMs — Math Deep Dive
Too much marketing Kool-Aid has been poured into the claim that RAG avoids or reduces hallucinations in LLMs. This is simply not true.
Freedom Preetham
Feb 16
A Math Deep Dive on Gating in Neural Architectures
Freedom Preetham
Feb 7
Unpredictable Latent Errors in AI can be Catastrophic — Mathematical Explanation
Is the future of Artificial Intelligence dystopian or utopian? This question has always been a subject of debate, with major camps…
Freedom Preetham
Dec 17, 2023
Enhancing LLM’s Reasoning Through JEPA — A Comprehensive Mathematical Deep Dive
Freedom Preetham
Dec 15, 2023
LLMs, Transformers, GPTs — Here is One Ring to Rule Them All
If LLMs were a J.R.R. Tolkien narration, then this blog is the one ring to rule them all :) I have compiled a list of in-depth blogs I have…
Freedom Preetham
Dec 6, 2023
Part 9 — Memory-Augmented Transformer Networks: A Mathematical Insight
In the realm of sequential data processing, traditional Transformer architectures excel in handling short-term dependencies but falter in…
Freedom Preetham
Dec 6, 2023
Part 8 — Mathematical Explanation of Why It’s Hard for LLMs to Memorize
From the beginning of this blog series we have seen how the development of transformer models like GPT-4 represents a paradigm shift in…
Freedom Preetham
Dec 5, 2023
Part 7 — Strategies for Enhancing LLM Safety: Mathematical and Ethical Frameworks
The quest to enhance the safety of Large Language Models (LLMs) is a sophisticated interplay of technical innovation, ethical…
Freedom Preetham
Dec 2, 2023
Part 6 — Adversarial Attacks on LLMs: A Mathematical and Strategic Analysis
Adversarial attacks on Large Language Models (LLMs) represent a sophisticated area of concern in AI safety, requiring an intricate blend…
Freedom Preetham
Dec 1, 2023
Deep Dive into Rank Collapse in LLMs
Transformers, central to advancements in machine learning, leverage the self-attention mechanism for tasks across various domains…
Freedom Preetham
Nov 29, 2023
Simplifying Transformer Blocks — A Detailed Mathematical Explanation
Large language models (LLMs) can expand their capabilities through various scaling strategies. The more straightforward approach involves…
Freedom Preetham
Nov 28, 2023
Part 5 — In-Depth Analysis of Red Teaming in LLMs: A Mathematical and Empirical Approach
The field of Large Language Models (LLMs) is rapidly advancing, necessitating robust red teaming strategies to ensure their safety and…
Freedom Preetham
Nov 26, 2023
Part 4 — Enhancing Safety in LLMs: A Rigorous Examination of Jailbreaking
The concept of jailbreaking Large Language Models (LLMs) such as GPT-4 represents a formidable challenge within the domain of artificial…
Freedom Preetham
Nov 26, 2023
Q* Algorithm is NOT Q-Learning
Freedom Preetham
Nov 25, 2023
Part 3 — Mathematically Assessing Closed-LLMs for Generalization
In the realm of closed Large Language Models (LLMs) such as OpenAI or Anthropic, the true test of intelligence and versatility lies in…
Freedom Preetham
Nov 24, 2023
Part 2 — LLMs Beyond Memorization
Large Language Models (LLMs) represent a significant leap in artificial intelligence, transcending the notion of mere memorization models…
Freedom Preetham
Nov 23, 2023
Part 1 — Are LLMs Just a Memory Trick?
It has become fashionable for critics to dismiss Large Language Models (LLMs) as mere memorization devices, arguing that their extensive…
Freedom Preetham
Nov 23, 2023
Part 3 — AGI: An Advanced Mathematical Perspective
Following up on Part 2, which explored the nature of AGI and its comparison with human cognition, this blog delves into the ‘how’ of AGI…
Freedom Preetham
Nov 23, 2023
Part 2 — What is Artificial General Intelligence? Can AGI Achieve Human Cognition?
In continuation with the previous blog where I theorized what AGI is NOT, in this blog I dive deeper into explaining what it is. Here, I…
Freedom Preetham
Nov 23, 2023
Part 1 — AGI ≠ AHI: Mathematical Explanation of What AGI is Not
I see the biggest confusion and debate about AGI (Artificial General Intelligence) is that people automatically think that it is AHI…
Freedom Preetham
Nov 21, 2023
Algorithms vs AI vs AGI: Dispelling The Myths for Beginners
In the realms of computer science and artificial intelligence, understanding the evolution from traditional algorithms to Artificial…
Freedom Preetham
Nov 14, 2023
ConvNets vs Vision Transformers: A Mathematical Deep Dive
I have been witnessing this debate on Vision Transformers on how they are as good or better than CNNs. I wonder if we debate the same on…
Freedom Preetham
Oct 29, 2023
Character-level vs. Word-level Embeddings in Transformers: A Mathematical Exposition
Freedom Preetham
Oct 6, 2023
Making Machines Think Like Us: Mixing Randomness with Brain-Like Models
A Stochastic Blueprint for Advancing AGI
Freedom Preetham
Oct 1, 2023
What Next after Transformers? A Rigorous Mathematical Examination of the Retentive Network (RETNET)
The landscape of deep learning is punctuated by the emergence of novel architectures, each aiming to address the multifaceted challenges…
Freedom Preetham
Sep 24, 2023