Autonomous Agents — #AI
Notes on Artificial Intelligence and Machine Learning
The Scale and Complexity of Protein-Ligand Binding: A Mathematical Perspective on OOD Errors
Exploring the Challenges of Protein-Ligand Binding Predictions in AI Through Leash Bio’s BELKA…
I’ve been following developments in the field of protein-binding affinity for some time (even though my focus is Genomics), driven by a…
Freedom Preetham
Aug 15
Limitations of LLMs in Combinatorial Optimization
In a recent conversation with postdoctoral math grads, we discussed the capabilities and limitations of Large Language Models (LLMs) in…
Freedom Preetham
Aug 13
Mamba vs. Weighted Choquard: Comparative Analysis of Non-local Influence Models
In this paper I want to present a mathematical comparison between the Mamba (Selective Structured State Space Model) and my research on…
Freedom Preetham
Aug 8
Part 5 — Integrating the Weighted Choquard Equation with Fourier Neural Operators
In recent years, there has been significant interest in leveraging machine learning techniques to solve complex partial differential…
Freedom Preetham
Aug 8
Part 4 — Non Local Interactions in AGI through Weighted Choquard Equation
In the quest to build Artificial General Intelligence (AGI) models, one of the most pressing challenges is to endow machines with the…
Freedom Preetham
Aug 7
The Elegance of Deep Learning Lies in Its Empirics, Not in Its Lines of Code
Freedom Preetham
Jul 24
Understanding the Hidden Bias of Transformers in Machine Learning
A General Summary Without Any Math.
Freedom Preetham
Jul 21
Ensuring Robustness and Mitigating Confounding in Biological Modeling
The Imperatives of Discretization and Resolution Invariance
Freedom Preetham
Jul 20
Part 2: SciML — A Mathematical Account of PDE Solvers, Discoverers and Operator Learning
The integration of machine learning with scientific modeling, known as Scientific Machine Learning (SciML), has ushered in transformative…
Freedom Preetham
Jul 13
Part 3: Biological Operators to Math Operators ~ Mixture of Operators for Modeling Genomic…
Nature is modular and multi-scale. While natural systems exhibit chaos and complexity in the codomain with high variability, the natural…
Freedom Preetham
Jul 10
Part 1: SciML — Why Transformers Fall Short in Scientific Computing
Transformer-based models like LLMs have demonstrated remarkable prowess in natural language processing tasks. However, their limitations…
Freedom Preetham
Jul 7
Rethinking Memory in AI: Fractional Laplacians and Long-Range Interactions
Whenever I engage in discussions about modeling memory in the context of artificial intelligence research, I often encounter a fundamental…
Freedom Preetham
Jul 2
Understanding Math Behind Chinchilla Laws
Optimizing LLM Performance through Compute-Efficiency
Freedom Preetham
Jun 14
Part 2 — An Advanced Thesis: Learning from Joint Distributions
In continuation of the discussions from Part 1, where I surmised that we truly do not need big data for training today’s models, I present…
Freedom Preetham
Jun 13
Part 1 — How Many Cat Pictures? Does AI Really Need Big Data?
In the realm of artificial intelligence, there has been a longstanding belief that big data is essential for effective learning and model…
Freedom Preetham
Jun 13
Advanced Attention Mechanisms for Long Sequence Transformers
In processing long sequences, Transformers face challenges such as attention dilution and increased noise. As the sequence length grows…
Freedom Preetham
May 28
Math Behind Positional Embeddings in Transformer Models
Positional embeddings are a fundamental component in transformer models, providing critical positional information to the model. This blog…
Freedom Preetham
May 28
Comprehensive Breakdown of Selective Structured State Space Model — Mamba (S5).
Foundation models often use the Transformer architecture, which faces inefficiencies with long sequences. Mamba AI improves this by…
Freedom Preetham
May 3
Part 3 — Randomized Algo and Spectral Decomposition for High-Dimensional Fractional Laplacians
In the ambit of mathematical and computational sciences, solving ultra high-dimensional partial differential equations (PDEs) has always…
Freedom Preetham
Apr 23
RAG Does Not Reduce Hallucinations in LLMs — Math Deep Dive
Too much marketing Kool-Aid has been spent on claiming that RAG avoids or reduces hallucinations in LLMs. This is not true at all.
Freedom Preetham
Feb 16
A Math Deep Dive on Gating in Neural Architectures
Freedom Preetham
Feb 7
Unpredictable Latent Errors in AI can be Catastrophic — Mathematical Explanation
Is the future of Artificial Intelligence dystopian or utopian? This question has always been a subject of debate, with major camps…
Freedom Preetham
Dec 17, 2023
Enhancing LLM’s Reasoning Through JEPA — A Comprehensive Mathematical Deep Dive
Freedom Preetham
Dec 15, 2023
LLMs, Transformers, GPTs — Here is One Ring to Rule Them All
If LLMs were a J.R.R. Tolkien narration, then this blog is the one ring to rule them all :) I have compiled a list of in-depth blogs I have…
Freedom Preetham
Dec 6, 2023