AIGuys

Deflating the AI hype and bringing real research and insights on the latest SOTA AI research papers. We at AIGuys believe in quality over quantity and are always looking to create more nuanced and detail-oriented content.

Why Can’t AI Make Its Own Discoveries?


I know a lot of people will outright disagree with the title itself, but spare me a few minutes before you stop reading. LLMs, which have now become synonymous with AI, have done some pretty cool stuff. I use these tools and models every single day, but we are still quite far from AI models that can make truly original discoveries of their own. Systems like AlphaFold have done great work, but even their breakthroughs are still primarily driven by human intuition and ingenuity. So today we are going to dive deeper into what actually stops AI from making its own discoveries.

Getting Inside the LLMs

It is important to understand what LLMs are doing fundamentally and why reasoning is hard for them. In simple terms, LLMs are, for the most part, still predicting the next best token. We should pause here and ask ourselves: is that how we reason? I'm not denying that we produce one word after another, even in our own heads, but that's not the whole story. The way we think certainly involves next-word prediction, but we first think in concepts and only then use words to articulate those concepts. Thinking uses words as tools, while the association happens at a meta level.
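To make "producing the next best token" concrete, here is a minimal sketch of the generation loop. It uses GPT-2 through the Hugging Face transformers library purely as an illustration; any causal language model follows the same pattern.

```python
# Minimal sketch of greedy next-token prediction.
# GPT-2 is only an illustrative choice; any causal LM works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                            # generate 10 tokens, one at a time
        logits = model(input_ids).logits           # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1)  # pick the single "best" next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Real systems usually sample instead of always taking the argmax, but the core loop is the same: one token at a time, each conditioned only on the tokens that came before it.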

LLMs have essentially compressed the entire internet into their weights, and that compression automatically introduces a type of generalization. But that generalization does not mean the model has actually learned the correct associations between different concepts.
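One way to see the compression view concretely: a model's next-token probabilities translate directly into a code length for any text, measured in bits per token. The sketch below computes that number with GPT-2; the model choice and the example sentence are just illustrative assumptions.

```python
# Sketch of the prediction-as-compression view: the average negative
# log-probability of each token (in bits) is the code length the model
# implicitly assigns to the text. Lower bits per token = better compression.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Water boils at one hundred degrees Celsius at sea level."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                     # (1, seq_len, vocab_size)

# Score each token given the tokens before it (shift predictions by one position).
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
token_log_probs = log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
bits_per_token = -token_log_probs.mean().item() / math.log(2)

print(f"{bits_per_token:.2f} bits per token")
```

The better the model predicts the next token, the fewer bits it needs, which is why prediction and compression are two views of the same training objective. That objective, however, says nothing about whether the concepts behind the words are associated correctly.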
