Scale Won’t Turn LLMs Into AGI or Superintelligence
This is part 2 of our series on the capabilities and limitations of LLMs and why they are unlikely to reach AGI. In the first part, we presented several arguments against the scaling laws leading to superintelligence and offered a general direction for thinking about these systems. Now it is time to go deeper and directly question the nature of intelligence itself: to understand how the human brain achieves it and how different the brain is from current-day machines, not just in scale but in design. So, let’s get ready for an interesting ride through intelligence, the next generation of machines, and the human brain.
Before proceeding with this article, I highly recommend reading the first part, which discusses the limitations of LLMs in great detail with several examples.
Topics Covered
- Current LLMs Shortcomings
- Why Do LLMs Lack Common Sense?
- An Architecture For Autonomous Intelligence
- Claims Of People Who Believe In Superintelligence
- A Flawed Reasoning That Stems From A Misunderstanding Of Intelligence
- The Environment Puts A Hard Limit On Individual Intelligence
- Intelligence Is External And Lies In Civilizational Growth
- Individual AI Won’t Scale No Matter How Smart It Gets
- Recursively Self-improving Systems
- What Does The Next Generation of AI Systems Look Like?
- The Intelligent Design Of the Human Brain
- Summary