The Next Generation of AI
Using Cognitive Science, not LLMs
Today’s AI is not getting closer to AI’s goals, and that’s a problem.
By the way, today’s discussion is also available as a YouTube video here.
Sure, there’s a lot of activity in AI around LLMs like ChatGPT, with outrageous hype claiming that human-level AI is approaching, but that claim is easy to falsify.
Breathless announcements of improvements on artificial benchmarks are no match for science, even when billions of dollars of investment are chasing trillions of dollars in value. In science, we can see that the field’s goals remain stuck despite benchmark successes.
It’s hard to critique the lack of progress in AI quickly, so let me talk through some issues today and then go into more detail in the coming weeks. But always keep in mind that AI should aim at human-level skills, and should not allow current limitations, like errors or lack of trust, to be accepted or to become the goal.
After all, goals can be whatever we want them to be.
So What Is the Goal of AI?
Strong AI should fix the long tail of driverless car problems using its vision and world knowledge.
But OpenAI has an AGI plan that excludes the hard problems of AI, which will leave it blind, deaf, mute and paralysed. For marketing purposes that’s fine, but their AGI moves the goalposts to avoid solving the problems of AI. Worse, industry, government and educational…