Great post, although I remain a sceptic on most of what’s implied.
Some important facts the post doesn’t devote any attention to:
- Moore’s law has stopped (or at least decelerated massively). Doing twice the operations concurrently is not anywhere remotely as useful as doing the same number of operations in half the time.
- Fundamental AI algorithm innovation is actually very thin. Most of the big AI competitions (Netflix, Heritage Health Prize) involve a huge amount of effort going into parameter optimisation, which ends up eking out a statistically significant but practically worthless improvement in algorithm performance for the sake of ending up higher on the league table. There have not really been the sorts of game-changing algorithm ideas that are needed to take AI to the next stage.
- The driverless car thing drives me nuts. For a driverless car to drive a 1000 km journey, I would argue the most it can hand back to a human is 5 cm of it; otherwise you simply won’t accept it driving around autonomously. People, that’s seven 9’s of accuracy (5 cm out of 1000 km is an error rate of 5×10⁻⁸). The only example of humans achieving that sort of accuracy in a complex system is chip fabrication. But guess what, chip fabrication happens in an environment you control 100%.
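For what it’s worth, the nines arithmetic above checks out. A quick back-of-the-envelope sketch (assuming the error rate is simply human-assisted distance divided by total distance, which is my own framing, not anything rigorous):

```python
import math

def nines_of_accuracy(error_fraction):
    # Count the leading nines in the accuracy figure 1 - error_fraction,
    # e.g. an error of 1e-3 gives 99.9% accuracy -> three nines.
    return math.floor(-math.log10(error_fraction))

human_metres = 0.05            # 5 cm handed back to the human
journey_metres = 1000 * 1000   # a 1000 km journey, in metres

error = human_metres / journey_metres   # ~5e-8
print(nines_of_accuracy(error))         # -> 7
```

So 5 cm of human intervention over 1000 km really does sit at seven nines, which is the kind of reliability we otherwise only see in tightly controlled environments.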
I guess the thread that unites my scepticism is that achieving a better level of application experience (be it Siri, driverless cars or whatever) involves a geometric increase in complexity, and Moore’s law and algorithm design no longer seem to offer tangible progress against it. All we have is a bunch of “Data Scientists” (and I don’t mean that as a derogatory term) who plug parameters into existing data models in order to enable a broader application of the algorithms we have today on today’s hardware. This enables lots of PR of the sort peddled here — AI is taking over the world. It’s not. It’s just more of us doing stuff we have always had the tools to do but never had the time to get around to doing. The hard stuff is still really, really hard, and I don’t see it getting any easier.