The latest round of machine learning results is qualitatively different from what came before.
The AlphaGo system didn’t just beat the world’s best Go player — it worked out, largely on its own through self-play, the subtle positional judgement needed to win, rather than relying on hand-coded heuristics. Go is an exceedingly hard game, and all previous efforts had struggled to beat even a moderately competent human player.
Training an AI to drive a car has likewise produced results that go far, far beyond what conventional, hand-written programming has been able to achieve.
The latest round of deeper neural nets has (IMHO) demonstrated that the previously unimpressive results from CPU-bound neural nets were due mainly to inadequate size and depth. Now that we can offload the arithmetic onto massively parallel commodity GPU cores, that extra depth becomes practical — and the result was a gigantic leap in capability over a period of about one year.
This leaves open the *possibility* that still deeper neural nets will start to approximate human general intelligence. We don’t yet know for sure — but it does seem likely that we’ll find out over the next decade. A neural net with roughly as many neurons as a human brain — built from custom silicon — could probably be purchased for a few billion dollars today.
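To put a rough number on "human scale" (these figures are my own back-of-envelope additions, not from the original comment): the human brain is commonly estimated at ~86 billion neurons and on the order of 10^14 synapses. A quick sketch of what just storing the weights would take:

```python
# Back-of-envelope estimate of storage for a brain-scale neural net.
# Neuron/synapse counts are commonly cited rough figures; the choice of
# one 32-bit float per synaptic weight is an assumption for illustration.
NEURONS = 86e9          # ~86 billion neurons (rough figure)
SYNAPSES = 1e14         # ~100 trillion synapses (rough figure)
BYTES_PER_WEIGHT = 4    # 32-bit float per weight (assumption)

weight_storage_tb = SYNAPSES * BYTES_PER_WEIGHT / 1e12
print(f"~{weight_storage_tb:.0f} TB just to hold the weights")
```

Hundreds of terabytes of weights is large but not absurd for custom silicon at a multi-billion-dollar budget — which is the point of the cost estimate above.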
What I suspect, though, is that the training time for a general AI would be comparable to the training time for human intelligence — so we might need to shovel data into the thing for decades before we’d see any real intelligence emerging.
That said, the original article that we’re responding to is full of holes and bad assumptions, as I tried to explain in my earlier comments.