I’m afraid I disagree thoroughly with all three of your points. For one thing, current AI really isn’t mimicking the brain, other than in name. “Neural networks” are indeed networks, but they really have very little to do with neurons. The general idea of an ‘activation function’ comes from neurons, but in AI it just means “if the weighted sum of all these little inputs is greater than some threshold (1, or 3, or 4.8, or whatever we happen to pick) then output a 1. Otherwise output a 0.” How exactly AI is recognizing a white wolf vs. a white dog isn’t entirely clear. What is clear, however, is that on some of these tasks it has already surpassed human abilities.
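To make the point concrete, here is a minimal sketch of the kind of threshold ‘activation’ described above (the names and numbers are just illustrative, not from any particular library):

```python
def step_activation(inputs, weights, threshold=1.0):
    """Output 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

print(step_activation([0.5, 0.9], [1.0, 1.0]))  # weighted sum 1.4 > 1.0, prints 1
print(step_activation([0.2, 0.3], [1.0, 1.0]))  # weighted sum 0.5 <= 1.0, prints 0
```

That’s the whole biological resemblance: a sum and a cutoff. Nothing about dendrites, spike timing, or neurotransmitters survives the analogy.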
The above applies particularly to your AI Limitation #3. It’s actually the opposite: we don’t understand exactly what the neural network is doing, yet it’s already surpassing our abilities.
I also think this idea of general AI is shockingly misunderstood. The whole point of neural networks and deep learning is that we aren’t programming the computer. We’re just putting in a bunch of inputs and letting it run, using gradient descent, loss functions, and so on to ‘tune’ what it’s doing. This means that it is generalizing. It’s generalizing in ‘local’ tasks such as differentiating ‘dogs’ (a general category full of specifics) from ‘wolves’ (another general category full of specifics). That is remarkable. It is actually incredible. And this process of generalization doesn’t just stop within a ‘local’ (my word) domain. It is now starting to spread: the system applies in one domain what it has learned in another. There’s now a term for this: transfer learning. I’m not saying that general AI is here, but I think it’s a wild oversimplification to say that current AI is brittle and constrained to narrow tasks. In my view, it’s only a matter of time, and the timeline is accelerating. It’s shorter than we think.
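The ‘tuning’ I mean can be shown in miniature. This is a toy sketch (one weight, a squared-error loss, hand-derived gradient; all names are mine, not from any framework) of how gradient descent nudges a parameter downhill on the loss rather than being explicitly programmed with the answer:

```python
def train(xs, ys, lr=0.1, steps=100):
    """Fit y = w*x by gradient descent on mean squared error."""
    w = 0.0  # start from an arbitrary weight
    for _ in range(steps):
        # loss = mean of (w*x - y)^2; its gradient wrt w is mean of 2*x*(w*x - y)
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step against the gradient
    return w

w = train([1, 2, 3], [2, 4, 6])  # data generated by y = 2x
print(round(w, 3))  # converges near 2.0
```

Nobody told the program “the rule is multiply by 2”; the loss function and the update rule discovered it. Scaled up to millions of weights, that is all the ‘programming’ a deep network gets.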
At any rate, thanks for writing the article and stimulating thought!