It’s not clear how fast it could learn. Learning isn’t just about rote memorization.
A computer can certainly outdo a human at rote memorization, in speed, accuracy of recall and capacity. But you can't learn (for example) mathematics by rote; you have to understand how it all interrelates.
What humans are best at is skills such as summarization (boiling down what matters from a sea of unnecessary junk) and synthesis: if I read that A implies B in one place and that C implies D somewhere else, then discover that B and C are actually the exact same thing, then I know that A implies D. That's a lot harder to do than you'd think.
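To make the synthesis step concrete, here is a minimal sketch in Python. The fact names (A, B, C, D) and the alias table are invented for illustration: we store implications learned from separate sources, record the later discovery that two concept names refer to the same thing, and derive new implications by transitivity.

```python
# Implications learned from two separate sources: A -> B and C -> D.
implications = {("A", "B"), ("C", "D")}

# The later discovery: "C" is just another name for the concept "B".
aliases = {"C": "B"}  # maps each alias to its canonical name (illustrative)

def canonical(term):
    """Follow alias links until we reach a canonical concept name."""
    while term in aliases:
        term = aliases[term]
    return term

def synthesize(implications):
    """Derive new implications by transitivity over canonicalized terms."""
    derived = {(canonical(a), canonical(b)) for a, b in implications}
    changed = True
    while changed:  # repeat until no new implication appears (transitive closure)
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

print(("A", "D") in synthesize(implications))  # prints True: A implies D was synthesized
```

The point of the sketch is that the mechanical closure step is trivial; the hard part, which the code simply assumes away with the `aliases` table, is noticing that B and C are the same thing in the first place.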
I’m quite sure that an AI could ingest any amount of information — but distilling what matters from it is a highly non-trivial problem — and it’s very possible that our insanely parallel (but slow and glitchy) brains are much better at it than a strictly serial (but fast and reliable) computer.
So, no. I disagree that it's in any way obvious that a computer could learn faster than a human. I'm not saying that it couldn't, or that it could. I'm just saying that *assuming* it can, just because it has speed and reliability, is not a reasonable assumption.
Another thing that interests me is that human brain cells are not as fast as those of many other animals. More heavily myelinated neurons conduct signals faster than ours do. We've evolved our massive intelligence by increasing the interconnectedness of our neurons (which is greater than in other animals) rather than by increasing their speed. There may be solid reasons for that, and computers are heading in the wrong direction.
As for the “running continuously” thing…that’s also not obvious. We sleep (in part) to give our brains time to reorganize knowledge — to weed out the important from the junk. One famous research project is “Cyc”. Cyc can be loaded with tons of raw information (eg “Obama is a US president” and “All US presidents are famous men”) and can make deductions (eg “Obama is a famous man”) — but the database is full of contradictions (eg “Hillary Clinton is not a man”…oh, oh!) — and there are problems with selectivity of knowledge. For example, Cyc once decided that it was overwhelmingly likely that all humans are famous — because the only humans it had data on tended to be famous people.
So they run a ‘batch job’ at intervals that examines the contradictions and new conclusions that Cyc finds and fixes them up. This process is a lot like “dreaming”. It’s very possible (but, again, not certain) that this kind of thing will be needed in AI systems in the future.
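A toy sketch of that idea, not Cyc's actual machinery: facts are (entity, property, truth-value) triples, rules forward-chain new facts from old ones, and a separate batch pass hunts for contradictions, much like the cleanup "dreaming" described above. The entity "Pat" and the over-broad rule that produces the contradiction are hypothetical.

```python
# Illustrative knowledge base; "Pat" is a made-up entity used to force a contradiction.
facts = {
    ("Obama", "us_president", True),
    ("Pat", "us_president", True),
    ("Pat", "man", False),
}
rules = [
    ("us_president", "famous"),  # all US presidents are famous
    ("us_president", "man"),     # an over-broad rule, like the ones the text describes
]

def deduce(facts, rules):
    """Forward-chain: apply each rule to every positive fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (entity, prop, val) in list(derived):
            if not val:
                continue
            for (pre, post) in rules:
                if prop == pre and (entity, post, True) not in derived:
                    derived.add((entity, post, True))
                    changed = True
    return derived

def find_contradictions(facts):
    """Batch pass: report any property asserted both true and false of one entity."""
    return {(e, p) for (e, p, v) in facts if (e, p, not v) in facts}

all_facts = deduce(facts, rules)
print(("Obama", "famous", True) in all_facts)  # prints True: deduced, not stored
print(find_contradictions(all_facts))          # prints {('Pat', 'man')}
```

Note that deduction here is cheap and mechanical; it's the cleanup pass, deciding which of the two clashing facts to keep, that needs judgment, which is exactly the part this sketch leaves out.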
Deep neural networks have to be trained and re-trained when something significant in the world changes, too, and then that training has to be tested. That process takes a lot more processing time than actually executing the resulting network. So, again, it's possible that deep-learning AIs will need to take time out to "sleep".
I don't generally disagree with conclusions made about future general AI, but it's important to note that the underlying assumptions we make about them are just that..."assumptions"...and there is every reason to believe that many, most, or perhaps even all of those assumptions will ultimately prove to be false.