You raise some very interesting points! Especially the thought of some form of consolidation (or defragmentation as you alluded to in your other post) as something that could be analogous to sleeping.
But I wonder if things like this could be done concurrently alongside learning in a machine? Even the computers we use in our day-to-day lives run many parallel processes, so it would seem logical to use concurrency to address these issues in artificial intelligence. I agree that our brains are vastly more parallelised (and far more efficient at divided attention) than any machine I’ve read about today, but I think the gap will shrink. Code for training neural networks on parallel GPU set-ups is available for any enthusiast to use, and one would imagine that a general AI would be an amalgam of many such systems (much in the way our brains are organised into many regions).
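As a toy sketch of the kind of concurrency I mean (all names here are illustrative, using Python's standard `threading` module): one thread keeps "learning" while a second thread "consolidates" a shared store in the background, the two running side by side rather than in alternating phases.

```python
import threading
import queue

# Toy sketch: a "learner" thread pushes new observations while a
# "consolidator" thread compacts them into a long-term store as they
# arrive -- consolidation running alongside learning, not instead of it.
memory = queue.Queue()   # short-term buffer of raw experiences
consolidated = []        # long-term store
stop = threading.Event()

def learner():
    # Pretend each integer is a new experience being acquired.
    for i in range(100):
        memory.put(i)

def consolidator():
    # Keep consolidating until learning has stopped AND the buffer is drained.
    while not stop.is_set() or not memory.empty():
        try:
            item = memory.get(timeout=0.05)
            consolidated.append(item)  # "defragment" into the long-term store
        except queue.Empty:
            pass

t_learn = threading.Thread(target=learner)
t_cons = threading.Thread(target=consolidator)
t_learn.start()
t_cons.start()
t_learn.join()
stop.set()
t_cons.join()

print(len(consolidated))  # every experience ends up consolidated
```

Of course this glosses over the hard part (what "consolidate" actually does), but it shows that nothing about the architecture forces the two activities to be sequential.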
Also, I think that mathematics and logic are fields in which AI will flourish. Symbolic differentiation, graph theory, and higher-order logic are all areas where computers already excel (the machine proof of the Robbins conjecture is the first example that comes to mind).
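Symbolic differentiation is a nice illustration of why: it's pure rule application, exactly the kind of thing a machine does tirelessly. A minimal sketch (my own toy encoding, representing expressions as nested tuples rather than using a real computer-algebra library):

```python
# Toy symbolic differentiator. Expressions are nested tuples:
#   ('+', a, b) for sums, ('*', a, b) for products,
#   the string 'x' for the variable, and plain numbers for constants.

def diff(expr, var='x'):
    if isinstance(expr, (int, float)):
        return 0                      # d/dx of a constant
    if expr == var:
        return 1                      # d/dx of x
    op, a, b = expr
    if op == '+':                     # sum rule: (a + b)' = a' + b'
        return ('+', diff(a, var), diff(b, var))
    if op == '*':                     # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x) = 1*x + x*1 (i.e. 2x, before simplification)
print(diff(('*', 'x', 'x')))
```

Each differentiation rule is one line; the machine never forgets one or misapplies it, which is precisely the advantage that automated theorem provers exploited for the Robbins conjecture.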
But overall I do agree with you. I think it was naïve of me to assume that a machine can learn faster than a human; the answer will be very task-dependent. There are almost certainly cognitive and technical barriers that no one has even comprehended yet. Nevertheless, I’m excited to see how things fare in the near future!