The latest Deep Learning networks do not attempt to mimic natural human intelligence. There is no “self-organization” in the sense that the term is used in biology. The new facial recognition algorithms are said to be “self-learning,” but all they actually do is make generalizations about data. For instance, if a Deep Learning network is fed millions of images from the Internet, it may discover repeating patterns and end up with, say, a detector for cat faces. That network is now an algorithm for identifying cats, yet we do not know what criteria it is using to make such categorizations. Now imagine taking the same approach to predict which convicted criminals are likely to commit additional crimes. Based on data from previous arrests, who will be assigned longer sentences? The new network AI approaches are no better than the old-school networks that notoriously amplify stereotypes: they still select for frequency of pattern to determine “relevance.” They do statistics; they do not interpret.
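To make the point concrete, here is a minimal sketch, in Python and on entirely synthetic data, of what such a “risk prediction” really is. The feature names (prior_arrests, neighborhood) and the numbers are invented for illustration; the only claim is that a model of this kind reproduces whatever frequencies are in its training data.

```python
# A minimal, self-contained sketch (synthetic data, hypothetical feature names)
# of the point above: a model trained on arrest records "learns" nothing but
# the frequencies already present in the data, so any skew in past policing
# is reproduced in its predictions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: [prior_arrests, neighborhood], label = re-arrested?
# The skew is built in: neighborhood 1 was policed more heavily, so it has
# more recorded arrests -- not necessarily more crime.
n = 10_000
neighborhood = rng.integers(0, 2, size=n)
prior_arrests = rng.poisson(lam=np.where(neighborhood == 1, 3.0, 1.0))
re_arrested = (rng.random(n) < np.where(neighborhood == 1, 0.45, 0.30)).astype(float)

X = np.column_stack([prior_arrests, neighborhood]).astype(float)
y = re_arrested

# Plain logistic regression fitted by gradient descent: pure frequency statistics.
X_b = np.column_stack([np.ones(n), X])          # add a bias column
w = np.zeros(X_b.shape[1])
for _ in range(2_000):
    p = 1.0 / (1.0 + np.exp(-X_b @ w))          # predicted probability
    w -= 0.1 * (X_b.T @ (p - y)) / n            # gradient step on log-loss

# The fitted model assigns higher risk to whoever was arrested more often
# before, i.e. it ranks by frequency of past patterns, with no notion of
# why those patterns exist.
for nb in (0, 1):
    risk = 1.0 / (1.0 + np.exp(-(w[0] + w[1] * 2 + w[2] * nb)))
    print(f"neighborhood {nb}, 2 prior arrests -> predicted risk {risk:.2f}")
```

Nothing in that code knows what an arrest is; it only knows which patterns occur more often, which is exactly the difference between statistics and interpretation.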
Since about 2005 there have been a number of experiments with reaction-diffusion computers. In twenty years or so, with millions of dollars invested in reaction-diffusion research, I suspect they might be as intelligent as a chicken. Given world enough and time, artificially created reaction-diffusion computers will be as smart as humans, but the trade-off is that they will make the same kinds of mistakes that humans make: they will process data very inefficiently, and they will be difficult to control. They won’t make very good tools. Current AI, by contrast, is excellent as a tool for humans with good judgment to use.
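For readers unfamiliar with the substrate, the following is a minimal sketch of a classic reaction-diffusion system, the Gray-Scott model, simulated in Python. The parameter values are illustrative textbook values, not taken from any particular reaction-diffusion computer; the point is only to show the kind of self-organizing, hard-to-steer chemistry these machines compute with.

```python
# A minimal sketch of the self-organizing chemistry that reaction-diffusion
# computing exploits: the Gray-Scott model. Parameters are illustrative
# textbook values, not drawn from any specific experimental RD computer.

import numpy as np

def laplacian(grid):
    """Five-point Laplacian with periodic (wrap-around) boundaries."""
    return (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
            np.roll(grid, 1, 1) + np.roll(grid, -1, 1) - 4 * grid)

size = 128
Du, Dv = 0.16, 0.08      # diffusion rates of the two chemicals
F, k = 0.035, 0.065      # feed and kill rates (a spot-forming regime)

u = np.ones((size, size))
v = np.zeros((size, size))
# Seed a small square of the second chemical; patterns grow outward from it.
u[60:68, 60:68] = 0.50
v[60:68, 60:68] = 0.25

for step in range(10_000):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# After enough steps the grid organizes into spots and stripes with no
# central controller; "computing" with such a medium means steering where
# those waves and spots form.
print("v range after simulation:", v.min(), v.max())
```

Coaxing useful answers out of where the spots and waves happen to form is why such a medium can be powerful, wasteful of effort, and hard to control all at once.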