The AI Singularity is Not Nigh

Andrew Barisser
Published in HackerNoon.com
Nov 25, 2016



It has become common to hear that the ‘AI Singularity’ is soon upon us: Artificial Intelligence will reach such an advanced state that it will update itself, leading to an exponential explosion of self-augmenting sophistication. When algorithms reach that state, they will be so advanced and so thoroughly self-editing that humans will no longer be able to understand them. They will effectively exit our control.

Even a modestly intelligent AI could spend nearly endless computer cycles to improve itself, and thereby achieve stupendous results in very little ‘human’ time. Imagine a mediocre programmer given 10,000 years to improve a piece of code. Surely even the slowest dullard could accomplish something. Once an algorithm has the sophistication of even the most modestly gifted humans, it could commit time to bootstrapping itself in ‘computer clock time’. A single human day would be eons in such time, more than enough to upgrade. At such a rate, a self-augmenting AI could quickly surpass even the brightest person. From there it would diverge into territory yet unexplored by human minds.

The key prerequisite is an AI that can design itself analytically, i.e., one that can inspect its own source code with intuition and reason about it the way a human would. The singularity will not occur as a result of naive optimization. Modern machine learning algorithms are trained via optimization techniques. These vary, but they generally boil down to a) brute-force approaches and b) blind analytical optimizations. Backpropagation in a neural network can improve results; it is a straightforward analytical technique, but it is not a reasoned, intuited conclusion. This is the sort of dumb, mechanical operation suited to computers. It is not how humans design systems. When an algorithm can design in the human way, however badly it may do so, it may achieve the exponential growth envisioned in the singularity.
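To make concrete how mechanical this kind of optimization is, here is a minimal sketch of gradient descent on a one-weight model, written in plain NumPy with invented toy data chosen purely for illustration. Every ‘improvement’ the loop makes is the same arithmetic update rule applied over and over; nothing in it inspects or reasons about the model’s own design.

    import numpy as np

    # Toy supervised problem: learn y = 2x from a handful of examples.
    # The data and learning rate are arbitrary, chosen only to illustrate.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([[2.0], [4.0], [6.0], [8.0]])

    w = np.random.randn(1, 1)            # single weight, randomly initialized
    learning_rate = 0.01

    for step in range(1000):
        pred = X @ w                      # forward pass
        error = pred - y
        loss = np.mean(error ** 2)        # mean squared error
        grad = 2 * X.T @ error / len(X)   # gradient of the loss w.r.t. w
        w -= learning_rate * grad         # the entire 'learning' step: blind arithmetic

    print(w)  # drifts toward 2.0 with no 'understanding' of the problem

The point is not that this is bad engineering; it is that the optimization never steps outside its fixed update rule, which is exactly the gap between today’s training procedures and an algorithm that redesigns itself.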

It is precisely this sort of human reasoning that is nowhere on the radar in AI research. We will not attain it for 50 years.

Where machine learning has achieved its greatest successes, it has eschewed higher-order reasoning and focused on narrowly tailored, supervised learning problems. This is not to take away from its success. Modern algorithms can accomplish incredible feats in image processing, NLP, and many other areas. But generally these solutions boil down to fitting models to known data with effective but limited methods, like backpropagation.
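As a rough sketch of how narrow these successes are, consider what a typical supervised pipeline actually does. The example below uses scikit-learn’s bundled handwritten-digits dataset, an arbitrary choice for illustration: a fixed model is fit to labeled examples and scored on held-out examples of the exact same task, and it has no notion of anything outside that task.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # A narrowly tailored supervised problem: 8x8 images of handwritten digits.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit a fixed model to known, labeled data.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # High accuracy on the one task it was trained for -- and nothing beyond it.
    print(model.score(X_test, y_test))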

This is not the kind of learning necessary for an algorithm to bootstrap itself. We have not remotely approached it.

The AI singularity is massively overhyped in part because we have underestimated the sophistication of systems designed by nature. The complexity of biological systems is easy to overlook, and there is a long history of doing so. But in truth, very few human systems compete with biological ones in those areas for which evolution has selected. In all sorts of fields, the more we delve, the more we discover how difficult it is to match what nature accomplishes.

Biological systems should not be thought of as simply ‘natural’. They should be thought of as highly advanced ‘technology’ that happens to have arisen through selective processes over billions of years. Biological components are, in effect, alien artifacts: they possess technical capabilities and design motifs dramatically different from, and often superior to, our own. That they were created in a stochastic environment does not lessen their sophistication.

Computer scientists and other technologists often underestimate the complex depth of biological systems, so they are more prone to the hubris of believing they can equal them without much effort. Of all the biological technology we have ever encountered, the human brain is by far the most advanced. To think that we could construct something equal to it, when we struggle to replicate even the most basic natural processes, is exceedingly optimistic. It is to dramatically discount the inestimable engineering endowed in us by evolution.

Even the simplest distinctly human tasks are beyond the reach of algorithms, and will remain so for the foreseeable future. Algorithms excel, rather, at tasks for which humans were never well suited, such as intensive mathematical operations. The most recent accomplishments in ML, while impressive in isolation, only demonstrate the utter inability of algorithms to reason, generalize, or intuit: precisely the skills necessary for the AI singularity.

While the AI Singularity is not an impossibility, it is dramatically farther off than is commonly imagined. This is largely because we focus on the technologically impressive advancements that have just arrived (deep learning) and ignore the mundane but wildly more sophisticated ‘biological technologies’ we have lived with our whole lives. Equalling nature is not strictly impossible, but it is an extremely tall order.

Follow me on Twitter at Andrew Barisser

