Discontinuity on the Road to Superintelligence

Is the future emergence of human-level artificial intelligence a threat? I argue that the answer to this (tremendously complex) question boils down to a deceptively simple property of the road we take to AGI. Existential superintelligence threats are only worth worrying about if the entity that emerges is something other than a straightforward extension of ourselves. But there is a little more to the question than that. There is no fully robust case that a discontinuous improvement in civilization's intelligence is a threat to us in the first place, and we have reason to think that our aversion to that scenario is merely an echo of the othering and tribalism tendencies embedded in our psyche by the environment in which we evolved.

The argument over which scenario will play out (continuous vs. discontinuous) has been rehashed many times. Some commentators hold it obvious that superintelligence will be created in a discontinuous process, because 1) human evolution is very slow and 2) we are not far from computer networks that mathematically exceed the computational capacity of the human mind. The counterargument is summed up, in my view, by the slogan "we are already cyborgs." Ray Kurzweil and others promote this view heavily, sometimes taking it as obvious that a continuous transition to superintelligence will play out. The reasoning goes: first, brain-computer interfaces have a lot of room for improvement, and these interfaces can be interpreted as part of the computational structure of the brain to begin with. Then, light modifications to human brains through implants could boost the bandwidth of the connection to computers even further, letting our own brains be the nexus of the superintelligence explosion.

The counter-counterargument notes that heavy modification of living human biology is very difficult to pursue, because of the human cost of failures, our instinct for self-preservation, and an extremely encumbering regulatory framework. There is also an implication that modifying the human brain is inferior to the alternatives: much of its structure cannot be reworked while the person is still living, and it is easier to start over from scratch. And modifying either the human genetic code or a child's brain during the early stages of development is less likely to be ethically permitted than any other option on the table.

These arguments over which path we take are all well-informed, and that makes this particular aspect of the future extremely difficult to predict.