Bounded intelligence
Scared of killer AI? According to @elonmusk, Stephen Hawking and a slew of very respectable minds, we should be.
Exponential growth is fun, except when it isn’t. It’s not fun when it implies we’re about to be rendered obsolete by exponentially self-improving AI. Thankfully, exponential growth is usually bounded by some constraining factor. Michael Crichton said it well in The Andromeda Strain:
“The mathematics of uncontrolled growth are frightening. A single cell of the bacterium E. coli would, under ideal circumstances, divide every twenty minutes. That is not particularly disturbing until you think about it, but the fact is that bacteria multiply geometrically […]. In this way it can be shown that in a single day, one cell of E. coli could produce a super-colony equal in size and weight to the entire planet Earth.”
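Crichton’s arithmetic is easy to check on the back of an envelope. Here’s a quick sketch in Python, assuming a cell mass of about one picogram (a standard ballpark figure, not a number from the book) and taking Earth’s mass as roughly 5.97 × 10^27 grams:

```python
# Back-of-envelope check of Crichton's claim, assuming ~1 picogram
# per E. coli cell and a 20-minute doubling time.
CELL_MASS_G = 1e-12        # assumed mass of one E. coli cell, in grams
EARTH_MASS_G = 5.97e27     # mass of the Earth, in grams
MINUTES_PER_DOUBLING = 20

doublings = 0
mass = CELL_MASS_G
while mass < EARTH_MASS_G:
    mass *= 2
    doublings += 1

hours = doublings * MINUTES_PER_DOUBLING / 60
print(f"{doublings} doublings = {hours:.0f} hours to outweigh the Earth")
# -> 133 doublings = 44 hours
```

Under these assumptions it takes closer to two days than one, but the ballpark holds: unconstrained doubling gets absurd, fast.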
The takeaway here is that ideal circumstances for scary, uncontrolled growth exist only in math books and Ray Kurzweil presentations. The reality is that we, or rather our brains, are the most efficient computers available for the foreseeable future. We have been building really smart systems for a while now, but there are fundamental limits to what we can build, and further limits on what anything we build can become.
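The ‘constraining factor’ is easy to see in a toy model. Here’s a minimal sketch contrasting uncontrolled doubling with logistic growth, where a carrying capacity K (an arbitrary illustrative number, standing in for food, space, heat, whatever runs out first) throttles growth as the population approaches it:

```python
# Uncontrolled doubling vs. logistic growth with an assumed,
# purely illustrative carrying capacity K. The same growth rate R
# drives both; the factor (1 - n/K) throttles the bounded curve.
R = 1.0   # per-step growth rate (one doubling per step)
K = 1e9   # illustrative carrying capacity: the constraining factor

exp_n, log_n = 1.0, 1.0
for step in range(40):
    exp_n += R * exp_n                    # uncontrolled: n -> 2n each step
    log_n += R * log_n * (1 - log_n / K)  # bounded: growth stalls near K
print(f"uncontrolled: {exp_n:.3g}, bounded: {log_n:.3g}")
# -> uncontrolled: 1.1e+12 (and still doubling), bounded: 1e+09 (flat)
```

The first curve keeps doubling forever; the second flattens out at K no matter how long you run it. Real colonies, and real computers, live on the second curve.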
Thermal density limits how much computing power we can pack into a given volume, and therefore sets a floor on the size of any computer that could do what our brains do. The software algorithms we design are far less efficient than the mental ones we use to design them, so a computer has to do much more ‘work’ than a brain to get similar results. And the use cases where a computer easily outperforms a human are narrow ones, with little bearing on life questions like which job is fulfilling or whether or not to have kids. Speed gains, even self-designing computers, will not change that.
This latter point is important. Intellect is more than compute: no amount of compute adds up to intellect, because intellect involves a sense of what thinking is all about, a sense of self, and a sense of purpose. A really smart computer can self-optimize, but without a sense of self, or of what ‘optimal’ means, what’s the point? Just computing more and faster (even if that were possible without overheating or growing unsustainably huge) is no threat to us meatbags. It cannot compete, because it lacks the sense of identity that makes competition a thing to do. And even if it had one, it does not follow that humans and smart AI would share a set of constraints such that aggressive competition is necessary to either’s existence.
For the time being, the deranged nature of many of our fellow meatbags is a far more fundamental threat than killer AI. Although, if I had to choose, being Terminated by a robot with a German accent would be a much cooler way to go than beheading by a crazy Jihadi. I still prefer old age.