AI as an existential threat to humanity?

Nick Bostrom, a Swedish philosopher turned futurist, argues that “AI” will become “superintelligent” and that this is an existential threat to humanity. This has some people worried and has spurred a lot of discussion: is this realistic, or science fiction?

I’m sure Professor Bostrom is a good guy, and I’m fond of philosophers (although not necessarily of the academic kind).

To be fair, he’s not the only pessimist forecasting the end of humanity at the hands of AI.

Several others have predicted the rise of super-intelligent machines to the peril of humanity, including Elon Musk, Ray Kurzweil and Stephen Hawking. These are fairly intelligent people, but what exactly is their futuristic prediction based on? It’s tempting to draw attention to a doomsday scenario associated with something the vast majority of people don’t comprehend. The news is full of existential threats; why not add one more?

After studying Bostrom’s book, it’s clear that the entire thesis hinges on one relatively straightforward assumption: that software will be able to evolve itself.

The idea is actually quite simple: a computer program that is able to make itself more advanced gets out of control and “takes over.” It makes many copies of itself, propagates across the Internet, and so on. The philosophical conundrum posed by Bostrom and others concerns how humanity might “control” such a thing while that is still possible.

Indeed, this cornerstone idea makes logical sense, even to non-coders. So long as humans are the only ones creating code, they retain control. Code that doesn’t create itself cannot evolve without human involvement.
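To make that distinction concrete, here is a minimal, purely illustrative Python sketch (the file names and functions are hypothetical, not from the book): a program can trivially copy its own source code, but the step that would make it “more advanced” has no known implementation to fill in.

```python
# Illustrative sketch only. Copying code is trivial; the improve() step is the
# part for which no general recipe exists, and it is the whole premise.

import sys


def replicate(source_path: str, copy_path: str) -> None:
    """Self-replication is easy: a program can copy its own source verbatim."""
    with open(source_path, "r", encoding="utf-8") as src:
        code = src.read()
    with open(copy_path, "w", encoding="utf-8") as dst:
        dst.write(code)


def improve(code: str) -> str:
    """Self-improvement is the open problem: what transformation makes a program
    'more intelligent', and how would the program itself measure that?"""
    raise NotImplementedError("No known theory of thought to encode here.")


if __name__ == "__main__":
    replicate(sys.argv[0], "copy_of_self.py")   # works today
    # improve(open(sys.argv[0]).read())         # the missing piece
```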

If you prefer not to read the entire book, watch here beginning at 43:25 (the quotes below are verbatim).

“And at some point, presumably in this whole-brain emulation, at some point probably fairly soon after that point, you will have synthetic AIs that are more optimized than whatever sort of structures biology came up with. So there’s a chapter in the book about that. But the bulk of the book is — so all the stuff that I talked about, like how far we are from it and stuff like that, there’s one chapter about that in the beginning. Maybe the second chapter has something about different pathways.”

“But the bulk of the book is really about the question of, if and when we do reach the ability to create human-level machine intelligence — so machines that are as good as we are in computer science, so they can start to improve themselves — what happens then?”

The bulk of the book, he says, and the entire premise of AI as an existential threat to humanity, rests on one thing: code evolving itself.

Intelligence is not understood

The vast majority of the software carrying the label “AI” is simply automated knowledge work. The scientific work referred to as “AGI” (artificial general intelligence) has relatively few resources applied to it, especially as commercial attention increasingly flows to so-called “AI.” The knowledge work misleadingly labeled “AI” doesn’t lead to the “singularity” event futurists proclaim; rather, it leads to more knowledge work being done by software.

A human toddler can acquire knowledge, make choices through reason, and think about and conceive of the world in adaptive ways. No software has this cognitive capacity today.

Software is ultimately a set of equations, and we are very far from having equations for human thought. In fact, we don’t have equations for the thought of insects, including species known to have a high degree of social intelligence.

A bee has roughly 800,000 neurons. It acquires information, communicates the position of food, navigates three-dimensional space, produces honey, and collaborates in a social hierarchy. If we had software as “smart” as a bee, that would be significant. We’re nowhere near that today, and most of the attention is elsewhere, unfortunately.

The unbounded intellect of Homo sapiens?

Noam Chomsky and others have argued that the scope and range of human intellectual capacity could easily be insufficient to produce comprehensive theories of thought. Humans are organic creatures with limited capacity. Why would we assume the opposite: that human mental capacity is so expansive as to thoroughly understand the mind and cognition?

We have formulas for accounting, so we can program computers to perform knowledge work in accounting. We have patterns for winning chess moves, so we can program computers to win at the game. We simply do not have a theory of thought, even for simple organisms, so creating software that thinks remains beyond our capacity.
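To illustrate the contrast, here is a minimal Python sketch with made-up numbers (the functions and values are hypothetical, chosen only for illustration): formulas for accounting and patterns for chess translate directly into code, while there is no analogous formula for thought to encode.

```python
# Illustrative sketch: where explicit rules exist, they translate directly into code.

def compound_interest(principal: float, annual_rate: float, years: int) -> float:
    """Accounting has formulas; encoding them is routine knowledge work."""
    return principal * (1.0 + annual_rate) ** years


def material_score(board: dict) -> int:
    """Chess has explicit patterns; a crude one is counting piece values."""
    values = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}
    return sum(values[piece] * count for piece, count in board.items())


if __name__ == "__main__":
    print(compound_interest(1000.0, 0.05, 10))                        # ~1628.89
    print(material_score({"p": 8, "n": 2, "b": 2, "r": 2, "q": 1}))   # 39
    # There is no comparable formula for thought to put in a third function.
```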

One thing is certain: to create code that evolves itself, that code would need cognitive capacity far beyond that of an insect or a toddler.

Will we ever be able to achieve this? We cannot know; however, it seems that other existential threats to humanity are far more deserving of attention.
