The AI Cataclysm and Other Conspiracy Theories

Alen Ladavac
9 min read · Sep 10, 2017


You’ve probably heard that Elon Musk said that AI is more dangerous than North Korea.

He’s not the only one. Stephen Hawking and Bill Gates have also voiced serious concerns that the continuing development of AI technologies at this rate will soon lead to our collective demise. So what do we have here? Three famous intellectuals: Stephen Hawking is one of the best living physicists, Bill Gates is well known for his Windows operating system, Elon Musk has brought us PayPal, Tesla cars and reusable rockets… and they are all saying that AI is the bane of our future.

That sounds serious. Since they are all such valued professionals, we should believe them, right? All the other researchers in the field of AI would certainly agree with them, and so should we. Right? Erm, no... Actually, they don’t. Which is completely unsurprising, since the big three have no credentials in the field of AI. As Kevin Grandia put it:

When I think I’m having chest pains I don’t go to the dermatologist, I go to a cardiologist because it would be absurd to go to skin doctor for a heart problem.

So whom should I trust more on the issues of AI: a guy who used to make operating systems, or someone who actually works on AI research for a living?

Allow me to be blunt and compare this to the famous Petition Project, which claims that 30,000 US scientists agree that human-caused climate change is not happening. It’s just that only 0.1% of those signatories are actually climate scientists, and the poll represents a biased sample of a very small fraction of scientists. In fact, 97% of relevant experts agree that climate change is real, and caused by us humans.

I could be evil and compare that to Gwyneth Paltrow suddenly becoming an expert on nutrition, or worse (giggles). Anyway… let’s get back to the subject…

The main point of comparing those statements with Gwyneth Paltrow-level quackery is that a modern intellectual who wants to be taken seriously should provide arguments beyond “I’m successful, you should listen to me”. Arguments that we would all expect those three to have provided. Have they? It’s hard to say.

I’ve spent some time searching for their original statements, but all I can find is the Open Letter on Artificial Intelligence, which is alleged to be the underlying basis for their claims. The letter was co-signed by other scholars who do have AI research credentials. However, the Letter is rather general and hand-wavy. It is titled “Research Priorities for Robust and Beneficial Artificial Intelligence”, and the word “danger” doesn’t even appear in it.

That certainly didn’t prevent Elon Musk from tweeting that AI is “vastly more risky than North Korea”, or Bill Gates from saying that he “doesn’t understand why some people are not concerned”. Why are they saying that? Is there a good reason for it? If there is, they are not sharing it with us. Are they just scaremongering for publicity? Who knows. Are they just being misrepresented by reporters? We sure aren’t getting any rebuttals. Do they have a hidden agenda? Hard to say. But there’s one thing we can do to learn more: we can examine the facts. Bear with me for a bit…

First of all, we have to understand the three most general levels of AI:

Bishop, a fictional human-like android from the Aliens movie (credit: Lance Henriksen, James Cameron)
  1. AI is the (pretty dumb) artificial intelligence as we know it. These are basically computer programs that do something beyond merely executing predetermined algorithms, usually based on some kind of “training” or data extrapolation.
  2. AGI is Artificial General Intelligence. That would be AI strong enough to be comparable to a human. This is how we envision sentient androids in most sci-fi movies: think C-3PO from Star Wars, Ash and Bishop from Alien and Aliens, or Ava from Ex Machina.
  3. ASI is Artificial Super Intelligence. This would be a sentient machine much, much more intelligent than the smartest human ever.

There’s one additional category that goes beyond this, and that’s recursive self-improving AI. The idea is that if a silicon-based AI were smart enough (already at ASI, or at least AGI, level) and given a chance to modify its own architecture and algorithms, it could in theory improve itself, and the improved self would be even more capable of self-improvement… eventually yielding a “technological singularity”: a sudden exponential chain of improvements leading towards unimaginable capabilities.

Such a theoretical construct would be susceptible to the (theoretically really serious) danger of so-called “runaway AI”. In a nutshell, the concept of runaway AI is based on the premise that an AI is given the ability to improve itself, and is also given a seemingly innocuous task. (In the original story, in a really bad example of the popular trope, the AI was told to “make sure we never run out of paperclips again”.) The AI then takes the task to heart, but not having any (heart, that is, and thus no morals), it uses outrageous methods to reach the result. In some variants of the story, the AI creates an army of self-replicating nanobots that destroy the entire planet and turn all available material into whatever the AI was tasked to create.

That’s some great material for a Hollywood movie right there. But how real are those risks in practice? For starters, the human brain has a bit under 100 billion neurons, but each neuron is connected to about 10,000 other neurons, forming a total of about 1,000 trillion synapses. Sounds like a lot. Can modern technology reach those numbers? This is where it becomes confusing.
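For a rough sense of scale, here is a quick back-of-envelope check of those brain numbers (rounded figures, just to see how the total comes about):

```python
# Back-of-envelope check of the brain numbers above (rounded figures).
neurons = 100e9                 # a bit under 100 billion neurons
synapses_per_neuron = 10_000    # each connected to ~10,000 others
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # ~1e+15, i.e. about 1,000 trillion synapses
```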

Neurosynaptic core (credit: IBM)

On one hand, IBM claimed to have simulated 530 billion neurons and 100 trillion synapses way back in 2012. On the other hand, Google’s AlphaGo AI is able to reliably beat the best human players in the world in the game of Go using a neural network with fewer than 20,000 neurons. Now wait a minute. Holy guacamole! Those numbers don’t even make sense, do they? Of course they don’t, because the daily newspapers and scary social media posts feed us sensationalist crap every day about what this whole thing with AI and neural networks actually means.

“AI”, as a broad term in computer science and engineering, can mean many things. Sometimes we (software engineers) say “AI” when we actually mean “a very carefully crafted plain old algorithm that fools the user into thinking the computer is smart”. I work in computer games, and 99% of the “AI” we do is exactly that. We cheat a lot. Many parts of even supposedly “real AI” systems, like search engines, digital assistants etc., are just that: an elaborate list of if-then conditions that merely appears to be (somewhat) intelligent.
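To make that concrete, here is a minimal, purely hypothetical sketch of the kind of “AI” many game characters actually run on: a handful of hand-written if-then rules that only look intelligent to the player.

```python
def guard_ai(can_see_player, distance_to_player, health):
    """A hypothetical game 'AI': just a fixed list of if-then rules."""
    if health < 20:
        return "flee"      # reads as self-preservation
    if can_see_player and distance_to_player < 2.0:
        return "attack"    # reads as aggression
    if can_see_player:
        return "chase"     # reads as pursuit
    return "patrol"        # reads as vigilance

print(guard_ai(can_see_player=True, distance_to_player=1.5, health=80))  # "attack"
```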

In a more traditional sense, “AI” could mean a genetic algorithm, a Bayesian network, a fuzzy-logic setup… but most often it is a “neural network”. Such a neural network consists of “neurons” and “synapses”, but the relation to actual biological neurons and synapses is very, very vague. The underlying principles of connectivity and weighting are similar, but there are numerous differences between most artificial neural networks and real biological ones:

  1. Artificial neural networks usually cannot “grow” new synapses as they please, and they use much simpler topologies.
  2. They are task-specific (playing Go) instead of general (you can teach your dog to fetch your slippers, but AI is still far from that).
  3. They have much simpler training algorithms (we don’t even know how biological networks actually train).
  4. They are much faster (think gigahertz frequencies and near light-speed signals) than biological ones (think hundreds of hertz and about 120 m/s signal speed).
  5. They run on a synchronized clock, while biological ones run asynchronously (we don’t even know exactly what difference that makes, but it certainly makes one).
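For the curious, here is a minimal sketch of what an artificial “neuron” boils down to: a weighted sum of inputs squashed by an activation function. The numbers are made up; the point is only that the “synapses” are just weights in an array.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs, then a squashing function."""
    return np.tanh(np.dot(inputs, weights) + bias)

x = np.array([0.2, -1.0, 0.5])   # signals arriving on three "synapses"
w = np.array([0.8, 0.1, -0.4])   # the weights are what "training" adjusts
print(neuron(x, w, bias=0.05))
```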

The classic artificial neural network (the kind used in today’s voice assistants haplessly failing to understand what we are saying, winning games of Go, detecting nipples in your Facebook posts, and doing other such “super-important” tasks) is thus very different from the biological one. There is, however, another approach that tries to emulate biological networks much more closely: “neuromorphic” artificial neural networks, which use computers to directly emulate what a biological network does. (With the caveat that we are not yet 100% sure we know exactly what all the things it does are, and how.) This is what the aforementioned IBM project was doing back in 2012. It emulated a huge number of (supposedly) life-like neurons… but at a speed 1,500 times slower than a human brain, which means it took 25 minutes to simulate one second of “thinking”. If that “brain” (which was still much simpler than a human brain) had to go through the training an average human goes through (say, 20 years of growing up and education), it would need roughly 300 centuries to reach maturity. Let that sink in for a bit. 300 centuries.
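Those figures follow from simple arithmetic on the rough numbers quoted above:

```python
slowdown = 1500                    # simulation runs ~1500x slower than real time
print(slowdown / 60)               # 25.0 minutes to simulate one second
training_years = 20                # rough human "training" time
print(training_years * slowdown)   # 30000 years, i.e. about 300 centuries
```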

Could we improve on that speed? Theoretically, Moore’s law would give a twofold increase in computing ability every 18 months, leading to comparable speed in about 15 years, which would put it at around 2027. Except that (a) Moore’s law doesn’t actually promise more speed; it just promises more transistors, which doesn’t always translate to speed. Then also (b) that network was using 1.5 million CPU cores (it’s a huge server room), and (c), in their own words, they “[…]have not built a biologically realistic simulation of the complete human brain […but…] mathematically abstracted away from biological detail toward engineering goals of maximizing function and minimizing cost and design complexity of hardware implementation”. There’s still work to do there.
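The ~15-year figure comes from asking how many 18-month doublings it would take to close a 1,500x gap (a rough extrapolation, nothing more):

```python
import math

doublings = math.log2(1500)    # ~10.6 doublings needed
years = doublings * 1.5        # ~15.8 years at one doubling per 18 months
print(int(2012 + years))       # lands around 2027, as noted above
```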

Putting it in the words of a frustrated neuroscientist:

We have no fucking clue how to simulate a brain.

The “best” simulation we have been able to pull off so far is of C. elegans, a very simple worm that has exactly 302 neurons, whose connections we have mapped completely. Yet we still don’t know exactly what it is in there that makes it work the way it does.

C. elegans (source: Wikipedia)

We can’t perfectly simulate a worm yet, people. You can quit being scared of super-human AI, mkay?

So, we can create an “AI” to do some specific task (like play Go, detect nipples, or find cat videos) fairly well, or sometimes even better than humans. But that AI will be limited to that task and will have some very weird limitations (like thinking a panda is a gibbon, or that a Stop sign is a 50 mph speed limit sign, for no apparent reason).
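The panda-to-gibbon mix-up comes from so-called adversarial examples. Below is a minimal sketch of one common way to produce them, the fast gradient sign method (not necessarily the exact technique behind those particular results); `model`, `image`, and `label` are hypothetical placeholders for a trained PyTorch classifier and one of its inputs.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A change too small for a human to notice can flip the predicted class.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```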

In a far future, we might be facing the danger of being terminated by an army of rogue killer robots. But in the present situation, we should be more wary of an army of irresponsible humans ruining our planet before we even get a chance to devise the killer robots.

While I hope this brings the subject of AI a bit closer to the general public, the topic is very wide and much too complex to be covered in a short article. I’d like to expand on some ideas related to this in a future text, so stay tuned for more soon!

Mascot of the Campaign to Stop Killer Robots

P.S. In before the “killer robots”… Yes, I do realize that some people are conflating the AI problem with the ban on killer robots. While the killer-robots problem itself is something else, and is not insignificant, it doesn’t belong to the domain of AI. It is a problem in the same category as landmines, chemical weapons or blinding laser weapons. The problem I’m addressing here is that the three are not explicitly mentioning killer robots as an issue of ethical warfare, but are instead stating nonsense like “[…]humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I”. Which is just unnecessary scaremongering, potentially dangerous to scientific and technological progress.
