3) The future is exponential: Artificial Intelligence

You can barely move in 2017 without hearing the buzzwords ‘AI’ or ‘neural networks’. If the hype is to be believed, we are on the cusp of a great revolution. But just what does this future hold?

We often forget that artificially intelligent machines are already ubiquitous in society: from calculators to email spam filters and Spotify’s automatically curated personalised playlists. But the AIs of the future are set to be entities with capabilities on an unprecedented scale. Today’s machines outperform humans at specific tasks, but AI thought leader Nick Bostrom envisages a future of artificially superintelligent machines (ASIs) that would be ‘much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills’. Essentially, machines that can think for themselves.

No theoretical barriers stand in the way of creating such a machine. After all, human brains are just bundles of neurons bathed in a load of chemicals that, when arranged in the correct way, produce the illusions of consciousness, free will and emotion. If we could reproduce the exact structure of the human brain down to atomic-level resolution, using transistors and software, there is really no reason why this silicon-based entity would not experience the same illusions of consciousness, free will and emotion as us carbon-based life forms.

Of course, getting to this point is no mean feat, but the hardware is already there: neurons fire at around 200 Hz, whereas a standard microprocessor in your computer runs at 2 GHz (10 million times faster). Memory capacity can be expanded many-fold without the space constraints of the human skull. Upload and download speeds are also incomparable (think of how long it takes you to read a book versus how long it takes to load one onto your Kindle). Computers are also far higher fidelity than humans and simply don’t make mistakes; it’s human error that produces malfunctions. It’s the software that we are still struggling to get right.

Research takes a few lines of attack, with whole brain emulation perhaps the most ambitious. Why reinvent the wheel? The idea is to recreate the human brain on a computer and hope that it functions in the same way. The resolution required is beyond current capabilities, though. The OpenWorm project is making good progress on an emulation of the nematode worm’s nervous system, with its 302 neurons. That’s not quite the 100 billion in a human brain, but considering how fast our capabilities are progressing, it is conceivable. 65 years ago we didn’t know the structure of DNA, and now we can sequence all 6 billion bases in the genome in 26 hours for less than $1,000.

Neural networks are massively hyped at the moment, and for good reason. The idea is not to copy the whole brain, but to mimic the way neurons interconnect. A single neuron assimilates multiple positive and negative inputs from other neurons to decide whether to fire or not: an all-or-nothing response. Neurons are then arranged in successive interconnected layers, each taking inputs from the layer before, so that decisions are made at increasingly abstract, higher-order levels.
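To make that concrete, here is a minimal sketch of such a neuron in Python. The inputs, weights and threshold are invented for illustration; real networks learn their weights, as described below.

```python
# A toy artificial neuron: it sums weighted inputs (positive weights are
# excitatory, negative ones inhibitory) and fires only if the total
# clears a threshold -- the all-or-nothing response described above.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True (fire) if the weighted sum of inputs exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

# Two excitatory connections and one inhibitory one (illustrative values):
print(neuron_fires([1.0, 1.0, 1.0], [0.8, 0.9, -0.5]))  # 1.2 > 1.0 -> fires
print(neuron_fires([1.0, 0.0, 1.0], [0.8, 0.9, -0.5]))  # 0.3 < 1.0 -> silent
```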

For example, in a facial recognition neural network (like the one behind the iPhone X’s FaceID), the inputs to the first layer of neurons would be the pixels of the image, each neuron firing in response to one pixel of one colour. These feed into the next layer, where one neuron might fire if there is a black line at a particular point in the image. Further layers then identify progressively larger and more complex features: a patch of black pixels of a particular shape, a set distance from another identical patch, is likely to be the pupil of an eye. These higher-order features finally combine to give the verdict: face or not.

(Diagram: a simple layered neural network, from http://neuralnetworksanddeeplearning.com/chap1.html)
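To see how the layering works mechanically, here is a toy forward pass in Python. The layer sizes, random weights and stand-in ‘image’ are all made up for illustration; this is emphatically not FaceID’s real architecture, just the wiring pattern it shares with any layered network.

```python
import numpy as np

# Toy forward pass through a layered network: each layer takes the
# previous layer's outputs and computes higher-order features from them.
# All sizes and weights here are invented for illustration.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes activations into (0, 1)

layer_sizes = [64, 16, 8, 1]  # 64 "pixels" in, one face/not-face score out
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

activations = rng.random(64)  # stand-in for a flattened 8x8 grayscale image
for w in weights:
    activations = sigmoid(activations @ w)  # each layer feeds the next

print(f"Face score: {activations[0]:.3f}")  # near 1 = face, near 0 = not
```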

The way a neural net ‘learns’ to recognise faces is by training on large banks of sample images that have been manually tagged as face or not-face. Each time the net gets one wrong, it tweaks the weights at each connection, layer by layer, nudging its output towards the correct answer. This repeats over and over until the weights of the network are finely tuned to recognise faces accurately.
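As a minimal sketch of that training loop, the snippet below trains a single neuron (rather than a deep net) by gradient descent on synthetic data; deep networks backpropagate the same kind of weight tweaks through every layer.

```python
import numpy as np

# Minimal training loop: compare the net's answer to the manual tag,
# then nudge each weight to reduce the error. All data is synthetic.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # 200 "images", 8 features each
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)     # synthetic face / not-face tags

w = np.zeros(8)                        # start with untuned weights
learning_rate = 0.5
for epoch in range(100):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid output in (0, 1)
    grad = X.T @ (pred - y) / len(y)       # gradient of cross-entropy loss
    w -= learning_rate * grad              # tweak weights towards fewer errors

pred = 1.0 / (1.0 + np.exp(-(X @ w)))
print(f"Training accuracy: {((pred > 0.5) == y).mean():.2%}")
```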

Neural networks are in full flow and can now match consultant dermatologists at diagnosing skin cancer. In recent research published in Nature, they were even able to predict, more accurately than ever before, which patients in a cohort would go on to develop schizophrenia and other conditions.

Of course, the human brain came to be over millions of years of gradual evolution, driven by a competition for survival in which the best adapted survive. But we can speed this process up with computers, creating ‘genetic algorithms’ that mimic the evolutionary process to produce intelligent machines. We also don’t have to be entirely random about the ‘mutations’ we introduce to the code, which cuts the time down further. Eventually, the idea goes, the machines will have greater insight into their own algorithms than we do and will be able to improve them much faster. This is called recursive self-improvement, and it would likely result in an intelligence explosion: the more intelligent the machines get, the better they become at further increasing their own intelligence, and so on up an exponential curve.
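A genetic algorithm is easy to sketch. In this toy version (the target string, population size and mutation rate are all invented for illustration), a population of bit-string ‘genomes’ is scored, the fittest fifth survives to reproduce with random mutation, and fitness climbs generation by generation:

```python
import random

# Toy genetic algorithm: selection of the best adapted plus random
# mutation, repeated over generations, evolves genomes towards a target.

TARGET = [1] * 32   # the "best adapted" genome, chosen arbitrarily
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 60, 0.02

def fitness(genome):
    """Count positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 5]   # the best adapted survive
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}/32")
```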

Another potential future is that of transhumanism, which aims to directly augment human capabilities with technology so that we might become superintelligent beings ourselves. I won’t delve into this here, as it will be the subject of a future post.

So just how intelligent might these beings be, and what would that enable us to do? Well, IBM is already making neuromorphic chips based on neural network technology that contain five times as many transistors as a standard Intel processor yet consume 2,000 times less power. In the future, we could use AIs to reach unprecedented levels of productivity, propelling us to exponential heights in medicine, physics, engineering and essentially every scientific field.

Take, for example, the AI-driven 3D printer that can build beautiful bridges at a fraction of the cost and materials, with greater stability. Or Google DeepMind’s machines, now learning to play the video game StarCraft II, which demands imagination and executive function rather than just pattern recognition or specific skills. Even the Toronto Raptors NBA team are using IBM’s Watson to analyse gaps in their game.

It’s almost impossible to appreciate the full potential of ASI, though. Tim Urban’s Wait But Why posts on AI are the best I have seen, and they have been summarised on Medium by Pawel Syziak. Urban describes an intelligence staircase: ants on one step, chickens a few steps above, and apes and humans each a couple of steps further up.

Now let’s imagine what it would be like to exist 2 steps above a human on the staircase.

That cognitive gap would be as large as the chimp-human gap. Just as a chimp can never comprehend the workings of a mobile phone, or even hold the concept of such a thing in its mind, we would simply be unable to conceive of what something of that intelligence could do. And that’s only 2 steps above us. We can go further:

‘A machine on the second-to-highest step on that staircase would be to us as we are to ants. Super intelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ — we don’t have a word for an IQ of 12,952.’

‘But the kind of superintelligence that we’re talking about is well beyond anything on this staircase. In an intelligence explosion a machine might take years to rise from the ant step to reach the intelligence level of an average human, but it might take only another 40 days to become Einstein-smart. When that happens, it works to improve its intelligence using recursive self-improvement with an Einstein-level intellect and can thus make large leaps which will rapidly make it much smarter than any human, allowing it to make even bigger leaps. From then on, following the rule of exponential advancements and utilising the speed and efficacy of electrical circuits, it may perhaps take only 20 minutes to jump another step, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level artificial general intelligence, we might be facing the reality of co-existing on the Earth with something far above us on the staircase (or maybe a million times higher).’

Of course, if we can successfully harness its power, or indeed become one with it through transhumanism, it will revolutionise everything we know. Even death itself could become a thing of the past. In the words of Richard Feynman:

‘It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws through studying physics to see that it is either absolutely impossible, or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what is causing us the trouble and that this terrible universal disease or temporariness of the human body will be cured.’

However, as Tim Urban notes ‘since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what an artificial superintelligence (ASI) will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means. If our meagre brains were able to invent WiFi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the position of each and every atom in the world in any way it likes, at any time — everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. As far as we are concerned, if an ASI comes into being, there is now an omnipotent God on Earth — and the all-important question for us is: Will it be a good god?’

As Elon Musk says,

‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea’

Bill Gates is also worried:

‘I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that…I don’t understand why some people are not concerned.’

And Stephen Hawking:

‘The development of full artificial intelligence could spell the end of the human race.’

There is simply no way we can expect to control an ASI. Some suggest we could just box it up and allow it no access to the outside world, but humans are fallible and prone to break such rules for their own gain. Besides, there’s no knowing what an ASI could do: Nick Bostrom suggests that even a boxed-up AI could shift its electrons around in such a way as to create radio waves, allowing it to manipulate the outside world.

To illustrate this point further, I can’t help but lean on Tim Urban again:

‘A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
“We love our customers. ~Robotica”
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”
Turry then starts work on a new phase of her mission — she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…’

Maybe instead of controlling an ASI we could instil it with inbuilt rules: rules integral to its function and impervious to change, even under recursive self-improvement, that ensure it remains benevolent to us. This is hard enough given that we humans seem to agree on very little, let alone on which moral values to adhere to. Even if we did decide, there is the colossal challenge of encoding those values into computer code. The first and most famous attempt at creating such a utility function is Isaac Asimov’s three laws from I, Robot:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws

However, these laws raise many questions. Do we want robots to take orders from all humans? Do we really want a robot to have a survival instinct, when surely the whole point of some robots is that they can enter dangerous situations? What counts as harm to a human being: junk food, other humans? And might a robot reason that if the whole human race were extinct, there would be no humans left to harm?

Another method is to take advantage of the ASI’s superior intelligence and let it decide for us what we would want. For example, one could hold up an empty envelope, point the ASI to it and say, ‘in that envelope I have written down the values we want you to adhere to’. The ASI would then have to work out for itself what the best thing to do for the human race would be in each situation. AI legend Eliezer Yudkowsky formalises this in the following utility function:

‘Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.’

Now, when might all this happen? Of course, the answer is that no one knows, but the median estimates from a 2013 survey of hundreds of AI experts gave a 10% likelihood of reaching human-level artificial intelligence by 2022, a 50% likelihood by 2040 and a 90% likelihood by 2075. And as we have already seen, it’s highly likely that greater-than-human intelligence will follow shortly after.

So get ready for either immortality or extinction in a few decades.