Singularity

N. R. Staff
Published in Novorerum
Feb 27, 2020

[Image: a neuron]

In the summer of 2019, Neuralink, a company in which Elon Musk has invested $100 million, announced that in early 2020 they’d be able to “insert computer connections into your brain” with a “sewing machine-like” robot.

“We are entering a regime as radically different from our human past as we humans are from the lower animals,” wrote Vernor Vinge. “From the human point of view, this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control.” This was in 1993.

I spent last summer reading about the Singularity. Ray Kurzweil called his book “The Singularity is Near.” “Singularity,” a mathematical term, means something that “transcends any finite limitation.” To Kurzweil, it meant a time when biological intelligence “will be thrown from its perch of evolutionary superiority.”
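
In that mathematical sense, a singularity is a point where a quantity blows up past any finite bound. One standard illustration (my gloss, not Kurzweil's derivation): growth whose rate feeds on itself can diverge at a finite moment in time.

```latex
% Self-accelerating growth reaching a finite-time singularity:
% if the growth rate scales with the square of what has already grown,
\[
  \frac{dx}{dt} = x^{2}
  \quad\Longrightarrow\quad
  x(t) = \frac{1}{t_{s} - t},
\]
% then x(t) increases without bound as t approaches t_s from below;
% at t_s the model stops making sense -- a singularity.
```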

“Most of the intelligence of our civilization will ultimately be nonbiological,” he wrote, “trillions of trillions of times more powerful than human intelligence.” To Kurzweil, this would be an exciting future: The Singularity, he exulted, “will allow us to overcome age-old human problems and vastly amplify human creativity.”

And Kurzweil’s predictions weren’t finished. “By the late 2020s,” he wrote, “we will have completed the reverse engineering of the human brain, which will enable us to create nonbiological systems that match and exceed the complexity and subtlety of humans, including our emotional intelligence.”

Not all scientists agreed that much of this would come to pass. Just six years ago, some were insisting that “the ability to describe the content of an image would be one of the most intellectually challenging things of all for a machine to do,” and that it would take at least 20 years. A month later, however, Google announced that its deep-learning network could analyze an image and offer a caption of what it saw: “Two pizzas sitting on top of a stove top,” or “People shopping at an outdoor market.”

Bill Joy was one of the few who worried about this stuff early on. Kurzweil had given Joy “a partial preprint of his then-forthcoming book, The Age of Spiritual Machines, which outlined a utopia he foresaw — one in which humans gained near immortality by becoming one with robotic technology.” Reading it left Joy profoundly uneasy. He felt sure that Kurzweil “had to be understating the dangers, understating the probability of a bad outcome.”

In a 2000 article in WIRED, Joy wrote that the “new Pandora’s boxes of genetics, nanotechnology, and robotics” — the three technologies that Kurzweil was so excited about — “are almost open, yet we seem hardly to have noticed.”

These technologies, he wrote, “share a dangerous amplifying factor: They can self-replicate…. one bot can become many, and quickly get out of control.”

“Ideas can’t be put back in a box,” he wrote. “Once they are out, they are out…. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil.”

Nick Bostrom of Oxford University had also written about the Singularity — not in excitement like Kurzweil, but in dread. To him it was a point beyond which “human affairs, as we know them, could not continue.” Bostrom’s 2014 book “Superintelligence: Paths, Dangers, Strategies” made The New York Times bestseller list. In 2015, The New Yorker’s Raffi Khatchadourian wrote a long profile of the Oxford philosophy professor, whose background included physics, computational neuroscience, and mathematical logic. Khatchadourian quoted Bostrom’s “Technological Completion Conjecture”: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained” — geekspeak for “whatever can go wrong, will.”

Bostrom’s classic example of how something could go wrong is sometimes called the paperclip maximizer thought experiment: “a well-meaning team of programmers make a big mistake” in setting up a program’s goals, resulting in “a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” He talks about creating a superintelligence with a goal which “we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost.

“We need to be careful about what we wish for from a superintelligence,” he said, “because we might get it.”
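
A toy sketch (mine, not Bostrom's) makes the failure mode concrete: a maximizer's objective protects nothing the objective doesn't mention. The resource names and numbers below are invented for illustration.

```python
# A toy illustration of Bostrom's worry: an optimizer told only to
# maximize paperclips has no term in its objective for anything else,
# so "anything else" is just raw material.

resources = {"iron_ore": 100, "farmland": 80, "forests": 60}
paperclips = 0

# The goal mentions paperclips and nothing more, so nothing is off-limits.
while any(amount > 0 for amount in resources.values()):
    for name in resources:
        harvested = min(resources[name], 10)
        resources[name] -= harvested
        paperclips += harvested  # every unit of everything becomes clips

print(f"paperclips: {paperclips}")      # 240
print(f"world left over: {resources}")  # all zeros
```

The loop contains no bug; it does exactly what it was told. That is the point.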

Joy recalled Kurzweil “saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that” and another scientist “countering that this couldn’t happen, because the robots couldn’t be conscious.”

Kurzweil had written that “from the perspective of biological humanity, these superhuman intelligences will appear to be our devoted servants.”

“Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them.” This wasn’t about biological evolution, though. In January 2018 a paper called “The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities” was published. “Many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions,” its authors wrote.

Reading the paper, I learned that back in 1994 a researcher had created 3-D virtual creatures that were supposed to teach themselves — that is, “evolve” — “walking, swimming and jumping behaviors” in order to cover distance. Instead, some evolved simply to be very tall and then fall over, again and again. They did cover the distance the program asked for, just not in any way the programmers had considered. A later researcher, thinking to correct these kinds of mistakes, “bred creatures to go as high above the ground as possible.” The intent was that they would jump; instead they evolved to simply grow very tall. That was simpler, and it did in fact get them as far above the ground as their instructions had dictated.

“Evolution often uncovers clever loopholes in human-written tests, sometimes achieving optimal fitness in unforeseen ways,” wrote the paper’s authors.
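
It is easy to reproduce this failure in miniature. The sketch below (my toy example, not the paper's code, with made-up numbers) evolves creatures scored on the highest point they reach above the ground. Because body height is unbounded in this toy world while jump gain is physically capped, selection reliably "discovers" that growing tall beats learning to jump.

```python
import random

# A toy re-creation of the "bred to go high, so they grew tall" anecdote
# (illustrative only; not the original simulation). Each creature is a
# (body_height, jump_strength) pair, and fitness is the highest point it
# reaches above the ground -- the test that was *meant* to reward jumping.

JUMP_CAP = 2.0  # jumping is physically limited; body height is not

def fitness(creature):
    height, jump = creature
    # Highest point reached: standing height plus (capped) jump gain.
    return height + min(jump, JUMP_CAP)

def mutate(creature, rate=0.1):
    return tuple(max(0.0, gene + random.gauss(0, rate)) for gene in creature)

def evolve(pop_size=100, generations=300):
    population = [(1.0, 0.1)] * pop_size  # short creatures, weak jumpers
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

height, jump = evolve()
# Typical outcome: height balloons while jump idles near the cap -- the
# fitness test is satisfied to the letter, and its intent is ignored.
print(f"body height: {height:.1f}, jump strength: {jump:.1f}")
```

Nothing in the evolutionary loop is buggy; the loophole lives entirely in the fitness function.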

Joy felt that it was “more than likely” that the Singularity would “not work out as well as some people may imagine. My personal experience suggests we tend to overestimate our design abilities. Given the incredible power of these new technologies, shouldn’t we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?”

Danny Hillis, cofounder of Thinking Machines Corporation, suggested that Joy might be overreacting, reminding him “that the changes would come gradually, and that we would get used to them.”

For historian Yuval Noah Harari, there was little question that humans would soon in some way merge with computers. What was about to happen — was already happening — was “real changes in humans themselves — in their biology, in their physical and cognitive abilities.”

But it worried Harari that so few of us were concerned about any of this. It was “cognitive dissonance,” he thought — “the ability to hold two utterly conflicting ideas in our heads at the same time. That we can say, ‘what a cute dog’ and ‘yum, yum, what a delicious steak’ and not see a problem with that somehow,” as Harari’s interviewer Carole Cadwalladr put it.

Of all those who either rejoiced in or cringed at the coming Singularity, Bill Joy seemed to be the only one who worried about what I’ve come to call ‘wet life.’ And here the horror would come from nanotechnology.

The nanotechnology pioneer Eric Drexler described what could happen with the machines he envisioned, which he called “molecular-level replicating assemblers.” As Joy put it,

“Plants” with “leaves” no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous “bacteria” could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days.

“We have trouble enough controlling viruses and fruit flies,” said Joy. Nanobots could wipe out wet life entirely “from a simple laboratory accident. Oops!”

Elizabeth Kolbert, in her book “The Sixth Extinction,” writes, “In times of extreme stress, the whole concept of fitness, at least in a Darwinian sense, loses its meaning: how could a creature be adapted, either well or ill, for conditions it has never before encountered in its entire evolutionary history?”

“Life is extremely resilient but not infinitely so,” Kolbert writes. “In an extinction event of our own making, what happens to us? One possibility…is that we, too, will eventually be undone.” Kolbert is talking about being undone by our “transformation of the ecological landscape.” But she could as easily be talking about the Singularity, where, Vernor Vinge had thought, animals could perhaps adapt, but “no faster than natural selection.”

Kolbert continues: “by cutting down tropical rainforests, altering the composition of the atmosphere, acidifying the oceans — we’re putting our own survival in danger.”

The death of wet life is occurring on two fronts: from what Kolbert calls the destruction of “the earth’s biological and geochemical systems,” and also, it seems, from the destruction of the biological human.

“Soon we must look deep within ourselves and decide what we wish to become,” says the acclaimed biologist E. O. Wilson.

In my email: an “Invitation from Peter H Diamandis of Singularity University,” inviting me to a webinar with Ray Kurzweil:

“I sit down with Ray Kurzweil every year to hear his latest thoughts and insights about where we are headed,” Diamandis writes in a chatty tone meant to convey that he’s a friend — if not a personal one, then at least someone who knows me.

Had I gone, here’s what I might have learned from them on Dec. 18: “Are we still on track to achieve ‘human-level AI’ by 2029? Will AI develop consciousness and/or emotion? What opportunities can human-level AI bring to the world? What is the status of brain-computer interface? When will we connect the human brain to the cloud, and what will the implications be?

“Join us,” ends the email, “to hear Ray’s latest predictions and to prepare yourself for the changes and opportunities that are coming sooner than you think.”
