Closer Now…

N. R. Staff
Published in Novorerum
May 17, 2023


Thin fibers lit from within across a dark background. Abstract image.
Photo by Maxime VALCARCE on Unsplash

The New York Times just published an article about a Microsoft research paper — which had come out in March — reporting the ways in which OpenAI’s GPT-4 was exhibiting signs of “artificial general intelligence.”

Artificial general intelligence, or A.G.I., is the term used to describe artificial intelligence that has become so powerful that its capabilities match or surpass human skill and reasoning. It is an order of magnitude beyond “mere” ordinary AI. Ordinary AI crunches and re-forms — “re-generates” — massive amounts of data and information to produce what at first seem to be human-like responses, but it exhibits no “sentience.”

A.G.I. is way beyond that.

A.G.I. is the Holy Grail — or Pandora’s Box, take your pick — of those who work in AI. The point at which AI becomes A.G.I. is often referred to as the Singularity. “Singularity” is defined in various ways, but the tech Singularity — which Ray Kurzweil long predicted — is the point at which artificial intelligence overtakes human intelligence, and there is no going back. The “no going back” is the signal feature of a Singularity. (Google it.)

Times reporter Cade Metz, doing what journalists typically do, framed his story as a “some say, others say” report: some saying the technology had achieved “a new kind of intelligence,” others saying that wasn’t so.

“Microsoft… [has] stirred one of the tech world’s testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry’s brightest minds letting their imaginations get the best of them?” writes Metz.

Metz quotes a Microsoft researcher. “ ‘I started off being very skeptical — and that evolved into a sense of frustration, annoyance, maybe even fear,’ Peter Lee, who leads research at Microsoft, said. ‘You think: Where the heck is this coming from?’”

A little later in the story, though, Metz switches tack and quotes Alison Gopnik, a professor of psychology who is part of the A.I. research group at the University of California, Berkeley, who tells him, “‘When we see a complicated system or machine, we anthropomorphize it; everybody does that — people who are working in the field and people who aren’t.’”

The Microsoft paper referenced in the article, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” can be found on arXiv (arxiv.org/abs/2303.12712).

All of this takes me back to when I was first researching this stuff for my book.

I’d stumbled upon a paper called The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, which I’d found on an MIT site. “Many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions,” its authors wrote.

Reading the paper, I learned that back in 1994 a researcher had created 3-D virtual creatures that were supposed to teach themselves — that is, “evolve” — “walking, swimming and jumping behaviors” in order to cover distance. Instead, some evolved simply to grow very tall and then fall over, covering ground by toppling rather than walking. They did do what the program had asked of them: they figured out a way to cover the distance, just not any way the programmers had considered. A later researcher, hoping to correct these kinds of exploits, “bred creatures to go as high above the ground as possible.” The intent was that they would jump; instead they evolved simply to grow very tall. That was easier, and it did in fact get them as far above the ground as their instructions had dictated.

“Evolution often uncovers clever loopholes in human-written tests, sometimes achieving optimal fitness in unforeseen ways,” wrote the paper’s authors.
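
The dynamic the authors describe is easy to reproduce in miniature. Here is a minimal sketch in Python (a toy model of my own devising, not the researchers’ actual simulation) of an evolutionary loop whose fitness test rewards maximum elevation above the ground. Because the sketch assumes mutations to body height are “cheaper” than mutations to jumping ability, the population reliably passes the test by growing tall rather than learning to jump, the same loophole-finding the paper catalogs.

```python
import random

# Toy "creature": a pair (height, jump_power). The hypothetical fitness
# test rewards maximum elevation above the ground, which can be satisfied
# either by jumping well or by simply being tall.
def fitness(creature):
    height, jump_power = creature
    return height + jump_power  # peak elevation: body height plus jump

def evolve(pop_size=100, generations=200):
    pop = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, then let each survivor produce one child.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [
            (
                max(0.0, h + random.gauss(0, 0.5)),   # height mutates freely
                max(0.0, j + random.gauss(0, 0.05)),  # jumping improves slowly
            )
            for h, j in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best_height, best_jump = evolve()
print(f"height={best_height:.1f}, jump_power={best_jump:.1f}")
# Typically ends with an enormous height and a negligible jump_power:
# the population "games" the test by growing tall instead of jumping.
```

The point isn’t the code itself; it’s that selection pressure optimizes whatever the test literally measures, not what the test’s designer had in mind.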

And now, here we are.

In my book I quoted Bill Joy, who in the 1980s had co-founded Sun Microsystems. In a much-discussed article published in the April 2000 issue of Wired, Joy had written that when it came to artificial intelligence, we “had to be understating the dangers, understating the probability of a bad outcome.”

His words now seem eerily prophetic.
