Creator Gods, Part I

N. R. Staff
Published in Novorerum
Mar 2, 2020 · 8 min read


Image: Egyptian frieze, abstract

When I think of the word “artifice,” what comes to mind are the people I call “creator gods.”

“Artifice” means “clever or cunning … especially as used to trick or deceive others.” Merriam-Webster tells me that artifice “stresses creative skill or intelligence, but also implies a sense of falseness and trickery.”

“Art generally rises above such falseness,” it goes on, “suggesting instead an unanalyzable creative force.”

READ In the Uncanny Valley

I remember when The Whole Earth Catalog was published. That was back in the day when there was no Internet. Or perhaps Stewart Brand was a precursor of the Internet. I’m not the first to have thought something like that.

I remember opening its first pages with excitement, and finding a curious epigraph: “We are as gods, and might as well get good at it.”

I reflect on that now when I think of the guys excited about creating the Singularity. I call them “creator gods”; it’s not a compliment. The Singularity is a point of no return, a point beyond which nothing can be predicted. For Ray Kurzweil, who popularized the term over a decade ago, it’s the point at which biological intelligence would be “thrown from its perch of evolutionary superiority”; the point after which “most of the intelligence of our civilization” would be “silicon-based,” that is, computer-derived. The creator gods are excited to be creating that future: “Bring it on!” they say.

There is a certain type of personality for whom creating things is proof that they themselves are alive; possibly these are guys who as young boys exploded things to see how big the explosion would be. They are people like Mark Zuckerberg, last year’s (and possibly this year’s) favorite whipping boy, although I think Mark is actually in over his head and unable to admit it. These are people who want to code everything in life, who want to code and create no matter the consequences.

Stories always focused on Zuckerberg, because he was at the top and one could take easy aim at him, but in truth all the creator gods of Silicon Valley and its satellites shared the traits. It might have been that they actively emulated the smugness and the creator-godness, that they trained themselves to be like that if they hadn’t started out that way.

“If we cannot imagine it, it doesn’t exist.” Imagining, inventing, creating — what a rush! An insight into how it must feel for computer scientists to create comes from Sun Microsystems founder Bill Joy, who’d gone to graduate school at UC Berkeley in the mid-1970s.

“I started staying up late, often all night, inventing new worlds inside the machines,” he wrote. In his “most ecstatic moments,” he said, as the software in the computer emerged from what he’d coded, he felt as he thought Michelangelo must have felt as he “released the statues from the stone” — the statues he’d conceived first in his mind.

“Once I had imagined it in my mind I felt that it was already there in the machine, waiting to be released. Staying up all night seemed a small price to pay to free it — to give the ideas concrete form.”

Found on Richard Feynman’s blackboard at the time of his death: “What I cannot create I do not understand.”

“We love the mind,” says animal behaviorist Frans de Waal. Western society has long valued what we considered the mind — “intellect” — and felt that feelings, emotions, anything that wasn’t hard and logical was not worth bothering with. I saw that in Kurzweil; but I also saw it in the younger creator gods.

“Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow,” wrote Franklin Foer. “They [now] think they have the opportunity to complete the long merger between man and machine — to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature — a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: ‘Your program algorithms aren’t that complicated,’ he says. And if humans function like computers, why not hasten the day we become fully cyborg?” Foer seems to have it in for these guys for his own reasons, but the sentiments he quotes are shared by a growing number of people who code things that are alive in silicon.

The word “hubris” comes to mind as I read about these guys (and they are, in fact, pretty much all guys).

Most of the hubris I saw was in young men, and most of it was about creating artificial intelligence. The fact that AI — just ordinary, narrow AI, far from the “superintelligence” Bostrom worries about — was already becoming too complex to easily understand seemed to give none of these guys pause. It was like, “If it works, great — let’s use it! We don’t have to understand it to use it.” And maybe that was what actually drove their brand of creativity.

The young generation of creator gods “tend to believe that their priorities should override the privacy, civil liberties, and security of others,” wrote Financial Times reporter Rana Foroohar in her book Don’t Be Evil: How Big Tech Betrayed Its Founding Principles — and All of Us. “They simply can’t imagine that anyone would question their motives, given that they know best. Big Tech should be free to disrupt government, politics, civil society, and law, if those things should prove to be inconvenient.”

Foroohar believed they saw themselves “as prophets of sorts, given that tech is, after all, the future. The problem is that creators of the future often feel they have little to learn from the past.” As venture capitalist Bill Janeway told Foroohar, “‘Zuck and many of the rest … really believe that because they are inventing the new economy, they can’t really learn anything from the old one.’”

Foroohar tells of a professor who once tried to press one of these Silicon Valley guys about a problem with search engines that he felt should be fixed. The guy told him, “We can’t code for that.”

“I said this was a legal matter, not a technical one,” the professor insisted. But the guy “just repeated, with a touch of condescension, ‘Yes, but we can’t code for it, so it can’t be done.’ The message was that the debate would be held on the technologist’s terms, or not at all.”

Young tech guys used “an inordinate number of athletic and wartime metaphors,” reflecting “a bland, overcorrected, heterosexual masculinity” and a “casual misogyny.” They “had power, wealth and control.” Meredith Broussard coined the word “technochauvinism” for “the belief that tech is always the solution.” She offers an example: “I was talking with a twenty-something friend who works as a data scientist. I mentioned something about Philadelphia schools that didn’t have enough textbooks. ‘Why not just use laptops or iPads and get electronic textbooks?’ asked my friend. ‘Doesn’t technology make everything faster, cheaper and better?’”

Sean Parker, Facebook’s founding president, who’d left and founded the Parker Institute for Cancer Immunotherapy, said that when people told him they weren’t on social media, he’d just smugly reply, “You will be.”

“I don’t know if I really understood the consequences of what I was saying,” he mused. The plan with Facebook — possibly not fully conscious at first but soon pretty clear — had been to figure out “how do we consume as much of your time and conscious attention as possible?” Figure it out they did — people like Parker and others who came up with, among other things, the “like” button that would give users what Parker called “a little dopamine hit,” so they’d keep reading, and adding content, and clicking “like,” and scrolling, and reading. Parker called it a “social-validation feedback loop.” It was “exactly the kind of thing that a hacker like myself would come up with,” he admitted, “exploiting a vulnerability in human psychology.”

Around the same time, Bill Joy, also now departed from tech, would explain that the people he’d met at big tech board meetings had been “overwhelmingly young, white, male — and maladjusted in some way.”

The other part of the problem, Joy said, was that “our system doesn’t hold people responsible.”

Artificial news — “fake news” — was one of the biggest charges levied against Mark Zuckerberg. Facebook had become a repository for it, and it seemed there was no real way to stop it. AI algorithms now determined what was or wasn’t fake, to the extent that algorithms were being applied to the content at all, and there was no outside way to monitor whether Facebook was in fact trying hard to police it. The algorithms were simply not very good at figuring these things out (“for now,” AI enthusiasts said, convinced that someday AI really would be better than mere humans). And yet the idea of Facebook employing the tens of thousands — perhaps hundreds of thousands — of real people to do the monitoring (if in fact 100,000 people capable of the task could be found to take the jobs) was evidently not an option: it would cost billions! Facebook had billions, of course; just not to spend on having people do it, when AI could do it.

And, of course, thanks to their get-out-of-jail-free card, also known as Section 230 of the Communications Decency Act, no matter how much fake news upset people, nothing could really be done about its publishers on Facebook or the other social media sites that were by and large its native ground.

Screenwriter Aaron Sorkin had written a movie about Mark Zuckerberg which it seemed Zuckerberg bitterly resented. (It “just like completely misses the actual motivation for what we’re doing, which is, we think it’s an awesome thing to do,” he said.) Nine years later, Sorkin was also angry about the Communications Decency Act exemption. “The law hasn’t been written yet — yet — that holds carriers of user-generated internet content responsible for the user-generated content they carry,” not the way “movie studios, television networks and book, magazine and newspaper publishers” were held responsible, he wrote in an open letter to Zuckerberg.

When Representative Alexandria Ocasio-Cortez asked Zuckerberg if he saw the “potential problem” with “a complete lack of fact-checking on political advertisements” and pressed him as to whether Facebook would, in Sorkin’s words, remove “lies” from the site, he simply told her that “in most cases, in a democracy, I believe people should be able to see for themselves what politicians they may or may not vote for are saying and judge their character for themselves.” Whether he believed that or not was anyone’s guess. Something he’d told a reporter a few years earlier seemed to get closer to what he really believed. In response to a similar question, he’d replied simply that “technology usually wins with these things.”

Creator Gods, Part 2

