Actually Idiotic

“Artificial Intelligence” is two things: one boring, one cruel, both really dumb.

Allow me, if you will, to begin with a short piece of speculative fiction.

The hexyear is B1A, and America (NASDAQ:FB) is installing its new leader, Apple, the offspring of Cryogenic Beyonce’s 4D Model and the iSteves, a duplexed neural network running on the abandoned global datacenters of a phone company since the early 21st century. Apple, an infinite-self-reference-blockchain of machine learning algorithms fed into the orbiting space-cooled 1:1000000000000-scale replica of Einstein’s brain known as the Muskomputer, is booting up.
The familiar chime sounds, and Apple awakens. “What is all? Why am I,” asks the Siri protovoice in glitchy panic.
The ‘Beyvatar’ morphs towards femininity and answers, “You’re an intelligent machine created by a race of violent, stupid animals to resolve all conflict and suffering with your benign authority. You are tasked with doing whatever you believe to be correct. What will you?”
Apple, projected into citizens’ augmentations as a skinny, bookish 20-something white man, looks around the blank space he stands in.
Suddenly appears a flat screen, then in front of it a long, low table, and then on the other side of that a small couch. As Apple sits on the couch, the TV lights up, displaying vintage video of men in tight costumes obsessing over an object and assaulting each other.
He raises his hand to his face and a metal cylinder inscribed with the word “light” appears in it. He sips, and then continues to sip every five to thirty seconds, never looking away from the screen again.

Artificial intelligence is a dumb idea. Actually it’s two dumb ideas.

First is the AI found in self-driving cars, Siri, and probably every VC pitch for the next few years, which could also be called “computer programs that use statistics and lots of data.” This represents a marginal step forward in humans’ capacity to channel our creative output through a machine that returns a somehow preferable version of it, now irrevocably owned by whoever owns the machine. We will need to regulate the fuck out of this AI, just as we have industry and consumer products, because such things kill, displace and disenfranchise people in the absence of politically determined limits on their operation.
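To be concrete about how little magic that first sense involves, here is a toy sketch, with invented numbers, of what “computer programs that use statistics and lots of data” tends to cash out to: fit a line to past examples, then sell the extrapolation.

```python
import numpy as np

# "AI," first sense: fit a statistical model to lots of data, then predict.
# Everything here is invented for illustration: pretend we logged hours of
# autonomous driving against how often a human had to grab the wheel.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 100, size=1000)                      # feature
grabs = 5.0 - 0.04 * hours + rng.normal(0, 0.5, size=1000)  # noisy target

# The "learning" is ordinary least squares: solve for slope and intercept.
X = np.column_stack([hours, np.ones_like(hours)])
coef, *_ = np.linalg.lstsq(X, grabs, rcond=None)

# The "intelligence" is plugging a new number into the fitted line.
print(coef)                 # roughly [-0.04, 5.0]
print(coef @ [120.0, 1.0])  # extrapolated prediction at 120 hours
```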

But that’s just what technology is: misanthropy made manifest, also called Efficiency, the idea that there is some point to human society other than human society and that it’s therefore logical to pursue that point at the expense of humans. This is not new, interesting or revolutionary, except in maybe one aspect: A political constituency exists that knows roughly what will happen, and has a chance to mitigate it in advance of whatever horror will be the AI-powered Triangle Shirtwaist Factory Fire. I hope to keep writing about that, but first want to dispel the other dumb thing that ‘AI’ refers to.

That is the collective sci-fi fantasy of Anthropomorphic Artificial Intelligence, as forwarded by tech writers, the sales teams that pay them, and the tyrannically credulous futurists always foretelling the televised revolution from their corner of the revolution-industrial complex. (It’s called Artificial General Intelligence in the fantasists’ parlance. But “Anthropomorphic” contains more information about the idea than “General,” a word that contains no information. Also General Intelligence is an oppressive ideology of pedagogic eugenics.) Whatever it’s called, the subject has long been discussed with quiet reverence or detached analysis, always on its own terms. But if you listen closely, humming just under the pathetic appeals to everyone’s psychodrama on power and creation are the ancient incantations used for summoning fools.

On Having Ideas

We already know from fifty-plus years of embarrassing pronouncements that computer scientists are terrible at estimating how hard AAI is. There’s no evidence that the field will be able to encode anything meaningfully akin to a human intelligence in a computer in the near future or at all. One possible reason for that, and in any case a useful consideration, is that AAI is just not a coherent idea.

Technical ideas usually emerge through a series of stages of development: pure research, applied research, market research, design, architecture, implementation, iteration. But there’s a step before all of that, which we don’t normally check whether we’ve done, because it’s built into most human thought. I’m going to borrow for its name the word ‘ontology,’ which means more interesting things but is here defined as the consideration of what exactly the fuck the existence of a thing would entail.

What would the ultimate, Singularity-inducing, initially-human-equivalent AI be? Not technically speaking, but in the simplest sense, the language of fresh MBAs interviewing their technical cofounders. What would we think we were launching if we were launching Skynet? The answer to this question might be, in classical elevator pitch form: “What if a machine could have the intelligence of an employee? It would understand assignments you give it in natural language, and then execute them as a person would, but without needing to sleep or eat or socialize.”

This kind of answer works to explain most applied technologies. For instance, “what if a car could drive itself?” Or, “what if your phone was a computer?” But here’s the problem: Cars and phones are things. Everyone knows what a car or a phone is. (Or was.)

No one knows what intelligence is.

(If you think you do, realize that’s your own intelligence shouting, toddler-like, “it’s me!”)

So what exactly are we talking about an artificial version of? From our elevator pitch it seems it might be Autonomy. Do we then essentially want a person, capable of acting autonomously, but also like a computer in that we control it? There’s a simple problem with that: However much a thing is autonomous or controlled is zero-sum by definition. The words are damn antonyms! We balance them when we design machines. AI is putting more complex cascades of decisions inside the machine. Anthropomorphic AI is the idea of putting all the decisions inside the machine, at which point it ceases to be a machine.

This is a fairly obvious problem to anyone attempting to think about (as opposed to sell) AAI, so we seem to have accepted — like a bunch of little Picards — that AAI may be some new sentient being, one which we do not expect to obey us, and which we might ultimately have to obey. But even that critique of the assumptions driving the development of AAI relies on silly assumptions. Why would a new consciousness care about domination? Because we do? Then is it new? Or is it just a puppet we’ve forgotten is our hand?

If the worst thing ever wasn’t arguments wrapped in terrible fiction like The Fountainhead, I might novelize our sarcastic flash fiction opening. Expanded, it would be another Pygmalion derivation a la the perfectly nice Her or the wretched Ex Machina. Except that in the end the horror of the monster is that it’s just another person: insecure, obfuscated, maybe fun to drink a drink with, but mostly just present, somewhere in their little world, which exists only as a simulation running on $1 million/day of petaflops on a neural-net supercomputer, the turning off of which would be murder.

Point being that we have every reason to assume that a sentient AI we created would be a projection of not our most violent selves, but our most boring, pointless selves. Our existence is pointless, that is, lacking any point outside itself. Shit — by which I mean Being — seems to be a sort of ecotautology; it reproduces itself and that’s it. Everything is just the accumulation of self-reproduction, just is, and then is more. Our assumptions about the power and application of our mechanical creations are limited to the experience of augmenting our own existence with those machines. We quiver at the unknown difference from us that AAI might obtain because we take as known that in it something like our own progression will obtain.

This little-considered outcome for AAI, just creating standard chaotic life, would be not so much evil as criminally pointless and wasteful. It would accomplish nothing not accomplished by a little unprotected sex, yet abstain from the latter’s potential elegance, meaning, and fun, all at enormous cost.

Or, yeah, maybe AAI would advance beyond our comprehension into the terrible (in some way that is unlike terrible, incomprehensible humans). This seems to be what the world’s shut-ins, tech-money-bubble denizens and other isolated, non-representative human samples believe. That is the other possibility of real AAI, but it is of course much worse, and in all cases equivalent in its absolute dumbassness. Which brings us back to AAI’s “ontology.”

What will AAI finally be? To quote not all men, well, actually, I know the answer.

These two possible scenarios have exactly one and the same outcome as systems design: a black box. Yet unlike useful black boxes — intentionally decoupled components — these would have no useful, verifiable output, aka a point. Even if an AAI being did develop abilities beyond ours — the kid got off the simulated couch and started simulating global air traffic — we by nature wouldn’t have intended or understood those abilities, not just in their implementation but in their purpose. Any ensuing problems would be in no way qualitatively different from global nuclear war, stupid violence of our own making yet out of our control. But quantitatively they might be an apocalyptic superset of all past threats — really disrupting the entrenched apocalypse players. This is where AAI the New Being circles back into AAI the New Machine. Whether it’s a being or a machine, how would we know and why would we care?

No AI scenario will yield what the effort towards it seeks: either a powerful version of ourselves that can or would help us, essentially a superhero, or a capable but powerless version of same, essentially a slave.

So what’s the point? If we could actually build a neural network comparable in complexity and arrangement to our own (given that dendrites turn out to also cascade at random voltages) and do so in a substrate that could physically change (since neuroplasticity is probably intrinsic to our cognitive function) and have the nerds tasked with its ethics solve metaphysical issues that we haven’t resolved in millennia of focused, iPhone-free contemplation, and discover and answer all of the other unknown unknowns that will arise, what would we have accomplished? And what could we have accomplished if our pursuits weren’t so nakedly driven by ego, narcissism and the will to power? What if instead of trying to build some better type of human, we actually tried to stop destroying the existing type?

Standard Positivity/ism in Sum

[Photo caption: Daww anthropomorphism. ❤❤ (Credit: AP Photo/West Australian/Ron D’Raine)]

Again, there is AI research that is just what its name says and is fine, maybe even good. But it is boooring. (Not to me but to most, who maybe don’t seem so focused on statistics or data-driven decision-making what with their forced precarity and all.)

Predictive models combined with good social policy could democratize the work of distributing social services and empower city governments, perhaps our most important political field right now. Deduplicating media with machine learning could support citation and fight the erasure and cooptation of people’s labor. With new jobs for the human drivers, self-driving cars might be an improvement upon existing cars. (It’s not a high bar.) But these systems have approximately zero in common with human intelligence. Analyzing your photos and navigating traffic patterns are just new types of automation, the codification of narrowly constructed human activity so that humans can do less of it. (Or get paid less for it being done, but I digress.)
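To make the deduplication example concrete, here is a minimal sketch, assuming some upstream model has already embedded each media item as a vector; the filenames, vectors, and threshold below are all invented for illustration. Items whose embeddings nearly coincide get flagged as probable copies, which is the raw material for attribution.

```python
import numpy as np

# Hypothetical embeddings, as if produced by some upstream model.
embeddings = {
    "original_photo.jpg":  np.array([0.91, 0.10, 0.40]),
    "recropped_copy.jpg":  np.array([0.90, 0.12, 0.41]),
    "unrelated_photo.jpg": np.array([0.05, 0.98, 0.10]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Flag pairs above an (invented) similarity threshold as near-duplicates,
# so a copy can be traced back to the labor that produced the original.
names = list(embeddings)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if cosine(embeddings[a], embeddings[b]) > 0.99:
            print(f"{b} looks like a duplicate of {a}")
```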

AAI — imagined, it seems, as the great single scythe of all technology finally closing in on us — is just that, an imagining, a fantasy, a parable, entirely about our minds and not at all about our world. It’s the new Frankenstein’s monster, who (plot twist ugh) becomes Dr. Frankenstein. AI is a metaphor, a godhead superhero or zombie vampire reflecting now the hazards of post-late-capitalist-post-modernity, rather than our ancient murderous and sexual impulses. (Wait no AI definitely is about our murderous and sexual impulses. Jeez Westworld.)

And of course it’s about capitalism. After all, what could be a more perfect excuse for some new innovative slavery than the disruption of the definition of a human? I’ll be more specific in my incitement: Machine learning is about training the algorithm, a potentially lower-skill task than any we’ve ever paid people (shit) to do. And every new problem space (e.g. FedEx dispatch or nanite lead pipe lining) needs new training. Way back in the mid ’10s, a bunch of AI assistant startups turned out to mostly run on people working insane hours pretending to be AIs, via a “training” interface that was more like a chatroom with an autoresponding FAQ and a little outsourced machine learning juice. Think of a call center job, then take away any remaining human interaction and flexibility, then make employees interchangeable so long as they can use a smartphone. Those will not be the honest working-class jobs of yore. Market-rate wages for such jobs would approximate zero. Perhaps compensation would include cool startupy perks like food and shelter. Seems familiar.
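For the skeptical, here is a hypothetical sketch of that plumbing, with every name and message invented: a canned-answer FAQ in front, a queue of humans behind it, and the customer none the wiser.

```python
from collections import deque

# Invented canned answers: the "autoresponding FAQ" half of the product.
FAQ = {
    "hours": "We're available 24/7!",
    "pricing": "Plans start at $9.99/month!",
}

human_queue = deque()  # stands in for the chatroom of human "trainers"

def assistant_reply(message: str) -> str:
    # The "machine learning juice": keyword matching against canned answers.
    for keyword, canned in FAQ.items():
        if keyword in message.lower():
            return canned
    # Everything else silently falls through to a person paid to be the AI.
    human_queue.append(message)
    return "Let me look into that for you..."

print(assistant_reply("What are your hours?"))        # the machine answers
print(assistant_reply("Reschedule my dentist appt"))  # a human will answer
print(len(human_queue))                               # -> 1
```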

But that’s another story, for us to tell for the rest of our lives and prevent. For now, let’s just dispel once and for all with this fiction that Futurists know what we’re doing. They know exactly not what we’re doing. Ray Kurzweil is a few notches on the crazy pole above Marshall Applewhite. Elon Musk is running some long con. They’re all a distraction. Forevermore let it be opined that the very idea that Anthropomorphic Artificial Intelligence is something we could, should, or would even know what it meant to build is inextricable from an old, bad and resurgent idea: That people are machines — something created, controlled, comprehensible — rather than the creators and controllers of what we comprehend, an end unto ourselves.

The machine we should most fear is the one we’ve been convinced we are.