The A.I. Revolution is Class Warfare

We Need a New Political Movement to Counter Big A.I.

The twenty-twenties will be a great time to be a tech tycoon. Untold fortunes will be made hand over fist by the lucky few who make it big with killer AI apps. But for the rest of us normies in the “meatspace,” the coming decades might be a meatgrinder. The true impact of extreme economic inequality, of three Americans owning as much wealth as the bottom half of all Americans, will never be felt more acutely than when AI starts eating up jobs like Pac-Man. It doesn’t really matter who you are: if you’re not in control of the coming AI, the AI may very well be coming for your job, and those in control of AI will be very few, an ultra-elite.

When Big AI insists it won’t leave you technologically unemployed, don’t believe it. Already we can see that jobs we never thought could be automated away anytime soon are in fact some of the first in AI’s crosshairs, including those of doctors, lawyers, architects, and even priests and rabbis. But AI will be coming for the blue-collar jobs soon enough as well, as rapidly advancing AI is combined with rapidly advancing robotics.

In 2013, Oxford University researchers Carl Benedikt Frey and Michael Osborne assessed hundreds of existing jobs and concluded that nearly half were at high risk of being automated away within a decade or two, i.e., the decade we are living in today. With the recent release of the newest version of ChatGPT, running on the scary-powerful GPT-4, those predictions are starting to feel all too real. Elon Musk claims we may be rapidly approaching a time when robots can do all jobs better than humans. What if he’s right? We need to take these warnings more seriously while there is still time to mitigate the risk of economic and social devastation. To treat these predictions of mass automation by some of the leading experts on technology as “nothing to see here, move on” is to ignore a legitimate existential threat to our already fracturing human society.

The unabashed proponents of AI, who usually have something to gain economically from its ascent, reflexively wave away these concerns. Rather than replacing jobs, they beam, ChatGPT and future AI systems will bring a productivity and efficiency boost to the workforce. But if AI-equipped employees can do the same amount of work as current employees in a fraction of the time, then most companies will logically conclude they don’t need so many cooks in the kitchen. Increased worker efficiency will not be used to create a glut of products to be handed out for free to poorer folks around the world, at least not anytime soon. It will instead be used to cut costs and increase profits for shareholders. Mark Zuckerberg recently heralded 2023 as the “year of efficiency” as he laid off another 13 percent of Meta’s workforce. Meta’s shares shot up over 7 percent in return. This may be a canary in the coal mine for what’s to come from the AI efficiency boosts that AI’s proponents routinely tout as such a boon for the average worker.

Others argue that AI won’t replace jobs, but merely take over lower-level routine tasks, graciously freeing up time for humans to focus on more creative and intellectually challenging work. But if current employees are not hired to do those higher-level tasks now, why would they be in the future, and what tasks are the AI optimists even imagining? Will radiologists, for example, who may soon be outmatched by AIs in their ability to evaluate medical scans, spend their newly freed up time writing philosophy of medicine essays? That’s not what they’re paid to do. In reality, AI will probably evaluate all the scans and only a small number of radiologists will remain in the loop for human reassurance.

There is also no particularly good reason to believe that AI will be limited to automating away lower-level job tasks. Rather, it may relatively soon outpace humans at most tasks. Those who doubt that AIs will be up to the challenge probably once doubted that computers would become the world chess champion, then the Jeopardy champion, and then the Go champion, each formerly unthinkable intellectual feat leaving a generation of doubters stunned in turn. AI is already scoring in the ninetieth percentile on the American bar exam, and probably higher on the SAT than most people reading this article.

The most consistent refrain regarding automation is that while AI will indeed replace many current jobs, it will create new ones in return, so that on the whole economic equilibrium, even growth, will be maintained. But this is held more as an article of faith than backed by convincing reasoning. The analogy is often made to the industrial revolution of centuries past, when automation replaced the vast majority of farming jobs but a whole litany of new jobs arose in turn, including the “knowledge-work” so threatened today. Unlike the industrial revolution, however, today we are not just replacing the human body with machines, but also the human mind. While there will surely be new work categories in the future, for example a Metaverse interior decorator for one’s virtual home, there’s no reason an AI won’t be able to handle those tasks too, and so it’s not at all clear why AI wouldn’t be “employed” over humans.

Even if AI does create all sorts of new jobs that displaced human workers could theoretically enter, we likely couldn’t manage that transition fast enough to avoid social upheaval. These hypothetical new jobs would presumably require more advanced skills than the jobs AI displaced, and therefore more advanced training, and the AI takeover would likely happen too fast for this imaginary mass retraining program to be at all effective. That is, if an equal number of new human jobs are even created for displaced workers to enter, which seems unlikely anyhow.

If our machinery is quickly becoming stronger and more capable than our human bodies and our human intellect, then what capacities are left for humans to outcompete robots in the coming workforce? It seems to me the only sure-fire job qualification we will have left is our mere humanity, that we are flesh-and-blood beings who can truly understand, have compassion for, and care for each other. But while this “human factor” will likely maintain some level of “market value” that in theory an AI could never replace, demand for real-life humanity will probably remain an enduring but niche market, like vinyl records. After all, AIs already do a convincing job of mimicking emotions, and it’s not at all clear that an AI’s “emotional intelligence” won’t match or surpass that of humans, just as it is doing with other aspects of our intelligence.

It’s also not clear that humans of the future will really care whether there is an actual conscious, empathetic human being they interact with in the service, hospitality, medical, educational, legal, or other relevant industries, so long as the machine gives them the feeling, or illusion, that there is such a caring provider. Even in professions where being truly cared for seems essential to the client’s experience, like visiting a therapist, AIs appear to provide a relatively effective simulacrum of emotional support. In fact, the very first chatbot, ELIZA, was a program designed in the 1960s to be used for therapy. Even though it was the simplest of programs, its users became emotionally attached to it as if it were a caring human being. So that “human factor,” of feeling empathetically cared for by another, may not give us much of a decisive advantage over AI in the workforce at all.

Optimistic platitudes about how, if anything, AI will only improve workers’ lives feel like false reassurances that keep employees calm and carrying on while machines sneak up to snatch away their livelihoods and shareholders profit. Many AI acolytes imagine that, on the whole, mass AI job automation will be a good thing, as it heralds a coming utopia of abundance. But even if such a utopian AI future lies ahead, the transition from the current state of affairs to this imagined paradise is not fair to the current generation if we have to go through a dystopian hell of mass technological unemployment to get there.

Optimists argue that we can get from here to the other end of the AI rainbow without major problems if we just “get the timing right.” But the current pace being set by Silicon Valley is nothing short of mania, a frantic AI gold rush to be the first to cash in and to stay afloat in the tech market. Even companies like Google that harbored serious concerns about the AI technology they were developing in their labs are now rushing their products to market amidst competition from the likes of OpenAI and Microsoft’s Bing, which have much less of a reputation to lose and might beat them to the punch.

Nothing seems to be slowing the tech industry down in its frenzied quest, not even the threat of wiping out all of humanity. A 2022 survey of AI developers found that roughly half of respondents thought AI has a significant chance of leading to the extinction of the human species or other similarly severe consequences for humanity. Elon Musk, a co-founder of OpenAI, describes AI as far more dangerous than nuclear warheads, yet rather than being developed in a secret air-tight lab, it is being rushed out into the public and spreading like a virus, relentlessly driven by a profit-over-people motive whose consequences have perhaps never been so universally dire.

If AI decimates the existing labor force, leaving our society in a state of utmost turmoil and unrest, these are not Big Tech’s problems. They’re building spaceships. They’re building private islands floating in the ocean. They at least think they have all the resources in the world to deal with whatever may come, even the end of planet Earth itself. So whatever “disruption” their machines cause, whatever societal structures they upend as they relentlessly “move fast and break things,” these are your problems, not theirs. Our tech lords are playing with fire, and their rather glib attitude toward world-upending risks like mass technological unemployment, or even the end of human life altogether, feels like the height of hubris, as if we are living in an actual sci-fi parable of mythological proportions.

This is not to unfairly demonize Silicon Valley executives. But they simply cannot stop themselves, despite their credible fears that they may be leading the whole of humanity off a giant cliff. It is worth recalling here the words of Chamath Palihapitiya, former vice president of user growth at Facebook, on the negative impacts of social media: “I think we all knew in the back of our minds — even though we feigned this whole line of, like, there probably aren’t any really bad unintended consequences. I think in the back, deep, deep recesses of our minds, we kind of knew something bad could happen….” There is no reason to think that tech leaders are not similarly feigning a line with AI today, but the stakes are now even higher.

Amidst this crisis, the usual tech-industry calls for “self-regulation” are unusually quiet. Rather, leading AI developers are openly calling for their industry to be regulated by government, reflecting the out-of-control nature of their own endeavors. This is a cry for help, and we should take them up on it. Since members of all political parties will bear the brunt of AI, perhaps this will be the rare bipartisan issue that Congress can actually do something about. We shouldn’t count on it; such an effort stands a chance only if we the people stand up and demand that our representatives act, making job security and welfare in the face of the AI revolution a top priority. Don’t wait until the AI starts eating up jobs, because by then it may be too late.

Most people intuitively grasp the rapidly growing threat this technology poses to their livelihoods. We must face the prospect of mass technological unemployment head-on and prepare for it as a society right now if we are going to avoid potentially cataclysmic social upheaval and immense suffering. Politicians must put protective measures in place that ease the pains of this AI revolution; and if the politicians won’t stand up for the people, then the people must organize and stand up for themselves, peacefully and non-violently resisting the AI takeover.

We may be at a critical moment of epochal change in the story of human civilization. We have a part to play in whether the wheel of history turns slowly and methodically, carrying us along with it, or so fast that millions upon millions get steamrolled in the process. Our fate is in our hands.

Jeremy Weissman is the author of The Crowdsourced Panopticon: Conformity and Control on Social Media (Rowman & Littlefield, 2021).

Jeremy Weissman
Institute for Ethics and Emerging Technologies
