AI Won’t Take Away Our Jobs (Most Likely)

Tom Seiple
Published in CodeX
12 min read · Sep 21, 2023

It takes some real audacity to assume you can contribute anything new to the bloated zeitgeist of “AI” in 2023. Frankly, I find much of the discourse in this space to be both irritating and highly speculative, particularly in assessing the abilities of AI compared to humans. Honestly, I googled “annoying internet trends” in an effort to compare “AI worship” to something universally loathed by society; go ahead and pick your favorite: over-the-top gender reveal events, influencer culture, subscriptions for everything, family vloggers, or obviously faked pranks. Perhaps I should have asked ChatGPT what the most annoying thing in society is. Anyway, let’s add to the insufferable discourse with yet another article about AI and the state of knowledge work in 2023!

AI isn’t always right… | Photo Credit: The Wall Street Journal

Do you remember the craze around cryptocurrency or the hype around VR? I could probably name more, but the point is this: people get caught up in new ideas, often speculating wildly about the possibilities of the future, and in the process, many take these speculations as reality. As a prime example, Elon Musk has been promising “self-driving cars” every year since 2014, and every year the timeline gets punted down the road for various non-specific reasons. In fact, as we have all seen, the promised tech is far from perfect, and even dangerous. The echo chamber of groupthink around technological advances like these often gives me an uneasy feeling. Whenever new ideas emerge, there is a gap between reality and science fiction that leaves room for excitement, but also exploitation.

A Brief Note on AI and Capitalism

Let’s briefly consider the macroeconomic implications of AI replacing human labor. In short, late-stage, globalized capitalism is incompatible with AI. This is not to say that AI isn’t profitable, nor that we cannot implement it in our global economy. My concern centers on (but is certainly not limited to) three of the core tenets of our current global economy.

Adam Smith | Photo Credit: Biography.com

First, capitalism operates on the tenet of competition. An individual firm or business is not incentivized to show restraint. We see this now with the issues of climate change. Despite the clear and impending catastrophe of climate inaction, the collective business community hasn’t made much progress toward a renewable future or considered what “de-growth” could look like. This is largely due to the perpetuated social Darwinism of capitalist competition. Business is not incentivized to be measured in its pursuit of profits. This compounds the other two issues I see.

Second, capitalism is a cycle of consumption. Firms produce products and services for consumers to then purchase and use. If we consider the early stages of Henry Ford’s innovations at Ford Motor Company, he raised the wages of factory workers while lowering the overall price of vehicles with the goal of creating a consumer class. Regardless of your views on Ford, he clearly understood that he needed consumers of his product. Ensuring his workers had wages with which to purchase the products they were making is a simple example of the produce-consume cycle that is a feature of modern capitalism. What happens to this cycle when AI replaces a meaningfully large number of workers? The cost savings to businesses would be immense, but it would cannibalize the produce-consume cycle.

This brings me to my third main concern. The modern economy is primarily fueled by highly leveraged debt. Debt is highly valuable as a means of rent extraction for investment firms, banks, and other financial institutions, and it serves as a tool for unlocking financing from otherwise illiquid assets. What happens when large parts of society can no longer pay back their loans? Part of what made the Great Recession so devastating was the complete lack of true value backing countless loans and theoretical assets. When banks and institutions loan money to people and businesses, the general assumption is that most of these loans will be paid back with interest. That interest is where value is “created”.

We are in the midst of another fragile economy right now, where interest rates drive more of the economy than actual products or services. This economic system also shows no sign of slowing, even in the face of climbing interest rates and inflated prices.

Our current economy is based on speculation rather than value creation | Photo Credit: Frontline

AI cannot displace large portions of the workforce overnight because it would precipitate a catastrophic debt collapse. As we have seen in recent economic recessions, these collapses impact every market sector. If people can’t pay their mortgages, can’t afford vacations, stop buying new cars, and generally stop normal forms of economic activity, then markets crash. Value cannot be created without consumption. One cannot extract water from a dry sponge.

As such, I don’t believe our current system will replace workers en masse because it would be a death blow to the entire system. Even when considering the competition complex of capitalism, I can’t see executives and governments endorsing a system that threatens to instantly collapse itself. Globalization offered only a taste of what AI would hypothetically do to knowledge workers, and it is abundantly clear how globalization has hollowed out the American working class since the 1980s. Those practices, over multiple decades, amounted to a slow burn of job losses, and they still decimated places like Detroit, Cleveland, Buffalo, and many other industrial towns. The instantaneous displacement of millions of people would precipitate a spiral that would drag the entire system down with it. In short, I don’t believe that even the greediest of societies would tolerate the levels of poverty and death this could render.

I’m hardly the first person to take a stab at this broad topic; if you want to dive deeper, check out this piece from Economics Explained. With that out of the way, let’s look at the limitations that AI itself struggles with.

AI has no inertia

AI is a tool, not a magic wand. Like any tool, it can do amazing things, often reducing the time needed to complete a task. But, also like all tools, it requires skill to use correctly. Just like a table saw, a calculator, a search engine, or a programming language, the skills of the user dramatically change the quality of the output. Spell check hasn’t eliminated my need to know how to spell or use grammar properly, but as someone with dyslexia, it sure has made my life easier. Google has access to essentially limitless information, but we all know that navigating its platforms and maximizing what you get from a search requires a high level of skill. That said, spell check and Google cannot synthesize results for me from a simple prompt, right? Isn’t AI fundamentally different in this case?

I think we also need to consider how technological leaps have changed societal epochs over time. The Renaissance closed and made way for mercantilism, and eventually industrialism, on the backs of technologies and ideas like interchangeable parts, steam power, central banking, printing presses, the scientific method, and liberalism, just to name a few. Likewise, the Industrial Era, Modern Era, Atomic Era, and now the Digital Era have risen and fallen on the backs of technologies or philosophies that radically changed the world. In all these cases, new ideas and technologies gave way to new tools for performing labor. Hammers, reapers, plows, automobiles, wrecking balls, computers, and so on have all acted as disruptors of labor, often forcing people to adapt and change how work was done. In every case, jobs were changed, not eliminated, at least at the macro level. The Industrial Revolution forced farmers into cities to work on assembly lines. Interchangeable parts ended guilds and craftsman roles like smiths and cobblers. The computer radically changed office work, and the internet has flattened access to information.

It is worth noting that this is also a reductive understanding of the past; it’s not as if these advances didn’t have losers. The macroeconomic history of technological advancement often glosses over the very real and painful reality of humans fighting to adapt to shifts in the marketplaces of capitalism. Did every craftsman simply change their way of life with the advent of interchangeable parts? Certainly not! But these changes didn’t happen overnight, either. Many of these technologies took decades to evolve and spread across geographies. People adapted, some better than others.

A classic demonstration of inertia | Photo Credit: Volker Möhrke

I want to double down here, however. New technologies do not disrupt laborers; the decisions of human beings do. I think it is important that we stay grounded in this: technologies have the power to change labor, but they are not sentient bodies, nor do they have an innate inertia to them. Newton’s first law is very applicable here: an object at rest will stay at rest unless acted on by an outside force. We are the outside force that must act upon technology to produce changes, in a metaphysical sense. We enact laws and policies in our governments. We implement change management within our organizations. We get to decide what AI does and does not do. We, collectively, have agency in the pathway forward. AI is not inevitable, nor was democracy, the wheel, currency, flight, or telecommunication. One surrenders their freedom and agency when one responds to new technology with fatalism or nihilism.

AI needs fuel

AI is still limited by what is known and what it can absorb. AI requires fuel and maintenance. The fuel of AI is “inputs”, which I’ll grossly oversimplify as “training data”. Without this data, there is no AI. You can ask DALL·E to make an “original Van Gogh painting”, and it will do a remarkable job, but only because it has seen training data showing what a post-impressionist painting is. In this manner, while extremely impressive, there’s not really anything “intelligent” about what AI is doing. The AI of 2023 is an impressive leap in machine learning, but these tools and platforms are mirrors and parrots. They reflect back to us what already exists. Comprehension and innovation are not part of the equation in regurgitating information.

The fuel that AI runs on is the collective knowledge and expertise of society. You and I and everyone else are part of what makes AI viable as it attempts to mimic us in increasingly convincing manners. Without humans, AI would not be able to conjure human behavior from nothing. I’m reminded of the Infinite Monkey Theorem, which, in brief, suggests that an infinite number of chimpanzees pounding on an infinite number of typewriters over an infinite timeline will eventually produce a Shakespearean work, word for word. This is, of course, not “sentience” as we understand it; the chimps aren’t purposefully typing with any goal or objective. While AI may be able to receive a prompt to “write a Shakespearean play” from scratch (unlike infinite chimps), it still requires a reality where Shakespeare existed. In fact, at the risk of taking the metaphor too far, chimps possess the ability to produce outcomes without external input in a way AI does not. After all, chimps are intelligent enough to use tools in the wild. I’ll turn to thermodynamics here: without fuel (energy), AI has no potential to produce anything. This fuel has to come from an outside source, because AI is neither sentient nor autonomous.
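To put rough numbers on the monkey metaphor, here’s a toy Python sketch, entirely my own illustration (the helper names like `monkey_types` are invented for this example, not from any real AI system), of how fast the space of random keystrokes explodes and why blind, goalless typing takes so absurdly long to match even a short phrase:

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "  # a 27-key toy typewriter

def expected_attempts(target: str) -> int:
    """Expected number of uniform random strings before one matches `target`.

    Each attempt matches with probability (1/27)**len(target), so the
    expected number of attempts is 27**len(target).
    """
    return len(ALPHABET) ** len(target)

def monkey_types(target: str, rng: random.Random) -> int:
    """Count attempts until a random string of the right length matches `target`."""
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(rng.choice(ALPHABET) for _ in range(len(target)))
        if guess == target:
            return attempts

# Even a five-character phrase has 27**5 equally likely strings.
print(expected_attempts("to be"))  # → 14348907
```

Roughly 14 million tries, on average, for five characters; the count multiplies by 27 with every character added, which is the whole point of the metaphor: without an external source of structure, random output only finds Shakespeare by brute luck.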

Diagram of Behavioral Surplus | Photo Credit: Shoshana Zuboff and Karin Schwandt

As a brief aside, there is an argument to be made about who “owns” this fuel. Today, AI can freely mine most of its fuel from the internet or is fed data from curated datasets. In many cases, the humans who generated the necessary fuel aren’t compensated in any meaningful way. Shoshana Zuboff calls this “behavioral surplus” in her book The Age of Surveillance Capitalism. Behavioral surplus is the “stuff” we all create by surfing the web; it is our data footprint. For many firms, this data is the backbone of their enterprises. It is also an essential material for AI to replicate human behavior. Without “behavior” to mimic, there is nothing to create. Sadly, our regulatory bodies are woefully behind in understanding this and offering sound consumer protections.

AI requires maintenance

The maintenance of AI is extensive. These tools, like all of the internet, require massive server farms, computing power, tuning, governance, technological architecture, and energy. There are untold numbers of supporting roles and vocations needed to enable all of this work.

This means a few things for the future of labor. First and most simply, the technical support sector of telecom infrastructure will only continue to expand. It isn’t always obvious to people, but the internet is not weightless; it takes up physical space in our world in the form of servers, fiber-optic cables, and telecom networks. Second, a whole class of new jobs is going to emerge as AI becomes more ubiquitous and more advanced. Third, AI requires monitoring and intervention to ensure it stays within appropriate boundaries. For example, AI chatbots are very promising but present a huge risk to customer service if they go rogue or produce unpredictable outcomes. Last, like all technologies, if maintenance is ignored or deferred, the tool eventually fails. Knives dull over time. Batteries have to be replaced. Code becomes deprecated. Entropy exists in all systems and tools.

AI is not ethical

I’m hardly the first person to point this out, but AI is racist, ableist, sexist, homophobic, and every other terrible side of humanity you can imagine. As I said before, AI is a mirror; it simply reflects back what it sees. I found this TikTok the other day that explains it nicely, but in simple terms, if the content the AI is trained on is biased (and it is), then it produces those biased outcomes. Countless technologists, philosophers, politicians, and generally concerned people have pointed out how conscious and unconscious biases in AI have the potential to reinforce dangerous stereotypes in society.

AI failing to recognize a pixelated image of President Obama | Photo Credit: The Verge

I feel it important to caveat this assertion, however. I’m of the persuasion that most technologies are morally neutral. Gunpowder, nuclear fission, and the internal combustion engine have all provided unprecedented scientific value to humanity, but they have also unleashed untold violence. Technology often exists as a neutral object in history. Consider the invention of the wheel. Wheels have enabled countless advancements and are central to our existence as a species. Wheels have also been a mechanism of war for all of written history. The presumed “innate” nature of the epistemological history of these technologies abdicates our role in giving them meaning.

This is important to this discussion for two reasons. First, AI isn’t actually racist, ableist, sexist, and so forth; we are. Plantations and mercantile trade didn’t commit genocide in Africa and the Americas, humans did that. The cotton gin and textiles didn’t produce slavery, we did. Atomic power didn’t produce the Cold War, we did. Narcotics didn’t produce the Opioid Epidemic, we did. AI reflects the ethics of our society. If we want to ensure that AI doesn’t perpetuate violence, then we have to deliberately and purposefully address our own societal ethics and morality.

Second, AI is amoral just like all other technologies. We determine the morality of AI. Jane Bennett points out in the early chapters of her book, Vibrant Matter, that trash is only “trash” because of how we have come to understand “trash”. The only reason an idea or material takes on a morality or meaning is because of how we have come to understand it. What is the difference between trash, refuse, waste, garbage, litter, debris, scraps, and junk? How does “trash” transform into any of these other descriptions? When we make sense of AI, what are we pulling from? Are we conjuring C-3PO, Siri, The Terminator, Clippy, or HAL 9000? With all of the buzz around AI in 2023, how much of what we understand about AI is simply a product of feedback loops and groupthink? As we grasp for language to understand AI, how is that language crafting that understanding?

AI is not human

I feel pretty confident AI won’t take away our jobs, at least not in the sense that it will completely erase entire vocations overnight. It’s safe to say that AI and other similar technologies are going to continue to disrupt and augment work, but realistically, it will change our world in the same way the internet did. The World Wide Web has to be one of the most incredible achievements of mankind, and while our lives before and after have changed dramatically, things are also remarkably familiar.

Labor as a whole can’t be replaced by AI (for the foreseeable future) because AI so heavily requires human labor to produce results. Without human input, maintenance, and governance, AI effectively runs itself into the ground. All of these items are constant and ongoing tasks too; as society evolves and changes, AI needs to be shepherded to keep up.

Finally, I want to again reinforce the agency we have to choose what AI does and does not do. AI will not destroy jobs if we do not allow it to destroy them. Our leaders, politicians, technologists, and societies control what AI does and how we use it. AI is not inevitable and we are not powerless.

Informatics expert and software designer. Former scientist, urban planner, and public health researcher. The world is far too interesting for life to be dull.