AI and the Neo-Luddites?

Sunil Manghani
Published in Electronic Life
13 min read · Apr 5, 2023


DALL-E 2 generated feature image.

With the rapid rise of new technologies, particularly AI, big data, and surveillance capitalism, is a new form of Luddism inevitable? This article considers some of the emerging ‘signals’, along with reasons why a Neo-Luddite movement would likely be short-lived.

Signals

Something is coming? In her bestselling book Signals (subtitled ‘How everyday signs can help us navigate the world’s turbulent economy’), Pippa Malmgren has a knack for showing how seemingly small and obscure events can serve as signals of larger political and economic trends. A memorable example she gives is the cover of Vogue magazine (from June 2009), which featured the supermodel Natalia Vodianova in the nude. ‘What is wrong with this picture?’, Malmgren wondered, only to realise: ‘One of the world’s leading fashion magazines has a cover with absolutely no fashion. In fact, it showed no clothes at all’. From this fleeting ‘signal’, Malmgren diagnosed the following: ‘the fashion industry had lost its old customer base — the young who were receiving unsolicited credit cards with large borrowing balances in the mail. Once the financial crisis hit, the fashion industry became aware that it had no idea who its new customer would be’.

In the contemporary context of the commercial development and growth of AI tools, including, for example, the rapid take-up of ChatGPT, consider some of the following recent ‘signals’:

(1) Elon Musk joins call for pause in creation of giant AI ‘digital minds’. This is the headline from an article in The Guardian, which explains the following:

More than 1,000 artificial intelligence experts, researchers and backers have joined a call for an immediate pause on the creation of “giant” AIs for at least six months, so the capabilities and dangers of systems such as GPT-4 can be properly studied and mitigated.

The demand is made in an open letter signed by major AI players including: Elon Musk, who co-founded OpenAI, the research lab responsible for ChatGPT and GPT-4; Emad Mostaque, who founded London-based Stability AI; and Steve Wozniak, the co-founder of Apple.

Its signatories also include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics including the cognitive scientist Gary Marcus. (The Guardian)

The suggestion of a mere six-month hiatus raises the question of whether there is an ulterior motive, i.e. simply to slow down OpenAI and allow the competition to catch up. For all the hype about ChatGPT, for example, the furore is not over whether or not it is dangerously error-prone, but that it might be of serious commercial value. ChatGPT clearly makes mistakes and invents information. It is built that way, i.e. ‘merely’ based on probabilistic maths for text prediction. It is not of the heights of the onboard computer ‘Eddie’ from The Hitchhiker’s Guide to the Galaxy, in possession of a ‘Genuine People Personality’, and certainly not of the order of ‘Deep Thought’, the computer created by a pan-dimensional, hyper-intelligent species of beings. Instead, ChatGPT’s real claim to fame may be something more prosaic, akin to the arrival of the spreadsheet.
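Before turning to that comparison, the ‘probabilistic maths’ point can be made concrete with a minimal sketch in Python of next-token sampling. The context string and the probabilities below are invented purely for illustration; this is a toy picture of the general technique, not a description of OpenAI’s actual models:

```python
import random

# Toy next-token distribution: given a context, assign a probability to
# each candidate continuation (values invented for illustration only).
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # correct, but not certain
        "Sydney": 0.35,     # plausible-sounding error
        "Melbourne": 0.10,
    }
}

def sample_next_token(context: str) -> str:
    """Sample a continuation in proportion to its assigned probability."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("The capital of Australia is"))
# Some of the time this prints "Sydney": fluent, confident, and wrong.
```

The fluency and the errors come from the same mechanism: the model has no separate notion of truth, only of likelihood.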

As John Naughton argues, ‘at best, [ChatGPT is] an assistant, a tool that augments human capabilities’. But, importantly, he notes, ‘it’s here to stay’, going on to explain:

In that sense, it reminds me, oddly enough, of spreadsheet software, which struck the business world like a thunderbolt in 1979 when Dan Bricklin and Bob Frankston wrote VisiCalc, the first spreadsheet program, for the Apple II computer, which was then sold mainly in hobbyist stores. One day, Steve Jobs and Steve Wozniak woke up to the realisation that many of the people buying their computer did not have beards and ponytails but wore suits. And that software sells hardware, not the other way round. (John Naughton, ‘The ChatGPT bot is causing panic now — but it’ll soon be as mundane a tool as Excel’, The Guardian, January 2023)

(2) The rise of the prompt engineer and avatar workers: Zooming forward to today, to 2023, what is ever more apparent is the critical importance of AI not only to business but to national infrastructure. As one Wired article puts it, ‘building government artificial intelligence capability is as important as roads and rail’. The British government’s recent white paper, for example, makes an urgent case, suggesting, in the words of Gaby Hinsliff, that the country ‘has only a brief window of around a year to get ahead in that race, and should adopt only the lightest of regulatory touches for fear of strangling the golden goose’.

Again, however, to evoke Pippa Malmgren’s way of reading fleeting signs as major signals, we might consider the repeated references to so-called prompt engineers. In the same way in which social media led to any number of new jobs (typically in marketing), the availability of new AI models is creating the need for a new kind of worker, the ‘prompt engineer’: one who is able to create and adapt prompts that most effectively instruct AI models to generate specific text output. Similar to the nascent rise of the website designer in the late 1990s, the prompt engineer is in ‘hot’ demand. A recent Bloomberg article, for example, led with the headline: ‘$335,000 Pay for ‘AI Whisperer’ Jobs Appears in Red-Hot Market’, suggesting how ‘fast-growing apps have created a seller’s market for anyone — even liberal arts grads — capable of manipulating its output’.
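To give a flavour of what the work involves, the sketch below assembles a structured prompt from parameters. Everything here (the function name, the fields, the wording) is hypothetical; the point is simply that a prompt engineer iterates on exactly this kind of structure and phrasing in order to steer a model’s output:

```python
def build_prompt(role: str, task: str, constraints: list[str], example: str) -> str:
    """Assemble a structured prompt for a text-generation model.
    (Field names and phrasing are illustrative, not any product's API.)"""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_text}\n"
        f"Example of the desired tone:\n{example}\n"
    )

prompt = build_prompt(
    role="an experienced copywriter",
    task="Write a 50-word product description for a reusable water bottle.",
    constraints=["Use plain English", "Avoid superlatives", "Address the reader as 'you'"],
    example="You reach for it without thinking; it simply works.",
)
print(prompt)  # This string would then be sent on to a model such as GPT-4.
```

Small changes to the role, constraints or example can produce markedly different outputs, which is precisely why the tweaking of such templates is now being paid for as a skill in its own right.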

There has also been an explosion of avatar generators, which, when combined with AI-generated video techniques, is prompting the idea of avatar workers. The company D-ID, for example, claims its software ‘can generate photorealistic presenter videos by combining images with text at the click of a button, using AI for training materials, corporate communications, social content and more’. This is one of many similar startups offering the ‘dream’ of your own avatar trained on your own idiosyncrasies, leading to the prospect of ‘contracting out’ your own self. It is no real stretch to see how this idea ties in with the shifting work patterns following the mass ‘work from home’ of the Covid pandemic. The AI-generated backdrop that transformed our drab domestic settings into anywhere in the world is now being taken further with the prospect of transforming our own image.

(3) TikTok CEO Chew Shou Zi testifies before the House Energy and Commerce Committee on Capitol Hill in Washington, DC (23 March 2023).

With a reported audience of over 150 million Americans (almost half of the country’s total population) and numerous bans in Western countries on its use on government-issued devices, the social media platform TikTok has prompted a fresh wave of anxieties about social media and AI technology, notably concerns about its ownership, advancement, and deployment. In this case, because TikTok’s parent company, ByteDance, is a Chinese company, the context is one of rising nationalist and geopolitical tensions. It is easy to view the US congressional hearing, for example, as a new kind of McCarthyism.

The era of McCarthyism (in the 1950s, a time of heightened concern about the ‘spread’ of communism around the world) was marked by fear, suspicion, and paranoia. It had a chilling effect on political discourse and civil liberties in the United States. It is often seen as a cautionary tale about the dangers of political repression and the importance of defending democratic values, such as freedom of speech, association, and the right to due process. A ban on TikTok in the USA could potentially echo McCarthyism, as it would involve targeting a specific foreign-owned company and accusing it of potential espionage and national security risks without concrete evidence. Such a ban could be perceived as an attempt to suppress free speech and undermine democratic values in the name of national security. Furthermore, a ban on TikTok could have broader implications for the tech industry and global trade, potentially leading to retaliation by China against American companies and products. This could have a destabilizing effect on the global economy and lead to further political tensions between the two countries.

The success of TikTok is largely attributed to its addictive algorithm, which functions to increase the ‘interest-graph’, i.e. to bring content onto your feed that you are interested in. This is in contrast to the first wave of social media platforms, such as Facebook, Twitter and Instagram, which are based on increasing the ‘social-graph’, i.e. on increasing your social connections (a toy sketch of this contrast follows the quotation below). Whether or not TikTok is banned (for political reasons), the Pandora’s box of the ‘interest-graph’ has been opened. It is worth remembering that it is now almost 20 years since ‘a socially awkward young computer science student set up a website for rating “hot” women’, which was duly banned:

Facemash, as Mark Zuckerberg called his creation, was shut down within days. But this crass teenage experiment was still, in retrospect, the first faltering step down a road to something even he couldn’t possibly have foreseen at the time: a social media phenomenon now accused of unwittingly helping to polarise society, destabilise the democratic process, fuel hate speech and disseminate dangerous conspiracy theories around the globe, despite what providers insist have been their best attempts to stamp out the fire. (Gaby Hinsliff, ‘This gung-ho government says we have nothing to fear from AI. Are you scared yet?’, The Guardian, 31 March, 2023).
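Returning to the contrast drawn above, the difference between the two kinds of feed can be sketched as two toy ranking functions. The posts, scores and weightings are invented for illustration; real recommender systems are vastly more complex:

```python
# A 'social-graph' feed ranks by who posted; an 'interest-graph' feed ranks
# by what the post is about. All data below is invented for illustration.

posts = [
    {"id": 1, "author": "close_friend", "topic": "holiday photos"},
    {"id": 2, "author": "stranger",     "topic": "woodworking"},
    {"id": 3, "author": "colleague",    "topic": "office news"},
]

# Social graph: strength of the user's connection to each author.
connection_strength = {"close_friend": 0.9, "colleague": 0.6, "stranger": 0.0}

# Interest graph: strength of the user's engagement with each topic.
topic_affinity = {"woodworking": 0.95, "holiday photos": 0.3, "office news": 0.2}

def social_graph_rank(items):
    """Facebook-style: surface posts from the people you know best."""
    return sorted(items, key=lambda p: connection_strength[p["author"]], reverse=True)

def interest_graph_rank(items):
    """TikTok-style: surface posts about the things you engage with most."""
    return sorted(items, key=lambda p: topic_affinity[p["topic"]], reverse=True)

print([p["id"] for p in social_graph_rank(posts)])    # -> [1, 3, 2]
print([p["id"] for p in interest_graph_rank(posts)])  # -> [2, 1, 3]
```

On the second ranking, a stranger’s post can top the feed purely because of what it is about, which is precisely what makes the ‘interest-graph’ both compelling and addictive.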

Neo-Luddism?

What might these various (and not necessarily coherent) events and signs all add up to? What do they signal for the near future? One possibility is a revival of the Luddites. In the early 19th century, a group of textile workers in England began smashing the newly invented power looms that were threatening their jobs. These workers, known as Luddites, feared that the machines would render their skills obsolete and leave them unemployed. The Luddites’ rebellion was short-lived, and their efforts were ultimately unsuccessful, but their legacy lives on in the term ‘Luddite’, which is now used to describe anyone who is skeptical of new technology.

Today, ‘Luddite’ is used largely as a derogatory term for anyone with a dislike of technology and/or the introduction of new technology. Steven E. Jones’ book Against Technology offers a considered reading of what he terms Neo-Luddism, as a critique of modern technology and its impact on society. Importantly, Jones reminds us that technology is not neutral, but rather has inherent values and biases that shape the way it is designed and used. Despite his criticism of technology, Jones does not advocate a complete rejection. Instead, he argues for a more thoughtful and deliberate approach to technological development and use, one that takes into account the ethical and social implications of technology. He argues, for example, that the technological revolution led to a concentration of wealth and power, and that this trend is only likely to continue. Written in 2006, the book could hardly have predicted the more recent developments and embedding of AI technologies, but Jones’ concerns have arguably only been amplified.

The question is, then, as artificial intelligence becomes increasingly prevalent, are we likely to witness a new form of Luddism? While the original Luddites were concerned about the loss of manual labour jobs, today’s Neo-Luddites are worried about the impact of AI on a range of industries, from manufacturing to finance to healthcare. Concerns about the future of work and the displacement of jobs by automation may well give rise to a new kind of ‘machine breaker’.

There are several key trends that echo Luddism in the contemporary context of AI and smart technologies:

  1. Job displacement: As AI and automation technologies continue to advance, there is growing concern about the potential loss of jobs, particularly in industries such as manufacturing, transportation, and retail. This could lead to protests and resistance from workers who feel threatened by the rise of machines and the prospect of unemployment.
  2. Surveillance and privacy: The widespread use of smart technologies and big data analytics has raised concerns about the erosion of privacy and civil liberties. This could lead to resistance from activists and civil society organizations who are concerned about the growing power of governments and corporations to monitor and control individuals.
  3. Inequality and power imbalances: The winners and losers in the AI and smart technologies landscape are likely to be divided along socioeconomic and geographic lines. This could lead to protests and resistance from marginalized communities who feel excluded from the benefits of these technologies and who are disproportionately impacted by their negative consequences.
  4. Environmental impact: The development and deployment of smart technologies could have significant environmental impacts, particularly in terms of energy consumption and resource depletion. This could lead to resistance from environmental groups who are concerned about the sustainability of these technologies and their impact on the planet.

Governments, big business, and third sector organizations are likely to have different reactions to these trends depending on their interests and priorities. Some may seek to embrace the technologies as a means of driving economic growth and increasing efficiency, while others may push back against their negative consequences.

Not a Bang, but a Whimper…

The Luddite movement emerged in the context of profound economic and political change during the Industrial Revolution. New technologies such as power looms and spinning frames were being introduced into the textile industry, enabling manufacturers to produce textiles more quickly and cheaply than ever before. This led to a surge in demand for textiles and a corresponding increase in the number of textile factories and mills across the country.

Nonetheless, the Luddites were a loose affiliation of skilled textile workers who shared a common aim: to resist the introduction of these new technologies by attacking factories and mills and destroying the machines. Yet, ultimately, the Luddite rebellion was short-lived, for several reasons:

  1. Lack of unity and organization: The Luddite movement was made up of loosely connected groups of workers who operated independently of each other. This lack of centralized leadership made it difficult to sustain a coordinated campaign of resistance over an extended period.
  2. Repression by the authorities: The government and factory owners responded to the Luddite protests with a heavy-handed approach, using military force and harsh laws to suppress the movement. This made it increasingly difficult for the Luddites to carry out their attacks without being caught and punished.
  3. Limited resources: The Luddites were primarily rural workers who lacked the resources and support to sustain a long-term campaign of resistance. They were often poorly armed and ill-equipped to take on the powerful factory owners and their allies in government.
  4. The success of industrialization: Over time, the benefits of industrialization became more widely recognized, as new technologies improved productivity and created new job opportunities. This made it harder for the Luddites to gain support for their cause, as many workers saw the factories as a source of economic opportunity rather than a threat to their livelihoods.

Taken together, these factors contributed to the relatively short-lived nature of the Luddite rebellion, which lasted for only a few years before fading away. However, the movement had a lasting impact on British society, raising important questions about the relationship between technology, work, and social justice that continue to resonate to this day.

If we consider each of the factors above in our own contemporary context, we might find both commonalities and differences. Recent debates about AI and related technologies have yet to yield any kind of consensus, and no discernible ‘groups’ have emerged either for or against the technology. Having said that, the very technologies of the Internet and social media which have fuelled the Big Data ‘revolution’ underpinning the statistical turn in AI development (enabling, for example, the deep learning processes of large language models) are predicated on connectivity and have given rise to a networked society.

To date, repression from authorities looks weak. Rapid advances in AI development seem to have taken place either in a vacuum of regulation or with the active promotion of the authorities. Previously, the computing power required for AI development meant that much of the work was underpinned by governmental support, which still persists, yet the real energies have long been shifting to the corporate setting. The massive wealth of entities such as Google, Microsoft, Facebook etc. has altered the complexion of AI research (NB. Microsoft is a key funder of OpenAI). Elon Musk’s signature on a letter calling for a pause in research is surely more about corporate risk management than a genuine attempt to ‘repress’.

While the original Luddites were poorly equipped, contemporary workers have access to huge computing capacities. Yet, equally, the available devices (likely necessary to perform a rebellion) are all owned by the same few corporations. Away from the high-profile reports of AI, such as press about ChatGPT, self-driving cars, or life-like robots, a great deal of new technology is already ubiquitous, deeply embedded in many everyday tools and actions (not least in smartphones, which in turn are reportedly used by 66% of the world’s population).

The final problem for any aspiring Luddite is the already inbuilt success of industrialization. New technologies are immediately paired with normative accounts of ‘improved productivity’. The ‘success’ of ChatGPT is not simply its record-breaking take-up among individual users (estimated to have reached 100 million monthly active users just two months after launch); it is more how the licensing of the software is proving commercially viable. Ironically, any newly forming band of Neo-Luddites would likely need to use AI to break it!

Coda

It is worth recalling the deeper, philosophical history of Luddism. Eric Hobsbawm’s article ‘The Machine Breakers’ provides a nuanced analysis of the historical Luddite movement and highlights the dangers of relying on stereotypes. The Luddites were not simply anti-technology or anti-progress; rather, they were protesting against the negative impact that the new machines were having on their livelihoods and communities. As Hobsbawm notes, the Luddite movement was not an isolated incident, but rather part of a broader social and economic context; the Luddites were skilled artisans who found themselves driven into destitution and starvation by the development of the factory system. Put another way, the Luddites were not the enemies of machines as such, but of the machines which were driving them into starvation.

Ultimately, the factors that lead to resistance against new technologies are complex and multifaceted, reflecting broader economic, social, and political trends. However, as in the case of the Luddites, these resistance movements often reflect the deep-seated fears and anxieties of workers and communities who feel that their interests are being ignored or undermined by those in power.

Over the next five years, we are likely to see continued debates and conflicts around these issues, as different stakeholders seek to shape the development and deployment of AI and smart technologies. The stakes are high. The technologies have the potential to transform the economy, society, and the environment in profound ways. The challenge will be to ensure that their benefits are distributed equitably and their negative impacts are minimised.

The various signals currently circulating in public discourse are not immediately obvious, nor are they specifically anti-technology or anti-progress. Yet, arguably, the signals are of a nascent sense of ‘protest’. Just as Hobsbawm reminds us that the Luddites were not opposed to machines in principle, but rather to those machines that were threatening their livelihoods and communities, we will likely start to see opposition not to software in principle, but to particular instances of software; opposition, then, to how, and by whom, new technologies are deployed in particular cases.


Sunil Manghani
Electronic Life

Professor of Theory, Practice & Critique at University of Southampton, Fellow of Alan Turing Institute for AI, and managing editor of Theory, Culture & Society.