The Silent Revolutionaries in the Ideological Battlefield of Politics

Philipp Wirth
Greetings from the Frontier
6 min read · Mar 31, 2023

The impact of technology on our society has been one of the defining features of the modern era. As machines and automation replace human labour, we see a transformation in the nature of work itself. But the implications of this shift extend far beyond economic concerns. With the rise of AI, we enter the philosophical realm on different levels. Who controls the current AI models, and what hidden agendas might lurk beneath their programming? Should we allow the development of autonomous general intelligence, and what are the consequences of doing so? And as AI increasingly influences our politics and social interactions, what kind of society are we building for ourselves?

What’s going on?

Three months ago we wrote a piece about ChatGPT. Ever since then, the developments around this new technology have been lightning-fast. In March, less than five months after the initial launch of ChatGPT, OpenAI published its successor, GPT-4. ChatGPT is also the fastest-growing consumer app ever, reaching an estimated 100 million active users within two months of its launch. For comparison, Instagram needed 2.5 years and the internet itself 7 years. With the rapid adoption of AI tools, warning voices are growing louder and louder. A group of researchers and tech CEOs, including Elon Musk, only recently advocated a slowdown in AI development. While said tech CEO could be seen as a somewhat controversial persona himself, the signatories of this open letter are not fundamentally wrong.

While we technically understand how such Large Language Models (LLMs) function (after all, we built them), we might not yet be fully aware of how they develop themselves. "Develop themselves?" you might wonder. There have been allegations very recently that Google's Bard was trained on responses from ChatGPT. Of course, Google denies this. But in the world of Machine Learning and AI, this is not an uncommon process. Ultimately, it was playing against versions of itself that made AlphaGo the best Go player in the world. AlphaGo, by the way, was developed by DeepMind, a subsidiary of Google.
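The self-play idea behind AlphaGo can be sketched in miniature in plain Python. This is a deliberately tiny, hypothetical example, not DeepMind's actual method (which combines deep neural networks with Monte Carlo tree search): here, a single policy plays both sides of one-pile Nim and reinforces the moves that led to wins.

```python
import random

def self_play_nim(episodes=20000, pile=10, seed=0):
    """Learn one-pile Nim (take 1-3 stones; whoever takes the last stone
    wins) purely by playing against oneself, AlphaGo-style in miniature."""
    rng = random.Random(seed)
    wins, plays = {}, {}   # per (stones, move): win and play counts

    def choose(stones, explore):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if explore and rng.random() < 0.2:
            return rng.choice(moves)          # occasionally explore
        # otherwise pick the move with the best empirical win rate so far
        return max(moves, key=lambda m: wins.get((stones, m), 0)
                                        / (plays.get((stones, m), 0) + 1))

    for _ in range(episodes):
        stones, player, history = pile, 0, []
        while stones > 0:
            m = choose(stones, explore=True)
            history.append((player, stones, m))
            stones -= m
            winner = player                    # whoever moved last wins
            player = 1 - player
        for p, s, m in history:                # credit the winner's moves
            plays[(s, m)] = plays.get((s, m), 0) + 1
            if p == winner:
                wins[(s, m)] = wins.get((s, m), 0) + 1
    return choose                              # the trained greedy policy

policy = self_play_nim()
```

After enough games, the greedy policy typically rediscovers Nim's optimal strategy of leaving the opponent a multiple of four stones, without ever being told what good play looks like; the same principle, scaled up enormously, is what let AlphaGo surpass its human training data.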

But even if LLMs, or generative AI models in general, aren't directly trained on interactions with themselves, GPT-5 or GPT-6 may well contain a lot of data created by their predecessors. OpenAI trains its models on internet data, which will inevitably contain more and more of the models' own texts. Texts from a generative AI that we know to be biased. As the data sets these models are trained on typically reflect the biases of our society, this might not even be the AI's fault. Nevertheless, the biases are there. In addition to these biases, there are many more risks associated with this technology, risks that even one of its creators acknowledges: AI could be used to write harmful code or to spread large-scale misinformation.
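Why recycling a model's own outputs is risky can be illustrated with a toy simulation (purely hypothetical, and nothing like how OpenAI actually trains its models): if each "generation" of a model is fitted only to samples drawn from the previous generation, the learned distribution tends to drift and narrow over time.

```python
import random
import statistics

def generational_drift(generations=400, sample_size=10, seed=42):
    """Toy 'model collapse' simulation: each generation of a model is
    fitted only to outputs of the previous generation.  Here the 'model'
    is just a Gaussian with a learned mean and standard deviation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the true data distribution
    stds = [sigma]
    for _ in range(generations):
        # the next model only ever sees its predecessor's outputs
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        stds.append(sigma)
    return stds

stds = generational_drift()
```

In this toy setting the fitted standard deviation typically shrinks toward zero over the generations: each model loses some of the diversity of the original data, a statistical analogue of an LLM being trained on ever more of its predecessors' text.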

Besides that, AI also affects the labour market. In a recent report, Goldman Sachs predicted that up to 300 million jobs might be influenced by AI in one way or another. While many of these jobs might only change in small part, the report states that up to a quarter of current work could be replaced by AI.

This report instantly reminded the author of a thought experiment conducted during his undergraduate studies, which suddenly seems much more timely than in 2018. So, let’s take a walk down the author’s memory lane.

AI as a human labour substitute — cause for political radicalisation?

With the loss of jobs comes the question of what happens to the suddenly unemployed. While some argue that many of the jobs people will eventually work in don't even exist yet, it is safe to assume that advances in AI and robotics will result in higher unemployment. When this thought experiment was conducted, some of the research available at the time pointed out that the impact of task automation driven by ML, AI and robotics is felt particularly in lower-skilled jobs. Examples range from advanced robotics in factories to self-service checkouts in supermarkets (Lordan & Neumark, 2018). In another paper, Graetz and Michaels (2015) showed that improvements in AI and robotics have a huge impact on economic growth. Yet despite the rising GDP, they found that working hours for low- and middle-skilled workers decreased over the same period.

This perspective has changed. In fact, LLMs are more likely to hit higher-skilled jobs at a much faster pace. Information-processing workers such as accountants and public relations specialists may need to find new occupations. And let's not forget legal personnel, given the scores GPT-4 achieved on the LSAT and the bar exam.

While there has been much research on the economic effects of AI, few studies have linked these developments to politics, for instance the general public's voting behaviour. According to economic voting theory, macro- and microeconomic developments, such as employment rates and inflation, can directly influence voting behaviour (Kiewiet, 1981). Economic factors feature in many election campaigns, across parties from the far right to the far left (Inglehart and Norris, 2016). And we have certainly seen examples of this influence in the past. So much so that researchers found evidence of the economy's impact on voters in US presidential elections going all the way back to George Washington (Guntermann et al., 2021).

So do we have to worry about a repetition of the 1930s, when economic depression and mass unemployment in Germany helped fuel the rise of a radical party that would bring terror over the world in the following years?

1984 in 2023

To answer the question raised above: nobody knows. But there are certainly ways to mitigate the impact of AI, at least in terms of unemployment and economic strain. With rising automation, a universal basic income (UBI) may be the only way forward. If automation and task optimisation raise economic output, the resulting profits could be used to fund a UBI. At the very least, it is time to thoroughly discuss the pros and cons of UBIs in light of these new and rapid developments.

However, that still leaves the possibility of individuals influencing elections by using LLMs to spread false information, a fear that Sam Altman rightfully raised. This is much harder to tackle. Countermeasures could include software that detects and removes false information or AI-generated texts. Which brings us straight back to one of the questions raised at the start, about hidden agendas in AI models. Some right-wing commentators and politicians have already accused ChatGPT of being woke. Whether or not that is true, it is ultimately up to an algorithm's creators which data the models are trained on, and which biases are inherent to their AI.

Lastly, what about a sentient AI that suddenly decides democracy is bad and acts to undermine or overturn existing political systems? Sounds like science fiction? After all, a former Google engineer claims that LaMDA, the AI that powers Google's Bard, is already sentient. And OpenAI's GPT-4 apparently shows sparks of Artificial General Intelligence.

Given the risks associated with these Large Language Models, it is somewhat understandable that people call for a slowdown in development. In reality, this probably won't happen without the intervention of governments around the world. And it might not be necessary if we stick to a founding principle of AI research: open-source software and public research. Unfortunately, OpenAI has decided to go a different way, citing the dangers of everyone having access to a potent AGI. The question is whether this is worse than having to fully trust the single entity that owns the technology.

AI won’t disappear again. And despite the somewhat cautious tone of this article, the author is an AI advocate and a frequent user for various tasks such as financial research, asset pricing and lecturing. But both more research and new policies are needed to steer the development of AI in an ethical and sustainable direction. Used correctly, new technologies will benefit our society and drive its progress ever further.
