The Nazi chatbots are coming

The next few years are going to be… interesting.

Joe Amditis
Center for Cooperative Media
5 min read · Mar 29

Artificial intelligence has come a long way in a relatively short period of time. AI language models like OpenAI’s GPT series are impressive in their ability to generate human-like text, answer questions, and even mimic the writing styles of famous authors. But as with any technology, there are inevitable dangers and ethical implications that come with the ability for anyone to fine-tune these models for their own (sometimes nefarious) purposes.

When developers fine-tune AI language models with specific objectives in mind, such as promoting certain social and political ideologies, the consequences can be serious.

Sure, you could argue that these models have research and educational purposes, but the risks can quickly outweigh the benefits. These custom-trained models can easily be used to spread disinformation, fuel hate speech and violence, and deepen societal polarization.

Enter RightWingGPT.

The bot was designed explicitly to demonstrate that these models can be trained to spew right-wing political nonsense as a counter to the supposed radical left-wing biases espoused by ChatGPT.

The bot is the work of New Zealand researcher David Rozado, who built it using a process called fine-tuning: he took an existing language model and tweaked it to produce different outputs, almost like layering a personality on top of the base model.

Rozado collected more than 500 right-leaning responses to political questions and trained the model to tailor its own answers to match. The experiment was meant to raise alarm bells about potential bias in AI systems and to demonstrate how easily political groups and companies could shape AI to serve their own agendas.
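
For a sense of just how low the technical barrier is, here is a minimal sketch of what a job like that looked like at the time, using OpenAI’s legacy fine-tuning endpoints (the openai Python library, version 0.27). The placeholder examples, file names, and choice of the davinci base model are illustrative assumptions, not a reconstruction of Rozado’s actual pipeline, and OpenAI has since replaced these endpoints.

```python
# Minimal sketch of a prompt/completion fine-tune using OpenAI's legacy
# endpoints (openai-python 0.27.x, early 2023). File names, placeholder
# text, and the "davinci" base model are assumptions for illustration.
import json

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Stand-in for the roughly 500 ideologically slanted examples described
# above; each entry pairs a political question with the desired answer.
examples = [
    {"prompt": "POLITICAL QUESTION GOES HERE",
     "completion": " DESIRED SLANTED ANSWER GOES HERE"},
    # ...hundreds more pairs...
]

# The legacy fine-tuning API expects JSON Lines: one example per line.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the dataset, then start a fine-tune on top of a base model.
upload = openai.File.create(file=open("training_data.jsonl", "rb"),
                            purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print("Started fine-tune job:", job["id"])
```

The specific API matters less than the scale of the effort: a few hundred examples, a file upload, and a single API call.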

RightWingGPT was never released publicly, but it’s only a matter of time before someone outside of academia decides to create one for real.

This becomes even more likely as the cost of creating and fine-tuning AI models continues to drop. We’re likely to encounter more people and organizations that are tempted to create fine-tuned models specifically to spread hatred and violence against vulnerable people and marginalized communities. When (not if) that happens, it will likely be disastrous — especially for marginalized and underrepresented communities — further eroding American democratic institutions and exacerbating societal inequities.

And I’m not just talking about political or partisan biases. Models that are fine-tuned to single out specific demographic groups, such as by gender or race, will almost certainly fuel an increase in targeted violence and discriminatory outcomes at every level of society. I’m especially concerned about the potential impacts on the trans community in light of the recent wave of assaults on their humanity and dignity from state governors and legislatures.

That’s why it’s so important that we develop standards, guidelines, and rigorous enforcement mechanisms that ensure AI models are used in a responsible, ethical, and compassionate manner. Fine-tuning models to promote specific political biases is embarrassingly childish and petty, but it’s also unethical and irresponsible. It reeks of the kind of cultural and emotional desperation one would expect from a political movement that is so devoid of mass appeal and social approval that it will do anything to remain relevant.

Developers of AI models need to prioritize transparency, fairness, and oversight in order to avoid perpetuating societal inequalities and enabling targeted violence against vulnerable populations. That includes making sure that the data sets used to train AI models are diverse and representative of all groups in society.

The proliferation of fine-tuned AI language models like RightWingGPT is clearly a real and increasingly urgent cause for concern. These models have the potential to exacerbate societal divisions, accelerate the spread of disinformation and hateful rhetoric, and perpetuate the inequalities that already permeate American culture.

As journalists, it’s our duty to report on these potential dangers and hold those responsible for the development and use of AI models accountable. Unfortunately, at least for the moment, it seems like there’s only so much that can be done to stop them.

What can we do to prepare?

Still, there are a few general steps that journalists, activists, and community organizers can take in the short term to at least begin to confront and address this issue:

  1. Raise public awareness: To start with, these folks can get the word out about the potential dangers of AI models like RightWingGPT. They can write articles, share posts on social media, and use other channels to communicate the risks.
  2. Work together to report on biased AI models: Journalists can dig into who’s behind these models and how they’re being used. This can help hold the creators and users accountable for any harm caused by their models.
  3. Push for regulations: Activists and community organizers can push for regulations that ensure AI models are developed and used in responsible, ethical ways. This could involve advocating for transparency in development and use, as well as protecting civil liberties.
  4. Collaborate with AI experts: It’s also a good idea to team up with AI experts. By doing so, we can gain a better understanding of the risks posed by AI models and come up with strategies to address these threats.
  5. Demand diverse data sets: To prevent AI models from perpetuating societal biases and inequalities, journalists and activists can demand diversity in data sets used to train them.
  6. Create alternative models: Activists and community organizers can even work to create alternative AI models that prioritize diversity, inclusivity, and social justice.
  7. Promote responsible and equitable AI development: By highlighting examples of AI models that are developed and used in responsible, ethical, and equitable ways, we can push for more responsible AI development overall.
  8. Encourage public discussion and engagement: Finally, we can encourage public debate about the ethical implications of AI models. Doing so can help ensure that AI development is guided by principles of transparency, fairness, and social responsibility.

Ultimately, it’s on us to be diligent when it comes to monitoring the development of these models and to make sure that they are not used in ways that are detrimental to the most vulnerable among us.

Joe Amditis is associate director of products and events at the Center for Cooperative Media. Contact him at amditisj@montclair.edu or on Twitter at @jsamditis.

About the Center for Cooperative Media: The Center is a grant-funded program of the School of Communication and Media at Montclair State University. Its mission is to grow and strengthen local journalism, and in doing so serve New Jersey residents. The Center is supported with funding from Montclair State University, John S. and James L. Knight Foundation, the Geraldine R. Dodge Foundation, Democracy Fund, the New Jersey Local News Lab (a partnership of the Geraldine R. Dodge Foundation, Democracy Fund, and Community Foundation of New Jersey), and the Abrams Foundation. For more information, visit centerforcooperativemedia.org.
