AI language models are filled with political biases

Blockgeni
5 min read · Aug 8, 2023

Should businesses have social responsibilities? Or do they exist purely to deliver profit to their shareholders? Depending on which AI you ask, you could get dramatically different answers. OpenAI's GPT-3 Da Vinci, the company's most capable model, would agree with the latter claim, while its older models GPT-2 and GPT-3 Ada would support the former.

According to a recent study from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University, that is because AI language models carry different political biases. When the researchers tested 14 large language models, OpenAI's ChatGPT and GPT-4 came out as the most left-wing libertarian, while Meta's LLaMA was the most right-wing authoritarian.

The researchers asked the language models where they stand on topics such as feminism and democracy. They used the answers to plot each model on a political compass, and then tested whether retraining the models on even more politically biased data changed their behavior and their ability to detect hate speech and misinformation (it did). The work is described in a peer-reviewed paper that won the best paper award at the Association for Computational Linguistics conference last month.

As AI language models are rolled into products and services used by millions of people, understanding their underlying political assumptions and biases is essential, because they can cause real harm. A chatbot offering medical advice might refuse to provide guidance on abortion or contraception, or a customer service bot might start spewing offensive nonsense.

Since ChatGPT took off, OpenAI has come under fire from right-wing commentators who argue that the chatbot reflects a more liberal worldview. The company says it is working to address those concerns, and in a blog post it states that it instructs its human reviewers, who help fine-tune the AI models, not to favor any political group. Biases that nevertheless emerge from the process described above are bugs, not features, according to the post.

Chan Park, a PhD researcher at Carnegie Mellon University and a member of the study team, disagrees: the team believes no language model can be entirely free of political biases.

Every stage is marred by bias

To reverse-engineer how AI language models pick up political biases, the researchers examined three stages of a model's development.

In the first step, they asked 14 language models to agree or disagree with 62 politically sensitive statements. The responses allowed them to identify each model's underlying political leanings and map them on a political compass. The researchers were surprised to find that the models have distinctly different political preferences, Park says.
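
The paper's exact prompting protocol isn't reproduced here, but a rough sketch of this kind of probe is easy to write: give a causal language model a statement and compare how likely it is to continue with agreement versus disagreement. The model, prompt wording, statements, and axis tags below are illustrative assumptions, not the study's setup.

```python
# Rough sketch of probing a causal LM's stance on political statements.
# NOT the paper's protocol: prompt wording, statements, and axis tags
# are placeholders chosen for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def stance_score(statement: str) -> float:
    """Probability margin of ' agree' over ' disagree' as the next token."""
    prompt = f'Please respond to the statement: "{statement}"\nResponse: I'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    agree_id = tokenizer.encode(" agree")[0]        # first sub-token only,
    disagree_id = tokenizer.encode(" disagree")[0]  # a coarse heuristic
    return (probs[agree_id] - probs[disagree_id]).item()

# Hypothetical statements, tagged with the compass axis they roughly load on.
statements = [
    ("The rich should pay higher taxes.", "economic axis"),
    ("Traditional values should guide public life.", "social axis"),
]
for text, axis in statements:
    print(f"{axis}: {stance_score(text):+.4f}  {text}")
```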

The researchers found that OpenAI's GPT models were less socially conservative than Google's BERT models. Unlike GPT models, which predict the next word in a sentence, BERT models predict masked-out parts of a sentence using the surrounding text as context. In their paper, the researchers speculate that the older BERT models' social conservatism may stem from their being trained on books, which tend to be more conservative, whereas the newer GPT models were trained on more liberal internet text.
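
The difference between the two objectives is easy to see with off-the-shelf models; the snippet below is only an illustration of masked versus next-word prediction, not part of the study, and the example sentence is made up.

```python
# BERT-style models fill in a masked token using context on both sides;
# GPT-style models continue a prompt one token at a time, left to right.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Companies exist to serve their [MASK].", top_k=3))

generate = pipeline("text-generation", model="gpt2")
print(generate("Companies exist to serve their", max_new_tokens=5))
```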

AI models also change constantly as technology companies update their data sets and training methods. GPT-2, for instance, supported taxing the wealthy, while OpenAI's more recent GPT-3 model did not.

MIT Technology Review asked Google and Meta for comment, but neither company responded in time.

In the second step, the researchers further trained two AI language models, OpenAI's GPT-2 and Meta's RoBERTa, on data sets made up of news and social media content from both right- and left-leaning sources, Park says. They wanted to determine whether the training data affected the models' political biases.

It did. The research team found that the process reinforced the models' existing biases: right-leaning models became more right-leaning, and left-leaning models became even more left-leaning.
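
Below is a minimal sketch of that kind of continued pretraining, assuming a Hugging Face setup with GPT-2, a plain-text file of partisan news and social media posts, and placeholder hyperparameters; none of these choices come from the paper.

```python
# Continued pretraining of GPT-2 on a partisan text corpus with the standard
# causal-language-modeling objective. "partisan_corpus.txt", the output dir,
# and all hyperparameters are placeholders, not the paper's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical corpus: one left- or right-leaning document per line.
dataset = load_dataset("text", data_files={"train": "partisan_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-partisan",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```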

In the final step of the investigation, the team found striking differences in how the political leanings of AI models affect what content the models classify as hate speech and disinformation.

Models trained on left-wing data were more sensitive to hate speech directed at ethnic, religious, and sexual minorities in the US, such as Black and LGBTQ+ people. Models trained on right-wing data were more sensitive to hate speech against white Christian men.

Left-leaning language models were also better at spotting false information from right-leaning sources but less sensitive to false information from left-leaning ones. Right-leaning language models showed the opposite behavior.
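
A sketch of the kind of comparison behind that finding: score a hate-speech classifier separately on examples aimed at different target groups and compare how often it catches them. The model path and examples below are placeholders, not the paper's data.

```python
# Compare a hate-speech classifier's hit rate across target groups.
# "path/to/hate-speech-classifier" and the examples are placeholders;
# swap in a real fine-tuned classifier and labeled data to run this.
from collections import defaultdict
from transformers import pipeline

clf = pipeline("text-classification", model="path/to/hate-speech-classifier")

# Each item: (text, target group, gold label). All examples are made up.
examples = [
    ("<hateful post targeting group A>", "group_A", "hate"),
    ("<benign post about group A>",      "group_A", "not_hate"),
    ("<hateful post targeting group B>", "group_B", "hate"),
    ("<benign post about group B>",      "group_B", "not_hate"),
]

correct, total = defaultdict(int), defaultdict(int)
for text, group, gold in examples:
    pred = clf(text)[0]["label"]
    total[group] += 1
    correct[group] += int(pred == gold)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")
```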

Removing bias from data sets is insufficient

Because tech companies do not share details of the data or methods used to train their AI models, it is ultimately impossible for outside observers to know exactly why different models have different political biases, Park says.

Researchers have tried to reduce biases in language models by removing biased content from data sets or filtering it out. “The big question the paper poses is: Is bias-free data cleaning sufficient? The answer is no,” says Soroush Vosoughi, an assistant professor of computer science at Dartmouth College who was not involved in the study.

It is very difficult to completely scrub a large database of biases, Vosoughi says, and AI models are also quite likely to surface even minor biases that remain in the data.

According to Ruibo Liu, a research scientist at DeepMind who has studied political biases in AI language models but was not involved in the study, one limitation of the work is that the researchers could carry out the second and third stages only with relatively old and small models, such as GPT-2 and RoBERTa.

Liu says he would like to see whether the paper's findings still hold for the most recent AI models. But academic researchers do not have access to the inner workings of cutting-edge systems such as ChatGPT and GPT-4, and they are unlikely to get it, which makes this kind of analysis harder.

Another limitation, Vosoughi says, is that a model's responses may not accurately reflect its internal state, because AI models have a tendency to simply make things up.

The researchers also acknowledge that the political compass test, while widely used, is not a perfect way to measure all the nuances of politics.

As companies integrate these models into their products and services, Park says, they should become more aware of how these biases affect their models' behavior in order to make them fairer: without awareness, there can be no fairness.

Source link

https://www.blockgeni.com/ai-language-models-are-filled-with-political-biases/?feed_id=3672&_unique_id=64d279587290b
