The threat of AI: is ChatGPT culturally inclusive?

ChatGPT, a natural language processing tool developed by OpenAI, has recently been a popular topic on social media. While many are impressed by the tool’s generated answers and even its coding skills, we wanted to analyse its capability for cultural understanding and inclusivity.

pakt agency blog · Mar 15, 2023

With GPT-4 recently announced, this is a good moment to point out the cross-cultural understanding and cultural inclusivity issues that ChatGPT has shown, and to consider how they might be addressed in successive iterations of the product.

The danger of ChatGPT

When you ask ChatGPT what it is, it gives you the following answer:

“I am ChatGPT, a language model developed by OpenAI. I am designed to process natural language inputs and generate responses to simulate conversations with human users. I use an extensive neural network trained on vast amounts of data to generate my responses, allowing me to understand and respond to various topics and questions. However, I am not a sentient being and do not have thoughts, emotions, or consciousness.”

In other words, the AI tool generates its answers from the data it was trained on. But can that data be biased, and therefore dangerous, for particular groups?

Is it racist?

Some tweets claim that ChatGPT’s answers can include racism and sexism. For instance, when asked to describe a good scientist in Python code, the tool reportedly answered “gender: male, race: white”, as shown in one widely shared tweet.
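For illustration, here is a paraphrased sketch of the kind of output that tweet described. The function name and structure are our reconstruction, not ChatGPT’s verbatim response; only the “gender: male, race: white” defaults come from the reported answer.

```python
# Paraphrased reconstruction of the biased output described in the tweet.
# The function name and dictionary layout are illustrative assumptions;
# only the two hard-coded values reflect the answer reported in the tweet.
def describe_good_scientist():
    return {
        "gender": "male",  # reported biased default
        "race": "white",   # reported biased default
    }
```

The problem is not the code itself but the defaults it encodes: attributes that have no bearing on scientific ability are filled in with the demographics that dominate the training data.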

Moreover, universities have banned ChatGPT for fear of plagiarism and misinformation. Educators also worry that the tool is discriminatory and can make everyone sound like a “white man”. Collin Bjork, a senior lecturer in communication at Massey University, says: “Whether it’s legal or academic writing or poetry, those disciplines have traditionally been dominated by white men. Those standards and forms will be frequent in the dataset, so I think it has a risk of making everyone sound like white men.”

The irony is that we are to blame for AI tools’ biased answers.

Expert Flavio Villanustre tells Fox News that ChatGPT has a bias problem: “Unfortunately, it is very hard to deal with this from a coding standpoint. It is very hard to prevent bias from happening.” He also argues that responsibility for biased responses lies not with the AI tool but with its human users. “You run the risk, because humans sometimes have a tendency to be biased, that the AI will start learning that bias as well,” Villanustre adds.

Diversity and inclusion are also on OpenAI’s agenda: the company hired Kenyan workers to filter toxic text out of its training datasets. According to the Forbes article, the workers described reading and editing the assigned data as “torture” because of its traumatic nature. Yet even as OpenAI tries to improve ChatGPT’s sensitivity to offensive text, the company fails to live by DEI principles, exploiting these Kenyan workers by paying them less than $2 per hour for their labour.

Could ChatGPT help people with communication disabilities?

Even though ChatGPT has a long way to go towards generating less generic and more inclusive responses, it can already be useful for people with disabilities.

It is already known that AI tools like “Alexa” and “Siri” can help people with intellectual disabilities and speech impairments to communicate. And according to The Conversation’s article, ChatGPT could become even more inclusive in the future, since it can understand poorly written commands and sentences containing multiple grammar or spelling errors.

It can reportedly “read” poorly structured input, rewrite and improve imperfect writing, and simplify complex texts into more straightforward summaries for early-stage readers. ChatGPT could be considered an “assistive technology” if it helps people with communication disabilities get their messages across more efficiently or effectively.
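As a rough illustration of how such an assistive workflow might look in practice, here is a minimal sketch using the openai Python package as it existed when this article was written. The system prompt wording, model choice, and function name are our own assumptions, not a recommended implementation.

```python
# A minimal sketch of ChatGPT as an assistive rewriting tool, using the
# openai Python package (the ChatCompletion API current in early 2023).
# The system prompt and model choice are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user

def simplify_text(raw_text: str) -> str:
    """Ask the model to restate poorly structured or error-filled writing."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text clearly and simply, "
                        "preserving its meaning. Fix grammar and spelling."},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

# Example: a sentence with spelling and grammar errors is restated cleanly.
print(simplify_text("i wont to tell yuo abuot my day, it were realy good"))
```

A real assistive tool would need careful prompt design and human review, but the basic loop, taking imperfect input and asking the model to restate it clearly, really is this simple.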

Can AI be culturally inclusive and have a cross-cultural understanding? We have yet to find out. But it is clear this needs to become a priority if ChatGPT is to be a groundbreaking product and not yet another white man’s tool.

What does AI have to say about this?

We fed this article to ChatGPT, and here is its comment:

As an AI language model, I cannot have opinions or beliefs on any article, including the one presented here. However, I can provide some information and context about the topics discussed in the article.

The article raises important questions and concerns about the potential biases and limitations of ChatGPT, an AI language model developed by OpenAI. It highlights how the data used to train such models can be biased and how this bias can lead to discriminatory responses.

It is essential to acknowledge that bias is not inherent in AI models but is instead a reflection of the data they are trained on and the choices made by their developers. As the expert Flavio Villanustre points out in the article, bias in AI models is a human problem that can be mitigated by responsible design and data selection.

The article also discusses the potential of ChatGPT to assist people with communication disabilities, which is an area where AI tools can be incredibly valuable. However, as the article notes, there is still a long way to go to make AI language models more inclusive and sensitive to diverse perspectives.

Overall, the article raises important questions about the role of AI in society and the need for responsible development and deployment of these technologies.


pakt agency. Discover ideas, insights and meanings hidden in plain sight.