AI and Misinformation: Why Sometimes Having an Answer is Worse than “I Don’t Know”

Cheng (Miles) Li
Published in Kigumi Group
8 min read · May 1, 2023

In a recent survey by Forbes Advisor, a majority of respondents said they were “very likely” (37%) or “somewhat likely” (30%) to use ChatGPT as their go-to search engine over Google. Because of ChatGPT’s conversational interface, our questions seem to be answered instantly and with ease. No more scrolling through page after page of vaguely relevant results: by simply posing our question to the AI in a sentence or two, we can get an answer without hours of searching.

Photo by Mikael Blomkvist on Pexels

But just because ChatGPT can give us answers at lightspeed, does that mean the answers are trustworthy? Often, yes. However, accuracy is never guaranteed. When it comes to material the AI was not trained on, it struggles to provide an accurate answer; instead, it will sometimes simply make one up.

Take this quote from Bertolt Brecht, for instance.

“I say to you:

Take care that when you leave the world

You have not merely been good, but are leaving

A better world.”

— Bertolt Brecht

This quote is from the play Saint Joan of the Stockyards, which has not been included in the material ChatGPT was trained on. When ChatGPT is asked to identify the origin of the quote, this is the response it gives:

The response generated by ChatGPT

Q: where is “I say to you: Take care that when you leave the world/ You have not merely been good, but are leaving/ A better world.” from?

A: This quote is attributed to the German philosopher Friedrich Nietzsche. However, it should be noted that this exact phrasing does not appear in any of Nietzsche’s works and it’s unclear where it originated from. Nevertheless, it captures the essence of Nietzsche’s philosophy and his critique of morality. He often wrote about the importance of living a life that is true to one’s own values, rather than simply conforming to conventional moral norms.

Even when the author of the quote is specified, the AI still fails to provide the correct answer:

The response generated by ChatGPT

Q: where does Brecht’s quote: “I say to you: Take care that when you leave the world/ You have not merely been good, but are leaving/ A better world.” come from?

A: The quote “I say to you: Take care that when you leave the world/ You have not merely been good, but are leaving/ A better world.” is attributed to Bertolt Brecht, a German playwright and poet known for his contributions to the theatre of the 20th century. The quote is often cited as part of Brecht’s poem “To Posterity” (An die Nachgeborenen), which was written in 1938 during his exile from Nazi Germany. The poem encourages future generations to learn from the mistakes of the past and work towards a better future.

Photo by Aline Viana Prado on Pexels

The AI then proceeds to provide a version of “To Posterity” that includes the quote. In reality, if one looks up the original, one finds a completely different poem by Brecht under the same title; the version ChatGPT provides is fabricated on the spot, written in imitation of the poet’s style in response to the prodding question.

Like Google and other search engines, ChatGPT has its limitations. However, as the case above shows, it tends to prioritise giving its users an answer over saying “I don’t know”.

How do we know which of the information it provides is true? How do we know the answer it gives is not completely made up? Because of its interactive nature and convenience, ChatGPT can indeed be a good alternative to traditional search engines. But that very impression of a “search engine alternative” can blind us to the fact that it is AI at its core, and, like much other artificial intelligence, it can hand us convincing misinformation rather than the answer we are looking for.

Photo by Raduz on Pexels

AI, Misinformation, and Deepfakes

Artificial intelligence’s ability to aid the spread of misinformation is not a new topic of discussion. The New York Times, The Guardian, and NPR have all published articles in recent years on the problems of artificial intelligence and misinformation, especially on the topic of deepfakes.

A deepfake is media forged with deep learning: videos, audio, and images in which people’s faces and voices are placed in whatever context suits the creator. The technology can be used to produce videos in which politicians make claims they never made in real life, to swap the faces of performers in pornography with those of celebrities or other individuals, or to make company executives appear to support initiatives or beliefs they do not actually hold.

Photo by Ketut Subiyanto on Pexels

Anyone with the right device and software can make a deepfake, yet not everyone can spot one immediately. Fabricated videos, if undetected, can be submitted against someone in court as fake evidence. Faces, voices, and other biometric information can be misused to bypass security systems and access individuals’ private data. And the prevalence of deepfakes and misinformation can lead the public to lose trust in authoritative figures and to doubt any information they encounter. As the algorithms and deep learning behind deepfakes become more sophisticated and more accessible, detecting them becomes both more difficult and more urgent.

What are the solutions?

Photo by cottonbro studio on Pexels

Prevention through Education

Deepfakes, like much other fabricated AI content, are not impossible to spot once one knows where to look. As stated by the German Federal Office for Information Security (BSI), there are several things to look out for when trying to detect a deepfake:

“It is common for [deepfake] videos to have visible transitions between movements and stiff facial expressions of individuals in the footage. It is also very common for faces to have ambiguous lighting and contours on them. Edited audio often contains voice delay, unnatural noise and tone, false dictions and pronunciations, and can often carry a metallic sound to it.”

Photo by Matilda Wormwood on Pexels

Prevention through Law and Policy Regulations

Deepfakes and misinformation generated and spread by artificial intelligence are still emerging problems whose impact and threats are being evaluated as time progresses. That is not to say, however, that no action is being taken.

In 2021, the European Commission put forward its “Regulatory framework proposal on artificial intelligence,” which takes a risk-based approach: AI systems are evaluated and sorted into one of four categories — unacceptable risk, high risk, limited risk, and minimal risk — with obligations matched to each level. Responses range from banning a system outright to requiring detailed documentation, clearly informing users, and maintaining a high level of security and activity logging. The proposal is projected to be in full effect in 2024.
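The four-tier structure described above can be pictured as a simple lookup from risk category to obligations. The sketch below is purely illustrative: the tier names come from the proposal, but the obligation strings are my own rough paraphrase, not official wording.

```python
# Toy sketch of the EU proposal's four-tier, risk-based approach.
# Tier names follow the proposal; the obligation strings are a
# simplified paraphrase for illustration, not legal text.

RISK_TIERS = {
    "unacceptable": "banned outright",
    "high": "detailed documentation, activity logging, high level of security",
    "limited": "transparency: users must be clearly informed",
    "minimal": "no additional obligations",
}

def obligations_for(tier: str) -> str:
    """Look up the paraphrased obligations for a given risk tier."""
    key = tier.strip().lower()
    if key not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[key]
```

The point of the design is that a regulator only has to classify a system once; everything it must then do follows from the tier.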

Like the European Union, the United States is also looking for ways to combat AI-generated misinformation and to encourage researchers to develop countermeasures. As the Federal Trade Commission (FTC) has noted, companies’ use of misleading AI-generated content can result in FTC action. That is not to say the Commission’s current measures are sufficient; it is high time to call for better and more suitable action, especially as the 2024 US election period approaches.

Photo by Pixabay on Pexels

Prevention through AI Detection Tools

Finally, one of the most promising approaches is using artificial intelligence to detect misinformation generated by other AI. Researchers around the world are developing AI specifically for detecting such misinformation. Though the technology still suffers from problems of generalization and from AI-specific attacks, it is a first step, with immense potential, in the battle against AI misinformation.
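To make the idea concrete, here is a deliberately naive sketch of one statistical signal such detectors can draw on: generated text is sometimes lexically “flatter” (less word variety) than human text. Real detection tools use trained neural classifiers over many signals; this single heuristic and its threshold are hypothetical, for illustration only.

```python
# Hypothetical sketch of one detection signal. Production systems
# combine many learned features; a lone heuristic like this one
# generalizes poorly, as the paragraph above notes.

def type_token_ratio(text: str) -> float:
    """Fraction of words in the text that are distinct."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary diversity falls below the threshold.

    The threshold is arbitrary here, chosen only to illustrate the idea.
    """
    return type_token_ratio(text) < threshold
```

The weakness of single-feature heuristics like this mirrors the generalization problem mentioned above: a signal tuned on one kind of text fails on another, which is exactly why detection remains an open research problem.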

Photo by cottonbro studio on Pexels

Artificial Intelligence As a Tool

This is not to say that all uses of artificial intelligence are bad for us. The same tools used to forge deepfake videos and images can restore old films and photographs, bringing the past back to life; chatbots, beyond serving as search engines, can help us write letters, emails, and resumes. In the end, artificial intelligence is a tool in our hands, and its purpose depends on us. It is we who give purpose and meaning to the works AI creates, and that purpose, whether good or bad, is ours to decide.

References:

Chat.openai.com (no date). Available at: https://chat.openai.com/ (Accessed: April 21, 2023).

Deep fakes — threats and countermeasures (2022) Federal Office for Information Security. Available at: https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Kuenstliche-Intelligenz/Deepfakes/deepfakes_node.html#doc1011620bodyText6 (Accessed: April 21, 2023).

Haan, K. (2023) Over 75% of consumers are concerned about misinformation from artificial intelligence, Forbes. Available at: https://www.forbes.com/advisor/business/artificial-intelligence-consumer-sentiment/ (Accessed: April 21, 2023).

Parks, M. and Bond, S. (2023) AI deepfakes could advance misinformation in the run up to the 2024 election, NPR. Available at: https://www.npr.org/2023/03/26/1166114364/ai-deepfakes-could-advance-misinformation-in-the-run-up-to-the-2024-election (Accessed: April 21, 2023).

Regulatory framework proposal on Artificial Intelligence (no date) Shaping Europe’s digital future. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (Accessed: April 21, 2023).

Saint Joan of the Stockyards: Bertolt Brecht (1998) Internet Archive. Arcade. Available at: https://archive.org/details/saintjoanofstock00brec_0 (Accessed: April 21, 2023).

Thompson, S.A. (2023) Making deepfakes gets cheaper and easier thanks to A.I., The New York Times. Available at: https://www.nytimes.com/2023/03/12/technology/deepfakes-cheapfakes-videos-ai.html (Accessed: April 21, 2023).

Vedova, H. et al. (2023) Chatbots, deepfakes, and voice clones: AI deception for sale, Federal Trade Commission. Available at: https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale (Accessed: April 21, 2023).

What are deepfakes — and how can you spot them? (2020) The Guardian. Guardian News and Media. Available at: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them (Accessed: April 21, 2023).
