Do Not Humanise Artificial Intelligence (in research, at least)

5th #AI4BetterScience series blog

Quentin Loisel
4 min read · Jul 22, 2024

Have you ever added “please” at the end of your query to ChatGPT? Why would you be polite to an artificial intelligence (AI) model? This question highlights a fascinating aspect of how we interact with these technologies, and it has more implications than we might think.

Algorithm or Mind?

OpenAI and other companies strive to make AI interactions feel like human conversations. The chat interface revolution epitomised by ChatGPT allows users to access the powerful GPT model through a conversational format. The latest iteration, GPT-4o, illustrates this strategy by offering multimodal interaction and human-like reactions from the model, pushing anthropomorphism further than ever before.

A presentation video from OpenAI showcasing GPT-4o’s human-like interaction.

By designing interfaces and models that understand natural language, interpret emotions, and respond in a conversational way, these companies aim to make AI technology more user-friendly and accessible. This ease of use is a cornerstone of the adoption and spread of generative AI.

However, this design choice also leads us to attribute human-like qualities to these technologies. Despite their sophisticated mimicry of intelligent conversation, large language models (LLMs) do not possess consciousness or human-like thinking. This distinction is crucial: blurring it invites confusion and misinterpretation of what these systems can actually do.

The Growing Confusion

The confusion did not start with generative AI. The term artificial “intelligence” has long been used to describe these technologies, and long been criticised by cognitive and computer scientists, since the definition of intelligence itself is still debated. While the term “machine learning” is widely accepted, since these models do learn from data, “intelligence” was chosen at the Dartmouth Conference in 1956 to name this new field of science. At the time, it defined an aspirational goal more than it described the state of the technology. While a powerful marketing tool, the term creates confusion.

In this story, the rise of large language models marks a significant milestone. Their capabilities prompt intriguing questions about machine representation, intentionality, and even consciousness. It’s a fascinating time to study this field! Although we won’t delve into this topic here, its implications are profound. For instance, debates about whether an AI can possess consciousness bear directly on moral considerations, such as the attribution of rights. These questions stir emotions, leading people to jump to conclusions about the nature of the technology. As the public’s experience with these anthropomorphised technologies becomes widespread, the confusion deepens and problematic misconceptions take hold.

Misconceptions and Risks

You might assume that scientists, with their rigorous training, are immune to this confusion. However, the surprising capabilities of these technologies have raised many questions within the scientific community. A notable example is the debate over crediting ChatGPT as an author of scientific publications. This controversy highlights the confusion: while AI can generate text and suggest analyses, it lacks genuine agency, intentionality, and critical judgment, all essential traits for authorship.

Naive experience with these technologies can mislead even scientists into believing these systems possess attributes and qualities they do not have. Such misconceptions can lead to misplaced trust in AI, undermining the rigour and critical analysis essential to scientific research. Here are some specific risks:

  1. Overestimating Capabilities: Scientists might trust AI-generated results without rigorous verification.
  2. Bias in Data Interpretation: Misinterpreting AI responses as human judgments can introduce biases.
  3. Reduction of Critical Thinking: Humanising AI might encourage accepting its suggestions without thorough analysis.
  4. Communication and Collaboration: Treating AI as a colleague can overshadow the need for genuine human collaboration.
  5. Excessive Dependence: Over-humanisation can lead to relying too much on AI, weakening human expertise.

Understanding AI: A Necessity

As AI becomes more integrated into research, scientists must understand how these technologies work. Misunderstanding AI’s capabilities and limitations can lead to suboptimal use and to risks such as reduced accountability or cyber vulnerabilities. Viewing AI as a sophisticated computational tool rather than a human-like entity promotes responsible and effective use. We need clear guidelines for using AI efficiently and responsibly. Key practices for scientists include:

  1. Clear Understanding: Promote a clear grasp of AI’s capabilities and limitations.
  2. Rigorous Verification: Encourage thorough practices for validating AI-generated results (see the sketch after this list).
  3. Maintain Distinctions: Keep a clear line between technological tools and human interactions.
  4. Training and Education: Train scientists to use AI as a supportive tool, not an autonomous entity.
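
To make the second practice concrete, here is a minimal Python sketch of rigorous verification, under stated assumptions: `ask_llm` is a hypothetical helper standing in for whatever model API you use (stubbed with a canned answer so the example runs offline), and `extract_claimed_value` is an illustrative parser, not part of any real library. The point is the pattern: an AI-suggested statistic is recomputed independently before it enters the analysis.

```python
import re

import numpy as np


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI model.

    In practice this would wrap whatever API you use; here it returns a
    canned answer (deliberately slightly off) so the sketch runs offline.
    """
    return "The Pearson correlation between x and y is approximately 0.97."


def extract_claimed_value(answer: str) -> float:
    """Naively pull the last number out of the model's free-text answer."""
    return float(re.findall(r"-?\d+\.?\d*", answer)[-1])


# The data the question is actually about.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.3])

claimed = extract_claimed_value(ask_llm("What is the Pearson correlation between x and y?"))
recomputed = float(np.corrcoef(x, y)[0, 1])

# Treat the model's output as a claim, not a result: accept it only if an
# independent computation agrees within a chosen tolerance.
if abs(claimed - recomputed) < 0.01:
    print(f"Claim {claimed} confirmed (recomputed: {recomputed:.4f}).")
else:
    print(f"Claim {claimed} rejected (recomputed: {recomputed:.4f}); do not report the AI's figure.")
```

The tolerance and the parsing are deliberately crude; the design choice that matters is that the model’s output is treated as a suggestion to be checked, never as a result to be cited.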

An abstract, minimalist graphic depicts a man, a woman, and an AI-human robot, all dressed as scientists, engaged in a debate — generated with DALL·E 2024–07.

Conclusion

Generative AI technologies offer incredible advancements but come with the risk of being misunderstood. Exciting questions about the cognitive nature of these technologies are still being explored, but the field is moving fast, and scientists are not waiting for answers before bringing these tools into research. By recognising AI as a tool with specific capabilities, we can optimise its use and avoid the pitfalls of anthropomorphism. Understanding what AI can and cannot do ensures it enhances rather than hinders scientific progress. Remember: AI is a tool, not a colleague. Use it wisely and responsibly.

Thank you for reading!

Keep going with the next issue: The Accessibility Illusion of Generative AI in Research
Or read the previous issue: A Human-AI Knowledge Interaction Model for Trustworthy Generative AI

