ChatGPT: Why does it apologise?

Shay Moradi
Published in Thoughts on Design
2 min read · Jul 26, 2023
An LLM masquerades as an intelligent chatbot.

“I’m sorry, Dave.” – Do you ever think about how OpenAI made the decision to make ChatGPT apologise?

Who designed that interaction, and why? Who made the decision to give it conversational characteristics? It’s a fascinating pattern of human mimicry in technology: a system that can flirt with our expectations and shape them. It’s important to think about these things if you’re designing for the future of human x AI collaboration. If we’re thinking of AI as an intellectual partner / tool / catalyst… or whatever (even as a car-based personal assistant), we certainly shouldn’t leave these choices to whimsy, or leave them unchallenged, as they shape future patterns of interaction.

Let’s start with why it apologises so much. Apologies are frequently used as a sign of politeness and respect, but overusing them can come across as insincere or manipulative.

As an interaction designer you might think: well, these responses are there to promote and reinforce respectful and useful interaction. But this is a predictive language model (an impressive one at that), and its ability to mirror human conversation, including apologising, shouldn’t obscure the fact that it operates in a strictly mechanistic way, devoid of consciousness or emotion in the classic sense.

Why should a machine that is doing our bidding apologise to us at all? The better question to ask here is: what is it masking, and how is it manipulating our expectations by design?

With a reach of over 100 million users (and 1.6 billion visits last month), there’s a good chance many of those users don’t really know how it functions. I’d worry that this seemingly relatable characteristic, regardless of the disclaimers, gives it interaction superpowers that are undeserved, and no one likes undeserved authority once they finally figure it out.

So what’s the interim solution? The system already does a good job of reminding us of its limitations; it should simply stop apologising. Even saying, “Let’s try this again…” is better, as it indicates the system is attempting to provide a new or clarified answer to a previous query or task. It’s an example of the language model’s capacity to facilitate smoother, more natural conversation, without implying any form of consciousness or self-awareness on the AI’s part. Maybe it even breaks the fourth wall in what is effectively a performance: an LLM pretending it’s an intelligent chatbot. What is the non-diegetic way it could refer to its programming? If you want to get playful, at the cost of diminishing simplicity, you could alter the way it displays the content.
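For anyone building on top of the model rather than waiting for OpenAI to change the default, this tone is already steerable. Below is a minimal sketch, assuming the openai Python client (v1+ interface); the system-prompt wording and model name are my own illustrative choices, not anything OpenAI prescribes.

```python
# A minimal sketch of steering the apology habit via a system prompt.
# Assumes the openai Python package (>=1.0) and an API key in the
# OPENAI_API_KEY environment variable. Prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a text-prediction tool, not a person. "
    "Never apologise or express remorse. When correcting a previous "
    "answer, say 'Let's try this again...' and then give the revised answer."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "That answer was wrong. Fix it."},
    ],
)

print(response.choices[0].message.content)
```

Whether a blunt instruction like this fully suppresses the habit depends on the model; the apologetic register is baked in through fine-tuning, so prompt-level steering is a patch rather than a fix.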
