Clarity is a Question

Miah Wilde
3 min read · Jul 16, 2024


It’s common to see people dismiss generative AI systems, LLMs, or ChatGPT because they ask a simple question and receive an incorrect answer.

“Clear queries, clear sight” — illustrated with illious.ai.

Here’s a recent example from X where the author is dismissing ChatGPT on account of an incorrect answer:

ChatGPT is such a doofus.

Take a moment to consider: Is a mistake like this grounds for dismissing ChatGPT (or LLMs) altogether?

Have you made up your mind? Good. Let’s dive deeper.

What was the actual mistake here?

The obvious error seems to be that ChatGPT claimed 9.11 is larger than 9.9 when it should have said the opposite, right?

Wrong.

Where’s the real mistake?

In fact, there are two mistakes: one by ChatGPT and one by the human posing the question.

ChatGPT’s ideal response should have been: “I need more information to accurately answer your question. Please clarify if 9.11 and 9.9 are decimal numbers, version numbers, dates, or something else.”

The human’s mistake was asking an ambiguous question. Here’s a more precise version:

Who’s the doofus now?

ChatGPT answers both of these correctly, even though 9.9 is “bigger” in one case and 9.11 in the other.
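To see why the original question is ambiguous, here is a small Python sketch (mine, not from the post) showing that the comparison flips depending on whether you read 9.11 and 9.9 as decimals or as version numbers:

```python
from decimal import Decimal

# Read as decimal numbers: 9.9 (i.e. 9.90) is larger than 9.11.
print(Decimal("9.9") > Decimal("9.11"))  # True

# Read as version numbers: major.minor, so 9.11 comes after 9.9.
def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

print(parse_version("9.11") > parse_version("9.9"))  # True, since (9, 11) > (9, 9)
```

Both answers are "correct"; they simply answer different questions.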

LLMs can infer an incredible amount of latent (hidden) context, but they’re not mind readers. They answer the questions we write, not the ones we think.

LLMs are often trained to give a single, confident answer no matter how ambiguous the question is or what they actually know; they rarely admit uncertainty. That's a significant limitation to keep in mind when using these tools.

However, it’s not an insurmountable problem. It can be mitigated and worked around.
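One workaround, for example, is to tell the model up front to ask for clarification whenever a question is ambiguous. Here's a minimal sketch assuming the OpenAI Python SDK; the system prompt and model name are illustrative choices of mine, not a prescription:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative instruction: push the model to surface ambiguity instead of guessing.
CLARIFY_FIRST = (
    "If a question is ambiguous (for example, a number that could be a decimal, "
    "a version number, or a date), ask a clarifying question before answering."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": CLARIFY_FIRST},
        {"role": "user", "content": "Which is bigger, 9.11 or 9.9?"},
    ],
)
print(response.choices[0].message.content)
```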

There are two main reasons you’ll get an incorrect answer from an LLM:

  1. The question is flawed, and the LLM correctly answers the wrong question.
  2. The question is correct, but the LLM answers it incorrectly.

In my experience, about 80% of the time it’s the question that’s wrong. Only about 20% of the time is the LLM genuinely incapable of answering a well-formulated question.

Should LLMs be able to identify flawed questions and help the asker correct their question? Absolutely.

Should a user dismiss LLMs because they currently require (at a minimum) thoughtfully crafted questions? Absolutely not.

Clarity is found in the question, not the answer

It’s tempting to think that confusion arises from not knowing an answer and that clarity comes from finding it. I believe this perspective is flawed.

Ignorance is the correct term for the state of not knowing an answer. Knowledge is the correct term for the state of possessing the answer.

Clarity, however, is the state of knowing the right question.
And confusion is not knowing the right question.

As technology enhances our ability to quickly and accurately answer more questions, the value of knowledge will diminish while the importance of clarity will grow.

The real question isn’t “What’s the answer?”
The real question is:

What’s the right question?

And that’s the tickle for this week,
As always, be well and die later,
Miah

I’m Miah. #biohacker, #natureboy, #coder and #reluctantphilosopher. If you’re new, subscribe at miahwilde.com for more content like this. I write about tech-leveraged and data-driven health and prosperity.
