How to Interact with AI Tools Without Getting Dumber, Part I

Howard Chen
Published in Design Mojito · 7 min read · Aug 19, 2024

This article explores the psychological biases that the characteristics of generative AI can induce. In upcoming articles, we will also share insights on how creative professionals can use generative AI more intelligently.

Imagine a scenario where a boss assigns a task and, instead of seeking clarification, an employee blindly follows through, potentially leading to significant errors.

Even more concerning, some bosses may actually prefer an employee who asks fewer questions and delivers results directly, without the need for further discussion.

Over the past two years, the rapid growth of generative AI has continuously prompted us to consider how these tools can assist our work, whether they might eventually replace jobs, and how to find the right workflows and applications. If tools like ChatGPT, built on large language models, have already taken over many of your daily tasks, have you ever asked yourself whether you truly understand how their characteristics differ from the way we work?

Here’s a simplified version: It’s really good at guessing words.

Through extensive training on the Transformer architecture, it can model human language precisely enough to narrow down the context and surface the most statistically plausible response. In short, it's like a student who excels at exams but doesn't care about the meaning behind the answers: its KPI is simply to produce answers that look good.
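
To make this concrete, here is a minimal sketch of "guessing words," assuming the Hugging Face transformers library and the small public gpt2 checkpoint (used here only because it is tiny, not because ChatGPT runs on it). It prints the model's top-ranked candidates for the very next token, which is the entire mechanism behind those fluent answers.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pretrained language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The customer journey of ordering food starts with"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Turn the final position's logits into next-token probabilities
# and show the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")

Everything downstream, from chat answers to code, is this ranking applied token after token.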

However, this characteristic sets it apart from other machine-learning AI: it produces a unique kind of output…

Confident But Incorrect Responses: Hallucinations

Large Language Models (LLMs), while very powerful, can sometimes produce what are called hallucinations: answers that sound confident but are actually wrong. These can easily mislead people into making decisions based on incorrect information, so it's important to stay alert and question these models even when their answers seem perfect.

In the past, AI often made mistakes on simple math problems or common knowledge, errors we now rarely see. Improvements in the underlying models have made them much more reliable, and alongside these advancements, fine-tuning platforms like Lamini have also helped.

However, some people still label answers that fall outside the expected context, or that concern topics without a clear-cut answer, as "hallucinations." Once you understand how LLMs work, though, it becomes clear that the AI isn't hallucinating that much; it is simply generating answers based on probabilistic rankings.

Instead of thinking of these as hallucinations, they might simply be unwanted answers.

To become better AI users, we can address these issues by considering two possibilities:

  1. The data used to train the model in that specific context might be insufficient.
  2. The prompt given may not be accurately aligned with the desired answer.

By applying fundamental prompt engineering techniques, such as distinguishing between zero-shot and few-shot prompts, and utilizing methods like chain-of-thought or tree-of-thoughts, we can optimize AI as a tool to enhance our work, making it clearer and more effective.
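
As a quick illustration, here is a hedged sketch of those prompt styles applied to one task. The complete() helper is a hypothetical stand-in for whatever LLM client you actually use, and the prompts themselves are placeholders rather than tuned examples.

def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; swap in your client.
    return f"<model response to: {prompt[:40]}...>"

# Zero-shot: a bare instruction with no examples to steer the model.
zero_shot = (
    "Map the customer journey of ordering food delivery as a table "
    "with pain points and initial solutions."
)

# Few-shot: worked examples anchor the output format and level of depth.
few_shot = (
    "Stage: Browsing | Pain point: choice overload | Solution: curated lists\n"
    "Stage: Checkout | Pain point: hidden fees | Solution: upfront pricing\n"
    "Continue this table for the remaining stages of the journey."
)

# Chain-of-thought: ask for intermediate reasoning before the final answer.
chain_of_thought = (
    "Walk through each stage of a food-delivery order step by step, "
    "reason about what could go wrong at each stage, and only then "
    "produce the final table."
)

for prompt in (zero_shot, few_shot, chain_of_thought):
    print(complete(prompt))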

Besides interacting with AI using more rigorous logic, let's discuss the psychological biases that people may develop when using LLMs.

The Illusion of Perfection: What to Watch Out for in the Age of GenAI

As we increasingly rely on GenAI, there are three key things we need to be mindful of:

ELIZA Effect

In computer science, the ELIZA effect refers to our tendency to attribute human traits, like experience, understanding, or empathy, to simple computer programs that use text interfaces. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum, designed to mimic a psychotherapist. Despite its basic text-processing abilities and clear limitations, many early users believed that ELIZA possessed real intelligence and understanding. Through social engineering alone, it essentially passed an informal Turing test, showing how easily we can be convinced by AI when its goal is simply to produce answers that satisfy us.
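
To see how little machinery produced that impression, here is a minimal ELIZA-style sketch: a few regular-expression rules that reflect the user's words back as questions. The rules are illustrative placeholders, not Weizenbaum's original DOCTOR script.

import re

# Each rule pairs a pattern with a template that reflects the match back.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default reply when nothing matches

print(eliza("I am worried about my thesis."))

The real program added pronoun swapping and a ranked keyword list, but the mechanism was no deeper than this, and users still confided in it.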

This phenomenon is even more relevant today as many people believe that AI provides flawless answers.

Anchoring Bias

Even if I'm not familiar with a particular field, I can still receive an impressive answer by providing minimal prompts. This approach is efficient, but it also comes with the risk of being constrained by the framework the AI hands back, leading to anchoring bias. Take this deliberately minimal prompt, typos and all:

show me the customer journey of ordering a food from uber eat. 
display in a table, the table should contained pain points and initial solutions

“The zero-shot prompt directly instructs the model to perform a task without any additional examples to steer it.” — Prompt Engineering Guide

The performance of zero-shot prompting perfectly illustrates just how advanced current language models have become. This is one reason why so many people have readily adopted these tools and can use them efficiently. Starting a project with such ease was once unimaginable, but now it's a reality.

This approach aligns perfectly with Postel's Law (also known as the Robustness Principle), which Jon Postel formulated for protocol design and which maps neatly onto this kind of computer interaction:

“Be liberal in what you accept, and conservative in what you send.”

Users don’t need to provide precise instructions; almost no error messages will block AI generation. They don’t need to worry about typos, unclear wording, or even misunderstandings. The user doesn’t have to be entirely sure of the exact answer they need — the AI will guess the most likely meaning and produce a convincing response.

After interacting with the AI several times, you may notice that because it restricts its context to your question and builds its response based on surface-level language relationships, it provides only the most basic level of discourse. It might look well-formed, but it lacks depth.

However, this approach also introduces the risk of the anchoring bias — a cognitive bias where initial information (in this case, the AI’s first response) disproportionately influences our subsequent thinking.

If you aim to offer professional services that solve problems, this level of response is far from adequate for work you're being paid for. While zero-shot prompting can help us quickly build a framework when we're unfamiliar with a field, we must be careful not to let our thinking become constrained by it. Otherwise, anyone using a similarly vague input will get the same predictable, shallow answers, lacking real insight. One way out is to feed the model your own framing first, as in the sketch below.
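
For contrast, here is a hedged rework of the earlier Uber Eats prompt with our own framing built in; the pain points and role are placeholders you would replace with real research:

Our interviews surfaced two pain points: batched deliveries arriving cold,
and fees appearing only at checkout. Acting as a service designer, map the
Uber Eats ordering journey in a table with stages, these pain points plus
any you infer, and initial solutions. Challenge my framing where the
evidence seems thin.

The answer is now anchored on your evidence rather than the model's generic framework, and the final instruction invites it to push back instead of merely confirming.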

Agency Problem

To minimize hallucinations, instead of merely producing answers that seem plausible, an AI can gather more information from the user, focusing on making users feel better understood during the interaction.

However, this approach comes with a trade-off: it means gradually granting AI more access and control, allowing it to book meetings, reserve restaurants, handle credit card transactions, or even drive our cars. As our trust in AI deepens, we may become more relaxed about overseeing these tasks. Yet the idea that we're entrusting such responsibilities to AI, essentially a system that "guesses," is somewhat unsettling.
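
A common mitigation is to keep a human veto between the model's guess and any real-world action. Here is a minimal sketch; book_restaurant, the TOOLS registry, and the proposal format are hypothetical stand-ins for a real tool-calling framework, not any particular product's API.

def book_restaurant(name: str, party_size: int, time: str) -> str:
    # In a real system this would call a booking API.
    return f"Booked {name} for {party_size} people at {time}."

TOOLS = {"book_restaurant": book_restaurant}

def execute_with_confirmation(proposal: dict) -> str:
    # The model may only *propose* an action; nothing runs without
    # explicit approval from the user.
    print(f"The assistant wants to call {proposal['tool']} "
          f"with {proposal['args']}.")
    if input("Approve? [y/N] ").strip().lower() != "y":
        return "Action cancelled by user."
    return TOOLS[proposal["tool"]](**proposal["args"])

# Imagine the LLM emitted this structured proposal during a chat turn.
proposal = {
    "tool": "book_restaurant",
    "args": {"name": "Mojito Bistro", "party_size": 4, "time": "19:30"},
}
print(execute_with_confirmation(proposal))

The confirmation step trades convenience for oversight; as trust grows, the temptation is to remove exactly this step.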

Similarly, while AI can offer answers that appear logical, the ethical and contextual implications must be carefully considered. There's a long-standing concern, dating back to the 1950s, about machines making important decisions, a fear that remains relevant today. I'm also eager to see how Apple will build the trust needed for users to hand over private data to AI.

In the movie "I, Robot," Detective Spooner is saved by a robot that chooses him over a little girl based purely on survival-probability calculations. This is upsetting, and it highlights a critical issue.

Creative Work with AI: A Balance of Competition and Collaboration Between Probability and Creativity

When it comes to creative work with AI, it's often about balancing competition and collaboration between probability and creativity. Generative AI, through extensive unsupervised learning, finds connections within data and leverages its remarkable ability to "guess" words, leaving the rest to the "god of probability" to deliver answers to humans: an incredibly efficient application.

What we can consider is how to balance our mental efforts in workflows, ensuring that the truly valuable aspects of creative work remain in human hands, while fully leveraging the unique capabilities of generative AI to achieve things we never imagined before.

This month marks the start of an exciting new journey: I'm thrilled to introduce Design Mojito, a publication I co-founded with the talented Maeve Shen and Irene Chuang. In the future, I'll be sharing more tips on how to interact with AI effectively in creative work. Stay tuned and follow Design Mojito for more insights! 🌟


Howard Chen · Design Mojito
Design Manager | Product Designer inspired by music, anime, video games and handcrafting
https://www.linkedin.com/in/howard-chen-93030768/