Intuition Machine

Artificial Intuition, Artificial Fluency, Artificial Empathy, Semiosis Architectonic

Conversations with AI: The Art of Iterative Prompting

--

Here’s a clear, intuitive explanation of a practical methodology for getting the most out of Large Language Models (LLMs), such as GPT or Claude, given their known limitations and pattern-matching behavior.

Think of working with an LLM as similar to working collaboratively with a very talented — but sometimes forgetful and easily distracted — assistant. To work effectively, you need to guide it clearly, help it stay on track, confirm its understanding regularly, and double-check its results.

Let’s break this down into straightforward steps, with analogies and examples.

📖 Step 1: Clearly Define Your Goal (“Set the Destination”)

Analogy: Imagine you’re giving directions to someone who’s never been to your city.

  • If you say, “Take me downtown,” that’s vague. But if you say, “Take me to the Starbucks near 3rd Street and Main, but avoid the highway,” it’s precise and helpful.

What to do with an LLM:

  • Clearly describe your exact task, including any critical details or restrictions.
  • Provide examples or specific constraints upfront.

Example:

Vague: “Help me fix this bug.”
Clear: “I have a React web app where the login page crashes when users enter special characters in their username. The code uses React Router v6 and Redux. Could you help me debug this?”
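
A small sketch can make this concrete. The TypeScript snippet below (all field names and values are illustrative, not from a real project) assembles a precise prompt from an explicit goal, context, and constraints instead of a one-line request:

```typescript
// A minimal sketch of turning a vague request into a precise prompt.
// The interface and the example values are illustrative.
interface TaskBrief {
  goal: string;          // the exact task you want done
  context: string;       // stack, versions, environment
  constraints: string[]; // hard restrictions the answer must respect
}

function buildPrompt(brief: TaskBrief): string {
  return [
    `Task: ${brief.goal}`,
    `Context: ${brief.context}`,
    "Constraints:",
    ...brief.constraints.map((c) => `- ${c}`),
  ].join("\n");
}

console.log(
  buildPrompt({
    goal: "Debug a login-page crash when usernames contain special characters.",
    context: "React web app using React Router v6 and Redux.",
    constraints: ["Do not change the backend API", "Keep the existing Redux store shape"],
  })
);
```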

🧭 Step 2: Use an Iterative, Conversational Approach (“Navigate Together”)

Analogy: Imagine working with an assistant to solve a puzzle.

  • Rather than asking, “What’s the solution?” and expecting perfection immediately, you’d ask, “What approaches might we try? Which one looks promising?”

What to do with an LLM:

  • Ask it first to brainstorm multiple ideas or solutions.
  • Discuss options before choosing the best path.
  • Once decided, ask the LLM to detail that solution clearly.

Example:

✅ You might say,

  • “Could you list three ways to handle usernames with special characters?”
  • “Which approach do you think would best handle both security and simplicity?”
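
If you drive the model through an API rather than a chat window, the same brainstorm-then-choose rhythm can be scripted. Here is a minimal sketch; askLLM is a hypothetical stand-in for whichever client you use, and it simply echoes the prompt so the sketch runs on its own:

```typescript
// Sketch of the brainstorm -> evaluate -> detail sequence.
// askLLM is a hypothetical stub for a real chat client.
async function askLLM(prompt: string): Promise<string> {
  return `<model response to: ${prompt}>`;
}

async function iterate(): Promise<void> {
  // 1. Brainstorm several options before committing to one.
  const options = await askLLM(
    "List three ways to handle usernames with special characters."
  );
  // 2. Ask the model to weigh the options against your criteria.
  const choice = await askLLM(
    `Given these options:\n${options}\nWhich best balances security and simplicity, and why?`
  );
  // 3. Only then ask for a detailed write-up of the chosen path.
  const detail = await askLLM(`Describe this approach in detail:\n${choice}`);
  console.log(detail);
}

iterate();
```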

📌 Step 3: Frequently Summarize & Restate (“Check the Map Often”)

Analogy: Think of a long meeting or class discussion.

  • At intervals, someone might say, “Okay, so far we’ve agreed on points A and B, and we’re still discussing point C.” This keeps everyone on track.

What to do with an LLM:

  • Regularly ask it to summarize what’s been decided so far.
  • Restate your goals and constraints at regular intervals to prevent confusion.

Example:

✅ “Let’s pause. Can you briefly summarize the solution we’re considering for the username bug and list the constraints we agreed on?”
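
One way to make the “check the map” habit mechanical is to request a recap every few exchanges. A minimal sketch, again with a hypothetical askLLM stub and an arbitrary checkpoint interval:

```typescript
// Sketch: request a checkpoint summary every few turns.
// askLLM is a hypothetical stub; SUMMARY_INTERVAL is an arbitrary choice.
async function askLLM(prompt: string): Promise<string> {
  return `<model response to: ${prompt}>`;
}

const SUMMARY_INTERVAL = 4;
let turn = 0;

async function ask(prompt: string): Promise<string> {
  turn += 1;
  const reply = await askLLM(prompt);
  if (turn % SUMMARY_INTERVAL === 0) {
    // Periodic checkpoint: restate decisions and constraints so far.
    const recap = await askLLM(
      "Briefly summarize the solution we are considering and the constraints we agreed on."
    );
    console.log("Checkpoint:", recap);
  }
  return reply;
}

ask("Let's refine the error handling on the login form.");
```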

🔎 Step 4: Encourage “Thinking Out Loud” (“Show Your Work”)

Analogy: Like a math teacher insisting you show your calculations, not just your final answer.

  • When the assistant explains each step, you can easily catch mistakes.

What to do with an LLM:

  • Ask explicitly for step-by-step reasoning.
  • Request self-critiques: have the LLM identify potential issues proactively.

Example:

✅ “Please walk me step-by-step through your solution. Tell me why each step is necessary, and let me know if you spot potential pitfalls.”
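
Scripted, this becomes a two-pass exchange: one prompt for explicit reasoning, then a second asking the model to critique its own answer. A minimal sketch with a hypothetical askLLM stub:

```typescript
// Sketch: ask for step-by-step reasoning, then a self-critique pass.
// askLLM is a hypothetical stub for a real chat client.
async function askLLM(prompt: string): Promise<string> {
  return `<model response to: ${prompt}>`;
}

async function reasonThenCritique(problem: string): Promise<string> {
  // First pass: make the reasoning explicit so mistakes are visible.
  const reasoning = await askLLM(
    `Walk me step by step through your solution to: ${problem}. ` +
      "Explain why each step is necessary."
  );
  // Second pass: the model reviews its own answer for weak points.
  return askLLM(
    `Review this reasoning and flag potential pitfalls or overlooked assumptions:\n${reasoning}`
  );
}

reasonThenCritique("the login-page crash on special characters").then(console.log);
```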

✅ Step 5: Validate and Test (“Double-Check Your Work”)

Analogy: Think of proofreading your essay or testing a recipe you’ve found online — trust but verify!

  • Even if the LLM sounds confident, always double-check the output.

What to do with an LLM:

  • If it’s code, test it in your environment.
  • If it’s facts, double-check important points with external sources.

Example:

✅ “I’ve copied your solution into my app. It gives me this error message. Could you help me understand why?”
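
For code, “double-check” usually means a test. Below is a minimal Jest-style sketch; the validateUsername implementation and its rules are purely illustrative stand-ins for whatever the model produced:

```typescript
// Sketch: never ship generated code untested.
// validateUsername and its rules are illustrative, not a real project's code.
function validateUsername(name: string): boolean {
  // Allow letters, digits, underscores, and dots, 3-20 characters.
  return /^[\w.]{3,20}$/.test(name);
}

describe("validateUsername", () => {
  test("accepts an ordinary username", () => {
    expect(validateUsername("alice")).toBe(true);
  });

  test("rejects special characters without crashing", () => {
    // The original bug: special characters crashed the login page.
    expect(validateUsername("al!ce<>")).toBe(false);
  });
});
```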

🧹 Step 6: Mitigate Confusion and Repetition (“Keep the Workspace Tidy”)

Analogy: Imagine a desk cluttered with five unfinished tasks — it can cause confusion about where to start.

  • Clear away irrelevant details and focus on one solution at a time.

What to do with an LLM:

  • Avoid overwhelming the model with too many conflicting paths at once.
  • Explicitly instruct it to reuse existing solutions instead of reinventing them.

Example:

✅ “Please reuse our existing function validateUsername() instead of creating a new one.”
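
It also helps to paste the existing helper into the prompt so the model extends it rather than inventing a parallel one. A small sketch (the helper source here is illustrative):

```typescript
// Sketch: include the existing helper in the prompt so the model reuses it.
// The helper source below is illustrative.
const existingHelper = `
export function validateUsername(name: string): boolean {
  return /^[\\w.]{3,20}$/.test(name);
}`;

const prompt = [
  "We already have this validation helper:",
  existingHelper,
  "Please reuse validateUsername() in the login-form fix.",
  "Do not write a new validation function.",
].join("\n");

console.log(prompt);
```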

🔄 Step 7: Use External Fact-Checking (“Consult an Expert”)

Analogy: If you have doubts about medical advice found online, you’d probably ask a doctor for confirmation. Similarly, if an LLM makes a critical claim, double-check elsewhere.

What to do with an LLM:

  • Cross-reference the most critical or sensitive answers using authoritative sources or official documentation.
  • Consider using a separate LLM instance as a “second opinion” to critique or verify.

Example:

✅ “You suggested using useNavigate from React Router v6. I’ll double-check that in the official documentation.”
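
The “second opinion” idea can also be scripted: route the first model’s claim through an independent instance that sees only the claim, not the conversation. A minimal sketch with two hypothetical stubs:

```typescript
// Sketch: use a second, independent model instance as a reviewer.
// askPrimary and askReviewer are hypothetical stubs for two separate clients.
async function askPrimary(prompt: string): Promise<string> {
  return `<primary model response to: ${prompt}>`;
}

async function askReviewer(prompt: string): Promise<string> {
  return `<reviewer model response to: ${prompt}>`;
}

async function secondOpinion(question: string): Promise<string> {
  const answer = await askPrimary(question);
  // The reviewer sees only the claim, not the prior conversation,
  // so it cannot simply agree with earlier context.
  return askReviewer(
    `Independently verify this claim and flag anything dubious:\n${answer}`
  );
}

secondOpinion("Does React Router v6 provide useNavigate?").then(console.log);
```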

📝 Step 8: Continual Improvement (“Take Notes & Improve Your Technique”)

Analogy: Like a chef adjusting a recipe each time to improve flavor, you should refine your prompts and workflows based on past successes and mistakes.

What to do with an LLM:

  • Keep track of prompts that yielded good results.
  • Learn from conversations that got confusing or off-track.

Example:

✅ “Last time, starting with explicit constraints really helped. I’ll do that again this time.”
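
Even a tiny log of what worked makes this habit stick. A sketch of one possible record shape (the fields and the sample entry are illustrative):

```typescript
// Sketch: a minimal record for tracking which prompting tactics worked.
// The fields and the sample entry are illustrative.
interface PromptLogEntry {
  tactic: string;  // e.g. "explicit constraints up front"
  worked: boolean; // did it improve the outcome?
  note: string;    // what to repeat or avoid next time
}

const promptLog: PromptLogEntry[] = [
  {
    tactic: "explicit constraints up front",
    worked: true,
    note: "Model stayed within the existing schema; no invented APIs.",
  },
];
```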

⚠️ Why is This Method Necessary? (“The Limitations Explained Simply”)

LLMs, even advanced ones, have inherent limitations due to their design:

  • Incomplete training data: They’re like talented assistants who’ve read many books, but not every book in existence. They don’t know every detail perfectly.
  • Difficulty retrieving correct info: They don’t have perfect recall, like trying to find a quote from memory without having the book in hand.
  • Ambiguous intent: Like misunderstanding a friend’s text message, LLMs may misinterpret exactly what you mean if you’re unclear.
  • Hallucinations: They can confidently make mistakes, like giving directions that feel familiar but are actually wrong.
  • Imperfect fact-checking: They can’t fully verify their own answers, similar to proofreading your own work and still missing typos.

Given these limitations, your role is critical in clearly guiding, verifying, and iteratively improving the outcome.

🛠️ In Short: The Methodology in a Nutshell

  • Clearly define what you want (destination).
  • Explore options through dialogue (navigate together).
  • Summarize and confirm regularly (check the map).
  • Request step-by-step thinking (show the work).
  • Always verify results (double-check).
  • Avoid confusion by keeping context clean (tidy workspace).
  • Fact-check externally for important claims (consult an expert).
  • Continuously improve your prompting (refine your approach).

🗝️ Final Analogy (To Summarize Everything):

Working with an LLM is like working with a brilliant but somewhat absent-minded assistant.
You wouldn’t give them one unclear instruction and walk away expecting perfection. Instead, you guide them step by step, double-check their results, clear confusion along the way, and constantly learn how best to communicate. Doing this, you transform a potentially chaotic collaboration into a productive partnership.

This comprehensive, intuitive methodology helps you reliably harness an LLM’s capabilities while minimizing its inherent limitations.

Appendix

Below is a structured methodology you can adopt to work effectively with Large Language Models (LLMs), given their pattern-based nature and known limitations. The goal is to maximize accuracy, reduce hallucinations, and retain control over final outputs.

1. Define & Scope the Problem Precisely

The methodology begins with defining the task clearly, ensuring that the LLM starts with an accurate understanding of the user’s needs. This step mitigates hallucinations and misinterpretations by grounding the model in a structured framework. Several key prompting patterns are used in this phase:

Key Prompting Patterns Used:

  • State the Role Explicitly (1.1):
    The methodology advises providing the LLM with a clear definition of its role within the context of the problem. For instance, if the model is assisting with debugging, it should be framed as a software engineering assistant with domain-specific expertise. This helps prevent generic or unfocused responses.
  • Declare the Objective & Constraints (1.2):
    Users are encouraged to articulate the scope, constraints, and assumptions upfront. For example, specifying that a solution must work with an existing React frontend and an Express backend avoids solutions that suggest an entirely different architecture.
  • Establish the Knowledge Scope (1.3):
    The methodology advises providing relevant background, existing code snippets, or known data. This aligns with the pattern of defining an explicit knowledge boundary to ensure that the LLM only generates responses within the provided framework, avoiding guesses.
  • Constraint Emphasis (2.5):
    The methodology focuses on emphasizing constraints early to ensure practical, realistic responses. For example, explicitly stating that the database schema cannot be changed ensures that solutions do not introduce schema-altering recommendations.
  • Relevancy Check (2.6):
    Ensuring only relevant context is provided prevents information overload. The methodology advises sharing only necessary code or domain details, which aligns with systematically verifying that each provided detail contributes to the task.

Why These Patterns Matter:
By carefully defining scope and constraints upfront, the methodology prevents the LLM from misinterpreting vague prompts, reduces hallucinations, and keeps answers relevant. These patterns serve as guardrails that keep responses domain-specific and actionable.

2. Plan an Iterative Workflow

The methodology moves beyond single-shot prompting, encouraging an iterative process where responses are refined through a series of prompts. This approach prevents errors from propagating and leverages the model’s pattern recognition capabilities.

Key Prompting Patterns Used:

  • Layered Prompting (3.1):
    Instead of expecting a fully correct response in one query, the methodology advises prompting in structured layers.
    Example:
    1. “What are three possible fixes for a timeout issue in a file upload process?”
    2. “Which of these approaches is least likely to affect database performance?”
    3. “Can you rewrite the solution with non-blocking operations?”
  • Progressive Synthesis (3.2):
    As multiple responses are generated, the methodology advises synthesizing key insights. This aligns with summarizing key takeaways at different checkpoints to ensure coherence before moving forward.
  • Iterative Clarification (3.3):
    Users are encouraged to ask follow-up questions that refine the LLM’s answers, ensuring that ambiguities are resolved early.
    Example: “You suggested approach #2. What potential conflicts might it cause with our database schema?”
  • Task Reprioritization (2.8):
    The methodology supports revisiting earlier steps to adjust priorities. If an initial solution is too costly, users can prompt the LLM to refactor it within a budget constraint.

Why These Patterns Matter:
The iterative approach improves accuracy by gradually refining responses instead of relying on a single prompt. This process ensures that solutions evolve dynamically, reducing the risk of premature or incorrect conclusions.

3. Maintain Context & Consolidate Regularly

A significant limitation of LLMs is context loss over long conversations. The methodology recommends summarizing and reinforcing important information to prevent inconsistencies and drift.

Key Prompting Patterns Used:

  • Maintain a Single Source of Truth (4.1):
    Users should periodically recap essential details to re-anchor the conversation (see the sketch after this list).
    Example: “We have decided on approach #2, which avoids modifying the schema. Let’s proceed with implementation details.”
  • Reference Previous Responses Explicitly (4.2):
    Instead of assuming the model remembers everything, users should explicitly remind it of prior answers.
    Example: “Based on your previous suggestion about using a queuing system, how would you optimize message processing?”
  • Use Recurring Summaries to Check Alignment (4.5):
    The methodology advises prompting the model to summarize agreements at regular intervals.
    Example: “Can you summarize our decision so far? Highlight any trade-offs we considered.”
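
A minimal sketch of pattern 4.1: keep one canonical recap in your own hands and prepend it to every prompt, updating it as decisions land. askLLM is a hypothetical stub:

```typescript
// Sketch of pattern 4.1: one canonical recap, prepended to every prompt.
// askLLM is a hypothetical stub for a real chat client.
async function askLLM(prompt: string): Promise<string> {
  return `<model response to: ${prompt}>`;
}

let sourceOfTruth =
  "Decisions so far: approach #2, which avoids modifying the schema.";

async function anchoredAsk(question: string): Promise<string> {
  // Every prompt restates the agreed facts before the new question.
  return askLLM(`${sourceOfTruth}\n\nQuestion: ${question}`);
}

// When a new decision lands, update the recap instead of trusting memory.
sourceOfTruth += " Message processing will use a queuing system.";
```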

Why These Patterns Matter:
These patterns counteract context loss and inconsistency, ensuring that the LLM remains aligned with previous conclusions and doesn’t contradict earlier insights.

4. Leverage Chain-of-Thought or Self-Critique

LLMs perform better when encouraged to explain their reasoning process. This methodology embeds self-critique and structured thinking into interactions.

Key Prompting Patterns Used:

  • Articulate Reasoning (9.1):
    Encouraging the model to break down its thought process improves response quality.
    Example: “Walk me through your reasoning step by step before suggesting a fix.”
  • Self-Correction Prompts (9.2):
    The methodology advises asking the model to validate its own outputs before implementation.
    Example: “Review your response and identify any weak points or overlooked assumptions.”
  • Alternative Reasoning Paths (9.4):
    Prompting for alternative perspectives reduces tunnel vision.
    Example: “If we couldn’t use a queuing system, what other solution could work?”

Why These Patterns Matter:
These patterns reinforce logical rigor and self-audit mechanisms, reducing the chance of overconfident yet incorrect responses.

5. Validate & Test

Before acting on LLM-generated output, the methodology advises verification through human oversight and testing.

Key Prompting Patterns Used:

  • Verification & Robustness (6.6):
    The methodology emphasizes testing outputs in real-world scenarios and cross-checking them.
  • Consistency Checks & Validation (6.5):
    Prompting the LLM to revalidate its solution against constraints before proceeding.

Why These Patterns Matter:
These validation steps ensure that outputs are not blindly trusted, reinforcing a human-in-the-loop approach.

6. Mitigate Hallucination & Repetition

LLMs sometimes fabricate information. The methodology provides built-in safeguards to detect and eliminate hallucinations.

Key Prompting Patterns Used:

  • Relevancy Check (2.6):
    Prevents unnecessary or fabricated information from creeping in.
  • Shared Terminology (4.8):
    Ensures consistent use of domain-specific terms, preventing redefinition errors.
  • Confidence Appraisal (9.6):
    Asking the model to assess its own confidence level in its responses (see the sketch after this list).
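
A minimal sketch of pattern 9.6: ask for an explicit confidence rating and treat a low or missing rating as a trigger for external verification. askLLM is a hypothetical stub, and the 1-5 scale is an arbitrary choice:

```typescript
// Sketch of pattern 9.6: request a confidence rating and gate on it.
// askLLM is a hypothetical stub; the 1-5 scale is arbitrary.
async function askLLM(prompt: string): Promise<string> {
  return `<model response to: ${prompt}>`;
}

async function answerWithConfidence(question: string): Promise<void> {
  const reply = await askLLM(
    `${question}\nEnd your answer with "Confidence: N/5", where N is 1-5.`
  );
  const match = reply.match(/Confidence:\s*([1-5])\/5/);
  const confidence = match ? Number(match[1]) : 0;
  if (confidence < 4) {
    // Low or missing confidence: treat the answer as a lead, not a fact.
    console.warn("Verify externally before relying on this answer.");
  }
  console.log(reply);
}

answerWithConfidence("Which React Router v6 hook replaces useHistory?");
```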

Why These Patterns Matter:
These safeguards help detect and correct misleading information before it propagates into real-world use.

Conclusion

This methodology deeply integrates prompting patterns to guide the LLM towards structured, reliable responses. By systematically using context anchoring, iterative refinement, self-critique, and verification strategies, users can harness LLMs more effectively, reducing hallucinations and ensuring practical, well-validated solutions.

More detail here: https://intuitionmachine.gumroad.com/l/gpt4

--


Published in Intuition Machine

Artificial Intuition, Artificial Fluency, Artificial Empathy, Semiosis Architectonic

Written by Carlos E. Perez

Author of Artificial Intuition, Fluency and Empathy and the Pattern Language books on AI — https://intuitionmachine.gumroad.com/ http://linkedin.com/in/ceperez