UI for AI

Explorations around creating new pieces of UI optimized for AI-powered products

Design Principles for AI

7 min read · Sep 28, 2025


As we worked through our AI use case examples and research, it made sense to create some Design Principles to guide our work, both for AI in general and for the individual use cases in particular.

Design principles are more than guidelines. They define success and set the direction for our work. They act as north stars, guiding us through ambiguity and helping us make better choices.

The process of writing these principles was exciting because we could finally give shape to burgeoning opinions around AI. However, it can be frustrating if a team ends up following rules they don’t believe in. That’s why we went through multiple rounds of iteration and group revision until the principles felt truly ours.

As we explored AI use cases and conducted research, we began to shape our own vision of how AI should evolve. Once we understood what already exists, we returned to the question that sparked this project in the first place: Why are current AI design paradigms missing the mark? Our answer was clear: The AI we want to see in the world must be human-centered and dynamic. But what does that really mean?

To keep ourselves from getting lost in endless sketching and prototyping, we developed a set of Design Principles. We aim to use these principles to ground our work moving forward and keep us aligned with our greater purpose of ensuring UI for AI serves human needs and values.

GENERAL DESIGN PRINCIPLES

The UI for AI Team

Control and Agency

Prioritize user intent over AI output. The AI should adapt to user intent, supporting their decisions rather than making them. For example, a designer can edit a header or merge layouts while keeping the rest intact. (Zeana)

Emphasize user agency. Enable users to constrain system defaults. Consider a user choosing which notes the AI retains or which interactions are logged without disruptions to their workflow. (Erica)

Handling AI Uncertainty

It’s more than just an algorithm. Account for how users interpret, trust, and act on outputs. (Claire)

Make uncertainty visible and actionable. Guide users to interpret and recover from errors even when the AI can’t tell it’s wrong. (Adithi)

Understanding AI Capabilities and Limitations

Deliver on needs, not necessarily commands. A user’s command might be poorly worded, incomplete, or even counterproductive. But underneath lies the real goal. AI should interpret intent, context, and desired outcomes, then respond to satisfy the underlying purpose. (Valerie)

Design interfaces and flows that make the system’s strengths and limitations obvious, so users can set realistic expectations. (Dave)

Make the next step clear. Spawn tools based on behavior and intent. (Dan)

Contextual History

Design for continuity. Ensure users can easily view, resume, and carry context across tasks. For example, resurface sources a user previously explored to connect past insights with new findings. (Hanara)

Flexible Workflows

Support adaptability over rigidity. Interfaces should flex to users’ changing goals and contexts, enabling multiple pathways instead of enforcing a single way to work. For example, switching views for different interactions: a linear chat view for conversation, a map/outline view for idea exploration. (Celine)

Iterative Exploration

Design for exploration and reflection. Support both the exploration of new alternatives and the ability to build on past ideas. For instance, allow users to annotate or bookmark key moments as they experiment. (Holly)

Design for reassurance. Let AI guide with clarity and empathy so users feel supported, not second-guessed. (Yukti)

Optimize for creativity. Make it easy for the AI to foster creativity without replacing it. (Dan)

Trust and Clarity

Clearly communicate the actions the AI performed. (Cole)

Reveal the process. Demonstrate how the AI thinks and works to help users understand how to become better at using it. (Jason)

USE CASE SPECIFIC DESIGN PRINCIPLES

AI Use Cases diagram by Celine Tseng

Blank Canvas

Cole, Hanara, and Yukti

Clearly Define the System’s Scope. Communicate what the AI can and cannot successfully achieve. Example: At the start of a session, the system makes its scope clear: “I can do X or suggest Y, but not Z.” This sets boundaries while still encouraging creative exploration.

Automate tasks while preserving human control. Provide scaffolding and automation, while preserving the user’s creative control and allowing for complete freedom of expression. Example: The system offers a few starting layouts based on the user’s prompt, and the user can move, swap, or delete any element. This accelerates brainstorming while the person stays in charge of shaping the final result.

Frame early actions as immediate momentum. Design the initial interaction to feel like progress, not a start from a blank canvas. Example: The system provides starting points such as templates, sample structures, or draft variations, so users can react and build right away.

Conversation Flow and Prompt Editing

Celine and Holly

Enable flexible and diverse options for refining prompts. Prompt editing should integrate naturally into conversation and support diverse methods of prompt iteration, including both user-driven and AI-assisted approaches. Example: Users can iterate prompts through text editing, UI buttons, AI-assisted suggestions, or a combination of these methods.

Emphasize divergent exploration over forced linearity. Conversations should accommodate branching thought processes and iterative exploration over rigid, linear chat structures. Example: A tree-like view lets users fork off new directions, compare different paths, and move fluidly between ideas for reflection and evaluation.

Enable precise access along with high-level navigation. Users should be able to scan the conversation flow at a high level and jump directly into specific points for precise edits. Example: A zoomable map shows the overall conversation at a glance, while clicking any node opens its prompt for more granular edits.
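The branching and navigation ideas above can be sketched as a small data structure (a hypothetical illustration, not the team's implementation): each prompt node records its parent, so the same store can be rendered as a tree map for exploration or flattened into a linear chat path for any single branch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    id: int
    prompt: str
    parent: Optional[int] = None  # None marks the conversation root

class ConversationTree:
    """A conversation stored as a tree: forking adds a sibling branch
    instead of overwriting the linear history."""
    def __init__(self):
        self.nodes: dict[int, Node] = {}
        self._next = 0

    def add(self, prompt: str, parent: Optional[int] = None) -> int:
        nid = self._next
        self._next += 1
        self.nodes[nid] = Node(nid, prompt, parent)
        return nid

    def path(self, nid: Optional[int]) -> list[str]:
        # Walk from a node back to the root: this is the linear
        # "chat view" for one branch of the tree.
        out = []
        while nid is not None:
            node = self.nodes[nid]
            out.append(node.prompt)
            nid = node.parent
        return list(reversed(out))
```

Because every node keeps only a parent pointer, comparing two paths side by side is just two `path()` calls, which is what a fork-and-compare view needs.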

Refinement Flow

Celine, Zeana, Erica

Support multi-level refinement from overview to fine details. Users should be able to adjust designs at different levels, from big-picture layout changes down to small tweaks like colors or text. Example: Users can refine at different levels — rewriting a whole paragraph for tone, adjusting a single sentence or word for clarity, or combining layout from one option with style from another.

Enable precision while preserving structure. Edits should target exactly what the user wants to change, while keeping the overall hierarchy and layout intact. Example: When users highlight multiple words in a generated output (e.g., “run,” “eat,” and “said”), the system swaps in synonyms for those words in the next output while preserving the surrounding structure.
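One way to read this principle in code (a sketch with invented names, not the team's implementation): apply edits only to the character spans the user highlighted, working right to left so earlier offsets stay valid, and leave everything outside those spans untouched.

```python
def targeted_replace(text: str,
                     spans: list[tuple[int, int]],
                     replacements: list[str]) -> str:
    """Replace only the highlighted (start, end) spans; the rest of
    the text, and therefore its structure, is left intact."""
    # Apply edits right-to-left so earlier offsets remain valid.
    for (start, end), new in sorted(zip(spans, replacements), reverse=True):
        text = text[:start] + new + text[end:]
    return text
```

The same idea scales up from character spans to layout elements: the edit targets a selection, never the whole document.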

Emphasize co-authorship over replacement. AI should act as a collaborative partner, refining and building on user ideas instead of discarding them and starting fresh. Example: Show AI edits inline as highlights or side-by-side alternatives rather than overwriting, so the user can accept, reject, or merge them instead of losing their original text.

Selection and Prioritization

Claire and Valerie

Adapt to changing goals by filtering what matters from complex contexts. Enable AI to adapt priorities in real time by distilling complexity into clarity, building user trust in its decisions.

Design for a person, not just a persona. A person isn’t static. Understanding the person means being attuned to situational signals rather than assuming their priorities based on preconceived notions.

Empower precise expression of needs to ensure alignment. AI should give people the tools and scaffolding to articulate their needs, guiding them even if they don’t know the “right” technical terms.

Multitasking and Context Switching

Hanara and Holly

Preserve context so users can easily resume tasks. Systems should remember and represent the task flow with clear markers, so users don’t have to reconstruct their working memory each time they return. Example: When users return to a draft, the system reopens at their last edit with changes visible, so they can immediately pick up where they left off.

Maintain clear boundaries between different tasks. Interfaces should support compartmentalized task spaces, ensuring that progress and context in one workflow don’t interfere with another. Example: If someone is managing two projects in parallel, switching to Project B should show only its files and notes while keeping reminders and comments from Project A separate.

Anchor workflows in chronology to track fragmented flows. Use temporal structures and visible anchors to help users trace, reenter, and orient themselves within their workflows. Example: A design tool lays out a simple timeline — previous mockups, newly integrated feedback — so users can quickly see the flow of work belonging to that project.
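The compartmentalization and continuity principles above could be modeled roughly like this (a hypothetical sketch; the class and field names are invented): each project gets its own workspace holding notes and a chronological timeline, and switching projects surfaces only that workspace's context.

```python
class Workspace:
    """One project's compartment: its notes and a chronological
    timeline of anchors, isolated from every other project."""
    def __init__(self, name: str):
        self.name = name
        self.notes: list[str] = []
        self.timeline: list[str] = []  # e.g. "mockup v1", "feedback round"

class Session:
    def __init__(self):
        self.workspaces: dict[str, Workspace] = {}

    def switch(self, name: str) -> Workspace:
        # Switching returns only this project's context; other
        # workspaces keep their state but stay out of view.
        return self.workspaces.setdefault(name, Workspace(name))
```

Because state lives on the workspace rather than the session, returning to a project restores its notes and timeline exactly as they were left.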

Error Detection, Reporting, and Awareness

Yukti and Adithi

Prevent user errors before they occur. Anticipate and flag potential input issues in real time so users can correct mistakes early and stay in flow. Example: When a prompt lacks context, flag it before submission instead of returning a vague, unhelpful response.

Surface system uncertainty before it misleads. Be transparent when the AI may be unsure or incomplete in its responses, helping users understand potential limitations and make informed decisions. Example: When an AI generates a confident-sounding answer with weak evidence, add — “I’m filling some gaps here, double-check important details.”

Guide iterations with actionable refinements. AI regeneration can produce unpredictable results. Offer targeted adjustment options to help users improve outputs rather than relying on “Try again” options. Example: When an AI writes a formal email but a casual tone is intended, surface options like “Make it more casual” instead of only “Retry.”

Memory Handling

Dave, Erica, Jason

Connect conversations, don’t archive them. Memory should connect threads across conversations, not log everything that happens. Focus on what helps users maintain flow and context, not comprehensive data storage. Example: When returning to a project discussion, surface key decisions and open questions that enable continuation, rather than chronologically listing every project mention.

Make memory visible to make it trustworthy. Users must see what’s remembered, when it influences responses, and why it was important. Invisible memory feels manipulative; visible memory feels collaborative. Example: Show “Based on your preference for concise updates…” with an option to view or edit that memory, rather than silently adjusting response style.

Design forgetting as carefully as remembering. AI should gracefully sunset, terminate, or reduce irrelevant or outdated information. Complement this with condensed, efficient storage that prioritizes simple facts. Example: Project-specific details automatically become less prominent after completion, while communication preferences persist unless explicitly changed.

Preserve and augment human meaning, don’t dilute it. Memory should strengthen users’ context-making and recall while preserving their sense of authorship. Enhance human meaning rather than summarizing it away. Example: Maintain “I’m nervous about this presentation but excited about the topic” rather than reducing it to “presentation anxiety = moderate.”
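The remembering-and-forgetting principles could look something like this in code (a hypothetical sketch; the names and scores are invented): memories carry a salience score, unpinned items decay when their context closes, and pinned preferences persist until explicitly changed. Note that the original text of each memory is kept verbatim, in line with preserving human meaning rather than summarizing it away.

```python
class MemoryStore:
    def __init__(self):
        self.items: list[dict] = []

    def remember(self, text: str, pinned: bool = False) -> None:
        # Pinned memories (e.g. communication preferences) never decay.
        self.items.append({"text": text, "salience": 1.0, "pinned": pinned})

    def sunset(self, factor: float = 0.5) -> None:
        # "Design forgetting": unpinned memories fade rather than vanish.
        for m in self.items:
            if not m["pinned"]:
                m["salience"] *= factor

    def recall(self, threshold: float = 0.3) -> list[str]:
        # Only memories that are pinned or still salient shape responses.
        return [m["text"] for m in self.items
                if m["pinned"] or m["salience"] >= threshold]
```

Decay rather than deletion also keeps memory auditable: a user can still inspect what faded and why, which supports the visibility principle above.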

Valerie Caña wrote the introduction to these principles.



Published in UI for AI



Written by Dan Saffer

Designer. Product Leader. Author. Professor.
