UI for AI

Explorations around creating new pieces of UI optimized for AI-powered products

Diving Deep into AI Use Cases

10 min read · Sep 21, 2025


This week, the UI for AI team explored their use cases, seeking design opportunities and inspiration from products that are doing interesting things with AI in each area.

Blank Canvas

Cole, Hanara, and Yukti

The “blank canvas” problem isn’t just about the absence of content, but about the lack of initial direction. When users are faced with a completely empty starting point, they’re forced to invent both the material and the structure in order to make progress. This often creates hesitation, slows momentum, and makes starting a task the most difficult part. We can reduce this friction by designing tools that scaffold users’ early steps, enabling them to efficiently find the direction towards their goal.

Gamma, for example, is an AI-powered tool for creating presentations, documents, and web pages. It avoids the blank page by first showing the formats it can generate, then letting users set preferences like slide count, dimensions, and language. To further guide them, it surfaces example prompts with a shuffle option, giving users a spark of inspiration to kick off their work. Similarly, Miro provides starter templates and frameworks for building on the empty canvas. Users don’t have to start from scratch; instead, they can work with pre-built boards, modify them, and quickly make progress without losing flexibility.

[Image: Gamma]

From these findings, the design implication is that AI tools should move beyond a blank input and provide structured entry points such as formats, example prompts, or templates that guide users towards action. These entry points help shape momentum with small but meaningful cues that turn hesitation into progress. The goal is not to replace creativity but to lower the barrier to starting by giving it a little structure, while still leaving room for exploration and personalization.
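
To make this pattern concrete, here is a minimal sketch (in TypeScript) of structured entry points modeled as data the UI can render before any free-form prompt is written. Every type, field, and prompt string below is invented for illustration; none of it reflects Gamma's or Miro's actual implementation.

```typescript
// A hypothetical sketch of "structured entry points" for a blank-canvas flow.
// All names and values are illustrative, not taken from Gamma or Miro.

type EntryFormat = "presentation" | "document" | "webpage";

interface StartingPreferences {
  format: EntryFormat;
  slideCount?: number; // only meaningful for presentations
  language: string;
}

const examplePrompts: string[] = [
  "A pitch deck for a community garden nonprofit",
  "A one-page explainer on how solar panels work",
  "A landing page for a weekend pottery workshop",
];

// "Shuffle" surfaces a fresh spark instead of leaving the input empty.
function shufflePrompt(prompts: string[]): string {
  return prompts[Math.floor(Math.random() * prompts.length)];
}

// The UI seeds generation with structure plus a suggested prompt,
// so the user edits a starting point rather than inventing one.
function buildInitialRequest(prefs: StartingPreferences) {
  return { prefs, suggestedPrompt: shufflePrompt(examplePrompts) };
}

console.log(
  buildInitialRequest({ format: "presentation", slideCount: 8, language: "en" })
);
```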

Refinement Flow

Celine, Zeana, Erica

Current generative AI designs make refinement challenging because their linear iteration flow limits the user’s ability to experiment and refine ideas efficiently. Tweaking specific details often requires regenerating an entire output, which makes it difficult to explore alternatives, merge multiple variants, and connect dissimilar prompts. Drawing inspiration from tools and research papers like Firefly and SpecifyUI, we want to explore how UI can support multi-level refinement, from overview to fine details, while encouraging iteration by making regeneration feel seamless and lightweight rather than cumbersome.

SpecifyUI introduces SPEC, a structured representation that decomposes UIs into layout, regions, and styles, allowing designers to make targeted edits or merge features across variants rather than restarting from scratch. Another example is Adobe Firefly, an AI-powered tool that adds, removes, or expands content in images using simple text prompts. It shows this approach in practice by letting users highlight a section of an image or design and regenerate only that part, keeping the rest intact. Findings from SpecifyUI showed that this structured approach improved structural fidelity, gave designers more precise control and less frustration, and produced results closer to user intent compared to prompt-only systems like Google Stitch. Firefly demonstrates the value of editable outputs that let humans and AI co-author together, variation previews that support curation rather than constant correction, and granular refinement that allows iteration at different levels, from whole paragraphs down to single phrases.
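
To give a feel for what a structured, editable representation looks like in practice, here is a simplified SPEC-like sketch in TypeScript. The field names and the merge helper are our own loose approximation of the idea described in the SpecifyUI paper, not its actual schema.

```typescript
// A simplified, hypothetical SPEC-like representation: a UI decomposed into
// a layout, named regions, and per-region styles, so edits can be targeted.

interface RegionSpec {
  content: string;                // e.g. "hero headline", "pricing table"
  styles: Record<string, string>; // e.g. { color: "#222222", font: "Inter" }
}

interface UiSpec {
  layout: string;                 // e.g. "two-column", "single-page scroll"
  regions: Record<string, RegionSpec>;
}

// Targeted edit: change one region and leave the rest of the spec intact.
function editRegion(spec: UiSpec, name: string, patch: Partial<RegionSpec>): UiSpec {
  const region = spec.regions[name];
  if (!region) return spec;
  return {
    ...spec,
    regions: {
      ...spec.regions,
      [name]: {
        ...region,
        ...patch,
        styles: { ...region.styles, ...patch.styles },
      },
    },
  };
}

// Merge across variants: copy one region from a donor variant into the base,
// keeping the base's layout and every other region unchanged.
function mergeVariants(base: UiSpec, donor: UiSpec, regionName: string): UiSpec {
  const donorRegion = donor.regions[regionName];
  if (!donorRegion) return base;
  return { ...base, regions: { ...base.regions, [regionName]: donorRegion } };
}
```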

[Image: SpecifyUI]

The design implications from our findings highlight the need for tools that give users more control and flexibility when refining multiple outputs. Multi-selection features such as highlighting or lassoing allow users to adjust several outputs, or elements within a single output, at once. This process also depends on generated, structured starting points that set the foundation for interaction. Moreover, targeted refinement supports adjustments to specific regions of a design, while merging across variants allows users to combine layouts, styles, or features from different outputs to generate more tailored results. Together, these implications ensure that users can effectively create, combine, and refine outputs, which reduces cognitive load in the customization process.

Multitasking and Context Switching

Hanara and Holly

One of the biggest challenges users face when multitasking is the cognitive load of reestablishing context every time they switch tasks. Instead of thinking about multitasking as doing multiple things simultaneously, our research suggests it is more often about leaving and returning to tasks without friction. We can reduce this burden by designing UI patterns that preserve and resurface context so that users don’t need to rebuild it themselves.

We looked at two examples that tackle these UI patterns. AIThing is a desktop tool designed to support multitasking by partitioning chat threads into discrete containers, each with its own memory. Each task, its chat history, and its context remain isolated from the others while still running in parallel. Rewind AI, on the other hand, is a personal AI assistant that supports context switching by running quietly in the background, recording your screen and audio. It lets users instantly find and pull up the exact slide, snippet, or site viewed at any timestamp, allowing them to pick up where they left off.
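
The underlying pattern can be captured in a few types: each task gets its own isolated container of messages and memory, while a separate timeline records what was touched and when. The sketch below (TypeScript) is our own abstraction of that pattern, not AIThing's or Rewind's actual data model.

```typescript
// Hypothetical model of per-task context isolation plus a shared timeline.

interface Message { role: "user" | "assistant"; text: string; at: number }

interface TaskContainer {
  id: string;
  title: string;
  messages: Message[];            // chat history isolated to this task
  memory: Record<string, string>; // task-scoped facts, never shared across tasks
}

interface TimelineEntry { taskId: string; at: number; summary: string }

class Workspace {
  private tasks = new Map<string, TaskContainer>();
  private timeline: TimelineEntry[] = [];

  createTask(id: string, title: string): TaskContainer {
    const task: TaskContainer = { id, title, messages: [], memory: {} };
    this.tasks.set(id, task);
    return task;
  }

  // Appending to one task never leaks context into another.
  addMessage(taskId: string, msg: Message): void {
    const task = this.tasks.get(taskId);
    if (!task) return;
    task.messages.push(msg);
    this.timeline.push({ taskId, at: msg.at, summary: msg.text.slice(0, 80) });
  }

  // "Where was I?": resurface the most recent activity for a given task.
  lastSeen(taskId: string): TimelineEntry | undefined {
    return [...this.timeline].reverse().find((entry) => entry.taskId === taskId);
  }
}
```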

[Image: AIThing]

From our findings, a clear design implication emerges around how AI should handle multitasking contexts. The UI must not only keep the context of each task separate, but also intelligently connect them across time in a chronological sequence, allowing users to trace back, reenter, or link fragmented workflows without losing the narrative of their work. This timeline view is especially effective because it leverages the way people naturally recall their work: not by exact file names or search terms, but by when they last engaged with a task. By reducing the memory burden and reconstructing work through a temporal lens, the timeline makes multitasking feel less like piecing together fragments and more like moving fluidly along a continuous thread of activity. The result is a system that respects boundaries between contexts, surfaces memories when needed, and lets users focus on moving their work forward.

Conversation Flow and Prompt Editing

Celine and Holly

Traditional linear, text-based chat interfaces often fail to capture the non-linear, exploratory nature of the human thought process. Additionally, crafting an ideal prompt often requires iterative refinement, a process that can further elongate an already cluttered, linear conversation thread. This challenge leads us to two primary questions we aim to explore: How do we make the iteration of prompts less disruptive to the overall conversation flow? And how can we non-linearly organize the output of these iterations to preserve context and encourage exploration of ideas?

To investigate these questions, we analyzed two examples that offer innovative solutions: Midjourney, for its approach to seamless prompt editing, and LAIERS, for its spatial, non-linear visualization of conversation history. Midjourney, an AI image generator on Discord, uses simple UI buttons to guide prompt refinement, making it easier to create desired outputs while keeping iteration streamlined and uncluttered. LAIERS is a platform that transforms the way users think with AI by creating spatial conversations, generating chat histories as a tree visualization that enables users to branch, compare, and revisit ideas without being confined to a linear thread.

[Image: LAIERS]

These examples highlight key design implications for rethinking conversational interfaces. Firstly, prompt refinement doesn’t have to be heavily text-based. Blending UI elements like buttons and remix mode with text editing reduces reliance on long, complex prompts for iteration. Buttons like “Upscale” and “Variation” let users explore multiple directions instead of being limited to a linear progression. Secondly, adopting a tree-like representation of a user’s workflow allows users to view their history as a high-level overview. Users can easily identify key branches, revisit prior ideas, and jump directly to relevant benchmarks throughout their process, supporting more intentional navigation and deeper engagement with past work. Together, these implications point towards a more adaptive system that actively supports iteration, exploration, and seamless navigation within a complex, non-linear idea space.
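
As a sketch of the data structure behind this kind of interface, the snippet below models conversation history as a tree where button-style actions fork new branches instead of extending one long thread. The types and action names are our own illustration, not how Midjourney or LAIERS are actually built.

```typescript
// Hypothetical tree model for non-linear conversation history.

type RefineAction = "variation" | "upscale" | "edit-prompt";

interface ConversationNode {
  id: string;
  prompt: string;
  response: string;
  action?: RefineAction; // how this node was derived from its parent
  children: ConversationNode[];
}

// Branch from any earlier node instead of appending to one linear thread.
function branch(
  parent: ConversationNode,
  action: RefineAction,
  prompt: string,
  response: string
): ConversationNode {
  const child: ConversationNode = {
    id: `${parent.id}.${parent.children.length + 1}`,
    prompt,
    response,
    action,
    children: [],
  };
  parent.children.push(child);
  return child;
}

// High-level overview: list every path from the root, so users can compare
// branches and jump back to an earlier fork.
function listPaths(node: ConversationNode, trail: string[] = []): string[][] {
  const here = [...trail, `${node.id} (${node.action ?? "root"})`];
  if (node.children.length === 0) return [here];
  return node.children.flatMap((child) => listPaths(child, here));
}
```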

Error Detection, Reporting, and Awareness

Yukti and Adithi

AI-generated content can be surprisingly confident even when it contains errors, which makes it difficult for users to know what to trust. Without clear indicators of reliability, users may assume all outputs are correct, leading to mistakes or misinformation.

Some tools address this by making errors and uncertainty visible. Google’s Gemini, for instance, uses a double-check feature that visually highlights statements according to how well they can be verified: green when Google Search finds content that is likely similar to a statement, orange when the content it finds is likely different, and no highlight when there is not enough information to evaluate the statement. Hallucination Probes provides a streaming detector that flags potential hallucinations in real time for long-form generation, giving users immediate feedback on parts of the response that may be unreliable. The probes identify fabricated entities during generation, with higher scores meaning a hallucination is more likely.
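
As one possible way such signals could reach the interface, the sketch below maps a verification result to a highlight state (mirroring the green/orange/none scheme described above) and thresholds a per-span hallucination score into a warning flag. The score range, threshold, and function names are assumptions for illustration, not the actual Gemini or Hallucination Probes APIs.

```typescript
// Hypothetical mapping from verification and uncertainty signals to UI states.

type Verification = "corroborated" | "contradicted" | "unknown";
type Highlight = "green" | "orange" | "none";

// Mirrors the green / orange / no-highlight scheme described above.
function highlightFor(result: Verification): Highlight {
  switch (result) {
    case "corroborated":
      return "green";
    case "contradicted":
      return "orange";
    case "unknown":
      return "none";
  }
}

interface SpanScore {
  text: string;
  hallucinationScore: number; // assumed 0..1, higher = more likely fabricated
}

// Streaming-style check: flag spans whose score crosses a threshold so the
// UI can warn the reader while the response is still being generated.
function flagUnreliable(spans: SpanScore[], threshold = 0.7): SpanScore[] {
  return spans.filter((s) => s.hallucinationScore >= threshold);
}

console.log(highlightFor("contradicted")); // "orange"
console.log(
  flagUnreliable([
    { text: "The paper was published in 1998.", hallucinationScore: 0.85 },
    { text: "It proposes a streaming detector.", hallucinationScore: 0.2 },
  ])
);
```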

[Image: Gemini]

These examples demonstrate that surfacing confidence, verification, and error warnings directly in the interface helps users engage critically with AI outputs. By combining visual cues and real-time alerts, AI systems can transform opaque responses into transparent interactions, enabling users to trust the information they rely on while remaining aware of potential mistakes.

Memory Handling

Dave, Erica, Jason

Unlike stateless AI systems that treat every conversation as new, memory-enabled AI introduces persistence, giving interactions a sense of continuity and personalization. Across the tools we explored, memory was framed not just as storage, but as a design challenge: how to balance convenience with privacy, how to give users meaningful control, and how to prevent overwhelm as memories accumulate. The most compelling designs treated memory as a visible, user-facing feature rather than a hidden backend function.

[Image: ChatGPT]

OpenAI’s ChatGPT memory demonstrates this approach by giving users explicit agency over what is remembered. People can ask the system to “remember this,” review or edit stored memories in settings, and opt for temporary chats when they prefer not to save anything. This combination of saved memories and ongoing chat history supports both short- and long-term context while offering transparency and a sense of safety. It represents a model where memory is lightweight, controllable, and framed as something the user actively manages. Microsoft Recall, in contrast, takes a far more comprehensive approach by capturing screen snapshots every few seconds and making them searchable through semantic indexing and a timeline view. This provides unprecedented continuity and the ability to resurface past context instantly, but it also raises serious questions about privacy, storage requirements, and cognitive load. While Recall shows the potential of memory as a powerful productivity tool, it also highlights the risks of automatic capture that users may not fully understand or control.
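
The contrast between the two approaches can be boiled down to a small sketch: explicit, user-managed memories on one side and automatic, timestamped capture on the other. The TypeScript below is our own simplification and does not reflect either product's internals.

```typescript
// Hypothetical user-facing memory store: explicit saves, review and delete,
// and temporary sessions that persist nothing.

interface MemoryItem { id: number; text: string; savedAt: number }

class MemoryStore {
  private items: MemoryItem[] = [];
  private nextId = 1;
  constructor(private temporary = false) {}

  // "Remember this": only stores when the session is not temporary.
  remember(text: string): MemoryItem | null {
    if (this.temporary) return null;
    const item: MemoryItem = { id: this.nextId++, text, savedAt: Date.now() };
    this.items.push(item);
    return item;
  }

  // Memory stays legible and editable: users can review and remove entries.
  review(): MemoryItem[] { return [...this.items]; }
  forget(id: number): void { this.items = this.items.filter((i) => i.id !== id); }
}

// Automatic-capture style, closer to the Recall model: everything is logged
// with a timestamp and searched later, which raises the control questions above.
interface Snapshot { at: number; description: string }
function searchSnapshots(log: Snapshot[], query: string): Snapshot[] {
  const q = query.toLowerCase();
  return log.filter((s) => s.description.toLowerCase().includes(q));
}
```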

Together, these examples illustrate the design space of memory handling: from selective and transparent user-driven memory to bold but contentious visions of total recall. The design implications suggest that future systems will need to combine the trust and control of explicit memory with the convenience and continuity of comprehensive capture, all while making memory legible, editable, and respectful of user boundaries.

Selection and Prioritization

Claire and Valerie

One of the key takeaways from our research is the difference between selection and prioritization in AI systems. Selection is about narrowing down inputs: deciding which data points, features, and actions make it past the filter. Prioritization then decides which of those selected items carry the most weight. Together, they ensure that, given limited resources like time, cost, or technical capacity, a system can focus on the features with the highest impact. This is a core part of how modern AI works, but it also creates challenges. Algorithmic decisions about what to keep and what to rank are computationally expensive, hard to scale, and often opaque to users.

To make sense of this, researchers often describe three families of prioritization techniques. Fuzzy logic helps systems handle ambiguity, like when a user input isn’t binary. Optimization algorithms allocate resources to maximize value, dynamically learning what matters most. Machine learning enables adaptive personalization, continuously tuning prioritization based on patterns in the data. Applying this lens to design highlights important trade-offs: should a chatbot invest more in explainability or in response speed? Should clarity matter more than personalization? These choices are rarely neutral and directly affect how users experience and trust AI.
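
As a toy example of the first family: instead of a hard relevant-or-not cutoff, fuzzy membership assigns each candidate a degree between 0 and 1, and prioritization weighs that degree against a cost such as latency. The membership shape, weights, and candidates below are invented purely for illustration.

```typescript
// Toy fuzzy-style prioritization: graded relevance instead of a hard cutoff.

interface Candidate { name: string; relevance: number; costMs: number }

// A simple piecewise membership function mapping raw relevance (0..1) to a
// fuzzy "worth showing" degree. The shape is chosen purely for illustration.
function worthShowing(relevance: number): number {
  if (relevance <= 0.2) return 0;
  if (relevance >= 0.8) return 1;
  return (relevance - 0.2) / 0.6; // linear ramp between the two anchors
}

// Prioritization trades the fuzzy degree off against cost (here, latency).
function priority(candidate: Candidate, costWeight = 0.001): number {
  return worthShowing(candidate.relevance) - costWeight * candidate.costMs;
}

const candidates: Candidate[] = [
  { name: "detailed explanation", relevance: 0.9, costMs: 400 },
  { name: "quick answer", relevance: 0.6, costMs: 80 },
  { name: "tangential fact", relevance: 0.25, costMs: 50 },
];

// Highest-priority first: the explainability/speed trade-off made explicit.
console.log(
  [...candidates].sort((a, b) => priority(b) - priority(a)).map((c) => c.name)
);
```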

Tools like Uizard select the most salient design elements from sketches or screenshots, then prioritize keeping layout constraints stable while experimenting with color or imagery. In healthcare, Heidi Health filters out small talk, flags action items like prescriptions, and prioritizes diagnoses and treatment plans, a real-time application of selection and ranking that influences both clinicians and patients. ChatGPT uses sparse attention mechanisms to select only the most relevant tokens from context windows. Netflix assigns engagement probabilities to shows, ranking and serving only the top items.
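
The common thread across these examples is a two-step pipeline: filter candidates past some threshold (selection), then rank what remains and serve only the top few (prioritization). The sketch below shows that pipeline in the abstract; the probabilities and threshold are made up.

```typescript
// Selection then prioritization: filter candidates, rank them, serve the top k.

interface Item { title: string; engagementProbability: number }

function selectAndRank(items: Item[], minProbability: number, k: number): Item[] {
  return items
    .filter((item) => item.engagementProbability >= minProbability)    // selection
    .sort((a, b) => b.engagementProbability - a.engagementProbability) // prioritization
    .slice(0, k); // serve only the highest-impact items
}

const catalog: Item[] = [
  { title: "Show A", engagementProbability: 0.82 },
  { title: "Show B", engagementProbability: 0.41 },
  { title: "Show C", engagementProbability: 0.67 },
  { title: "Show D", engagementProbability: 0.12 },
];

console.log(selectAndRank(catalog, 0.3, 2)); // Show A and Show C
```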

[Image: Uizard]

The bottom line is that selection and prioritization happen on both sides: the AI chooses and ranks, but users also prioritize how they respond to what the system surfaces. That human layer of prioritization has been largely untapped. We’ve been so focused on algorithms that we forget design must also account for how users interpret, trust, and act on those outputs. Building interfaces that make prioritization transparent could help users align systems with their own values and build trust.

Written by Dan Saffer
Designer. Product Leader. Author. Professor.