Elevating UX for AI-first products

Marc Lequime
7 min read · Sep 2, 2024


The past three years have been revolutionary. We’ve seen an explosion in AI capabilities, transforming products and user experience (UX) design.
As a product designer, I’m sure I’m not the only one feeling a bit breathless!

With the release of ChatGPT, everyone, from tech giants to lean startups, scrambled to explore and develop new AI applications. The UX landscape has consequently undergone a rapid transformation as we pioneer new ways to interact with devices.

The new frontier: Interacting with AI

We find ourselves in a period of inspiration, exploration and discovery.
It feels a lot like the early decades of the Web — there’s a sense of freedom to try out new interfaces and interaction models. And now, just like then, we’re going to see some brilliant innovations alongside some… more interesting experiments.

This cycle of innovation repeats with each new technology: the early web mentioned above, the pioneers of early video games, the first wave of 3D video games, and more recently VR/AR computing (Spatial, anyone?).

A hallway that extends infinitely with varied doors either side, showcasing infinite possibilities.
Infinite possibilities: but which door to choose?

When a new way of interacting with technology emerges, there are no established standards or patterns yet. This puts both the freedom and daunting responsibility in the hands of UX designers to figure out what “feels right”, what enhances the app’s storytelling, and how to help users harness the full potential of the exciting new tech.

Classifying AI components of systems

AI-powered tasks can be loosely classified into two categories: open-ended and closed-ended tasks.

In an open-ended task, the user provides free-form input, which the system processes in some way (for example, by combining it with a system prompt) to produce the results they are looking for. ChatGPT is a common example of this.

Closed-ended tasks are akin to ‘goal-specific’ systems: fixed actions take place, as in a more traditional piece of software, and LLMs are used to drive the business logic under the hood.

The key differentiating factor between these systems is what is expected of the user. Open-ended systems provide a far wider spectrum of possible inputs and outputs, but place a larger burden on the user to understand how to control the system. Closed-ended systems provide rails for user interaction, but limit the scope of what the system can achieve with the AI. There are plenty of alternative terms for each, but these two classifications keep emerging.
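In practice, the distinction often comes down to who writes the prompt. A minimal sketch, with illustrative function names and prompt wording (not from any real product):

```python
# Hypothetical sketch: open-ended vs closed-ended prompt assembly.
# The system prompt and summarisation task are illustrative only.

SYSTEM_PROMPT = "You are a helpful writing assistant."

def build_open_ended_prompt(user_input: str) -> str:
    # Open-ended: the user's free text drives the interaction;
    # the system prompt merely frames it.
    return f"{SYSTEM_PROMPT}\n\nUser: {user_input}"

def build_closed_ended_prompt(document: str) -> str:
    # Closed-ended: the action is fixed (here, summarisation);
    # the user never writes, or even sees, the prompt itself.
    return (
        "Summarise the following document in three bullet points "
        "for a business audience:\n\n" + document
    )
```

In the open-ended case the user supplies almost the entire prompt; in the closed-ended case they supply only data, and the engineered prompt stays under the hood.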

Current challenges of AI UX…

1. The paralysis of infinite possibilities

A man holds his hand over his mouth in a gesture of deep thought.

Have you ever been told to “think of anything you want”, and suddenly found your mind going blank? (This was, and largely still is, my process when doing guided meditation.) It’s also a common experience for users when they’re presented with AI’s apparently endless capabilities. It’s akin to being given a blank canvas and every colour of paint imaginable. For many, it’s simply overwhelming.

2. Prompt engineering is a science… sorta

Leading on from this, prompt engineering is certainly a new way of interacting with computers.

When we (at thestartupfactory.tech) built Clio Books (https://cliobooks.ai/), an AI-assisted book-writing platform, we spent a fortnight running experiments to try to obtain the best possible prompts for the goals we had for the platform. Our overall sentiment? Prompt engineering is a bit like negotiation. There’s a language that AI understands best, and it’s close to natural language, but not quite.

So, just to get started building, it took a team of engineers over a week of studying, experimenting, and tinkering to land on the ideal prompts and the necessary results. Users are unlikely to have as much success.

3. Knowledge in the world, not just in the head

Many current AI integrations require users to recall what they were recently working on, the context of their workflow, or previous conversations. This places a cognitive burden on the user. Keeping relevant information in context and having it accessible reduces the need for users to hold everything in their heads.
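One way to put that knowledge “in the world” is to store recent workflow notes and inject them into every prompt automatically, rather than relying on user recall. A minimal sketch, with hypothetical names:

```python
# Hypothetical sketch: a rolling store of recent workflow context
# that gets injected into each prompt on the user's behalf.
from collections import deque

class ContextWindow:
    """Keeps the last few workflow notes and prepends them to prompts."""

    def __init__(self, max_items: int = 5):
        self.items = deque(maxlen=max_items)  # oldest notes drop off

    def remember(self, note: str) -> None:
        self.items.append(note)

    def prompt_with_context(self, user_input: str) -> str:
        context = "\n".join(f"- {note}" for note in self.items)
        return f"Recent context:\n{context}\n\nUser: {user_input}"
```

The user can then say “continue where I left off” and the system, not the user, supplies what “where I left off” means.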

4. ‘Good Input, Bad Output’

We all know about Garbage In, Garbage Out (GIGO). We all know AI can often produce low quality or irrelevant responses or, most dangerously, convincing falsehoods (See the paper ChatGPT is Bullshit [Hicks, Humphries and Slater, 2024]).

This can be frustrating, and it erodes trust both in the app the user is using and in AI interactions in general.

… and solutions:

Apple’s recent announcement at WWDC gave us a glimpse into how they’re approaching these challenges. Almost ironically, they co-opt the term AI itself as ‘Apple Intelligence’ (probably the most Apple move imaginable).

But they also showed how they’re not just throwing in LLM generation and calling it a day; they’re exploring the interplay of AI and UI in these emerging systems. Drawing inspiration from some of the techniques demonstrated in these previews, and from my personal experience building AI systems, there are a few clear hallmarks of good AI/UX design:

Guiding creativity

The key is to provide anchors or starting points. Just like in those aforementioned meditation sessions, where your attention is directed onto specific things, AI interfaces benefit from offering prompts, suggestions or a structure to fuel creativity.

Apple’s approach focuses on some of the more human-centred concepts that have cropped up in the AI scene over the last few years. The interfaces shown provide a framework for user creativity while helping to guide the users in their actions. This UI wrapper could make it easier for users to understand what’s possible, when faced with near infinite possibilities.

A screenshot from Apple Intelligence on the iPhone. It shows an ‘Ask Siri’ input, with three suggestions for what the user can enter above: Get directions Home, Play Road Trip Classics, and Share ETA with Chad.
‘Ask Siri’ implementation from Apple Intelligence

As discussed briefly before, AI systems benefit from subtly guiding users to understand what type of task they’re starting, what input is required from them, and what the possibilities are. Not only does Apple provide examples of full prompts; even the suggested first words on the keyboard reflect common openers for system commands.

But prompt writing is still tricky for most users. Having experienced engineers build these prompts and effectively ‘shortcut’ them for users with relevant, intuitive UI elements puts the power of well-made prompts at users’ fingertips, without requiring the expertise.
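The shortcut pattern can be sketched very simply: a tap on a plain-language suggestion expands into a full, pre-engineered prompt. The labels and templates below are illustrative assumptions, not Apple’s actual implementation:

```python
# Hypothetical sketch: expert-written prompt templates surfaced as
# plain UI suggestions, so users never have to write the prompt.

PROMPT_SHORTCUTS = {
    "Get directions Home": (
        "Plan the fastest route to the saved address 'Home', "
        "accounting for current traffic conditions."
    ),
    "Summarise this thread": (
        "Summarise the conversation below in two sentences, "
        "preserving any action items and deadlines."
    ),
}

def expand_shortcut(label: str, context: str = "") -> str:
    """Turn a tapped suggestion into its full, pre-engineered prompt."""
    template = PROMPT_SHORTCUTS[label]
    return f"{template}\n\n{context}" if context else template
```

The user sees only the friendly label; the engineered wording that makes the LLM behave well stays behind the button.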

Guiding quality input

With Clio Books, the assistant collects spoken information from writers about their goals, target reader profiles, and the content they wish to include in their books. This involves a lot of speaking, and as a result the quality of the information is crucial.

Clio’s prior human-led process ensured key points were discussed because a person was there to directly intervene. In the new AI-driven system, however, if users don’t provide detailed input on important aspects of their book, the quality of the resulting talking plan suffers.

A screenshot from Clio Books, showing a sample cue given to users to constrain input. It says, “What challenges does your reader face?”
Sample setup user cue from the web version of Clio Books

The solution? Guiding users through a multi-step process, breaking down the information into smaller, focused prompts, which are then fed into the AI system. The result: Far more accurate talking plans that reflect the content that the users actually want in their books.
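The multi-step intake described above can be sketched as a list of focused cues whose answers are assembled into one structured brief for the AI. Beyond the one cue shown in the screenshot, the wording and function names here are illustrative, not Clio’s actual implementation:

```python
# Hypothetical sketch: break one big open question into focused cues,
# then assemble the answers into a single structured brief.

CUES = [
    ("goal", "What do you want this book to achieve?"),
    ("reader", "Who is your target reader?"),
    ("challenges", "What challenges does your reader face?"),
]

def assemble_brief(answers: dict) -> str:
    """Join cue/answer pairs into one brief; refuse incomplete input."""
    missing = [key for key, _ in CUES if not answers.get(key, "").strip()]
    if missing:
        raise ValueError(f"Unanswered cues: {missing}")
    return "\n\n".join(
        f"{question}\n{answers[key]}" for key, question in CUES
    )
```

Each small answer is easy for the user to give, and the system, rather than the user, takes on the job of turning them into a well-structured prompt.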

Another way we focused users on providing good-quality input was by evaluating whether they’d actually answered the question asked of them. This was also achieved with AI, and it’s an example of AI working under the hood to power a transparent interface.

A screenshot from Clio Books, showing the interface where a user has replied with a nonsensical statement. The AI assistant has highlighted this, and asks if it is relevant. The user is prompted to restart or confirm that the text is relevant.
Clio’s AI assistant highlighting a potentially irrelevant answer during the writing process.
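An under-the-hood relevance check like this can be sketched as a second, narrow LLM call that judges whether the reply addresses the cue. The `classify` callable is injected so it can be stubbed or swapped for a real API client; the prompt wording is an assumption, not Clio’s actual prompt:

```python
# Hypothetical sketch: a narrow yes/no LLM call that checks whether
# the user's answer actually addresses the question asked.
from typing import Callable

def build_relevance_prompt(question: str, answer: str) -> str:
    return (
        "Does the answer below address the question? Reply YES or NO.\n"
        f"Question: {question}\n"
        f"Answer: {answer}"
    )

def is_relevant(question: str, answer: str,
                classify: Callable[[str], str]) -> bool:
    """Parse the model's verdict defensively; anything but YES fails."""
    verdict = classify(build_relevance_prompt(question, answer))
    return verdict.strip().upper().startswith("YES")
```

When the check fails, the UI can surface exactly the kind of gentle “is this relevant?” nudge shown in the screenshot, instead of silently accepting low-quality input.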

Transparency

Lastly, these interfaces feel really powerful when you can forget that it’s an LLM integration at all. To use an oft-overused quote from Arthur C. Clarke: ‘Any sufficiently advanced technology is indistinguishable from magic.’ So why not let it feel like magic!

Interfaces feel best when they’re powerful yet transparent, and that’s not an oxymoron. No user enjoys an experience where they feel hand-held or oversteered, but simply providing options that are visible, yet not the primary focus of the screen, can guide their input.

Priority Inbox, an AI-powered integration for highlighting important emails

LLMs really shine when they power genuinely unique features in an interface, and when those features knit with the existing UI in a way that highlights their utility without necessarily exposing the AI power underneath.

To wrap it up

Handling how we interact with AI systems is undeniably important, as many companies race to provide AI-enabled integrations within existing software and many founders mine the technology for novel product ideas.

As UX designers navigate these uncharted waters, they must balance creative freedom with the responsibility to create intuitive, usable interfaces. Through a focus on clear guidance, context, and the refinement of AI interactions, designers can realise the full potential of this new wave of powerful systems, while ensuring that the AI component enhances rather than complicates the user experience.

The future of UX design lies not just in understanding the potential of what LLMs can achieve, but in mastering how to make those capabilities accessible, intuitive, and beneficial to all users.

Curious about our projects?

TheStartupFactory is a venture studio focused on crafting quality, tech-based startup projects for entrepreneurs. We’re based in Manchester, UK.


Marc Lequime

UX designer and full stack engineer. Helping make startup magic happen at thestartupfactory.tech