The Direction of Generative AI in UX Design as Seen from AI Engineer Summit

Asuka Uda
3 min read · Oct 19, 2023


Hello, my name is Asuka Uda, and I’m an experienced UX designer specializing in conversational AI. Thank you for reading my article.

I watched the AI Engineer Summit, held from October 8th to 10th, 2023, paying particular attention to the presentations that touched on UX. They got me thinking about the future direction of UX in generative AI.

It might sound a bit tacky, but the impression I came away with is that the very near future of UX and service design in generative AI is going to be all about “hyper-personalization.”

What caught my attention were the following three presentations:

- See, Hear, Speak, Draw — We’re heading towards a multimodal world. by Simón Fishman & Logan Kilpatrick from OpenAI
- Climbing the Ladder of Abstraction — How might we use AI to build products focused not just on working faster, but on transforming how we work? by Amelia Wattenberger from Adept
- The Intelligent Interface — On building AI Products From First Principles by Samantha Whitmore & Jason Yuan from New Computer

Simón and Logan from OpenAI said that 2023 is the year of chatbots and that 2024 will be the year of multimodal models: not just chat, but generation across many modes, including images and audio. Logan shared a fascinating idea for generative imagery: users could generate an image of their ideal room and then receive product recommendations from e-commerce sites that match those preferences. This seems like a perfect fit for e-commerce companies.
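To make the idea concrete, here is a minimal, hypothetical sketch of what the matching step could look like: embed the generated room image and each catalog product with some image encoder (a CLIP-style model, for instance) and rank products by cosine similarity. The `embed_image` function and its fixed vectors are stand-ins of my own, not anything OpenAI described.

```python
import math

def embed_image(image_id: str) -> list[float]:
    # Placeholder embeddings; a real system would run an image encoder here.
    fake = {
        "generated_room": [0.9, 0.1, 0.3],
        "lamp_A": [0.8, 0.2, 0.4],
        "rug_B": [0.1, 0.9, 0.2],
    }
    return fake[image_id]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

room = embed_image("generated_room")
# Rank catalog products by visual similarity to the user's ideal room.
ranked = sorted(["lamp_A", "rug_B"],
                key=lambda p: cosine(room, embed_image(p)),
                reverse=True)
print(ranked)  # products most similar to the generated room come first
```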

Adept’s Amelia provided a glimpse of the near-future direction of generative AI. For AI-generated summaries, for example, the idea is not to hand every user a summary at the same fixed level of abstraction, but to let each user choose how deep a summary they want. Likewise, since tasks such as choosing a product involve different search criteria for each user, searches could be customized to suit individual preferences.
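A minimal sketch of the “choose your depth” idea, assuming a hypothetical `call_llm` helper in place of a real model call: the user’s chosen depth simply selects a different summarization instruction.

```python
# Instructions at three user-selectable levels of abstraction (my own labels).
DEPTH_INSTRUCTIONS = {
    "headline": "Summarize the text in one sentence.",
    "overview": "Summarize the text in three bullet points.",
    "detailed": "Summarize the text in a few paragraphs, keeping key figures and caveats.",
}

def build_summary_prompt(text: str, depth: str) -> str:
    # The chosen depth determines the instruction prepended to the text.
    return f"{DEPTH_INSTRUCTIONS[depth]}\n\nText:\n{text}"

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call here.
    return f"[model output for: {prompt.splitlines()[0]}]"

article = "Generative AI is shifting UX from fixed flows to adaptive, per-user ones..."
for depth in ("headline", "overview", "detailed"):
    print(call_llm(build_summary_prompt(article, depth)))
```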

From New Computer, Samantha and Jason showed a demo in which the UI/UX adapts to the user’s situation: it switches to voice-based interaction when the user is away from the monitor and back to text when the user is close. They aim to provide a next-level experience in which voice, text, images, and gestures are not used in isolation but integrated seamlessly, tailored to the user’s context.
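As a rough illustration of that kind of context-driven switching, here is a hypothetical sketch; the proximity signal, the 1.5 m threshold, and the `UserContext` fields are all my assumptions, since the demo’s internals weren’t shown.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    distance_from_screen_m: float  # estimated distance, e.g. from a camera or sensor
    is_typing: bool

def pick_modality(ctx: UserContext) -> str:
    # Choose the output modality from the user's current context.
    if ctx.is_typing:
        return "text"   # user is engaged with the keyboard
    if ctx.distance_from_screen_m > 1.5:
        return "voice"  # too far away to read the screen comfortably
    return "text"

print(pick_modality(UserContext(2.0, False)))  # voice: user is away from the monitor
print(pick_modality(UserContext(0.5, True)))   # text: user is close and typing
```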

From these insights, I believe what’s emerging is a concept of “hyper-personalization.” Today, especially in contexts like e-commerce, we offer users a fixed set of search options; for clothing, users can filter by gender, clothing type, color, brand, and so on. Looking ahead, true personalization will mean users searching e-commerce sites by their own individual criteria, with results presented in UI formats tailored to each user. And this personalization will extend beyond chat to a multimodal experience: even if the interface is primarily chat-based, the information delivered to users is likely to become multimodal.
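One way such per-user criteria could be represented is sketched below; the `SearchCriteria` fields and the stubbed `extract_criteria` function are illustrative assumptions, with the idea that an LLM would fill the structure from a free-form request.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SearchCriteria:
    category: Optional[str] = None
    colors: list[str] = field(default_factory=list)
    max_price: Optional[float] = None
    free_text: str = ""  # criteria that don't fit any fixed facet

def extract_criteria(user_request: str) -> SearchCriteria:
    # Placeholder: in a real system an LLM would parse the free-form request
    # into this structure; here the result is hard-coded for illustration.
    return SearchCriteria(category="jacket", colors=["navy"],
                          max_price=150.0, free_text="machine washable")

print(extract_criteria("A navy jacket under $150 that I can machine wash"))
```

The point of the structure is that fixed facets and the `free_text` field can coexist: the site can keep its conventional filters while still honoring criteria it never anticipated.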

So how should UX and service designers, along with product managers, approach product design in this context? Generative AI is incredibly fluid and can shape experiences around each individual user. That makes it imperative for us to raise our perspective: to stand at a higher vantage point, survey the landscape, and envision what these experiences could be, while also putting ourselves in the user’s shoes and creating experiences that truly resonate with them.

--


Asuka Uda

Conversational AI UX designer. Conversation design, Service design. Master's degree in User Experience Design. Japan