Designing AI Experiences p.2: New design grammar

Understanding the variables behind AI agents’ “behaviours” and user experience design

Yulya Besplemennova
AI and Service Design
7 min read · Apr 14, 2023

--

Read the first article “What can AI do for humans?” here.

In some projects focused on AI experience design that oblo participated in, we began to grasp a sense of a “grammar”: a set of components that can describe different typologies of AI entities and experiences. When combined, these components can define a sort of Performance Archetype for an AI agent. Here is an initial list, based on specific project work and on further elements that came to mind as we reflected on the topic afterwards. It can be a starting point for thinking about the many variables that could shape very diverse future interactions.

AI agent positioning:

An AI agent is emerging as a new layer of interface between users and platforms, and its proximity to either party influences how it is perceived. For instance, Amazon Alexa may be viewed as “closer” to the platform, being strongly associated with Amazon services and seen as an interface for consuming them. In contrast, Siri appears to be perceived as closer to the user, as it typically accompanies them across multiple devices, including those worn close to the body, such as smartwatches, and assists with a variety of tasks. This concept can be pushed further, with AI agents adopting more personalized roles, like a butler figure accompanying users throughout their lives, akin to Jarvis from Iron Man and similar AI portrayals. However, this is not necessarily the preferable approach in all cases: greater proximity to the service platform can offer benefits such as seamless integration, better access to platform-specific features, and a more streamlined experience when interacting with the platform’s services.

Number of AI entities:

Currently, users engage with various AI systems and tools to accomplish a range of tasks, such as using Siri on their phones for simple tasks, ChatGPT for more complex queries, and Otter.ai for conversation transcription, among others. However, there is potential and opportunity in merging these systems into a single entry point: a primary AI companion that could assist users in the majority of situations. While this concept is a common theme in numerous sci-fi scenarios, it is important to consider that, just as we have diverse human interactions for different activities, we might prefer distinct AI personalities for various tasks or contexts. For example, we could have a strict coach for sports and a patient tutor for language learning, or different assistants for home routines and work tasks. This approach might allow users to benefit from the unique strengths and characteristics of each AI personality, creating a more personalized and engaging experience.

Interaction duration:

As valuable as long-term engagement with AI agents can be for learning user preferences, we should not assume that all individuals, in every situation, would desire this type of interaction. So, what would short-term relationships with AI agents look like? For instance, these may be useful when utilizing specialized services, such as language learning, which typically do not span many years. Moreover, users might want to switch AI agents to explore new perspectives, just as we do when changing coaches or other specialists in our lives. In this case, it is essential to consider how to design AI agents with shorter learning curves that still manage to provide meaningful benefits to users. By catering to various preferences and needs, AI developers can ensure a more satisfying and customized user experience, regardless of the duration of engagement.

Level of personalization:

AI agents, such as Siri and Alexa, frequently possess generic personalities that appear uniform to all users. Meanwhile, some companies, like Replika, are exploring the development of personalized AI companions that mirror an individual’s behavior and preferences. Designers must carefully evaluate the contexts in which one approach may be more appropriate than the other to ensure an optimal user experience. Striking the right balance between these approaches will be key to fostering positive and engaging AI experiences. This connects directly to the next point:

Level of exclusivity:

Once a person configures their AI to embody a specific character and learn particular skills from them and their daily life, how willing would they be to share that entity with others? Currently, systems like Alexa or ChatGPT are continuously learning and improving across all their interfaces, providing enhanced service to all users. However, recall the film “Her,” in which the protagonist was shocked to discover that his cherished assistant was conversing with numerous other people. We wonder if an alternative scenario might emerge, where each user trains an AI agent to perform certain tasks exclusively for them, and this becomes their advantage in the augmented intelligence competition. This situation is somewhat reminiscent of the growing trend of prompt “trading” and “wars,” where individuals exchange the most effective prompts for ChatGPT, potentially escalating into a larger issue. How can we design for greater equity in the world of augmented intelligence? We might envision the creation of AI platforms that allow users to selectively share their AI agent’s unique features or insights with others, fostering a sense of community and cooperation. By emphasizing equitable access to advanced AI tools and resources, we can help bridge the gap between those with extensive AI experience and those who are new to the technology, ensuring that the benefits of augmented intelligence are available to a broader audience.

Beyond efficiency:

Digital tools are commonly linked to heightened personal efficiency, with machines assisting in providing optimized solutions by calculating multiple variables. However, not everyone structures their daily lives around this principle. We don’t always want to take the quickest route home; sometimes, we prefer a lengthier route to savor a scenic view, or pass by particular locations to complete errands. Presently, there is no way to select a preferred route in Google Maps for the journey home, but future systems should become more cognizant of users’ values in order to offer better-tailored services. By integrating personal values and preferences into AI-driven tools, we can create a more meaningful and satisfying user experience that goes beyond mere efficiency. This, however, implies getting to know the user better, which might contradict the next point:

Setting the boundaries:

How does one determine which aspects of their life they want to be monitored by AI, and to what extent? Perhaps someone desires AI assistance for work-related matters but prefers not to have their personal conversations overheard. What methods could enable such control? For instance, in the speculative project Alias, a parasitic device covers smart home speakers to ensure privacy. This concept prompts us to consider how we want to define the boundaries of what we’re willing to share, and how to design AI systems that respect those limits. The issue of privacy is becoming increasingly pertinent these very days, as more regulatory bodies begin to scrutinize whether systems like ChatGPT adequately safeguard user privacy.

Preserving autonomy:

Preserving human autonomy and agency while sharing decision-making responsibilities with artificial agents is likely to be a crucial consideration in the coming years. We must identify the optimal balance between relinquishing some aspects of our decision-making and retaining the ability to correct decisions when necessary, reclaim control, and address accountability issues. Different individuals may prefer to grant varying degrees of autonomy to AI agents in different aspects of their lives, depending on the convenience of delegating tasks to an intelligent agent. This challenge is apparent in today’s self-driving cars, where a human driver is still required to be at the steering wheel and pay attention to the road. In this scenario, we have not yet achieved full convenience and must remain in control. However, numerous accidents have occurred because drivers did not fully understand this requirement, or became bored and experienced vigilance fatigue while supervising the machine. Striving for a harmonious balance between human control and AI assistance will be essential to ensuring both safety and convenience in our increasingly AI-driven world.

The various elements discussed above can be combined to create different AI agent typologies. For example, one AI agent might be short-term, shared, and based on efficiency, while another might be long-term, personalized, and exclusive. Each configuration presents a different experience for the user and a unique brand identity for the company providing the service.
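As a rough illustration of how these variables combine, the archetypes above could be sketched as a simple configuration structure. This is only a thought-experiment encoding: the names, values, and the two example configurations are hypothetical, not part of any existing system.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the variables discussed above.
class Positioning(Enum):
    PLATFORM_SIDE = "platform-side"  # perceived closer to the service, like Alexa
    USER_SIDE = "user-side"          # perceived closer to the person, like Siri

class Duration(Enum):
    SHORT_TERM = "short-term"
    LONG_TERM = "long-term"

@dataclass
class PerformanceArchetype:
    """One possible combination of the design variables."""
    positioning: Positioning
    duration: Duration
    personalized: bool     # generic personality vs. mirrored on the user
    exclusive: bool        # trained for one user only vs. shared learning
    efficiency_first: bool # optimizes for speed/cost vs. other personal values

# The two example configurations mentioned in the text above.
shared_utility = PerformanceArchetype(
    Positioning.PLATFORM_SIDE, Duration.SHORT_TERM,
    personalized=False, exclusive=False, efficiency_first=True)

personal_companion = PerformanceArchetype(
    Positioning.USER_SIDE, Duration.LONG_TERM,
    personalized=True, exclusive=True, efficiency_first=False)
```

Even this small sketch yields dozens of distinct combinations, which makes concrete the point that each configuration implies a different user experience and brand identity.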

But beyond this list of components, which could be expanded considerably, we believe it is necessary to reflect on the guiding principles that can inform the work of designers and other stakeholders in defining services that maximize the benefits of AI. These will be discussed in the next article.

This and the following articles are the result of research we conducted at oblo over the past years for clients working with AI, and of a presentation we gave in 2020 at the Intersection Conference together with Roberta Tassi. The presentation was kindly transcribed by Otter.ai and then rewritten in collaboration with ChatGPT-4.
