Designing AI Experiences, Part 1: What can AI do for humans?

With the recent explosion of interest in AI services, led by the success of OpenAI’s ChatGPT, it is the perfect time to reflect on ways to design better AI experiences.

Yulya Besplemennova
AI and Service Design
7 min read · Apr 7, 2023

--

In 1951, Paul Fitts worked on so-called “function allocation” research to determine which operations in human-machine systems should be entrusted to which actor. He developed the “HABA-MABA” model, which stands for “Humans Are Better At — Machines Are Better At,” based on the capabilities of machines at that time (long before ChatGPT and the like).

For a long time, it remained an essential principle in designing human-computer interactions and eventually carried over to the design of AI interactions. For example, Google put it this way when presenting their experimental Clips device in 2017: “Let people do what they do best and let machines do what people do worst… because, in order for us to build trust in the impact of AI, we must feel reassured, included, and informed.” But as AI becomes more advanced, should it substitute for humans in every field where it excels?

If we consider how differently human and machine intelligence evolved, the outcomes of this model might not be the most advantageous for us. For millions of years, humans developed basic motor skills for survival in the animal world, and only at some point did cerebral development spike, bringing us to a different level of intelligence. Machine intelligence, in contrast, was created to help us with highly complex calculations, something humans started doing relatively recently in our developmental history and needed additional assistance with.

However, even after decades of progress, Moravec’s paradox, formulated in the 1980s, still holds true: “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” We may see impressive developments in machine learning with language models and generative networks, but despite Elon Musk’s past claims, we are still far from fully autonomous cars, delivery robots, household helpers, and other ways of incorporating machines into our daily lives, because they remain unable to break free of controlled environments and embrace the complexity of our everyday surroundings.

So what is happening now is that AI is acquiring the very skills we have long perceived as the most “human,” the ones that supposedly distinguish us from non-human animals thanks to our “sapient” nature and intelligence, while we still cannot delegate the most mundane physical routines of daily life to it.

This highlights how the scenario we’re living in today is potentially even more dystopian than the long-feared full automation. It’s true that many blue-collar jobs have already been replaced by machines in the controlled environments of factories, and many more jobs can now be automated in white-collar sectors. But in addition to that, platform capitalism has been using humans to substitute for what machines cannot do for some time already. Consider ride-hailing or delivery services, or Amazon warehouse workers — “the worker’s activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite.” We might think we have moved past the age of the Manchester factories whose capitalism prompted Marx to write these words in the 19th century, but they seem just as applicable today.

Besides the complexity we will have to navigate in defining our place and relationships with AI in operations and production, there is another side to it, very important for service design and not yet fully acknowledged — the automation of consumption. This means that an AI agent could take over some daily tasks and use services on behalf of the user, saving their time and optimizing resources. This could change both the way we think of a service as an organized process and the way we think of touchpoints as its interfaces. It should lead to new approaches to design, just as the initial digital transformation required significant adjustments. We can look at some examples of how this is already happening, and also extrapolate and speculate about future possibilities.

Simple automation algorithms, which do not even require AI, are already changing consumption models through subscriptions and repeat-purchase suggestions. This alters user journeys, as people might spend less time browsing for things to buy and discovering new, different products beyond the ones they have subscribed to.
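To make this concrete, here is a minimal sketch of the kind of rule-based repeat-purchase logic involved; everything in it (the product names, thresholds, and the suggest_reorders function) is hypothetical and only illustrates that no machine learning is needed to start reshaping the journey.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Purchase:
    product: str
    bought_on: date

def suggest_reorders(history: list[Purchase], today: date,
                     min_repeats: int = 2) -> list[str]:
    """Suggest products whose usual repurchase interval has elapsed.

    Pure counting, no AI: group past purchases by product, estimate the
    average interval between them, and flag anything that is 'due'.
    """
    by_product: dict[str, list[date]] = {}
    for p in history:
        by_product.setdefault(p.product, []).append(p.bought_on)

    due = []
    for product, dates in by_product.items():
        if len(dates) < min_repeats:
            continue  # not (yet) a recurring purchase
        dates.sort()
        intervals = [(b - a).days for a, b in zip(dates, dates[1:])]
        avg_interval = sum(intervals) / len(intervals)
        if (today - dates[-1]).days >= avg_interval:
            due.append(product)
    return due

# Example: coffee bought roughly every two weeks is due again.
history = [
    Purchase("coffee", date(2023, 3, 1)),
    Purchase("coffee", date(2023, 3, 15)),
    Purchase("olive oil", date(2023, 3, 20)),
]
print(suggest_reorders(history, today=date(2023, 4, 1)))  # ['coffee']
```

Even this trivial heuristic quietly removes a browsing step from the journey, and with it a moment of discovery.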

Complete substitution of human interactions, and even of self-service interfaces, as experimented with by Amazon Go, requires much more complex technology and a lot of computational power, and might not be profitable at all. However, it presents a very interesting case of an almost zero-interaction service. This new paradigm demands a different approach to user journey planning and raises issues of transparency and explainability, since a service with no interactions also eliminates feedback and reassurance opportunities for customers.

There might be no need for a shop or a website when AI agents can integrate interfaces and become access points for multiple services. This presents numerous challenges in transmitting brand values and identity through such minimal interaction. What are the ways to perceive the difference between calling Lyft or Uber when talking to Alexa, and how can we design them? Gradually, we arrive at a point where we might not even need a dedicated voice assistant device, or have to take out our phones, to interact with AI and services: wearables like earbuds and watches get ever closer to the human body and are constantly present, up to the point of considering implantable chips. We still interact with those objects today, but the more AI learns and the more proactive it becomes, the fewer interactions will be needed.
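As a purely hypothetical sketch (the provider classes and the request_ride function are invented for illustration, not the real Alexa, Lyft, or Uber APIs), this is roughly what “the agent as single access point” looks like, and how little surface each brand retains:

```python
from dataclasses import dataclass

@dataclass
class Quote:
    provider: str
    price: float
    eta_minutes: int

@dataclass
class RideProvider:
    """Hypothetical provider reduced to whatever fits through the agent:
    a name, a quote, and a booking call."""
    name: str
    base_price: float

    def quote(self, pickup: str, dropoff: str) -> Quote:
        return Quote(self.name, self.base_price, eta_minutes=5)

    def book(self, quote: Quote) -> str:
        return f"{self.name} ride booked for {quote.price:.2f}"

def request_ride(providers: list[RideProvider], pickup: str, dropoff: str,
                 prefer: str | None = None) -> str:
    """The agent collects quotes and books on the user's behalf.

    The brand is just a preference flag; everything else that
    distinguishes the services in their own apps disappears behind
    this one interaction.
    """
    quotes = [p.quote(pickup, dropoff) for p in providers]
    if prefer:
        quotes = [q for q in quotes if q.provider == prefer] or quotes
    best = min(quotes, key=lambda q: (q.price, q.eta_minutes))
    chosen = next(p for p in providers if p.name == best.provider)
    return chosen.book(best)

print(request_ride([RideProvider("Lyft", 12.0), RideProvider("Uber", 11.5)],
                   "home", "office"))  # Uber ride booked for 11.50
```

Designing for brand identity in this setting means deciding what, if anything, should survive the reduction to a quote and a preference flag.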

How long will it take until ecosystems of AI and connected devices can make the service itself almost invisible to the user? For example, if you have a smart fridge connected to a smart home system, we can imagine AI eventually analyzing your consumption habits and organizing delivery directly to the fridge (yes, that’s already a thing!) while you are at work, opening the door to the delivery person or robot.
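As a speculative sketch under invented assumptions (the fridge data, the consumption rates, and the schedule_delivery stub are all made up; no real smart-home or grocery API is implied), the back-end loop could look something like this:

```python
# Hypothetical inventory snapshot reported by a connected fridge:
# item -> (units currently inside, average units consumed per day).
FRIDGE_STATE = {
    "milk": (1.0, 0.5),
    "eggs": (2.0, 1.0),
    "butter": (3.0, 0.1),
}

def plan_restock(state: dict[str, tuple[float, float]],
                 horizon_days: float = 3.0) -> list[str]:
    """Return items predicted to run out within the planning horizon.

    The 'interaction' here is purely behavioral data: nobody opens an
    app or places an order; the agent infers the need from consumption.
    """
    to_order = []
    for item, (units_left, daily_rate) in state.items():
        days_left = units_left / daily_rate if daily_rate else float("inf")
        if days_left <= horizon_days:
            to_order.append(item)
    return to_order

def schedule_delivery(items: list[str]) -> None:
    """Stand-in for the step that would call a grocery service and let
    the courier in while the user is at work; hypothetical, not a real API."""
    if items:
        print(f"Ordering {items}, delivery scheduled while the user is away.")

schedule_delivery(plan_restock(FRIDGE_STATE))
# Ordering ['milk', 'eggs'], delivery scheduled while the user is away.
```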

With predictive agentive technology, the data about your behavior becomes a kind of interface: a way of interacting with the system in the back end without any explicit user action, up to the point where the user can be substituted entirely:

(Image from a Google Assistant presentation)

At this point, we return to the need to understand how to design complex, collaborative human-AI journeys: which tasks would be better addressed by one actor or the other, and how to build relationships between them. This time, the allocation is based not only on competencies but also on preferences about what we want to experience ourselves. We might want to save the time spent waiting in line at a grocery store and spend it instead at a favorite restaurant or spa, enjoying their service as a nurturing experience. In this case, an AI agent with a deep understanding of its user can be very useful in augmenting their experience by informing the service staff about their preferences, allowing the service providers to “anticipate needs and outdo expectations,” as John Maeda puts it in “How to Speak Machine,” comparing future augmented user experiences with the best of Japanese hospitality traditions.

Designing these complex experiences and relationships requires a new approach to understanding how we design experience journeys, which become more like blueprints with the constant presence of AI entities that serve as additional mediators between users and services.

This requires a new grammar and set of principles that we will discuss in the following articles. As we continue to explore the possibilities of AI experiences, we must remember to prioritize the human experience and strike a balance between efficiency and meaningful interactions. By doing so, we can create a future where AI and humans work together seamlessly, enhancing each other’s strengths and minimizing their weaknesses.

This and the following articles are the result of research we conducted at oblo over the past years for clients working with AI, and of a presentation we gave in 2020 at the Intersection Conference together with Roberta Tassi. The presentation was kindly transcribed by otter.ai and then rewritten in collaboration with ChatGPT-4.
