🤖 Automation vs. Augmentation 🦸‍♂️ in AI: Designing Mixed-Initiative User Interfaces

Steven M. Moore
Published in AIxDESIGN · 6 min read · Dec 8, 2022

_________________________________________________________________

PSA: It is a key value for us at AIxDESIGN to open-source our work and research. The forced paywalls here have led us to stop using Medium, so while you can still read the article below, future writings & resources will be published on other platforms. Learn more at aixdesign.co or come hang with us on any of our other channels. Hope to see you there 👋

Augmentation beats Automation (for now)

Will AI take our jobs away? The answer: not yet! AI is far from replacing humans — and replacement should not be the goal — but it is here to assist us. Instead of automating human tasks, ask: how can we make lawyers better lawyers? How can we make creatives more creative with AI? Here are two examples from industry worth looking at.

Autonomous Driving

The long-term goal may be to reach Level 5 (full automation) and replace the human driver with AI, but (despite Elon Musk’s promises) we’re not there yet. Tesla’s Autopilot can navigate low-complexity environments. However, in more complex situations — think driving in a European city with winding, narrow roads, or in a congested city in India — the autonomous system needs to (a) recognize its limitations and (b) relinquish control to the human driver to avoid accidents. Depending on the situation, either the human or the AI takes control.

Cancer Screening Software

Similarly, cancer screening software helps radiologists distinguish between malignant and healthy breast exams. Considering what’s at stake, and that the AI will never be perfect, it’s probably not a good idea to let the AI make all the decisions autonomously. So why not let the AI filter out the thousands of exams that appear healthy, and have the doctor focus on the cases that seem suspicious? In this way, we leverage the strengths of both humans and AI: a successful mixed-initiative design, a true human-AI collaboration.

Mixed-Initiative Design in AI

Unlike traditional user interfaces, AI-powered interfaces introduce an element of uncertainty. Will the AI understand you correctly? Will it draw the right conclusions? When designing AI-powered user experiences, product designers must ask themselves how much initiative the AI versus the human should take. In what situations should the AI act autonomously, when should it merely make suggestions, and when should it completely scale back its services and let the human take the initiative?

More examples of mixed-initiative design

Let’s say we are developing an AI to help creatives sketch. In a mixed-initiative approach, both the AI and the creative professional take an active role in the process. One could imagine a user experience that resembles a game of table tennis.

An example of a mixed-initiative design process for the task of sketching.

Here’s another example. Recently, Cosmopolitan’s editors and digital artist Karen X. Cheng used DALL·E 2 from OpenAI to create the world’s first AI-generated magazine cover. DALL·E 2 is a generative model that takes a text description as input and generates images matching that prompt. The magazine’s claim “And [the image] was created in just 20 seconds” couldn’t be further from the truth. In fact, Karen spent many hours trying hundreds of prompts until she found a promising image. This creative back-and-forth between a digital artist and a generative model is another fantastic example of mixed-initiative design.

The final prompt (left) used to create the world’s first AI-generated magazine cover (right).

Practical guidance

To create mixed-initiative experiences, designers and product managers should figure out which tasks or subtasks the AI should take over and which the user should remain in control of.

When should people maintain control?

  • People enjoy the task
  • People feel personally responsible for the outcome
  • The stakes of the situation are high
  • Personal preferences are hard to communicate

When can people give up control?

  • People are unable to do the task
  • The task is unpleasant or unsafe

More practical guidance

Suppose we develop an AI-based scheduling assistant that reads an email and automatically schedules a meeting if the email contains a meeting request. At a high level, there are four scenarios.

a) The AI takes an action AND is correct (the AI schedules a meeting and the email actually contained a meeting request).

b) The AI takes an action BUT is wrong (the AI schedules a meeting but there was no meeting request in the email).

c) The AI takes no action AND is correct (the AI does not schedule a meeting and there was no meeting request in the email).

d) The AI takes no action BUT is wrong (the AI does not schedule a meeting but there was a meeting request in the email).
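In machine-learning terms, these four scenarios form a confusion matrix: true/false positives and negatives. A minimal sketch in Python (the labels and function name are illustrative, not from any real scheduling product):

```python
# The four outcomes of the scheduling assistant, keyed by
# (did the AI act?, did the email actually contain a meeting request?).
OUTCOMES = {
    (True, True): "a) true positive — meeting scheduled, request existed",
    (True, False): "b) false positive — meeting scheduled, no request",
    (False, False): "c) true negative — no meeting, no request",
    (False, True): "d) false negative — no meeting, but request existed",
}

def classify(ai_acted: bool, request_present: bool) -> str:
    """Return the scenario label for one AI decision."""
    return OUTCOMES[(ai_acted, request_present)]
```

Framing the scenarios this way makes the next question concrete: the two error cells (b and d) usually hurt the user very differently, and that asymmetry is what the decision rule below has to weigh.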

Hence, the question arises whether or not to invoke the AI’s services. In other words, we have to decide between action and inaction. Generally, an AI agent should take autonomous action only when it believes doing so provides greater expected value than doing nothing. To make that decision, we can look at utility (how much value do we provide to the user in a given situation?) and the probability that the AI is correct (how confident is the system in a given situation?). If the AI’s confidence is low, the user should remain in control (no action). If confidence rises beyond a threshold, the AI can take the initiative (action).

Where to set the transition point depends heavily on the situation. If a lot is at stake (e.g., in health care or education), you should limit the scope of the AI. On the other hand, if little is at stake and mistakes have no real negative consequences, you may want to expand it. Finally, don’t forget: AI systems can learn from past interactions. The more data they see, the more robust and less error-prone they become. So you will need to keep asking yourself how much initiative should lie on the human versus the AI side.

Mixed-initiative: if the probability of the AI being right is higher than a certain threshold, perform an action. If it’s lower, don’t act.
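The decision rule above can be sketched with expected utilities. Assign a utility to each of the four scheduling outcomes (the numbers below are hypothetical, chosen only to illustrate the mechanics), act only when the expected value of acting beats inaction, and the break-even confidence falls out as a closed-form threshold:

```python
# Hypothetical utilities for the scheduling assistant's four outcomes.
U_TP = 1.0   # acted, and a meeting request existed
U_FP = -2.0  # acted, but there was no request (annoying mistake)
U_TN = 0.0   # did nothing, and there was nothing to do
U_FN = -1.0  # did nothing, but missed a real request

def should_act(p_request: float) -> bool:
    """Act iff the expected utility of acting beats inaction.

    p_request is the AI's confidence that the email contains
    a meeting request.
    """
    ev_act = p_request * U_TP + (1 - p_request) * U_FP
    ev_wait = p_request * U_FN + (1 - p_request) * U_TN
    return ev_act > ev_wait

# Solving ev_act > ev_wait for p gives the break-even confidence
# (the "threshold" in the figure above):
threshold = (U_TN - U_FP) / ((U_TP - U_FN) + (U_TN - U_FP))
```

With these numbers the threshold works out to 0.5. Making false positives costlier (a higher-stakes setting) pushes the threshold up and shrinks the scope of autonomous action; making misses costlier pushes it down — exactly the trade-off described above.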

Closing

Mixed-initiative design is an effective way to design useful, safe and pleasurable AI experiences. Start with the AI in the background and gradually increase the scope of the system with time.

Try it!

Credits 👏

Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18). Association for Computing Machinery, New York, NY, USA, Paper 649, 1–13. https://doi.org/10.1145/3173574.3174223

Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI ‘99). Association for Computing Machinery, New York, NY, USA, 159–166. https://doi.org/10.1145/302979.303030

People + AI Guidebook (PAIR) — Chapter Feedback + Control https://pair.withgoogle.com/chapter/feedback-controls/

3 Learnings about Human-Centered AI Principles from Designing a Breast Cancer Screening Platform for Radiologists. https://medium.com/aixdesign/ai-design-use-case-the-ai-radiologist-collaboration-549d45bc171e

https://www.cosmopolitan.com/lifestyle/a40314356/dall-e-2-artificial-intelligence-cover/


Steven M. Moore
AIxDESIGN

Sharing my ideas and thoughts on artificial intelligence, innovation & entrepreneurship.