Designing for the perception of convenience

Maui Francis
8 min read · Jan 28, 2020


co-authored by Peter Vachon

To meet expectation, you must meet one's perception of an experience. Perception is the direct reaction to an expression, and in the case of AI, this known complexity and nuance is what we aim to design for.

As I reflect on conveniences and their value, I usually think of things that make my life easier: paying my bills online through autopay, my coveted iPhone, and having my groceries delivered instead of braving the store on a lazy Sunday afternoon. Within each of these little moments of triumph, an essential need is being addressed that challenges the perception of these social norms — I can do more, with less effort. That’s where AI, or Augmented Intelligence, becomes our unseen friend.

Over the course of any given measure of time, we may experience seemingly invisible AI/ML technology at work — continuously running in the background, quietly waiting for a moment to step in. There’s a direct correlation between user behavior and what technology offers as a result of it, sometimes convenient and sometimes not so much. As designers, though, we go about giving life to the expression of magic, convenience, and control… and, as a byproduct, to the perception of that technology at work.

When I first moved to Austin, every morning, Monday through Friday, I would put in the address for IBM, as I had no idea where I was going. Over time I stopped relying on Maps to get me to work, but then I noticed something different. I would turn on my car and be prompted with a message telling me the status of my commute: whether traffic was heavy, moderate, or light, with an estimated drive time. A simple convenience that helps me decide whether to find an alternate route, take a meeting from the car or from home, or wait until rush hour ends to head in. A great nudge for me; cool if it’s there, cool if it’s not, but I can live without it. The ultimate choice of sitting in traffic is mine; the technology simply gives me a cue of what to expect, while allowing me to form my perception based on unexpected nudges from my behaviors.

Think about it. With the fierce, competitive push in the market to provide smart experiences, are we hiding behind the allure of what this technology can do, or should these experiences be a subtle reminder and byproduct of our behavior? Could we create more delightful experiences when they are not expected? There is an intersection between what I notice as convenience (my perception) and what I then expect to follow (my expectation). To meet expectation, we must meet one's perception of an experience. Perception is the direct reaction to an expression, and in the case of AI, this known complexity and nuance is what we aim to design for.

“Perception is the direct reaction to an expression, and in the case of AI, this known complexity and nuance is what we aim to design for.”- Peter Vachon, Design Principal, IBM Security

Now, expectation is inherent, but your perception is your reality. The way we interpret and understand the world around us through our senses becomes the way we believe things are. There are many different cues that communicate whether the convenience of AI is a positive augmentation in my life or an unhelpful burden. What kinds of expressions do we consider as humans in the physical and digital world, and how do we as designers think about designing expressions for the perception of convenience?

There are many forms of expression, like the human face (I know that as soon as you read this, you either made a facial expression or thought of one). A person’s face can inform us about personality, sex, age, health, ethnicity, social rank, attractiveness, emotion, and to some extent, the intelligence of its bearer. When most people think of AI, Sci-Fi movies have done an incredible job of creating memorable forms of expression, and of shaping our perception of the convenience they may or may not offer. Embodiments of AI like Her, BB-8, KITT from Knight Rider, the Terminator, and Ex Machina use more human-like expressions that can communicate those same attributes. Even though these are fictional AI characters, it’s the perception of those expressions that feeds our understanding of whether the AI is intelligent, dumb, happy, angry, joyful, etc.

If we start to think about those different attributes as “Tiers of Expression”, we can break out the individual patterns that inform how we design for the perception of AI in our products today. Understanding these tiers allows us to design for depth and dimension across a variety of interfaces.

Tiers of Expression

As researched and created by Liz Holz, Maui Francis, Yael Alkalay, Andrew Thompson, and Peter Vachon, the Tiers of Expression framework is meant to help us understand how humans perceive augmented intelligence. Through behavioral and emotional studies, we were able to capture a core set of principles and guidelines for expressing intelligence as it applies to most sensory experiences.

  • Classifying an expression
  • Expressing through behavior
  • Calculating an expression
  • Visualizing different dimensions

Examples that come to mind

Text Analyzers

Grammarly checks the tone of your messages and displays an emoji based on what you have typed. There’s no real interaction; the emoji is displayed automatically. What is unique about this expression is that it uses a familiar form of communication to reveal sentiment.
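A toy sketch of this pattern looks something like the following. This is not Grammarly’s actual implementation; the keyword lists and scoring are assumptions made purely for illustration.

```python
# Toy sketch of the "sentiment as emoji" pattern. NOT Grammarly's actual
# implementation: the word lists and scoring below are illustrative only.

POSITIVE = {"thanks", "great", "appreciate", "happy"}
NEGATIVE = {"unfortunately", "problem", "angry", "delay"}

def sentiment_score(text: str) -> int:
    """Naive scorer: count positive words minus negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def tone_emoji(text: str) -> str:
    """Express the detected tone in a familiar form: an emoji."""
    score = sentiment_score(text)
    if score > 0:
        return "🙂"
    if score < 0:
        return "🙁"
    return "😐"
```

So `tone_emoji("Thanks, this looks great!")` returns 🙂, while a message about problems and delays gets 🙁. The point isn’t the scoring, it’s the expression: sentiment is revealed through a form of communication people already know how to read.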

Smart replies

A work in progress, I’d say: many people turn off the ‘smart replies’ found in Gmail, or look for ways to make them more accurate or better match the sentiment of the sender or recipient. Based on the contents of the email and its use of punctuation, the technology provides recommended responses, with options for tone.
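That behavior can be illustrated with a hedged, rule-based sketch. Real smart-reply systems use learned models; these hand-written rules only demonstrate the idea of keying suggestions off content and punctuation, with varying tone.

```python
# Hedged sketch of a rule-based "smart reply". Gmail's real system uses
# learned models; these rules only illustrate content and punctuation cues.

def smart_replies(email_body: str) -> list[str]:
    """Return three suggested replies, roughly from formal to casual."""
    text = email_body.lower()
    if "?" in text:                       # punctuation cue: it's a question
        return ["Yes, that works for me.", "Sure!", "No, sorry, I can't."]
    if "thank" in text:                   # content cue: gratitude
        return ["You're welcome.", "Happy to help!", "Anytime!"]
    return ["Got it, thanks.", "Sounds good!", "Will do."]
```

Even this toy version shows why tone options matter: “Sure!” and “Yes, that works for me.” answer the same question with very different sentiment.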

Virtual Assistants

Siri is a pseudo-intelligent digital personal assistant. She uses machine-learning technology to get smarter and better at predicting and understanding our natural-language questions and requests. Assistants like Siri and Alexa use multiple dimensions of expression (visual, amplification, time, and space) to build a trusted relationship with a person and set the user’s expectation that they can be perceived as intelligent. They achieve this through tone, context, animation, and emotion, creating a personality that integrates with your everyday life.

There are many more AI conveniences we experience on a day-to-day basis, from mobile check deposits that use image recognition to online shopping and streaming services that leverage recommendation engines. You can read about more of these at https://towardsdatascience.com/how-artificial-intelligence-is-impacting-our-everyday-lives-eae3b63379e1

Considerations and tactical guidance

As we approach designing experiences that are a reaction to, or reflection of, behavior, there are many factors to weigh. Being mindful of the tiers of expression may help us anticipate perception, which in turn helps us meet expectation, instilling trust and delight. Here are some quick tips to consider when designing for the perception of human behavior within augmented intelligence.

Initializing

Make clear what the system can do, and how well the system can do what it can do.

e.g. PowerPoint’s QuickStarter illustrates what the system can do. QuickStarter is a feature that helps you build an outline. Notice how QuickStarter provides explanatory text and suggested topics that help you understand the feature’s capabilities.

During Interaction

  • Time services based on context, and show contextually relevant information.
  • Match relevant social norms, and mitigate social biases.

e.g. Acronyms in Word illustrates “Show contextually relevant information.” It displays the meaning of abbreviations used in your own work environment, relative to the currently open document.

Getting it wrong

  • Support efficient dismissal and correction.
  • Scope services when in doubt, and make clear why the system did what it did. Most AI services have some rate of failure.

e.g. Auto Alt Text automatically generates alt text for photographs using intelligent services in the cloud. It illustrates “Support efficient correction”: automatic descriptions can be easily modified by clicking the Alt Text button in the ribbon.

Time after time

  • Remember recent interactions, and learn from user behavior.
  • Update and adapt cautiously, and encourage granular feedback.
  • Convey the consequences of user actions, and provide global controls.
  • Notify users about changes.

e.g. Ideas in Excel empowers users to understand their data through high-level visual summaries, trends, and patterns. It encourages granular feedback on each suggestion by asking, “Is this helpful?”
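The “granular feedback” pattern above can be sketched as a simple tally that lives alongside each kind of suggestion. The class and method names here are hypothetical, not Excel’s actual API; the point is that feedback is collected per suggestion, not for the feature as a whole.

```python
# Sketch of the "encourage granular feedback" pattern: every suggestion
# carries its own "Is this helpful?" prompt. Names are hypothetical,
# not Excel's actual API.

from collections import defaultdict

class SuggestionFeedback:
    """Tally helpful / not-helpful votes per kind of suggestion."""

    def __init__(self) -> None:
        self.tally = defaultdict(lambda: [0, 0])  # kind -> [yes, no]

    def record(self, kind: str, helpful: bool) -> None:
        """Record one answer to "Is this helpful?" for a suggestion kind."""
        self.tally[kind][0 if helpful else 1] += 1

    def helpful_rate(self, kind: str) -> float:
        """Fraction of votes marking this kind of suggestion helpful."""
        yes, no = self.tally[kind]
        return yes / (yes + no) if (yes + no) else 0.0
```

A system built this way could quietly stop surfacing the kinds of insight (trends, outliers, summaries) whose helpful rate drops too low, adapting cautiously rather than all at once.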

In conclusion

As we approach designing experiences that are a reaction to, or reflection of, behavior, there are many factors, like the multiple dimensions of expression. For us as designers, our purpose is providing the best connection to the world around us — in a physical or a digital place. Being mindful of the many tiers of expression may help us anticipate perception, which in turn helps us meet expectation, instilling what everyone looks for in a healthy relationship… trust and delight.

Maui Francis is a Design Principal for Cloud, Data & AI at IBM, based in Austin, and Peter Vachon is a Design Principal for IBM Security at IBM, based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
