Day 63 — Future of Design series 4/7: “Humanized User Interface”

Roger Tsai & Design
Published in Daily Agile UX
7 min read · May 2, 2019
Original Photo by xandtor on Unsplash

Other than the computer mouse and keyboard, what are the potential ways we can better interact with technology? Is there any interface or interaction that “feels natural”? Thanks to advancements in hardware, software, and design, humanized user interfaces (HUI) like touchscreens and voice UIs are now available to help us achieve our goals much more easily. What are the various input/output methods of a humanized user interface, and how can we design around them? In this article, I’m going to share my knowledge about designing humanized interfaces.

  • What is Humanized User Interface
  • General Design Principles
  • Design Challenges

What is Humanized User Interface

Definition

Humanized User Interface is an interface that utilizes or mimics human senses and natural actions/behaviors as its input/output methods. In other words, these interfaces use the human voice, natural gestures, and other common behaviors to interact with the technology around us. With the help of new technology, HUI is the effort to identify intuitive input/output methods for a system, distinguishing itself from traditional computer interfaces that require training and practice (e.g. learning what a right mouse click does, or what the Ctrl, Alt, Shift, and Tab keys mean and how they work).

Image source: The Fangirl Initiative

Below is a list of some of the better-known humanized user interface examples:

Input

  • Touchscreen
  • Voice UI
  • Gesture-control interface (e.g. Xbox Kinect, Magic Leap, Nintendo Wii)

Output

  • Voice UI/ Sound
  • Rendered imagery (e.g. Augmented Reality on your phone)
  • Movement or status change of physical objects (e.g. turn on the fan to simulate wind blowing in Virtual or Mixed Reality)


The list above sounds pretty broad, doesn’t it? How do we know if an interface is “humanized”? I’ll give you an example. Three years ago, a friend told me that her 10-month-old niece, holding a Polaroid picture in her tiny hands, tried to use two fingers to “zoom in” on the physical photo. Another example: a 16-month-old boy swiped on a YouTube video banner to get rid of it. These gesture designs mimic real-world gesture-object interactions, making them easier for people to understand, learn, and use; literally, even a baby or toddler can do it. For lack of a better term, I’ll call it the “TT: Toddler Test.” If a toddler can easily learn and do it, it’s humanized.

General Design Principles

Before we introduce the design principles, let’s break things down by the type of sensory input involved:

  1. Natural voice input
  2. Natural gesture input
  3. Other input

Each category deals with specific types of sensory input and recognition models, so the corresponding design principles vary. Let’s go through the principles one category at a time:

Sound/voice input

  • Voice is the most commonly used communication tool between humans (at least until text messaging came into our lives). It’s only “natural” to create a system with a voice user interface (VUI) that mimics natural human dialog (and yes, chatbots do the same for text messaging). The most common examples these days are voice-driven agentive technologies (e.g. Amazon Alexa, Apple Siri). In a past post, I talked about how to adopt voice user interfaces in product offerings from a product design perspective.
  • Other than the human voice, there are other sound-input-driven systems and devices. For example, clap twice and your living room lamp turns on; whistle and your garage door shuts.
  • The key is to understand the major differences between a VUI and a GUI (graphical user interface): 1) there’s a “time” element in verbal communication: I have to say one word after another to form a sentence, and can’t say ten words at once; this constraint limits how much information can be presented at one time; 2) VUI is still a new area and natural language processing is not yet fully mature, so there are a lot of “unhappy paths” that need to be considered.
  • Design Principles: 1) Keep it simple, stupid: only use VUI for simple utility functions; 2) consider possible unhappy paths, and design corresponding responses and reactions (see the sketch below); 3) consider ambient noise and multi-user scenarios; 4) if the VUI simply can’t resolve the user’s request, design a fallback to another user interface.
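To make principles 2 and 4 concrete, here’s a minimal sketch in Python of a retry-then-fallback loop. The function names (listen, parse_intent, speak, gui_fallback) are hypothetical stand-ins, not any real assistant SDK:

    MAX_RETRIES = 2
    CONFIDENCE_THRESHOLD = 0.7

    def handle_request(listen, parse_intent, speak, gui_fallback):
        """Try to resolve a spoken request, re-prompting on failure,
        then falling back to a graphical interface (principle 4)."""
        for attempt in range(MAX_RETRIES + 1):
            intent, confidence = parse_intent(listen())
            if intent and confidence >= CONFIDENCE_THRESHOLD:
                speak(f"OK: {intent}")  # happy path: confirm and act
                return intent
            if attempt < MAX_RETRIES:
                # Unhappy path: say what went wrong and re-prompt,
                # instead of failing silently.
                speak("Sorry, I didn't catch that. Could you say it again?")
        # Don't loop forever; hand off to another interface.
        speak("I'm still not sure. I've opened the app so you can finish there.")
        gui_fallback()
        return None

    # Usage with canned results, simulating one failed and one successful parse:
    results = iter([(None, 0.0), ("turn_on_lamp", 0.9)])
    handle_request(
        listen=lambda: "raw audio",
        parse_intent=lambda _: next(results),
        speak=print,
        gui_fallback=lambda: print("[GUI opened]"),
    )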

Lighting / imagery input

  • A light-input-driven system detects a change in light as its input. For example, when you put your hand close to a sensor faucet or soap dispenser, water or soap comes out. An imagery-input-driven system uses computer vision as the input to trigger corresponding actions. For example, point your iPhone X at your face and the phone unlocks; point your camera at your friends and it recognizes their faces and shifts focus onto them.
  • Design Principles: 1) provide clear feedback on whether the system has successfully detected the input; we’ve all been frustrated in front of a sensor-driven faucet or paper towel dispenser, wondering whether we’re doing it wrong or the system is broken or poorly designed (see the sketch below); 2) educate users up front about the effective way to operate these technologies; 3) consider the influence of ambient lighting and visual noise.
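As an illustration of principle 1, here’s a small Python sketch using OpenCV’s stock Haar-cascade face detector; it continuously tells the user whether the camera currently “sees” a face instead of leaving them guessing. (Requires opencv-python; webcam index 0 is an assumption.)

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Explicit feedback: draw a box around each detected face plus a
        # status line, so the user never wonders whether the sensor works.
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        status = "Face detected" if len(faces) else "No face - move closer"
        cv2.putText(frame, status, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("HUI feedback demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()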

Gesture input

  • Touchscreens allow lots of two-dimensional gesture controls, for example tapping, swiping, pinching, and drag-and-drop (see the pinch-to-zoom sketch after this list). Also, with visual aids like Augmented Reality (AR), users can easily see how a gesture impacts the object they’re interacting with in real time. This fast cause-and-effect feedback lets users quickly learn how to interact with the interface.
  • Systems like the Xbox Kinect capture your posture and movement through “computer vision”: a camera with image-processing software that determines how you move. Technically it’s imagery input, but because of the nature of the interaction, it can be seen as gesture input, too.
  • Design Principles: 1) keep it simple, and make sure the gestures are commonly understood and easy to remember; 2) provide new users with guidance on how to use the gesture controls; for posture input, show a mirror image of the user or a “trainer” avatar to demonstrate what the expected input should look like; 3) don’t tie gestures to high-risk tasks; provide an alternative way to perform the action, and a way to revert it.
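To show how little math sits behind a gesture that even a toddler can learn, here’s a sketch of pinch-to-zoom: the zoom factor is simply the ratio of the distance between two fingers now versus when the gesture began. The coordinates and function name are illustrative, not a real touchscreen API:

    from math import hypot

    def pinch_scale(start_a, start_b, cur_a, cur_b):
        """Return the zoom factor implied by two fingers moving from
        (start_a, start_b) to (cur_a, cur_b); >1 zooms in, <1 zooms out."""
        start_dist = hypot(start_a[0] - start_b[0], start_a[1] - start_b[1])
        cur_dist = hypot(cur_a[0] - cur_b[0], cur_a[1] - cur_b[1])
        return cur_dist / start_dist

    # Fingers start 100 px apart and spread to 200 px apart: 2x zoom.
    print(pinch_scale((100, 300), (200, 300), (50, 300), (250, 300)))  # 2.0

Applying this scale to the on-screen object on every frame is what produces the fast cause-and-effect feedback described above.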

Design Challenges

Data Cleaning

Some of the main challenges reside in how effectively the system distinguishes noise from useful data. For example, computer vision needs to determine the user’s gestures and movement more quickly and precisely by eliminating background visual noise; similar logic can be applied to VUI.
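As one illustration, OpenCV ships a background subtractor (MOG2) that keeps only the pixels that differ from a learned background model; a lightweight stand-in for the heavier models a Kinect-style system would use. The webcam index is an assumption:

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Moving pixels (the user) stay white; the static background,
        # i.e. the "visual noise", is suppressed to black.
        mask = subtractor.apply(frame)
        cv2.imshow("foreground mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()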

Varied Formats

Given that most of these technologies and applications are still new, we’re still accumulating experience and best practices. The exciting thing about these technologies is that they can be applied in many different ways. That breadth also creates a challenge: a pattern that succeeds in one case sometimes can’t be applied to another. It will take time and patience to try different things through iteration in order to optimize the experience.

Future of HUI

With help from AI and machine learning, plus the rapid growth of computing power, systems will be able to process input faster and create higher-quality renderings of 2D/3D models and images. Hopefully both the quality of the output and the accuracy of input capture will be enhanced.

More examples

Learn about more projects here:

Projects on HUI Central

Conclusion

  1. Humanized User Interface covers many different types of devices, media, and systems; the distinguishing factor is its focus on “close-to-natural human input/output” rather than on behavior that requires training and practice (like a computer keyboard);
  2. Designing a humanized user interface (HUI) shares the basic human-centered design approach, but with an important focus on sensory input and output, and on the gap between the user’s mental model and the system model;
  3. Because these interfaces are so new, it will take time to iterate and learn from experience in order to optimize the end results.

Do you have any experience designing HUI-related applications? What was your approach, and what did you learn? I’m eager to learn from you.

ABC. Always be clappin’.

To see more

All Daily Agile UX tips

The opinions expressed in this article are those of the author. They do not represent current or previous client or employer views.
