Seamful Gestures: a body interface for mixed reality

Recently we went to Interaction ’17 and presented our speculative concept on gesture inputs. If you haven’t seen the talk yet, here is the link. It was impossible to unpack the entire concept in 8 minutes, so we would like to share a bit of the process and thinking behind the talk.

The lack of affordances makes free-form gestures difficult to approach.

Mixed Reality has recently become a popular buzzword in design, but its underlying interaction paradigms still need refinement. We are familiar with using trackpads and VIVE controllers. However, when we use HoloLens, the lack of affordances still makes free-form gestures hard to approach. In this work, we focus on free-form gesture patterns and how we can utilize our own bodies when interacting in a Mixed Reality context.

Body interface and proprioception

Use proprioception as an affordance for free-form gestures.

We have an innate sense of where our limbs, joints, and muscles are, without needing to rely on our other senses. This is called proprioception. We hypothesize that proprioception is a good foundation for mixed reality input because it creates affordances from the body itself, without introducing external hardware interfaces. We see great potential for a more natural and flexible body interface in this approach.

Seamful vs. Seamless

Traditionally, we appreciate seamless design: design that is invisible and effortless to use. To approach free-form gestures, however, we think the affordances have to be obvious enough for users to pick up, and we need to better define the interaction space and range when designing gesture inputs. So here we argue for Seamful Gestures.

Testing prototypes to learn, fix and increase fidelity.

There is no better way to think and learn than by making things. So we made several prototypes, tested them with colleagues, and quickly improved the designs based on feedback.


Prototypes explored: the fidelity of controls increased from a single variable to a set of 3D variables.
1. Seam

The distance between the thumb and index finger is a seamful proprioceptive gesture with a clear affordance and an interaction boundary.

We define a Seam as a clear affordance for an interaction boundary. For example, the distance between our thumb and index finger is a seamful proprioceptive interaction, since we intrinsically know the maximum and minimum of the span. This type of interaction is well suited to controlling transparency or any other property with upper and lower limits.

Assign the gesture to a continuous value with upper and lower limits.
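To make the mapping concrete, here is a minimal sketch in TypeScript of how a seam could drive a bounded value. This is not the code from our prototypes; the joint positions and calibration constants are placeholder assumptions.

```typescript
// Hypothetical sketch: map the thumb–index "seam" to a bounded control value
// such as transparency. Joint positions and calibration constants are assumed.

type Vec3 = { x: number; y: number; z: number };

const distance = (a: Vec3, b: Vec3): number =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Calibrated per user: a closed pinch and a fully opened span, in meters.
const PINCH_MIN = 0.01;
const PINCH_MAX = 0.12;

// Normalize the thumb–index distance into [0, 1] and clamp at the boundaries,
// so the physical limits of the gesture map onto the limits of the control.
function seamValue(thumbTip: Vec3, indexTip: Vec3): number {
  const t = (distance(thumbTip, indexTip) - PINCH_MIN) / (PINCH_MAX - PINCH_MIN);
  return Math.min(1, Math.max(0, t));
}

// Example: drive an object's transparency from the seam value each frame.
// hologram.opacity = seamValue(hand.thumbTip, hand.indexTip);
```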

2. Hierarchy

Hierarchy across fingers in terms of how powerful and sensitive they are.

Our finger muscles are trained differently through daily use: the thumb and index finger are used far more frequently than the pinky. As a consequence, our fingers form a hierarchy in terms of how powerful and sensitive they are. For example, the affordance between thumb and index feels easier to use and offers higher fidelity than the affordance between thumb and pinky. There is an opportunity to utilize this existing hierarchy of fidelity: for example, a click between the thumb and the middle or ring finger can be used for step controls of an object’s transparency.

Apply lower-fidelity step controls to the less powerful and sensitive fingers.
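As a rough illustration of the idea (again, not our prototype code), a stepped control could hang off taps of the less sensitive fingers; the tap events and step size below are assumptions.

```typescript
// Hypothetical sketch: lower-fidelity step controls on the middle and ring
// fingers. The tap events and the step size are assumptions for illustration.

type Finger = "middle" | "ring";

const STEP = 0.25;   // coarse steps suit the less sensitive fingers
let opacity = 1.0;

// A thumb-to-middle tap steps transparency up; a thumb-to-ring tap steps it down.
function onFingerTap(finger: Finger): void {
  opacity = finger === "middle"
    ? Math.min(1, opacity + STEP)
    : Math.max(0, opacity - STEP);
}
```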

We have also thought about assigning the least frequently used, or even mission-critical, tasks to the affordance between thumb and pinky, because it is not a common gesture and takes more effort to activate. In addition, we built a prototype that uses the non-dominant hand for binary controls that fulfill complementary functions such as activation/deactivation, while the dominant hand is assigned more precise scrolling and selection. The principle behind these two concepts is the same: our body interface has a hierarchy, so think about the level of fidelity when designing your input system.
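A minimal sketch of how that split across hands might look, with assumed event handlers standing in for real hand-tracking input:

```typescript
// Hypothetical sketch: split controls across hands by fidelity.
// The event handlers and the scroll mapping are illustrative assumptions.

let active = false;   // coarse, binary state owned by the non-dominant hand
let selection = 0;    // fine-grained value owned by the dominant hand

// Non-dominant hand: a simple pinch toggles activation on and off.
function onNonDominantPinch(): void {
  active = !active;
}

// Dominant hand: precise scrolling only applies while the control is active.
function onDominantScroll(delta: number): void {
  if (active) {
    selection += delta;
  }
}
```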

3. Directionality

Using the vector reading between the thumb and index finger to control directionality feels unnatural.

We had a prototype that used the orientation of the wrist to control which axis (x, y, or z) to manipulate when scaling an object in 3D. Another attempt used the vector reading between two fingers to select the axis. Neither turned out to be effective: it feels unnatural to force the fingers into a precise x, y, or z direction.

We feel proprioceptive gestures are powerful for fine-tuning interactions, but not for directionality in space. For directionality controls, we still prefer the solution below: using the tilt of the hand to fly in Google Maps.

Using the tilt of the hand to control directionality feels more natural.
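As a rough sketch of this kind of mapping (with an assumed dead zone and gain, not values from the actual demo), hand tilt could be shaped into a fly velocity like this:

```typescript
// Hypothetical sketch: map hand tilt (pitch and roll) to a fly/pan velocity,
// in the spirit of tilting the hand to fly across a map.
// The dead zone and gain values are assumptions for illustration.

type Tilt = { pitch: number; roll: number }; // radians, relative to a neutral pose

const DEAD_ZONE = 0.1; // ignore small, unintentional tilts
const GAIN = 2.0;      // meters per second per radian of tilt beyond the dead zone

function flyVelocity(tilt: Tilt): { forward: number; sideways: number } {
  const shape = (angle: number): number =>
    Math.abs(angle) < DEAD_ZONE ? 0 : (angle - Math.sign(angle) * DEAD_ZONE) * GAIN;
  return { forward: shape(tilt.pitch), sideways: shape(tilt.roll) };
}
```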

More questions, always

We definitely haven’t resolved everything. Here are some of the questions raised during our process. We would love to hear your thoughts.

  1. Proprioceptive gestures are already loaded with symbolic meaning, e.g. the finger point or the bow-and-arrow pull. Is there a way to utilize that symbolic meaning in our interfaces?
  2. We have learned that seamful and seamless gestures are great for different tasks. What happens when you combine them?
  3. How does changing context alter these principles? What is different in an autonomous car or a connected home?