6 Principles of Leap Motion Interaction Design
Interaction design can be a delicate balancing act, especially when developing for VR.
In the process of building applications and various UX experiments at Leap Motion, we’ve come up with a useful set of heuristics to help us critically evaluate our gesture and interaction designs. It’s important to note that these heuristics exist as lenses through which to critique and examine an interaction, not as hard and fast rules.
#1. Tracking consistency
The team at Leap Motion is constantly working to improve the accuracy and consistency of our tracking technology. That said, there will always be limitations to any sensor technology, and the Leap Motion Controller is no exception. Spending time early to make sure tracking is consistent for your particular interactions will save you headaches down the road.
When developing a motion or gesture, take the time to have multiple people perform the action, while you watch the resulting data in the diagnostic visualizer. (Be sure to check out our quick guide to human-driven UX design.) Take note of inconsistencies between multiple people and multiple attempts at the motion by a single person. For instance, hands near the edge of the device’s field of view are harder to track, as is the side of the hand (versus the palm or back of the hand).
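One lightweight way to act on those visualizer sessions is to log a tracked feature (say, palm position at the peak of the motion) for every person and attempt, then look at the spread. The sketch below is a minimal, hypothetical example of that kind of consistency check — the sample values and the choice of palm position as the feature are illustrative assumptions, not output from the Leap Motion SDK:

```python
import statistics

def consistency_report(samples):
    """Summarize the spread of a tracked feature across people and attempts.

    samples: list of (x, y, z) tuples, one per recorded attempt
             (e.g. palm position in mm at the peak of the motion).
    Returns the per-axis standard deviation; large values flag motions
    that people perform very differently from one another.
    """
    xs, ys, zs = zip(*samples)
    return tuple(statistics.pstdev(axis) for axis in (xs, ys, zs))

# Hypothetical palm positions (mm) logged from the visualizer for one motion.
attempts = [(10, 200, -5), (14, 210, -2), (8, 195, -9), (12, 205, -4)]
spread = consistency_report(attempts)
```

A large standard deviation on any axis is a hint that either the motion needs clearer instruction, or your detection logic needs to tolerate that much variation.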
#2. Ease of detection
Once you know the motion you’ve created has relatively consistent tracking, you’ll want a sense of how easy it is to detect. Are there obvious conditions that define the motion? How well is it separated from other things you might want to detect? Is it obvious when the motion has begun and ended?
On the surface, ease of detection might seem like a primarily technical concern, rather than being within the purview of design. In reality, like many things, it bleeds well into the space of design. For one, the easier the motions you’ve designed are to detect, the less time you’ll spend optimizing the detection code, and the more time can be spent improving the overall experience. Easier-to-detect motions will also have lower false-positive and false-negative rates, making the application more usable.
Secondly, and more concretely, the sooner you can accurately detect the beginnings of a motion or gesture, the sooner your interface can provide the proper feedback and behaviors. This will lead to an application that feels more responsive, and makes people feel more in control. It also means you can provide more ways for people to adjust for errors and subtly modify their interactions to fit their particular use patterns.
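One common trick for getting clear begin/end detection without flicker is hysteresis: use a higher threshold to start the gesture than to end it. The sketch below applies this to a pinch-style signal; the threshold values and the `PinchDetector` name are illustrative assumptions, not part of the Leap Motion API:

```python
class PinchDetector:
    """Hysteresis detector: start and stop use different thresholds, so a
    noisy signal hovering near the boundary doesn't flicker between
    "gesture active" and "gesture inactive"."""

    def __init__(self, start=0.8, stop=0.6):
        self.start, self.stop = start, stop
        self.active = False

    def update(self, pinch_strength):
        # pinch_strength: 0.0 (hand open) .. 1.0 (full pinch)
        if not self.active and pinch_strength >= self.start:
            self.active = True    # gesture begins
        elif self.active and pinch_strength <= self.stop:
            self.active = False   # gesture ends
        return self.active

d = PinchDetector()
states = [d.update(s) for s in (0.2, 0.7, 0.85, 0.7, 0.5)]
# A noisy dip back to 0.7 keeps the gesture active; only dropping
# below the stop threshold (0.6) ends it.
```

The gap between the two thresholds is a design decision: a wider gap means fewer accidental cancellations, at the cost of requiring a more deliberate release.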
#3. Occlusion

Occlusion from various motions commonly comes in two forms. The first, and simplest, is when something about the motion physically covers the sensor. When a person has to reach across their body and the sensor, their sleeve, arm, or jewelry (say a large watch or a loose bracelet) can prevent the controller from getting a clear view of their hands — reducing tracking accuracy or preventing it entirely.
The second form of occlusion is more subtle and can be particularly troublesome. When the controller can’t see a part of the hand, it makes assumptions based on the data it has available and an understanding of how the human hand works. Often these assumptions prove quite accurate, but there are times when the system cannot reasonably provide highly accurate responses. This means that if your motion or interaction commonly involves occluded parts of the hand, tracking accuracy will be significantly reduced.
One hand covering another, movements of the fingers when the hand is upside down, movements of the fingers when the hand is sideways and off to one extreme side of the field of view, some motions when multiple fingers curl or come together — these can all result in this second type of occlusion. This also comes into play when the hand is presented side-on to the device, as a relatively small surface area is visible to the controller. This is also something that our tracking team is working to improve all the time.
In many cases, this comes down to testing your actions with the diagnostic visualizer in a variety of areas around the detectable field of view, and watching for inaccuracies caused by occlusion. The more that the gestures and motions used in your design can avoid situations that cause significant occlusion over an extended period of time, the more accurate and responsive your application will be.
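If occlusion can't be designed away entirely, one defensive tactic is to hold the last well-tracked pose while confidence is low, rather than rendering the system's shakier guesses. The sketch below is a minimal, hypothetical version of that filter — the frame format and the 0.5 confidence cutoff are assumptions for illustration, not Leap SDK structures:

```python
def filter_poses(frames, min_confidence=0.5):
    """Hold the last well-tracked pose whenever confidence drops,
    e.g. while part of the hand is occluded.

    frames: list of (pose, confidence) pairs, where pose is whatever
            your app tracks (a single float here, for simplicity).
    Returns one pose per frame; entries before the first confident
    frame are None.
    """
    filtered, last_good = [], None
    for pose, conf in frames:
        if conf >= min_confidence:
            last_good = pose
        filtered.append(last_good)
    return filtered

# Hypothetical stream: the third frame is an occlusion-induced outlier
# with very low confidence, so the previous good pose is held instead.
frames = [(1.0, 0.9), (1.2, 0.9), (5.0, 0.1), (1.3, 0.8)]
poses = filter_poses(frames)
```

Holding the last pose trades a little latency for stability; for fast interactions you might instead fade out the affected UI element until confidence recovers.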
#4. Ergonomics

As society has adopted computers more and more, we’ve come to understand that human bodies aren’t necessarily well-suited to sitting at desks, typing on keyboards, and using mice for hours every day. Some companies have responded by making input devices that can be used in much more relaxed positions, and there are large research efforts underway to continually improve our posture and working environments.
Since we’re not designing a physical interface, our task as motion-controlled application makers is slightly different. As we create affordances and gestures, we have to consider how we’re asking users to move their bodies to perform interactions, and whether those movements could cause long-term harm or strain. We also need to consider how tiring our interactions are, and whether they can be performed from comfortable positions.
The most comfortable position for people to use most applications is with their elbows resting on the table or the arms of a chair. From this position, each hand moves within a rough sphere around its elbow. The wrist adds some range of motion, but with the elbow planted, that range is extremely limited. In particular, repetitive wrist motions should be avoided because of the risk of RSIs in the carpals (carpal tunnel syndrome). Certain actions (rolling the right hand counterclockwise) are particularly difficult from this position, and may require users to lift their elbow.
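That "sphere around the elbow" observation can be turned into a rough placement check for interactive elements: is a target within comfortable reach of a resting elbow? The sketch below does this with a simple distance test; the 280 mm forearm length and 15% slack are illustrative assumptions, not anthropometric standards:

```python
import math

def within_comfort_zone(target, elbow, forearm_mm=280.0, slack=0.15):
    """Rough comfort check: with the elbow resting, the hand sweeps an
    approximate sphere of forearm length around the elbow. Targets
    slightly beyond that sphere (up to `slack`) still count, since the
    wrist adds a little extra reach."""
    dist = math.dist(target, elbow)
    return dist <= forearm_mm * (1.0 + slack)

# Hypothetical coordinates in mm, with the resting elbow at the origin.
elbow = (0.0, 0.0, 0.0)
reachable = within_comfort_zone((200.0, 150.0, 50.0), elbow)
```

A check like this is most useful at design time — flagging UI layouts that force users to lift their elbows — rather than at runtime.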
#5. Transitions

When considering transitions between motions, make sure you have a clear concept of which other motions someone is likely to perform in your application at any given moment. Knowing that set of motions, you’ll be able to better assess whether any transition has a high probability of being problematic. There are two primary ways in which a transition can be an issue for your application experience.
Interaction overlap. The first is a situation in which two possible actions are too similar to each other. This can cause issues, both for people using the application and gesture detection algorithms. Actions that are overly similar are difficult to remember and have a good chance of reducing the learnability of your application. Reserve actions that are similar to each other for situations in which the two actions have highly similar results.
Awkward switching. The more subtle transition issue is awkward switching. In-air motions are highly subject to interpretation by the person making the gesture, and where a motion or gesture begins has a lot of influence on how people tend to perform it. Without any hardware to provide direct feedback, people may perform the same action quite differently from one context to the next. This can wreak havoc with your motion detection code.
Awkward switches can also cause ergonomic issues where people have to move in uncomfortable or overly exaggerated manners. An “initialization” or “resting” pose from which many actions begin can be a good way to reduce the challenge of dealing with awkward switching. Make a point to analyze and test the various interaction and motion transitions in your application. Look for places where people are confused, see where you and your testers are uncomfortable, and be aware of how different transitions impact how people perform particular motions.
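The initialization-pose idea above can be sketched as a small state machine, where every gesture is recognized only when it starts from a known resting state and returns there afterward. The pose names and transition table below are hypothetical examples, not Leap Motion API concepts:

```python
# Each entry maps (current state, observed hand pose) -> next state.
# Gestures can only begin from "resting", which keeps transitions
# between different actions unambiguous.
TRANSITIONS = {
    ("resting", "open_palm"): "resting",
    ("resting", "pinch"): "pinching",
    ("pinching", "pinch"): "pinching",
    ("pinching", "open_palm"): "resting",
}

def step(state, observed_pose):
    # Unknown or awkward transitions fall back to the resting state
    # rather than guessing at an interaction.
    return TRANSITIONS.get((state, observed_pose), "resting")

state = "resting"
for pose in ["open_palm", "pinch", "pinch", "open_palm"]:
    state = step(state, pose)
```

Falling back to the resting state on anything unexpected is a conservative choice: it occasionally cancels a valid gesture, but never misreads one gesture as another.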
This Arm HUD prototype is triggered by flipping your arm so that the palm is facing towards the user, while buttons and sliders have distinct trigger states.
#6. Feedback

A lack of proper UI feedback can sink an otherwise well-designed interaction. When developing a new interaction, consider how you will provide feedback from the application to the person performing the gesture. The lack of hardware-based physical feedback in motion-based interactions leaves all the onus for communicating the state of the application (and the performance of the person using it) completely on the application’s user interface.
Consider how your interface will communicate whether an action can be done at all, what it will do, and how someone will know what caused a false positive or false negative detection (so the person using your app can adjust their behavior to avoid it). At a minimum, the visual feedback for a motion interaction should communicate three things: Where am I now in terms of performing this interaction? Where do I need to be to complete this interaction? How far, and in what way, do I need to move in physical space to complete or cancel this interaction?
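Those three questions map directly onto values a UI can display. As a minimal sketch, consider a push-to-press button: the numbers below answer "where am I," "where do I need to be," and "how far is left," driving something like a fill animation. The 40 mm travel distance and the coordinate convention are illustrative assumptions:

```python
def press_feedback(finger_z, start_z=0.0, trigger_z=-40.0):
    """Feedback values for a push-to-press button along one axis (mm).

    Returns (progress, remaining):
      progress  - 0.0 (untouched) .. 1.0 (fully pressed); drive a fill
                  or highlight with this so users see where they are.
      remaining - distance in mm still to travel before the press
                  triggers; tells users how far they need to move.
    """
    travel = start_z - trigger_z               # total press distance
    progress = (start_z - finger_z) / travel
    progress = max(0.0, min(1.0, progress))    # clamp to [0, 1]
    remaining = max(0.0, finger_z - trigger_z)
    return progress, remaining

# A fingertip 10 mm into a 40 mm press: a quarter of the way there.
progress, remaining = press_feedback(finger_z=-10.0)
```

Rendering `progress` continuously (rather than only at the trigger point) is what makes the interaction feel responsive and lets people back out of an accidental press.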
- What Do VR Interfaces and Teapots Have in Common?
- Designing VR Tools: The Good, the Bad, and the Ugly
- Build-a-Button Workshop: VR Interaction Design from the Ground Up
- 4 Design Problems for VR Tracking (And How to Solve Them)
- 5 Experiments on the Bleeding Edge of VR Locomotion
This post brings together elements from Daniel Plemmons and Paul Mandel’s articles Introduction to Motion Control and Designing Intuitive Applications. Be sure to check them out for more insights on these heuristics, intuitive app design, and cool real-world examples.
In addition to these heuristics, our designers also make good use of classic tools for critical interaction design analysis. There are more than a few copies of Nielsen’s “10 Usability Heuristics” taped to our office walls. If you’re not familiar with these critical lenses, or the strategy of heuristic analysis, check out these links.
Originally published at blog.leapmotion.com on May 17, 2015.