Designing Microgestural Interactions

Apr 14, 2017

A study on microgestural interactions and their use in future interfaces

With the emergence of ever more intelligent devices and continuing technological progress, the way we interact with the devices around us is going to change.

Many everyday objects are now equipped with complex technologies, sensors or artificial intelligence, whether modern cars, smart-home devices or the Internet of Things. This opens up many new possibilities for designing user interfaces. While users previously could interact with digital devices only through specific input devices such as a mouse or a touchscreen, we can now create new ways of interacting using gestures or voice input.

Thanks to machine learning, increasingly precise tracking technologies and the networking of everyday objects, contactless interface control is no longer a dream of the future: it is already realised to some extent in the interface design of smart-home devices. In this context, the design of contactless interfaces is gaining importance. Contactless interaction by voice control, for example, has become a standard feature of many modern smartphones and home assistant systems through assistants like Alexa or Siri.

Things have also changed considerably in the area of contactless interaction via gestures. For a long time, tracking technologies could only supply relatively imprecise or unreliable data, so application examples were limited to consumer electronics and the games industry. But tracking technologies have improved significantly over the last few years (Project Soli is a great example), and today more precise interactions and applications can be designed with contactless gestures. Several major car manufacturers, for example, have recently presented operating concepts that let drivers control the car's multimedia system by means of gestures.

From a technical point of view, contactless gestures are by now well developed, but their design and application remain largely unexplored. This is especially true for the still relatively new field of so-called »microgestures«, which has hardly been researched at all.

We tried to close this gap with a study on microgestures, in which we designed microgestures and corresponding use cases and then evaluated them in usability tests. The results of the study are summarized here in abbreviated form.


Microgestures

In order to define the term microgestures, we first had to consider gestures as a whole. Gestures are body movements that primarily serve interpersonal communication. In the field of human-computer interaction, however, they are also used to interact with devices.

The body part used to perform a gesture defines its type. Following this system, gestures can broadly be categorized as whole-body gestures, hand gestures and finger gestures (which we call microgestures). The most common gestures are performed with the arms, the hands, or a combination of both. A disadvantage of this type of gesture control is that performing a gesture repeatedly can overtax the user, so that the gestures can no longer be performed efficiently. Microgestures, on the other hand, require only minimal finger movements and can be performed several times in quick succession without exertion.

An example of a microgesture used to manipulate or switch between states

As a result of our research, we arrived at the following definition of microgestures:

Microgesture control is a specific type of gesture control and deals with the non-verbal control and manipulation of virtual objects or devices in the field of human-computer interaction. Microgestures describe a defined touchless user interaction that transmits signals and information through delicate, minimal body movements. The microinteraction consists of small, varied, intuitive movements which require little effort and can be performed in a short time.


Taxonomy

Examining microgestures precisely requires a precise terminology: each gesture, its movement sequence and its range of functions must be described and categorized. Based on our research, we developed a taxonomy for microgestures, which we later used to classify the gestures during the usability tests and to identify possible correlations. The taxonomy consists of six dimensions, each describing a different aspect of a microgesture.

The gestures we then conceptualized were based on this definition. In addition, they were designed to cover the spectrum of the taxonomy as broadly as possible, so that all aspects could be compared and evaluated at a later stage.


Prototyping & Usability Tests

In order to test the microgestures for their usability, we developed various functional prototypes. For the implementation we chose the Leap Motion hand-tracking controller, since it has a well-documented JavaScript API that allowed us to program the prototypes quickly. Via the API, the joints of the tracked hand can be read as vectors in space. With this vector data, we programmed the different movements of each gesture and the associated functions in JavaScript.

A user testing the functional prototype based on the Leap Motion
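
As an illustration, here is a minimal sketch of how such a prototype can be built with the open-source leapjs library. The pinch threshold and the toggleState() handler are assumptions for the example, not the gestures or functions from our study.

    // Minimal Leap Motion prototype: trigger a function once per "tweezer grip".
    const Leap = require('leapjs');

    let pinching = false;                    // is the pinch currently held?

    function toggleState() {                 // hypothetical application function
      console.log('state toggled');
    }

    Leap.loop(function (frame) {
      if (frame.hands.length === 0) return;  // no hand in the tracking volume

      const hand = frame.hands[0];
      // pinchStrength runs from 0 (open hand) to 1 (thumb and index finger touching)
      if (!pinching && hand.pinchStrength > 0.9) {
        pinching = true;                     // pinch closed: trigger exactly once
        toggleState();
      } else if (pinching && hand.pinchStrength < 0.5) {
        pinching = false;                    // pinch released: re-arm the gesture
      }
    });

The two different thresholds act as a simple hysteresis, so that small tracking jitter around a single value does not trigger the function repeatedly.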

These prototypes were tested as part of expert surveys and analysed against previously defined criteria. Participants first tested only the purely functional prototypes, to evaluate the movement of the gesture itself. Afterwards, a fictitious use case was tested.


Principles

In the next stage, based on the user tests and the insights gained from them, we formulated principles for designing microgestures that should ensure a positive user experience and efficient use.

Affordance

Gesture-based interfaces have not yet been widely adopted in everyday use, which means that their handling and control are still unknown to most people on first use. It is therefore all the more important that users can determine which actions are possible and are aware of the current state of the device [1]. This can be achieved through clearly recognizable signifiers, feedback on state and function, and clear affordances provided by the device itself.

Discoverability and affordance have to be considered when designing the interface

Feedback

Feedback is a key aspect of successful microgesture interaction. Touchless gestures have always had the problem that users cannot tell when the device is detecting them. In our user tests, the following questions emerged with regard to feedback: When can I interact with the device? Was my intended function triggered? When am I no longer tracked, so that I can move freely again without accidentally triggering a function? To answer these questions and give users full control, the microgesture interaction or the device must provide clear feedback. The interface needs visual or auditory feedback on the user's actions that communicates the current state.
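
To make this concrete, here is a small sketch of how the tracking state could be surfaced continuously, again using the leapjs library. The state names and the showFeedback() function are illustrative placeholders for whatever visual or auditory channel the interface actually uses.

    // Announce every change of the interaction state, so the user always knows
    // whether they are idle, being tracked, or have just triggered a function.
    const Leap = require('leapjs');

    let state = 'idle';                      // 'idle' | 'tracking' | 'triggered'

    function showFeedback(next) {
      if (next === state) return;            // only announce actual changes
      state = next;
      console.log('feedback:', state);       // e.g. update an indicator light or play a tone
    }

    Leap.loop(function (frame) {
      if (frame.hands.length === 0) {
        showFeedback('idle');                // "you can move freely again"
        return;
      }
      const hand = frame.hands[0];
      if (hand.pinchStrength > 0.9) {
        showFeedback('triggered');           // "your intended function was triggered"
      } else {
        showFeedback('tracking');            // "the device can see your hand"
      }
    });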

Metaphors

The simpler a movement is to carry out, the more strongly it is anchored in the brain. Although people gesticulate constantly, these gestures are rarely perceived consciously in everyday life. Movement and muscle coordination, however, are optimized for interactions with everyday physical objects. Learned fine-motor microinteractions in particular, like the tweezer grip, scrolling on a smartphone or adjusting a rotary control, can serve as the basis for metaphorical gestures. Since metaphorical gestures are much easier to remember and more comfortable to perform, they are also much better suited for interacting with devices. This was confirmed by the analysis of the taxonomy and the usability tests, in which metaphorical gestures clearly achieved better results than abstract or symbolic gestures.
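
As an example of such a metaphor, the sketch below maps the roll of the hand onto a value while the thumb and index finger are pinched, mimicking a physical rotary control. The quarter-turn range and the setVolume() function are assumptions for illustration, not gestures from the study.

    // Metaphorical microgesture: pinch and rotate the hand like a volume knob.
    const Leap = require('leapjs');

    let startRoll = null;                    // hand roll when the pinch began
    let value = 50;                          // current knob value (0 to 100)
    let startValue = value;

    function setVolume(v) {                  // hypothetical application function
      console.log('volume:', Math.round(v));
    }

    Leap.loop(function (frame) {
      if (frame.hands.length === 0) { startRoll = null; startValue = value; return; }

      const hand = frame.hands[0];
      if (hand.pinchStrength > 0.9) {
        if (startRoll === null) {            // pinch just closed: remember the start
          startRoll = hand.roll();
          startValue = value;
        }
        // hand.roll() is given in radians; a quarter turn covers the full range
        const delta = (hand.roll() - startRoll) / (Math.PI / 2) * 100;
        value = Math.min(100, Math.max(0, startValue + delta));
        setVolume(value);
      } else {                               // pinch released: keep the last value
        startRoll = null;
        startValue = value;
      }
    });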


A look ahead

There is no doubt that this technology has a lot of potential, which will become even more relevant in the future. This will become apparent as soon as technologies like Google's Soli sensor are precise enough and available to the mass market. They will certainly change the relationship between humans and computers, since the interaction becomes much more natural than before. The implementation of touchless interactions into everyday life may become the next paradigm shift in interface design.

Until then, however, this form of interaction has to be developed further and users have to become familiar with it. For now, the concept of touchless interaction with microgestures leaves many questions open when designing an interface or a system around it: How are interactions handled when the device is operated by several users at the same time? How does the device recognize when the user intends to interact and when they do not? How can gesture recognition be prevented from interfering with natural behaviour?

In the second part of our study (soon to be published here on Medium), we address these questions and the conception of microgestural interfaces.


Written by Max Mertens and Janis Walser-Cofalka.

Developed during an Invention Design course by Prof. Jörg Beck at Hochschule für Gestaltung Schwäbisch Gmünd.

References
[1] Don Norman: The Design of Everyday Things, 2013.
