Feet: a study on foot-based interaction (Part 2)

Introduction

Using your feet to interact with digital systems is a very new thought to most people. In order to work with such an experimental interface and make users feel comfortable with it, we prototyped a test set-up and developed a set of user-centered introduction interfaces.

As interaction designers we wanted to set a different focus than the engineers who had worked on this topic before: we focused on immediacy and tangibility, so that we could provide (haptic) feedback and validate our ideas.

Part 1 focuses on the preparations: We looked at historical applications of foot-based interaction, thought about contexts of use in a digital space and took a look at the research papers that have been written on this topic.

Part 2 focuses on the implementation: our goal, the tracking technology and the software we wrote as a base, as well as our user-centered introduction, our observations and some ideas for future foot-based interfaces.


Content

Part 1:

  • Historical application of foot-based interaction
  • Possible contexts of use in a modern, digital space
  • Research

Part 2:

  • Defining the focus of our development
  • Technology
  • User-centered introduction
  • Observations
  • A look ahead
  • Closing thoughts
The technology: Some Bosch Frames, a Leap Motion and some code.

Defining the focus of our development

A digital foot-gesture interface (especially as a primary control) is a new kind of interface that has not been used in practice before. There are various ideas and theories about its ease of use. We tried to use the insight we gained from the research we read, which was mostly of a technical nature, and think about what we, as interaction designers, can do differently to introduce users to foot-based interactions, make them comfortable with using their feet as input and verify our assumptions.

We decided that we needed direct tangibility and immediate, responsive feedback to user input in order to verify our ideas and give users the best possible experience.

Because of this requirement we decided to program an implementation ourselves. This was the only way for us to control every parameter of the tracking, the gestures and the feedback, and therefore to gain useful insight from a user's perspective.


Technology

Tracking

A Leap Motion mounted upside down on Bosch aluminium structural frames was used as the tracking device.

At the beginning of the project we thought about how we would be able to register input performed with the feet. Our first thought was to use a matrix of pressure sensors in a two-dimensional plane. This kind of tracking had been done before, but wasn't suitable for our goal: all the gestures would have had to be performed on the ground with the tip of the foot, which becomes very uncomfortable after a short amount of time.

Instead we used a Leap Motion to track the foot in three-dimensional space, which gave us exact position and movement information. This setup allowed us to register a large variety of gestures very precisely while remaining very unobtrusive: no sensors were fixed to the user's foot with a cable.

A cut-out hand was stuck to the foot in order for it to be tracked by the Leap Motion.

Admittedly, some of this unobtrusive beauty faded once we went into real-world testing. We didn't go into the actual Leap Motion library code, and since the Leap Motion is obviously optimised for tracking hands, tracking a foot does not work properly out of the box. We found a simple yet effective fix: we stuck a cut-out hand to the foot. It looks a bit weird, but it works great! We tried several different shapes and stages of abstraction, but in the end we settled on a rather normal hand shape that we cut out of thin wood and fastened to the foot with hook-and-loop fastener.

Software

In order to have as much control over the tracking as possible we decided to program our own library for processing the data recorded by the Leap Motion. Based on the processed data we developed a middleware that provides various abstracted functions, which we then used in several demonstrations.

Of the data provided by the Leap Motion we use only two values: the x and y position of the single “hand” within the tracking area. This data is recorded as a “Moment” and supplemented with additional data: a timestamp, a direction, a distance and a velocity.

A collection of Moments with a continuous direction amounts to a “Movement”. A Movement is a period of time: it consists of a start and end time, a duration, a start and end position, a distance, an average velocity and of course a direction. Movements are recorded separately for each dimension. They are created as soon as a movement in one direction occurs and are updated until the end of the movement.

Within the recorded Movements an algorithm searches for certain patterns. If a pattern is recognized, a “Gesture” is created and a corresponding event is fired. The pattern is defined individually for every Gesture: it can be a certain series of movement directions, particular threshold values that a Movement has to exceed, or something similar.
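To make this more concrete, here is a minimal TypeScript sketch of the Moment/Movement/Gesture hierarchy described above. The names, fields and threshold values are illustrative and not necessarily identical to the ones used in our repository.

```typescript
// Illustrative sketch of the data model described above.

type Direction = 'left' | 'right' | 'up' | 'down';

interface Moment {
  x: number;                   // x position reported by the Leap Motion
  y: number;                   // y position (height of the foot)
  timestamp: number;           // milliseconds
  direction: Direction | null; // direction relative to the previous Moment
  distance: number;            // distance to the previous Moment
  velocity: number;            // distance / time since the previous Moment
}

interface Movement {
  axis: 'x' | 'y';             // Movements are recorded separately per dimension
  direction: Direction;
  startTime: number;
  endTime: number;
  duration: number;
  startPosition: number;
  endPosition: number;
  distance: number;
  averageVelocity: number;
}

// A Gesture definition can be a series of directions, a threshold a single
// Movement has to exceed, or something similar. Here: a simple swipe check.
function isSwipe(movement: Movement, minDistance = 80, minVelocity = 0.5): boolean {
  return (
    movement.axis === 'x' &&
    movement.distance >= minDistance &&
    movement.averageVelocity >= minVelocity
  );
}
```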

Additionally, a debugging environment with a text display and a simple visualization of the data is part of the backend.

The functionality of the selection is abstracted into an object and can be called and configured from the frontend if needed. It uses the data from the backend to recognize MouseOver or Toggle states. The interface of the selection is written into the frontend on initialization.
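The following sketch shows how such a selection object could derive MouseOver and Toggle states from the tracked foot position. It is a hypothetical illustration, not the actual API of our library.

```typescript
// Sketch of a selection element driven by the foot position (illustrative only).

interface SelectionElement {
  id: string;
  xMin: number;          // horizontal extent of the element's field
  xMax: number;
  pullThreshold: number; // height the foot must be pulled below to toggle
  hovered: boolean;
  selected: boolean;
  pulled: boolean;       // internal: is the foot currently below the threshold?
}

// Called for every new Moment with the current foot position.
function updateSelection(el: SelectionElement, x: number, y: number): void {
  const wasHovered = el.hovered;
  el.hovered = x >= el.xMin && x <= el.xMax;
  if (el.hovered && !wasHovered) {
    console.log(`MouseOver: ${el.id}`); // the frontend could fire sound/tactile feedback here
  }

  const isPulled = el.hovered && y < el.pullThreshold;
  if (isPulled && !el.pulled) {
    el.selected = !el.selected;         // toggle only when crossing the threshold
    console.log(`Toggle: ${el.id} -> ${el.selected}`);
  }
  el.pulled = isPulled;
}
```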

The audio manager, which plays sounds on certain events, is part of the middleware. There are managers for one-shot sounds (MouseOver, Toggle, etc.) as well as managers for continuous sounds, which are loaded as audio sprites.
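A minimal sketch of the two kinds of managers, assuming a browser environment; the file names and segment boundaries are made up for illustration.

```typescript
// One-shot sounds: fired once per event (file names are illustrative).
const oneShots = {
  mouseOver: new Audio('sounds/mouseover.mp3'),
  toggle: new Audio('sounds/toggle.mp3'),
};

function playOneShot(name: keyof typeof oneShots): void {
  const sound = oneShots[name];
  sound.currentTime = 0; // rewind so rapid re-triggering works
  void sound.play();
}

// Continuous sounds: packed into one sprite file and addressed as segments.
const sprite = new Audio('sounds/continuous-sprite.mp3');
const segments = {
  calm: { start: 0, duration: 4 },    // seconds within the sprite file
  frantic: { start: 4, duration: 4 },
};

function playSegment(name: keyof typeof segments): void {
  const { start, duration } = segments[name];
  sprite.currentTime = start;
  void sprite.play();
  window.setTimeout(() => sprite.pause(), duration * 1000);
}
```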

The frontend has full access to all data (Moments, Movements and Gestures). Especially the position and velocity of the current Moment have proven an important addition to the gesture events.

In the frontend, application-oriented functions, requests and interfaces or visualizations are defined. The extent of the frontend code differs greatly between applications.


The code we developed as a base for foot-interactions-based applications is available on GitHub: https://github.com/ChristophLabacher/LeapMotion-Foot-Gesture-Recognition


Tactile feedback

In addition to visual feedback we used tactile feedback. The focus of interface design in the last few years has mainly been on visual feedback, while auditory, haptic and especially tactile feedback have received comparatively little attention.

We found out that creating patterns for auditory-tactile feedback that are intuitively understandable (called “Sound-Icons”) is very difficult, because the subjective impression differs greatly between individuals. Without context it is not even possible to define what is regarded as positive or negative feedback. These Sound-Icons have to be learned for a specific context. This means that tactile feedback cannot stand on its own (except when the only information is whether there is feedback at all). It works well as an augmentation, when the sense of touch is stimulated synchronously with the visual sense. Only then does the tactile feedback support the communication of a specific message. Moving information completely from visual to tactile feedback quickly overwhelms the user and makes him insecure. This becomes obvious if you look at the relationship between auditory and visual feedback: if a sound doesn't fit what is shown on the screen, the interface feels erroneous and causes confusion. Most research findings for auditory feedback can be transferred to tactile feedback.

In order to create the tactile feedback we first tried to find out which waveforms and frequencies work best. A clean sine wave delivers the cleanest signal, but wasn't convincing when used with frequency modulation, because it seemed too “clean” and inorganic. A saturated sine wave with a share of saw wave was better suited for our application, because the overtones in a range between roughly 40–150 Hz generated a far more interesting profile and therefore a better tactile experience. Next we studied various frequencies and found that 20 Hz, 39 Hz and 80 Hz resonated best and felt the most intense.

During the sound-design process we mainly focused on the shape of the transient in comparison to the “tail” of the sound. We tried creating interesting patterns through frequency and volume modulation and tried to translate well-established visual patterns into audio.
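We designed our signals in audio tools, but the kind of pulse described above can also be sketched in code. The following Web Audio example is only an illustration of the idea: a low sine mixed with a quieter sawtooth for overtones, shaped with a sharp transient and a decaying tail; all parameter values are assumptions.

```typescript
// Illustrative sketch of a tactile pulse using the Web Audio API.
function playTactilePulse(ctx: AudioContext, frequency = 39): void {
  const sine = ctx.createOscillator();
  sine.type = 'sine';
  sine.frequency.value = frequency;   // e.g. 20, 39 or 80 Hz

  const saw = ctx.createOscillator();
  saw.type = 'sawtooth';
  saw.frequency.value = frequency;    // adds overtones in the ~40–150 Hz range and above

  const sineGain = ctx.createGain();
  const sawGain = ctx.createGain();
  sineGain.gain.value = 1.0;
  sawGain.gain.value = 0.2;           // only a small saw component

  const envelope = ctx.createGain();  // shapes transient and tail
  const now = ctx.currentTime;
  envelope.gain.setValueAtTime(0.0001, now);
  envelope.gain.exponentialRampToValueAtTime(1.0, now + 0.01);  // sharp attack
  envelope.gain.exponentialRampToValueAtTime(0.001, now + 0.4); // decaying tail

  sine.connect(sineGain).connect(envelope);
  saw.connect(sawGain).connect(envelope);
  envelope.connect(ctx.destination);

  sine.start(now);
  saw.start(now);
  sine.stop(now + 0.5);
  saw.stop(now + 0.5);
}
```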

Right now tactile feedback is mostly used in professional audio production, because the tactile sense is more differentiated than auditory perception. Because the waves have a very long wavelength, subtle changes in volume or frequency and small imprecisions like zero-crossing mistakes are perceived much more strongly (they can be felt on one's own body). It is therefore necessary to work with high precision and care.

The cushion is placed between the back of the user and the backrest of the chair.

In order to generate tactile feedback, a vibratory module that reacts to an audio signal is needed. We used a so-called “bodyshaker”, which is basically a loudspeaker without a membrane: it consists of a permanent magnet and an inductor. We put it into the core of a cushion fitted with a foam that absorbs high frequencies. This creates a larger vibrating surface, which results in a larger subjective perception of space. Vibrations in the audible range are damped.

This cushion is fitted to the backrest of a chair and passes perceptible vibration to the diaphragm. This creates a sense of space, since we can only notice vibration at very low frequencies when our upper body functions as a resonating body. This is why we didn't fit the module to the foot: the continuous vibration would disturb sequences of movements and make it harder to learn the gestures, and the spatial sense of the acoustic pressure would be missing.


User-centered introduction

Various training applications as well as an image gallery were created for user testing

Foot-based interfaces are a completely new style of interface. Because of this, starting with application examples would have been wrong: users need to be introduced to this technology first. Our goal was to create a user-centered introduction that allows users to learn the interactions in an easy way. This required several stages of growing complexity.

Part 1: Developing a sense of space

Part 1

In this section the user can't make any mistakes. As soon as the Leap Motion starts to track a foot, a cursor (a simple small circle) and an invitation are shown. There are no decisions the user has to make or performance requirements he has to meet.

It takes some time to get used to the fact that the z position of the foot is irrelevant: only the height and the movement along the x axis matter. We tried to create an environment in which the user can develop a sense of space, which he can use in later stages. This is aided by different fields that the user crosses when he leans his foot to the right: in each section there is one adjective (normal, slow, frantic, calm, loud, quiet), and the behavior and shape of the cursor are slightly different in each of the sections. The tactile feedback is used to strengthen this experience.

The user learns to navigate horizontally and gets a feeling for the space in which he can move his foot to control the interface; there is a clear border between the sections. The variations keep him interested and give a sense of the broad range of possible tactile feedback.
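A small sketch of how the horizontal foot position could be mapped to these labeled fields; the range and boundaries are assumptions, not the values from our prototype.

```typescript
// Illustrative mapping of the x position to the labeled fields of Part 1.
const sections = ['normal', 'slow', 'frantic', 'calm', 'loud', 'quiet'] as const;
type Section = (typeof sections)[number];

// The Leap Motion reports x in millimetres relative to the sensor centre;
// we assume a usable range of roughly -150 mm to +150 mm here.
function sectionForX(x: number, min = -150, max = 150): Section {
  const normalized = Math.min(Math.max((x - min) / (max - min), 0), 0.999);
  return sections[Math.floor(normalized * sections.length)];
}

// The frontend can then adjust cursor behaviour and tactile feedback per section,
// e.g. a slower cursor easing in "slow" and a jittery cursor in "frantic".
```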

Part 2: Introducing selections

Part 2 (Section 1)

The user has now developed a sense of space and can navigate somewhat more precisely thanks to the contrasting comparisons in Part 1.

In this section there are also several fields next to each other. On these fields, selection elements are arranged.

In order not to overwhelm the user, he can't select an element in the first section: when he crosses one of the four elements, it gets highlighted and a short tactile feedback is fired.

In the next section the selection elements are positioned somewhat higher. As the user explores this section he notices that the elements can be pulled down and activated that way. This is aided by tactile feedback (MouseOver, Select, Deselect) and the visual depiction. Multiple selections are possible.

In the third section the selection elements are only implied at the top of the screen. The user tries to reach them and notices that once he moves his foot over a certain threshold, the elements come down and can be selected. Only a single selection is possible.

After finishing this section the user applies the pattern he just learned in order to get to the next part (he has to select a field to get there).
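Conceptually, the three training sections differ only in configuration. The following sketch is hypothetical; the field names and values are illustrative, not taken from our prototype.

```typescript
// Illustrative per-section configuration for the selection training.
interface SelectionSectionConfig {
  elementY: number;            // vertical position of the selection elements
  selectable: boolean;         // section 1: highlight only, nothing can be selected
  multipleSelection: boolean;
  pullThreshold?: number;      // section 3: elements only appear above this height
}

const trainingSections: SelectionSectionConfig[] = [
  { elementY: 120, selectable: false, multipleSelection: false },                      // highlight only
  { elementY: 160, selectable: true,  multipleSelection: true },                       // pull down to toggle
  { elementY: 220, selectable: true,  multipleSelection: false, pullThreshold: 200 },  // single selection
];
```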

Part 3: Introducing swipes

Part 3

In the third part of the introduction there is a different kind of visualization than in those before: the user controls a point that is “caught” between two lines. The goal is to break through them. Our objective was for users to gain a mental model of the swipe interaction.

Breaking through the borders, whose strength grows after each successful try, allows the user to get a feeling for the different parameters that control how a swipe is recognized by the system.
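A minimal sketch of this exercise, reusing the Movement type from the data-model sketch above; the growth factors and thresholds are assumptions for illustration.

```typescript
// Illustrative "break through the lines" logic: each successful break-through
// raises the swipe requirements, so the user feels the recognition parameters.
let requiredDistance = 60;   // mm of horizontal travel
let requiredVelocity = 0.4;  // mm per ms

function onMovementEnded(movement: Movement): void {
  if (movement.axis !== 'x') return;
  if (movement.distance >= requiredDistance && movement.averageVelocity >= requiredVelocity) {
    console.log('Broke through the line!');
    requiredDistance *= 1.3; // the next border is stronger
    requiredVelocity *= 1.2;
  }
}
```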

Part 4: Example application

Part 4

The last part of our introduction is a simple example application, in which the user can use everything he has learned so far.

It's an image gallery that can be navigated forwards and backwards using swipes. There is a selection that is usually hidden, but once the user lifts his foot high enough it becomes visible and he can decide to mark an image as a favorite or delete it. All of this is of course accompanied by tactile feedback.
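As a sketch of how the gallery ties the pieces together (names, thresholds and event shapes are illustrative, not our actual frontend code):

```typescript
// Illustrative gallery controller: swipes navigate, lifting the foot reveals the selection.
interface GalleryState {
  index: number;
  images: string[];
  selectionVisible: boolean;
}

function onGestureEvent(state: GalleryState, gesture: 'swipeLeft' | 'swipeRight'): void {
  if (gesture === 'swipeRight') {
    state.index = Math.min(state.index + 1, state.images.length - 1);
  } else {
    state.index = Math.max(state.index - 1, 0);
  }
}

function onMomentUpdate(state: GalleryState, footHeight: number, revealThreshold = 200): void {
  // Lifting the foot above the threshold reveals the favorite/delete selection.
  state.selectionVisible = footHeight > revealThreshold;
}
```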


Observations

We tried our prototype ourselves continuously throughout development and also had fellow students take a turn. At the end of the semester we presented it at our university's exhibition and let a lot of people from many different backgrounds test it.

Most of our assumptions turned out to be right: though a little afraid at first, people quickly got the hang of it during Part 1 of the introduction. The tactile feedback came as a surprise to many but was generally received very positively. Some struggled at first with Part 2, but the slowly increasing complexity helped them get used to the mapping and understand the principle of the selection. Part 3 was a big success: the visualization of the swipe was an image everyone understood, and while trying to break through the walls they had already forgotten they were actually using their feet. Even after our introduction, the image gallery in Part 4 often demanded too much: using swipes and selections in a functional application so soon after the first contact was challenging for most people. They usually found their way after a little guidance from us.


A look ahead

While working on this project we thought about some further application scenarios in which foot-based interaction might turn out to be helpful. These are some of our ideas:

Mail client

A mail client could, just like the image gallery, allow relaxed batch processing. Via foot gestures the user could navigate through his mails and mark or sort them. Interactions that derive from the context of the mail, like entering an appointment or saving contact details, may also be interesting to look at in the context of foot-based interaction.

Text editing

An example from the context of use we focused on would be text editing or a writing environment. Through very simple, small gestures it would be possible to call functions for editing parts of the text. Users could simply select the last word or mark it as a heading with a tap. Deleting the last written word or paragraph might also be imaginable. In this area the user group of programmers is especially interesting, because they already use a large number of shortcuts and are often interested in optimizing their workflow.

OS-Controls

Foot gestures might also be suitable for controlling functions of the operating system without losing focus on the main task, like switching between programs or spaces/desktops. Controlling the volume or an entire music player without having to leave the main application would be possible too.

Gadgets

“Tweet with your feet” or a “Feetexplorer for Facebook” would be possibilities to use foot gestures in a more playful application and encourage users to try this novel kind of interaction in a fun way: they could navigate through their timelines with their feet and like or retweet posts.


Closing thoughts

Digital foot-based interaction is a field that bears great potential. Maybe not so much in everyday use, since it is too specific for many people, but certainly in medical and pro-user contexts. It was interesting to build a foundation for these interfaces and try to figure out how to make them usable for general users. Tactile feedback is probably going to become a lot more present in the near future (think of the Taptic Engine in the Apple Watch or the Basslet). It's a great way of augmenting the user's experience without being too intrusive, because only he notices the feedback.

We open-sourced our foundation code and wrote this article in order to draw some attention to this field and give people that might be interested in building experimental interfaces a starting point. If you have any questions, don’t hesitate to contact us!


References / List of literature

  1. Karmen Franinović (Ed.), Stefania Serafin (Ed.): Sonic interaction design. Cambridge, 2013.