Feet: a study on foot-based interaction (Part 1)

Introduction

Using your feet to interact with digital systems is a very new thought for most people. To explore such an experimental interface and make users feel comfortable with it, we decided to prototype a test set-up and developed a set of user-centered introduction interfaces.

As interaction designers we wanted to set a different focus than the engineers who worked on this topic before us: we focused on immediacy and tangibility, so that we could provide (haptic) feedback and validate our ideas.

Part 1 focuses on the preparations: we looked at historical applications of foot-based interaction, considered possible contexts of use in a digital space, and reviewed the research papers that have been written on this topic.

Part 2 focuses on the implementation: our goal, the tracking technology and the software we wrote as a base, as well as our user-centered introduction, observations and some ideas for future foot-based interfaces.


Content

Part 1:

  • Historical application of foot-based interaction
  • Possible contexts of use in a modern, digital space
  • Research

Part 2:

  • Defining the focus of our development
  • Technology
  • User-centered introduction
  • Observations
  • A look ahead
  • Closing thoughts

The result: a working tracking solution and a collection of applications that introduce users to controlling interfaces with foot gestures.

Historical application of foot-based interaction

One of the reasons for our interest in foot-based interaction is that, even though gesture-controlled interfaces have been around for a while, there are almost no digital interaction concepts for feet. There have, however, been some applications of foot-based interaction in the analog world. In order to draw conclusions about their relevance and function, we took a look at their contexts of use.

Generally speaking, in the past the foot came into use out of necessity and was only employed for very simple, linear tasks during more complex activities in which both hands were already occupied or one could not look away from the primary occupation. The input device was always a mechanical pedal, and the only gesture was pressing it down. There was no need to learn gestures in today's sense, because the expected action could be inferred from the shape of the “input device”.

Examples of this are working at a sewing machine, where the foot controls whether it is running or not, or at a spinning wheel, where the foot sets the speed by driving the wheel directly. When transcribing audio recordings of interviews or speeches, pedals are used to wind the recording forwards or backwards or to pause it without having to remove one's hands from the keyboard. The most common everyday example is the accelerator pedal in vehicles, where hands and eyes already fulfill important functions. People who work in sterile environments use their legs and feet to operate mechanical buttons and switches at knee height. A similar interaction occurs when the emergency-stop button of a dangerous machine is pushed.

Looking into the past reveals, unsurprisingly, that hands have been used for primary interactions because of their higher precision, while feet were only used to control simple secondary functions.


Possible contexts of use in a modern, digital space

Our goal was now to put foot-based interaction into a modern context: thanks to tracking, explicit gestures can be performed, and in digital systems these can be assigned to a large variety of functions.
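
To make this concrete, here is a minimal sketch of how recognized foot gestures could be bound to functions in software. The gesture names and handler functions are hypothetical placeholders; our actual implementation is described in Part 2.

    # Minimal sketch: binding recognized foot gestures to functions.
    # Gesture names and handlers are hypothetical placeholders.

    def toggle_playback():
        print("toggle playback")

    def scroll_down():
        print("scroll down one page")

    # A small dictionary is enough to assign a gesture vocabulary to actions.
    GESTURE_ACTIONS = {
        "tap": toggle_playback,
        "swipe_forward": scroll_down,
    }

    def on_gesture(name):
        # Called by the tracker whenever it recognizes a gesture.
        action = GESTURE_ACTIONS.get(name)
        if action:
            action()

    on_gesture("tap")            # triggers toggle_playback
    on_gesture("swipe_forward")  # triggers scroll_down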

Focus/Work

If a user feels challenged, but not overwhelmed, by an activity, a “flow” experience can occur. Time and surroundings fade away: the user loses himself completely in the activity and is extremely focused. In this state a postural change is often disturbing and can break the concentration and the focus. The activity is interrupted, and the concentration has to be built up again, if that is possible at all.

Foot-based interaction seems appropriate for tasks during which both hands have a fixed position and are busy. A big change of the hands' actions, for example moving a hand from the keyboard to the mouse, is disruptive. Even small changes of position, like those needed to enter multi-key shortcuts or special characters, slow the user down, because often one hand has to press two keys at once, which never happens during normal typing.

Foot-based interaction therefore suggests itself in contexts of use that require fast typing and in which the user is strongly focused on his activity.

Relaxation/At home

Besides situations in which highly concentrated work is necessary, there are also moments in which a user seeks distance from his activity, physically in order to reflect or mentally in order to relax. The user is either not fully focused or in a bodily posture in which hand-based interaction is not convenient: not only mouse interactions but also touch-based interactions require a tense posture. If the user is sitting leaned back, it is pleasant for him to use his feet to control interactions: there is not much weight on them, and they can be moved easily and precisely. Simply because of human physiognomy they are then closer to the screen than any other extremity, so they can be easily tracked.

Foot-based interaction also suggests itself in contexts of use where the user has a relaxed posture and is seeking distance from the screen.

Research

In order to define the focus of our project we took a look at the research that has already been done in the field of foot-based interactions.

Emphasis in the quotes was added by us.

Putting Your Best Foot Forward: Investigating Real-World Mappings for Foot-based Gestures [1]

The most interesting part of this paper is a user study in which users were asked to propose arbitrary foot gestures for certain interactions (3). The suggestions were very diverse (4); larger groups of interactions (e.g. “Double Tap” considered as “Tap”) had a much higher consensus, because many of the gestures were transferred from familiar finger gestures (5). Of the possible kinds of gestures (kicking, tapping, rotation (1)), those that required only minimal movement of the feet were proposed more frequently (4, 7, 8). Gestures for different interactions can overlap, as long as they are used in different contexts; a sketch of such a context-scoped gesture set follows the quotes below. A smaller set of gestures proved itself in practice (6).

  1. “The literature has considered various foot gestures for different contexts and one can find three emerging categories: kicking, foot tapping and ankle rotations.”
  2. “The human foot is a highly dexterous system with advanced movements using multiple joints that increase in movement complexity from the hip to the ankle. […] Each leg joint provides multiple movements with varying ranges of motion. While the lower limb allows for a variety of maneuvers, studies comparing it to the arms show that it is not as precise as human hand and fingers.”
  3. “The study primarily consisted of a researcher presenting participants with a range of mobile interaction scenarios (first column of Table 1) for which each participant was requested to perform a foot-based gesture that they believed was appropriate for the required situation.”
  4. “Overall, we found a large diversity in the gestures participants selected for each command. Across all commands there was a mean agreement value of 0.13 (s.d. 0.08), with 90% of the commands having agreement values below 0.2. The gestures for shuffle (shake foot, 0.46), rotate clockwise (trace clockwise circle, 0.29) and rotate counter-clockwise (rotate foot counter-clockwise, 0.29) were the only three commands where a clear preference between participants was observed.”
  5. “Our findings suggest that many of the gestures are logical mappings from commands participants are already familiar with.”
  6. “Context of use can ensure small gesture sets: small gesture sets are easily remembered by users and can be encouraged in foot-based interaction by using the device’s context to eliminate ambiguity.”
  7. “[…] users struggle more with backwards kicks. Where possible, these should be avoided.”
  8. “Our results showed that users are faster, more accurate and prefer rate-based techniques over displacement based techniques.”
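
As an illustration of quote 6, the following is a minimal sketch of a context-scoped gesture set in which the same small set of gestures maps to different commands depending on the current context. The contexts, gestures and commands are made up for the example and are not taken from the paper.

    # Minimal sketch: one small gesture set, mapped to different commands
    # depending on the current context. Contexts, gestures and commands
    # are hypothetical examples.

    CONTEXT_GESTURES = {
        "music_player": {
            "shake": "shuffle",
            "rotate_clockwise": "volume up",
        },
        "document_viewer": {
            "shake": "undo",            # same gesture, different meaning here
            "rotate_clockwise": "zoom in",
        },
    }

    def resolve(context, gesture):
        # The context removes the ambiguity of an otherwise overlapping gesture.
        return CONTEXT_GESTURES.get(context, {}).get(gesture)

    print(resolve("music_player", "shake"))     # -> shuffle
    print(resolve("document_viewer", "shake"))  # -> undo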

Novel Interaction Techniques Based on a Combination of Hand and Foot Gestures in Tabletop Environments [2]

In this paper foot gestures were examined as an addition to hand gestures in a multitouch tabletop environment. These insights seem particularly important: foot gestures are already used successfully in various situations (see “Historical application of foot-based interaction”) (2). They can give the user additional degrees of freedom and thereby support him in his task (4). The foot gestures have to be a simple addition to the hand gestures and must happen as automatically as possible, so that they assist more than they distract (5). They should therefore consist of small movements of the foot (3, 6). Users get used to these foot gestures quickly, and they do not disrupt the workflow (7).

  1. “However, performing simultaneous tasks on tabletop systems is difficult even with a multi-touch capability. Previous studies of multi-touch systems have shown that a user typically coordinates both hands to perform a task where the dominant hand is the main controller and the nondominant hand provides support. Although the simultaneous performance of two different tasks with different hands is possible, it may increase user’s cognitive load and reduce task performance.”
  2. “The feet are human limbs which can be used occasionally as inputs to perform tasks while the hands are occupied. Although they are not as convenient to use as the hands, the feet are used to control numerous mechanisms in various everyday machines […]”
  3. “Considering a shared, limited space around tabletops, however, we argued that subtle gestures which require less movement, such as foot rotations or body weight shifting, are more suitable.”
  4. “Supplementary foot inputs can be used occasionally when working at workstations to increase the degrees of freedom of manipulations.”
  5. “According to human cognitive knowledge, the simultaneous performance of multiple activities can produce high cognitive loads and reduce task efficiency. Therefore, multimodal tasks must be divided into primary and secondary subtasks. The primary task requires the user’s attention whereas the secondary task demands less attention and it should preferably be processed automatically.”
  6. “All foot gestures in the pressure distribution change group had higher scores in terms of ease compared with the gestures in the foot rotation and transition group. […] foot gestures should be small movements that a user can perform with small parts of the foot.”
  7. “This experiment showed that novice users could combine hand and foot gestures to perform multiple tasks on a tabletop system and these tasks took approximately the same amount of time as when the hand only method was used. […] Several participants also reported that using the feet as a controller was fun and it made boring tasks more interesting.”

Foot-based mobile Interaction with Games [3]

This paper looks at foot-based interaction in the context of mobile games. The tracking is done via the camera of the mobile device. The paper was written in 2004, and accordingly the performance of the devices used (PDAs) is very low. Especially interesting in this paper is the reaction of the users to the limited functionality (1, 2, 3).

  1. “[…] current hardware limits our system to the processing of 5–7 frames/second […]”
  2. “The overall feedback of the users was very positive.”
  3. “Nevertheless after a short time most users get used to the restriction […].”


References

  1. Jason Alexander, Teng Han, William Judd, Pourang Irani, Sriram Subramanian: Putting Your Best Foot Forward: Investigating Real-World Mappings for Foot-based Gestures. Austin, 2012. http://www.cs.bris.ac.uk/Publications/Papers/2001500.pdf [28.01.2015]
  2. Nuttapol Sangsuriyachot, Masanori Sugimoto: Novel Interaction Techniques Based on a Combination of Hand and Foot Gestures in Tabletop Environments. New York, 2012. http://dl.acm.org/citation.cfm?id=2350053 [28.01.2015]
  3. Volker Paelke, Christian Reimann, Dirk Stichling: Foot-based mobile Interaction with Games. New York, 2004. http://pdf.aminer.org/000/005/083/foot_based_mobile_interaction_with_games.pdf [28.01.2015]