Turning hand gestures into game controls: Temple Run 2.0

Anmol Singh
Published in Bucknell HCI
Oct 30, 2017

For this design project, we decided to remodel the controls of the very popular “endless running” game Temple Run. Using Leap Motion technology, we programmed a control scheme that mapped up, down, left, and right swiping hand gestures to the arrow keys. In doing so, we were able to rejuvenate the feeling of fun and nostalgia among our users while introducing them to this new technology.

Video 1 — Showing the main difference between the old version and the Leap Motion Controller version of Temple Run.

Bridging the Learning Gap

Our team had never used Leap Motion technology before. To explore this uncharted territory, we decided to try out a few games that came with the Leap Motion package.

Figure 1 — Practicing the motion of picking up an object in a game via Leap Motion.

After playing those games, we learned that certain hand gestures, such as point-and-touch and picking up an object, were recognized less accurately than others, like finger rotations, open-palm motions, and swipes. Despite that challenge, we enjoyed playing games with the Leap Motion. Thus, we unanimously decided to design Leap Motion gestures for playing a game of our choice. But which game was the big question.

And the Game Is…

When choosing the game, we discussed a few different options that all seemed feasible and would create a fun user experience. The three best options that we narrowed our search down to were the following:

  1. Simon
  2. Temple Run 2
  3. 2048

Simon was the simplest game, requiring users only to point and click on the appropriate sequence of colors. It would probably have been implemented by having the user direct the mouse with their hand position and move their hand toward the screen in order to click. Initial testing showed, however, that clicking accurately with the Leap Motion was difficult, and users might become so focused on trying to click with their hand that they would lose concentration on remembering the sequence of colors.
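Had we chosen Simon, the pointer scheme might have looked like the sketch below. It assumes the Leap SDK v2 Python bindings and the pyautogui library for mouse control (an assumption about tooling), and it is purely illustrative, since we never built this version:

```python
import Leap
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

class PointAndClickListener(Leap.Listener):
    """Hypothetical Simon controls: the palm steers the cursor, and pushing
    the hand toward the screen triggers a click."""

    def on_frame(self, controller):
        frame = controller.frame()
        if frame.hands.is_empty:
            return
        # Normalize the palm position into the [0, 1] range of the
        # Interaction Box, then scale it to screen pixels.
        point = frame.interaction_box.normalize_point(
            frame.hands[0].palm_position, True)
        pyautogui.moveTo(point.x * SCREEN_W, (1.0 - point.y) * SCREEN_H)
        # Lower normalized z means the hand has pushed toward the screen;
        # 0.15 is an illustrative cutoff, not a tuned value.
        if point.z < 0.15:
            pyautogui.click()
```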

Figure 2 — The three games we had shortlisted for our project were Simon (left), Temple Run (middle), and 2048 (right).

Both 2048 and Temple Run 2 would use very similar gestures, letting users swipe in four directions (up, down, left, right) to play. Of the two, Temple Run 2 seemed more interactive and promised users a more fun experience. Though it would be a little more difficult to implement than the other options, we settled on this game to give the user the most fun experience possible.

Design Decisions

One important thing we learned from the article “Make Things Engaging” was this idea:

“The user’s character, skills, needs (short-term and long-term), mood, etc. determine the value of the interaction for an individual.”

With this in mind, we decided to test for the skills and needs of the user within the game through Wizard of Oz testing.

Wizard of Oz

The Wizard of Oz testing was done in the earliest stage of our process. Before we started programming the chosen hand gestures, we wanted to test whether the gestures we had chosen as controls were easy to perform and intuitive enough. For this reason, we tested two sets of controls with users:

  1. Swipe left — move left, swipe right — move right, swipe up — jump, swipe down — duck
  2. Swipe left — move left, swipe right — move right, fist — jump, one finger — duck

Figure 3 — The Wizard of Oz user testing: the controller (the person sitting on the right) only observes the user and presses the control keys corresponding to their hand gestures.

Another idea from “Make Things Engaging” that resonated with us while choosing the controls was:

“A user may choose to work with a product despite it being difficult to use, because it is challenging, seductive, playful, surprising, memorable or rewarding, resulting in enjoyment of the experience.”

Weighing our testing results against the idea above, we decided to stick with the first set of hand gestures as controls. This set required fewer changes of hand pose, and users understood it fairly easily.
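A minimal sketch of how this control set maps swipes onto key presses, assuming the Leap SDK v2 Python bindings and pyautogui for synthesizing arrow-key events (our actual implementation is in the repo linked at the end):

```python
import Leap
import pyautogui

class TempleRunListener(Leap.Listener):
    def on_connect(self, controller):
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_frame(self, controller):
        for gesture in controller.frame().gestures():
            if gesture.type != Leap.Gesture.TYPE_SWIPE:
                continue
            # Act once per swipe, when the gesture completes.
            if gesture.state != Leap.Gesture.STATE_STOP:
                continue
            d = Leap.SwipeGesture(gesture).direction  # unit vector
            # Control set 1: horizontal swipes turn, up jumps, down ducks.
            if abs(d.x) > abs(d.y):
                pyautogui.press('right' if d.x > 0 else 'left')
            else:
                pyautogui.press('up' if d.y > 0 else 'down')

listener = TempleRunListener()
controller = Leap.Controller()
controller.add_listener(listener)
```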

When programming our Leap Motion to handle swipes, we ran into a few unanticipated problems. The first was that after performing a swipe, the Leap Motion would sometimes recognize another gesture as the user returned their hand to the center. To address this, we decided to run another round of testing.
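One generic way to suppress that return stroke is a short cooldown after each accepted swipe, as in the sketch below; the 0.4-second window is illustrative, and the fix we actually settled on came out of the testing described next.

```python
import time

COOLDOWN_S = 0.4  # illustrative window, not a tuned value

class SwipeDebouncer(object):
    """Ignore swipes that arrive too soon after the previous one, so the
    hand's return to center is not read as a second gesture."""

    def __init__(self):
        self.last_accepted = 0.0

    def accept(self):
        now = time.time()
        if now - self.last_accepted < COOLDOWN_S:
            return False  # probably the hand returning to center
        self.last_accepted = now
        return True
```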

Testing while playing

One of the big challenges we faced was determining the speed, distance, and coordinates for each of our gestures. For the swiping gestures, printing the swipe coordinates to a terminal window helped us work out how they behaved.

Figure 4 — The top portion shows our code to implement the Leap Motion control, while the bottom portion shows the exact coordinates being recorded by the Leap Motion for each swipe.

With the terminal window open, we recorded the x and y coordinates produced when a hand swiped in a particular direction. Going from that, we ended up using a magnitude of 0.2 as our border condition between recognizing a swipe and not. This also prevented multiple actions from registering within a single swipe, something our game wasn’t supposed to handle.
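In code, that check might look like the sketch below, which logs each swipe the way our terminal session did and applies the 0.2 border condition (a reconstruction of the idea, not our verbatim implementation):

```python
def deliberate_swipe(swipe):
    """Log a swipe's direction and apply the 0.2 border condition."""
    d = swipe.direction  # unit vector: each component lies in [-1, 1]
    print("swipe: x=%.2f y=%.2f z=%.2f speed=%.0f mm/s"
          % (d.x, d.y, d.z, swipe.speed))
    # Ignore motions whose dominant in-plane component is below 0.2, so a
    # single physical swipe cannot spill over into extra actions.
    return max(abs(d.x), abs(d.y)) >= 0.2
```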

Another problem with these hand gestures was that the Leap Motion’s limited Interaction Box required the user to keep their hand relatively flat throughout the game. Users often flicked their wrists rather than swiping, or pulled their hand back in the negative z-direction. Either could cause the Leap Motion to miss the action and make the user quickly lose the game.

When users got the hang of the motions, they found the game enjoyable. Temple Run has no set end; instead, players keep trying to beat their high scores on courses that differ every run. The user never really gets bored with the game and always has something to strive for. Especially for competitive users, this provides a strong drive to keep playing and improving on previous runs.

User Testing

Since the team members had already tested and played the game many times by this point, some of us had perfected playing with the Leap Motion controls. However, because the success of our design would be judged by users who may or may not have played Temple Run before, we decided to conduct user testing.

Figure 5 — Showing the user testing of the game for the given hand gestures.

This not only helped us understand how long it takes a user to actually get the hang of the game, but also let us observe their hand movements. As a result, we were able to adjust the minimum velocity and minimum length of swipe required for the Leap Motion to detect one.
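The Leap SDK exposes those two parameters through its configuration service. A sketch of how they can be tuned is below, with illustrative values rather than the exact numbers we settled on:

```python
import Leap

controller = Leap.Controller()
# Config writes only take effect once the controller has connected.
controller.config.set("Gesture.Swipe.MinLength", 100.0)    # millimeters
controller.config.set("Gesture.Swipe.MinVelocity", 750.0)  # mm per second
controller.config.save()
```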

We also realized there was a small yet noticeable lag between the user’s motion and the actual detection and execution of the control. This was a hardware shortcoming outside our control. In the next round of user testing, we told users about the lag and asked them to perform their motions slightly earlier than they would press a key on a regular keyboard. We had better results once users were informed about it.

Results & Reflections

Our game was received with a lot of appreciation. Not only were the users excited to try it out, some wouldn’t leave the booth until they got it right. It was “weirdly addicting”, as mentioned by one of our users. After receiving feedback from classmates on demo day through the “I like, I wish, What if…” framework, strengths and weaknesses of the interface were examined.

Strengths included:

  • The intuitive, simple gestures used
  • Game choice
  • Satisfaction of success

Weaknesses included:

  • Reliability and responsiveness of gesture detection
  • Delay between user’s gesture and action in the game
  • Use of one hand

It is also important to mention the apparent limitations of working with the Leap Motion hardware. When testing and playing the game, there appeared to be a roughly one-second delay between making the gesture and seeing the action on-screen. This delay is known as latency: the time elapsed between when the user performs a physical motion and when the system responds in a way the user can see. Latencies above 100 ms are known to degrade user experience, so it makes sense that users became frustrated with a latency of around one second in our interface.
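Only the slice of that latency inside our own code can be instrumented directly; a rough sketch of how to log it is below (a hypothetical helper, keyed by the Leap gesture id). Camera capture and the game's rendering add delay on top of whatever this reports, which is why the perceived total was so much larger.

```python
import time

class LatencyLogger(object):
    """Log the time from first seeing a swipe to dispatching its key press.
    This covers only our processing, not camera capture or game rendering."""

    def __init__(self):
        self.started = {}

    def gesture_started(self, gesture_id):
        self.started[gesture_id] = time.time()

    def key_dispatched(self, gesture_id):
        t0 = self.started.pop(gesture_id, None)
        if t0 is not None:
            print("processing latency: %.0f ms" % ((time.time() - t0) * 1000))
```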

Figure 6 — Showing the Interaction Box (field of view) of the Leap Motion, which ended up becoming a limitation for our game design.

Another limitation of the Leap Motion was the field of view provided by the controller, known as the Interaction Box. If a user’s hand left the Interaction Box, the gesture would not be read by the controller. This became a problem if the user’s hand went too far up or down when jumping or sliding, as their motion would not be read due to the limiting Interaction Box.
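In the SDK this region is exposed as frame.interaction_box, and normalizing a palm position against it without clamping reveals when the hand has drifted out of range. A second iteration could use a check like the sketch below to warn the player:

```python
import Leap

def hand_in_box(frame):
    """True when the first hand's palm is inside the Interaction Box."""
    if frame.hands.is_empty:
        return False
    # With clamp=False, components fall outside [0, 1] once the hand
    # leaves the box, which doubles as an out-of-range signal.
    p = frame.interaction_box.normalize_point(
        frame.hands[0].palm_position, False)
    return (0.0 <= p.x <= 1.0 and 0.0 <= p.y <= 1.0 and 0.0 <= p.z <= 1.0)
```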

Improvements

A number of improvements could be implemented if we had a chance to create a second iteration of our Temple Run interface. More research and testing would be done to better understand the velocity and distance indicators used for gesture detection.

Another way to combat the apparent frustration with gesture delay and detection would be to implement a ‘practice’ round where the user could develop a feel for the strength and motion of the gestures needed to control the game. In the practice round, for example, the user would not be able to die, unlike in the actual game. This would let the user experiment with the controls without the consequence and frustration of losing.

Some users also reported that the use of both hands would make for a more fun experience. This was an interesting find as it is known that the use of two hands, or bi-manual input, enables compound tasks to be performed.

Figure 7 — Bi-manual input is a future improvement that could make the game more engaging.

Bi-manual input could be integrated into the interface in a few ways. One hand could be used primarily for turning left and right, while the other handled jumping and sliding. Alternatively, the left hand could be used for turning left and the right hand for turning right, with either (or both) hands moving up or down to indicate jumping and sliding; a rough sketch of this scheme follows. More extensive research, Wizard of Oz testing, and user testing would be needed to understand the implications of a two-handed interface.
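As a sketch of that second scheme: the v2 SDK labels each hand via is_left and is_right, so routing gestures by hand is straightforward. The velocity threshold below is illustrative, and this scheme was never built or tested.

```python
import Leap
import pyautogui

PALM_SPEED_MM_S = 800  # illustrative threshold, not a tuned value

def handle_hands(frame):
    for hand in frame.hands:
        v = hand.palm_velocity  # Leap.Vector, in mm/s
        if abs(v.y) > PALM_SPEED_MM_S:
            # Either hand moving sharply up or down: jump or slide.
            pyautogui.press('up' if v.y > 0 else 'down')
        elif abs(v.x) > PALM_SPEED_MM_S:
            # Horizontal motion: the left hand turns left, the right right.
            if hand.is_left:
                pyautogui.press('left')
            elif hand.is_right:
                pyautogui.press('right')
```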

Conclusion

Overall, we created a new, fun way to experience the popular, already-familiar Temple Run. The intuitive gestures and the choice of a fast-paced, action-driven game made for a fun user experience. Although the Leap Motion’s gesture detection had limitations, such as the apparent latency, overcoming them made the experience more satisfying once the user got the hang of the game. Through a series of design decisions, Wizard of Oz testing, and user testing, we developed a more fun way to experience a game familiar to many college students.

The source code for this project can be found at https://github.com/pjo006/LeapMotion.
