The Essentials of Human-Driven UX Design

LeapMotion
Aug 5, 2014

Swimming through oceans of qualitative human data can be exhausting. But for our User Experience Research team, anecdotal insights are bread and butter.

We caught up with two of our most out-of-the-box UX designers to learn about their no-frills approach to testing 3D motion control applications in the wild — and why engineers of all stripes must learn to grit their teeth, walk into a public space, and let a group of complete strangers use their product for the first time.

Daniel Plemmons is a software engineer and user researcher who began exploring nontraditional interfaces as an Interaction Development major at the Savannah College of Art and Design, where real-world design is king.

“We had a professor who got us all really into physical computing, and really got into this idea of building our own interfaces. We were doing wall-scale infrared installation art, and we were building interactive tables. I got kind of obsessive about this layer — the layer right between the physical world and the digital world — and how that impacts the way people interact with computers, and how they interact with each other.”

Robotics-whiz-turned-UX-designer Paul Mandel guided teams through the design process at Leap Motion. He’s a graduate of Carnegie Mellon’s renowned Human-Computer Interaction Institute, where he specialized in natural user interfaces (NUIs) that bring computing into the physical world.

“In my first job, I realized that software is only as good as the people who use it. We can design the coolest feature in the world, and if nobody uses or cares about the feature, it’s like the feature doesn’t exist. Getting the chance to work on the future of computer interfaces has been absolutely fantastic for me. It’s kind of like a dream come true.”

What are three key ingredients for a successful user test?

P: Every single person on the team, including the engineers, must watch the user test. You must have physical separation so that the team watching can communicate with each other freely without disrupting the user test. And lastly, the test administrator must make it clear to the subject from the beginning that they are not the one being tested; the design is. The subject can do no wrong.

D: Different applications need different things. If you’re building something that needs to reach a mass audience, then the things you need to care about are going to be very different than if you’re building an application for a very niche set of power users, or yourself. Understanding who your application needs to reach is really important. It’s important to go in with a set of questions and ways to judge what you see.


We’ll actually construct the test, and our entire prototype, around answering a specific set of questions. Testing works much better when it’s focused. And it works much better when you have a set of critical lenses or a set of heuristics that you’re using to guide your reaction to what you see.

How do you help the tester act naturally, as if they’re unobserved?

P: The administrator needs to make them think aloud. One of my favorite ways to do that is, in the beginning, asking the user how many windows are in their house. Then they say, “Um, 9.” Then you say, “Tell me how you figured that out.” Like, “OK, there are two windows in the kitchen, then you go around the corner, and I guess there are two in the dining room.” It goes on and on like that. Tell me what you’re looking at. Tell me what you’re looking for. Tell me what you’re thinking. During the user test, it’s important to keep them talking to get valid data.

It goes back to the principle that “I am not the user.” The things that I can use and the things that I see in my application are not the things that other people see in my application. Even when I think I know who the user is, I don’t know what they’re expecting, or what they will see or what they won’t see.

What are the challenges in testing 3D motion control designs?

P: There’s no low-fidelity way to do it that I’ve found. When you’re testing for screens — because screens are 2D and paper is 2D — you can make your concept in paper, and it literally takes five minutes to sketch out. Then you can slap it down in front of somebody and let them play with it, pretend to click on it. With gesture control, however, detecting the gestures themselves is the challenge, and I haven’t found a great way to approximate that on paper.


D: So much of it is the dynamic visual feedback that people are getting, and how that feeds back into what they’re doing with their hands. We spent a lot of time testing our tutorials, and you can only get so far with paper tutorials before it really breaks down. It just takes a lot of additional effort.

One thing we’ve done is to use relatively low-fidelity JavaScript prototypes. Things that we can just hack together very quickly. Code that will never see the light of day — but it helps us get out ideas very quickly in sketch form.
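The kind of throwaway prototype Daniel describes can be as small as a single function. As an illustration only (the function name, threshold, and sample data below are hypothetical, not taken from Leap Motion’s actual prototypes), here is a quick sketch of detecting a horizontal swipe from a recorded series of palm x-positions:

```javascript
// Hypothetical low-fidelity prototype: classify a horizontal swipe from a
// series of palm x-positions (in millimeters), the kind of throwaway logic
// you might hack together to test a gesture idea before building real UI.
function detectSwipe(xPositions, minDistance = 100) {
  if (xPositions.length < 2) return "none";
  // Compare only the start and end of the motion; a real detector
  // would also look at speed and smoothness, but this is a sketch.
  const delta = xPositions[xPositions.length - 1] - xPositions[0];
  if (delta > minDistance) return "right";
  if (delta < -minDistance) return "left";
  return "none";
}

// Simulated frames: a palm moving steadily to the right, then one hovering.
console.log(detectSwipe([-80, -40, 0, 45, 90])); // "right"
console.log(detectSwipe([10, 12, 9, 11]));       // "none"
```

In practice the positions would come from live tracking frames rather than a hard-coded array, but hard-coding them is exactly the “code that will never see the light of day” tradeoff: fast to write, easy to test against, and easy to throw away.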

How do you incorporate contextual inquiry into user tests?

P: My goal as a human is to design things that people will use. This means designing the right thing, and designing it right. User testing helps you design the thing right: you start from an idea, and user testing helps you refine it into something that’s really easy to use, something people can use, dare I say, “intuitively.”

The other half is a lot more challenging. It’s this idea of “designing the right thing.” To some extent, this is about figuring out what the problem is and what people’s needs are. As it turns out, people are really terrible at telling you what they need. They oftentimes take a very iterative rather than an innovative approach to solving problems. It’s that famous Henry Ford quote: “If I had asked people what they wanted, they would have said faster horses.” So the process of a contextual inquiry helps us solve this problem.

So let’s break this down. “Contextual” means we’re going out into the user’s context. The “inquiry” is that we’re going to figure out what they do. We’re not saying, “Would you rather have this or this?” We’re saying, “You have this very well-defined process that you use currently. We want to understand that process, and we want to understand how and where that process breaks down. So imagine that you’re the master, and we’re the apprentice. We have no idea what we’re doing. We’re starting at zero. So teach us. Teach us how you do your job.”

Phrasing it that way does a few things. First, it relaxes people. Second, it typically piques their interest. Rather than having to defend what they do, they get to explain to somebody who’s genuinely interested in what they do. It also differentiates itself from the interview process, because when you interview somebody and ask them questions, they tell you how things should be.

When you say “teach me what you do,” people tend to focus on all the little quirks of the process. The Post-it notes on their monitor. The pieces of paper on their desk. Look at almost any workstation and you’ll find those things, but in an interview, whether you’re in their space or not, people won’t tell you about them. That’s what’s interesting to us as designers, because those workarounds tell us where the process breaks down: where humans have had to augment their existing processes in order to actually make them successful. Often, I think, it’s because they’re not even aware of them.

D: Doing something like a contextual inquiry, you see all the different inputs and outputs. It helps you develop something that actually holistically fits, rather than something that makes sense to you, or solves a contrived or misunderstood problem. A really good example would be if you’re building an application for the kitchen. There are Leap Motion applications for the kitchen and it’s great because your hands might be dirty. You might not want to touch your computer, but you’ve got your recipe.

If I were about to design something like that, I’d want to go and actually watch people in the kitchen. Watch them with their iPads. Watch them with their laptops. Watch how they’re moving. Watch how the cooking that they’re doing fits in with the rest of their lives. Do they have distractions from friends or family members? What are all these little things that are happening? If you do this with 7, 8, 12 people, you start to see patterns.

It’s very, very easy to make very precise tests. I can measure down to fractions of a second how quickly someone did something. But that number can’t tell you how easy the task was, how much they enjoyed it, or whether they actually completed it to their satisfaction.

After conducting a contextual inquiry, you synthesize and reduce that information down into really succinct learnings, and through that process you can get rid of the outliers and make sure the stuff you have is solid data. And then you can augment that with other things like surveys and other observational evidence.

For engineers and product managers who love quantifiable data, where do these contextual inquiries become useful?

P: The process of going out and meeting these people and interviewing them — getting to know them and observing and understanding — is something for which there is really no substitute.

D: The empathy you build just by interacting with people outside of your company and your project team — actually being able to sit down and talk to people — is pretty amazing. And if the entire team can share that empathy and can share that common research experience, it makes communication within the team much, much easier.

The conversation will shift from, “I think someone will do this” or “If I were this person, I would do this” — which are very dangerous things to say, because very rarely will you be able to properly intuit what a person will do. People are complex. Instead, conversations become “Remember what this person did,” or “We’ve seen this 5 or 6 times.” What emerges is a very solid, shared language to work with, which can be very powerful on a team. Design arguments and product arguments seem to magically cease to occur.

What kind of equipment does a development team need to pull this off?

P: You don’t need a formal testing lab. You need a laptop, a cell phone camera, Google Hangouts, and a little bit of guts to go out and ask your neighbors. Just go find people. People are generally nice.

D: People often get hung up on “If we’re going to do this, we have to do it right. We have to go all the way.” I know at least when I was in school, we read about usability specialists like Donald Norman or Steve Krug, and they have the labs with the one-way glass window, the microphones, and the thousands of dollars’ worth of cameras, and we thought, “Wow, I don’t have the resources to do that! I can’t pull that off!”

At Leap Motion, we sit down in a conference room. We run a long USB cable from a camera sitting on the back of a chair, out the door, to a laptop sitting on a cardboard box. The team gathers around, there’s a little microphone in the webcam, and then there’s our facilitator with the tester in the room. And that’s all we need.

We’d love to know — how do you connect with your users?

Originally published at blog.leapmotion.com on August 5, 2014.
