Death to Lorem Ipsum Prototypes

Context, Context, Context, and Airplanes

Kris Paries
Thinking Design
4 min read · Aug 31, 2017


In the design community, UXers and digital product designers have made huge strides in the past five years. Companies are actually starting to build entire teams and business units around user experience design.

This is a far cry from the lone designer of years past who mostly worked on marketing collateral, but would sometimes also try to help the hundreds of developers and dozen or so product managers with a new feature. With a more mature community come better processes, better-prepared employees, and a larger talent pool.

As these organizations have come to understand the value design brings to the table, more pressure has been put on the process, and on the designers themselves. Luckily, we have an ace in the hole: user testing. Instead of bickering endlessly over opinions and anecdotal evidence of what one user said at a conference years ago, we’re turning to data, and to users, for answers.

However, I think it’s time we up our game.

I’d like to illustrate how we should mature our user testing with a design lesson: one that the non-digital world learned 70 years ago.

In the 1940s, the military realized that something was very, very wrong with the air force. Pilot casualties were reaching 17 per day. And that was in non-combat scenarios.

Initially the bigwigs at HQ tried to simply write this off as pilot incompetence. After the pilots cried foul, the military made an earnest effort to figure out what was causing these disastrous results.

As it turned out, the cockpit designers back in the 1920s had gathered measurements from hundreds of pilots and had built the cockpit around the average of those men. (In digital product design we refer to this as “hitting the 80% use case”.)

A lieutenant named Gilbert S. Daniels hypothesized that perhaps there had been a fundamental change in pilots’ measurements over the previous twenty years. So he measured more than 4,000 pilots of his own, across 140 dimensions, and found something different from what he was expecting. Of those 4,000-plus men, judged on just ten of the core dimensions, can you guess how many “average”-sized men he found?

Zero.

There was no “average” user. They had spent years building a one-size-fits-all cockpit, but ended up with one size that fit no one: not a single pilot surveyed met all ten criteria. So by building a product for the general group of pilots using generalized data, they had not-so-generally failed.
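To get a feel for why the answer is zero rather than just small, here is a toy simulation (my own illustration, not Daniels’ actual data or method). It assumes ten independent, normally distributed dimensions and counts a pilot as “average” only if every dimension falls in the middle 30% of the distribution, reportedly about the tolerance Daniels used:

```python
import random

# Toy simulation: ten independent, normally distributed body dimensions.
# A pilot counts as "average" only if EVERY dimension falls in the middle
# 30% of the distribution (roughly the tolerance Daniels reportedly used).
random.seed(42)

NUM_PILOTS = 4000
NUM_DIMENSIONS = 10
TOLERANCE = 0.385  # middle 30% of a standard normal is about +/- 0.385 sigma

def is_average(pilot):
    return all(abs(measurement) <= TOLERANCE for measurement in pilot)

pilots = [[random.gauss(0, 1) for _ in range(NUM_DIMENSIONS)]
          for _ in range(NUM_PILOTS)]

average_pilots = sum(is_average(p) for p in pilots)
print(f"'Average' pilots out of {NUM_PILOTS}: {average_pilots}")
```

Each dimension passes about 30% of the time, so passing all ten at once happens roughly 0.3^10 of the time: about 1 in 170,000, or effectively zero out of 4,000.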

Let’s Stop Causing User “Casualties”

As digital product designers, we do this all the time. How many times have you been on a customer call where you showed them a mock-up with dummy data? How many times have we shown a user a social media workflow, but using someone else’s content?

☝️ DON’T DO THIS ☝️

Without the context of data and content that’s relevant to that specific user, they land in a world of hypotheticals and can only give general feedback. We end up asking, “Imagine you saw this, with content relevant to your context. How would you react?” That feedback is garbage. You’re asking users to be design experts instead of experts in their own field.

So how do we fix it?

This part is a bit more difficult. The Analytics team at Adobe has had the incredible advantage of a talented development team that prioritizes user testing and feedback. This has resulted in a workflow built around interactive prototypes that plug into the user’s own system and leverage that user’s data and context.

But of course, regardless of the size of the company, that’s not always going to be an option. There are smaller things we can do to get the same advantage: start with any data we already have on the user we’ll be talking to. If you can log in as that user, do so and see what type of content they’re sharing. Then put together a second copy of your mock-up or walkthrough using their own profile picture and their own content, so their feedback is grounded in their reality. A sketch of that swap follows below.
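Here’s a minimal sketch of what that personalization step can look like. The file names and fields (user_profile.json, display_name, and so on) are hypothetical stand-ins for whatever your product actually stores:

```python
# A minimal sketch of personalizing a mock-up before a user call.
# File names and fields ("user_profile.json", "display_name", etc.)
# are hypothetical; use whatever your product actually stores.
import json

def personalize_mockup(template_html: str, user: dict) -> str:
    """Swap lorem-ipsum placeholders for the user's real content."""
    replacements = {
        "{{display_name}}": user["display_name"],
        "{{avatar_url}}": user["avatar_url"],
        "{{latest_post}}": user["recent_posts"][0],
    }
    for placeholder, real_value in replacements.items():
        template_html = template_html.replace(placeholder, real_value)
    return template_html

# Data exported after logging in as the user (with permission).
with open("user_profile.json") as f:
    user = json.load(f)

with open("mockup_template.html") as f:
    template = f.read()

with open("mockup_for_user.html", "w") as f:
    f.write(personalize_mockup(template, user))
```

The tooling doesn’t matter; any find-and-replace over your mock-up assets works. What matters is that the user sees their own face and their own content on the screen.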

If your organization isn’t quite to that point yet, and you don’t have access to that type of user data, another option is to set up a preliminary call where you show no UI whatsoever and only gather data about their current workflow. Then reflect that workflow back, specific to their use case, when building a walkthrough customized for them.
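One lightweight way to make those call notes actionable is to capture them in a small structured form and generate the walkthrough framing from it. The schema below is just one possible shape, an assumption rather than a standard:

```python
# A minimal sketch of turning preliminary-call notes into a customized
# walkthrough script. The schema below is an assumption, not a standard;
# capture whichever workflow details matter for your product.
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    action: str       # what the user does today
    tool: str         # where they do it
    pain_point: str   # what slows them down

def walkthrough_script(user_name: str, steps: list[WorkflowStep]) -> str:
    """Frame each prototype screen around the user's own workflow."""
    lines = [f"Walkthrough for {user_name}:"]
    for i, step in enumerate(steps, start=1):
        lines.append(
            f"  Screen {i}: today you {step.action} in {step.tool}; "
            f"here is how this design addresses '{step.pain_point}'."
        )
    return "\n".join(lines)

# Hypothetical notes from a preliminary call.
steps = [
    WorkflowStep("export weekly metrics", "a spreadsheet",
                 "manual copy-paste every Monday"),
    WorkflowStep("share results with the team", "email",
                 "charts are stale the moment they are sent"),
]
print(walkthrough_script("Dana", steps))
```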

Is it a lot more overhead? Is it a lot more work? Of course it is. But that could be the difference between a successful product launch with heavy adoption and a new feature set that the vast majority of your users think is interesting but never actually incorporate into their day-to-day workflow.

Nothing we design counts if it’s not used. It doesn’t matter how cool the users think the design is; it only matters if it actually improves their lives and makes their tasks easier.

So let’s take our user testing to the next level. Let’s throw out generalizations, averages, and just shooting for the 80% use case. A single user’s contextual feedback is so much more valuable than 4,000 users giving feedback without context.

It’s the difference between a successful air force and 17 casualties a day.
