How trustworthy is your gut feeling in UX design?

A colleague who likes to ship said to me, “You’ve got to learn to trust yourself. I mean, you know what’s usable and what’s not — what’s key here is that we get the product shipped and move on. Get it done 80% right and keep going.”

Was it a compliment, or a dig… or both?

I thought to myself, My God, am I being needlessly pedantic? Does my embrace of user-testing reveal some terminal lack of confidence about how to design and lead a product?

To be sure, he had a point. We live in the waves of iterations, where sprint follows sprint, sure as the sea slaps on the shore. There will be time to return and improve.

And one can’t discount the value of experience. I have learned from endlessly watching people click there when I had hoped they would click right here. I shine with the patina of lesson upon lesson from user-testing, and I can guess which wireframes will leap right over usability hurdles, which might barely clear them, and which will end up head-over-heels in the mud.

Of course, there are also the guidelines to follow: NNG reports on usability best practices; Apple’s HIG and Google’s Material Design proselytize their own conventions; blogs provide war stories from the trenches.

Is it legit, then, to base design decisions on guidelines, knowledge, and a gut feeling that, yeah, this is intuitive and I know the user will navigate this wireflow right? Or should one follow best practices every step of the way, testing & validating, testing & validating, regardless of how much of a time sink such an expenditure of startup hours may be?

As usual, the answer is, it depends. Are you creating a Plain Jane vanilla usability flow, based on pre-existing patterns that users must surely recognize from their native OS, or are you introducing new icons, gestures, animations? Do you have a community of users who are tech-savvy 20-somethings that live on their phones, or are you designing for septuagenarians? The former implies that you’re probably good to go with rehashing patterns and even pushing the envelope. The latter means that you better go watch a bunch of old folks interact with your prototype to see where they tap astray.

Are you working on a feature that’s key to a golden KPI (Test!!!), or something subsidiary and not so valuable, where a pinch of user frustration won’t likely cause wanton abandonment (Meh, let’s test when there’s time and get started designing on this feature over here…)?

No matter what it depends on, I hold that even the tightest sprint timeline can spare 15 minutes for some quick-and-dirty online validation, because no matter how well I believe I know my grade of users and my patterns, a surprise always lurks. There will be those inscrutable testers on UsabilityHub who click wildly elsewhere. What gives? Are they only in it to accumulate testing points so they can get testers themselves for free? Were they just clicking willy-nilly, without regard for consequences, because they’ve been drinking? Or could it be that they’re from Belarus and didn’t understand my English?

Aberrant results must be considered, and then, if appropriate, confidently written off when conducting usability tests where one cannot watch the user. However, if more than one or two such results creep into a test and the whiff of something strange is in the air, further investigation is needed. Refine the demographic. Target Belarusians and chart the error rate. Enlist only grandmothers in Oklahoma. Whatever you do, make sure you feel comfortable that strange results are just margin-of-error noise rather than key learnings you’re writing off because you don’t have time.

I’m a creature of process, and I work usability testing into even the most mild-mannered of design sprints. This follows best practice, but more importantly it provides confidence and useful data not just for me, but for the whole team — devs, PMs, stakeholders — who may later want to know why one option was chosen over another (possibly their own) idea.

In the end, I invite you to err as far as constraints allow on the side of usability, spend the extra 20 minutes and 12 bucks, and run a tight ship. Process breeds confidence, and designing from data breeds expertise.
