Design Research Kit

At Medium, we’re trying out a fortnightly Designer Day, where we get out of the building and work on design-initiated projects. One recent project I took on was creating a Design Research Kit: an overview of research for folks who are less familiar, with some helpful tips.

At Pablo and Jules’s suggestion, I put together a version of the kit that might be useful for those outside of Medium too, so here it is for others as well:


Firstly, a quick overview of the two types of design research

Generally speaking, there are two categories of design research:

Strategic research: Tries to understand the problem space. Explores usefulness, desirability.
Tactical research: Assumes usefulness and a general direction. Gets into the nitty-gritty, e.g. is the flow clear? Is this important part discoverable?

As you might expect, strategic research is important up front in the product process. Tactical research becomes more important as you’ve established some confidence in usefulness, and are edging your way to shipping.

In both of these types of research, there are specific methodologies. Here are some examples of methods you might use along the way:

The strategic vs. tactical distinction is important, because you don’t want to end up doing strategic research late in the product process. At that point, you’ve already sunk a lot of resources into building your product or feature and you should have a reasonable sense it’s worth doing.

Some of the methods require a bit more training than others. Below you’ll find an overview of the research that’s easier to do yourself, and an explanation of the methods that require a bit more assistance.


Research you can (more or less) do yourself

Ideally, you’d work with a researcher whenever you do research, but given limited resources that can’t always happen. Here are some methods that are relatively doable on your own, with a quick consult from a researcher:

Usability studies

What they are: 1:1 sessions where you see whether elements of an experience are usable or not by observing behavior. In many cases, 5 users is enough.

What they’re best for: Understanding whether something is usable, understandable, and discoverable, e.g. can the user understand how to complete this flow?

What they’re not good for: Usefulness or desirability. Usability tells you whether something can be used, not whether the person would actually use it.

A usability study usually involves sitting next to the participant in a friendly but neutral position.

**Pro-tip**: One way to do this is through usertesting.com, or any similar site that provides “unmoderated usability” testing. Unmoderated means you write the tasks, and the site recruits participants from its huge database of people to do the tasks asynchronously and record everything. It usually takes a couple of hours to get the videos back, and it’s a nice way to get users from all over the world.

You should still work with a researcher to write the tasks and think through the analysis (perhaps watching the first couple of videos together), but it’s an efficient way to handle recruiting and moderation when you’re pressed for time and resources. Keep in mind this doesn’t allow for the kind of clarification or additional digging you can do in person, so there are some trade-offs.
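Incidentally, the “5 users is enough” rule of thumb above has some math behind it: Nielsen and Landauer modeled the share of usability problems found by n users as 1 − (1 − L)^n, where L is the chance a single user hits a given problem (roughly 31% in their studies). A quick back-of-the-envelope sketch of that model:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# problems found by n test users is 1 - (1 - L)^n, where L is the
# probability that a single user uncovers a given problem (~0.31 in
# their original studies; it varies by product).
def problems_found(n_users, l=0.31):
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

Under the model’s own assumptions, five users surface roughly 85% of the problems, which is why returns diminish quickly after that. The exact percentage depends on L, so treat this as a heuristic, not a guarantee.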

Hallway usability studies

What they are: Just like a usability study above, but more informal — called “hallway” or “corridor” because you find people in the hallway or corridor of your office building and ask them if they have a few minutes for the study.

What they’re best for: Testing small, self-contained interactions rather than more complex flows.

What they’re not good for: Same as above, i.e. understanding whether someone would actually want to do that thing. Remember, you’re cornering them in a hallway; they don’t really have a choice.

**Pro-tip**: Stay in the building, but avoid Product people who are too familiar with our experience to give appropriately naïve feedback. (NB: Appropriately Naive is my new emo band name.)

Talking to your sister-in-law/neighbor/rando about [your product]

What they are: You know how sometimes you’re at a barbecue or brunch or whatever and someone starts chatting to you about their experience using the product you work on? That.

What they’re best for: Getting something useful out of your next small talk where-do-you-work conversation.

What they’re not good for: Hmm. Getting a date? (Actually, you might get a date? People love it when you ask good questions.)

**Pro-tip**: Understand what kind of user they are to contextualize their feedback — where else do they read/write? What’s their conception of what Medium is? Under semi-structured interviews below, you’ll find more in-depth question-asking advice.


More complex methods

Here’s a list of more complex methods that require more research support.

Surveys: Beware surveys! This might be the method I see abused the most, probably because surveys are so easy to make, yet they require quite a bit of methodological background to design and interpret correctly.

By definition, surveys are self-report, and that comes with a lot of issues, including but not limited to unreliable human memory and the fact that it’s much easier to say something than to actually do it (the most dangerous gap being between the aspirational “Yes, I would use this feature!” and actually using it).

A survey I once received. I’m sure the intention was good, but given its complexity, the results would likely be unreliable.

Semi-structured interviews: This is a critical qualitative method. The depth of nuance you get from an in-depth interview is hard to match with any other method.

Similar to surveys, what people say doesn’t always match what they do or think, so learning to read the nuances takes time. Interviews are also conversations, so if you’re not a practiced interviewer they can wander in directions that aren’t ultimately useful for what you’re trying to learn. A researcher can help not only identify the key questions (coming up with the interview “script”), but also steer the conversation to make sure they get answered.

That said, it’s just a matter of practice. If this is something you’d like to put some time into getting better at, here’s an old presentation I have about how to form questions for interviewing.

Field studies: Meeting people in their context is illuminating — whether it’s in the home where they live, the hospital where they work, the cafe where they write.

Field studies not only let the participant feel at home in a familiar place, but they also allow you to observe elements of their experience they may not think to verbalize, or even be aware of in the first place. Imagine trying to make a better oven and only being able to interview cooks, rather than observing them in the kitchen, actually cooking.

Diary studies: Diary studies capture usage throughout the week, creating a longitudinal picture that would otherwise be difficult to get. For instance, you might ask people to document any time they read digital media, then answer a few relevant questions: What prompted you to read this? Where did you read it? How could it be better?

As mentioned, humans have terrible memories, and this is a great way to document moments. The diary entries become the scaffolding for a deep-dive interview that happens at the end of the week.

Usage recording: Lookback.io lets you create a special version of the app you’re working on that records on-screen usage as well as the front-facing camera. (This only happens when a user opts in and, again, only in the special version of the app; a special link is sent to each participant.)

So, uh yeah, understanding context is important. This person is reading while walking through downtown SF (!)

Usage recordings can be both strategic and tactical research, depending on what you’re looking for. For instance, they might serve as a late-stage check that a new feature is well understood and usable. However, you might also use them for more strategic research when you’re just trying to understand baseline behavior in your app.


There are many other methods out there (e.g. contextual inquiries, micro-surveys, eyetracking, focus groups, participatory design, card sorting, cognitive walkthroughs), but those are some of the ones I’ve found relevant most recently.

Ultimately, the more familiar you become with different methods, the more likely you are to know which one is appropriate for a particular question you’re facing. As mentioned in the beginning, the usefulness and relevance of your findings greatly depend on which method you use.

Beyond picking the right method, it’s always best to triangulate with multiple methods when possible, whether by combining the ones listed above or by drawing on other method types, like data science or customer support tickets. We’re constantly moving fast, so 100% confidence can’t always be attained, but if you keep doing research throughout the product process, your confidence will keep growing as you go.