How not to screw up your design research by choosing the right inductive and deductive methods

Piers Scott
6 min read · Mar 21, 2018


A few days ago I found myself in a local supermarket with a friend. Even though it was the middle of the day, the lines at the checkouts were excessive. So I dragged my companion over to the self-service checkouts.

My friend had never used a self-service checkout, and really had no desire to try, but I insisted. I insisted partially out of laziness — I really wasn’t in the mood to rejoin the queues — but mostly I was curious to see just how he would fare with the self-service checkout.

Professional curiosity took over, and I encouraged my friend to use the machine.

As he used the checkout, I feigned ignorance, and with the distant manner of a Viennese shrink I responded to my friend’s requests for help with the usual, “what do you feel you should do next?”

“Give you a kick up the arse,” was the response I was most frequently given. (Usually when I’m usability testing, participants don’t threaten physical harm against me.)

Designing Research Projects for Success

Whether you’re designing a qualitative or quantitative (or combined) research project, the approach you use, and the questions you ask and don’t ask will dictate the success of your project.

A few years ago I was asked to identify methods for improving the purchasing experience for customers for a fashion retailer. We had limited time and budget and we weren’t given a very prescriptive brief — we just had to check in with the company’s customers and identify opportunities for improvement.

We had access to a large set of old quantitative data — surveys, site/app analytics, payment data — so we were able to gain a strong understanding of common behaviours on the brand’s digital platforms. But we still didn’t understand how customers behaved offline, or how they moved between physical and digital touchpoints.

So we made this the focus of our research. We recruited a set of candidates who would typically use the service and created a straightforward research programme consisting of:

  1. Contextual interview — we’d interview the candidate in their home or place of work.
  2. Shadowing — we’d follow the candidate as they used the physical service.

Putting the surveys aside, the quantitative analytics and payment data told us a lot about customers’ behaviours. We knew the times of day and days of the week when customers were more likely to make their purchases. We knew the purchase journey spanned a few days, and typically started with a ‘quick look around’ before committing.

Designing for people — forget what they say, it’s what they do that counts

So with all this data we had a decision to make — how much of it, if any, do we use to inform our research process? Can we assume that the online purchasing process mirrors the offline one? The existing survey had been conducted to look for very specific answers, and we felt that some of its questions were leading. Nevertheless, the analytic data was detailed and contained some definite and consistent behavioural patterns.

This data presented us with a dilemma — one that exists at the start of every research project: should we take an a priori/deductive or an a posteriori/inductive approach?

With an a priori/deductive approach we’d go into the research with very specific questions arising from the existing quantitative data and from our own expectations of people’s behaviour, and we’d frame our research around these questions. With an a posteriori/inductive approach we’d put the analytic data and our own expectations aside while we conducted our research, giving participants more control over the direction of the interviews.

There’s an opportunity cost in using the wrong research method. By allowing participants to guide the research sessions we can go off track and end up with a broad, off-topic data set. But by focusing on specific areas we may learn nothing new; we may just end up confirming our own biases.

The researcher will always bring in their own biases, and the client brief will set the direction of the research. But the question here is, ‘when should you use deductive and when should you use inductive research methods in design research?’

In reality it’s often not a hard line between the two.

Using the Right Research Method

Because we were specifically interested in understanding the lived experience of the retailer’s customers and how they interacted with multiple touchpoints, we opted to take an inductive approach to our participant interviews and shadowing. We knew what the quantitative data told us, but we still had concerns about how accurate some of it was.

During the interviews we started with open questions to participants and followed the interview thread from there. We then shadowed the participants as they interacted with the brand’s digital and physical touchpoints, and asked some contextual questions during this process.

But after conducting the first set of interview and shadowing sessions we realised that this approach wasn’t working as we’d hoped.

The inductive interviews gave us a deep understanding of what was important to the participants (just what we wanted), but the inductive shadowing didn’t. Because we were following participants as they conducted an activity they’d done hundreds of times before on autopilot, our presence made the whole situation feel artificial — we didn’t feel that we were observing the participants acting as they normally would.

After the second session we regrouped. How might we improve the quality of the shadowing process? We discussed scrapping the shadowing part of the research, and we looked at technological solutions that would allow us to observe the process but remove us from the direct experience.

But then we asked, ‘what would happen if we leaned into the artifice?’ Rather than ask the participants to do as they normally would, what if we asked them to make their purchases in a different location (be it the client’s stores or a competitor’s)?

While the interview process would provide us with the open inductive research that we needed, the redesigned shadowing process could allow us to test specific theories coming out of the interviews.

By removing participants from their usual location we found that they were far more vocal about their expectations and experiences. We observed how participants navigated the unfamiliar store and what provoked them to ask for help, and we were able to easily compare and contrast the experience in the context of the unfamiliar location.

With other participants we asked them to shop in their regular store, but we provided them with a scenario: we gave them a list of unfamiliar items and asked them to find them. With this scenario we were able to explore their local store in a new way. By asking the participants to find unusual items we were able to explore their decision-making process when they looked at different versions of the same product.

Productive Paths

Research should be participant-led. But if we only take an a priori approach to research we will only confirm or refute our own theories, and it can prevent us from discovering those unknown unknowns. Yet a complete a posteriori approach may lead researchers down unproductive paths.

The trick is to be agile and aware enough to make the right alterations to your research project if you’re not getting the data that you need.


Piers Scott

Design Research with Rothco — Accenture Interactive, previously Design & Content @theothershq. http://piersdillonscott.com