Research shouldn’t be faster & lighter — it should be slower & thicker

Farrah Bostic
The Difference Engine LLC
19 min read · Aug 31, 2018

Why don’t companies do more research? When I left consultancies about eight years ago, I thought I had a good sense of the reasons for skipping a round of research, cutting budgets for it, or forgoing it altogether. What clients told me, over and over, was:

  • It costs too much
  • It takes too long
  • Results get lost in the shuffle
  • We already know the answers
  • We don’t believe the answers we get

To me, it seemed that the first two were “solvable”; the rest were problems of culture. I believed that if you could solve the first two, you could begin to figure out why the latter three happened, and eventually evangelize or educate through and around those challenges.

Eight years later, I believe that I was wrong.

We tried

But I wasn’t alone. There was already a trend towards digitizing market research. From the first online sample panels, to the early days of online movie trailer and commercial testing, to discussion boards, chat-room ‘focus groups’, live streaming of focus groups, virtual stores, and remote qualitative/video diary platforms, market researchers have always searched for ways to leverage technology to persuade more clients to do research. Early and significant players like SurveyMonkey and FocusVision, Qualboard and iTracks, QualVu and Revelation (the latter two since bought by FocusVision) were followed quickly by Google Surveys, dScout, UserTesting.com and Respondent.io, among many others.

The first generation of technology solutions was sexy to consumer insights and market research groups at big brands, and initially offered some cost savings — you didn’t have to pay the same recruiting fees or travel expenses for talking to people in person, and you didn’t need phone banks of people calling people to screen them.

But costs crept up as more brands and businesses started to use these platforms as substitutes for traditional methods, requiring access to harder-to-reach target audiences, sampling practices more in-line with traditional sample providers, or qualitative methodologies that cost more to incentivize.

In the end, the amount of time it took to actually analyze this research didn’t diminish with these tools — for all that the various platforms have built dashboards, tagging systems, automatic transcription and the like, research and insights clients that don’t have the time to field traditional research don’t have the time to engage in deep analysis either. What was supposed to be fast, lightweight, self-serve, and easy is usually still too technical for someone who doesn’t conduct research for a living.

I remember explaining to clients about ten years ago that Option A for 6 in-person ethnographic interviews would cost the same as Option B for 12 online video diaries because they took at least the same amount of time to analyze.

(The truth was, Option B probably took more time, since in the early days of video diaries, participants would upload tons more video, photos and text in response to our prompts than we would get in a 3-hour in-home immersive. Some of the more creative participants in the early days had so much fun doing it, they would edit their own little films and secret them on the Flip cameras we’d send them.)

I worked with clients who didn’t budget for customer research but spent somewhere between $1,800 and $5,000 just to field a SurveyMonkey Audience panel survey. The effort involved in creating and programming that survey so it fit SurveyMonkey’s requirements and the client’s criteria, coupled with the need to pull cross-tabs and conduct analysis, meant we got less value than if we had hired a professional quantitative research company.

How I know I was wrong

I tried everything. Armed with first-hand experience of all the platforms and ideas about how to strip them down further, I spent the first four years or so building my own company on the idea that I could turn around high-quality, deep insight, qualitative research quickly and at lower cost.

I experimented with using quantitative sample providers to test screener assumptions and recruit respondents simultaneously — and discovered how widely the quality of various panels really varies (I also met my first professional respondent, whose email address was literally surveyslacker10@gmail.com).

I eliminated the need for recruiters by building my own ‘panel’ — really a MailChimp mailing list that I used, and still do, both as my own sample database and as a network for finding the target audience.

I eliminated the need for formal focus group facilities by conducting research remotely using Skype, Zoom or UberConference, or in-person using services like Breather.

I eliminated the need for check or cash processing fees on incentives by paying people in e-gift cards.

I eliminated the need for a field manager by using tools like TypeForm for screening/recruiting, Calendly for scheduling meetings or calls, Google Sheets for keeping track of respondents, and DropBox or Drive for sharing notes and recordings with client teams. And I rarely traveled.

In the end I “succeeded” — I proved that I could do it faster while still doing it better. I could recruit, field, and analyze 10 in-depth user interviews in about a week, sharing recordings of interviews in real time, writing daily summaries of feedback, and then delivering in-depth written reports and recommendations, for between $8,000 and $10,000, inclusive of recruiting and incentives. I was a steal to big companies, and a worthwhile investment to post-Series A startups.

And then I stopped taking those requests, because while I may have proved it could be done faster and cheaper, I knew for sure it wasn’t better. It was grueling work that could only be done well by a very senior, experienced researcher — which made it nearly impossible to scale at those prices. Hard costs increased as the once-free tools moved to subscription business models (rightly so), and as participants required increased incentives to engage in anything deeper than a thirty-minute phone call.

Meanwhile, clients still told me research cost too much and took too long. Research still fell down the memory hole after delivery; and the same skepticism about the validity of results and likelihood of learning anything new remained.

We failed

I believe in the basic tenets of Just Enough Research; I’ll do cheap & cheerful research when it’s the only way I’ll get to understand what customers and prospects are going through. So when I see something like this, I nod my head and keep scrolling:

Yes! I completely, with all my heart, agree.

And yet… I hesitate. Because unless you are Erika Hall, or let’s say, me, or someone with years of experience doing rigorous research at design studios like frog or IDEO, or research consultancies like Hall & Partners or Flamingo, you should probably ignore this advice altogether.

After all these experiments, and all this evangelizing for more founders and teams doing their own research, I’ve come to believe that just enough research is not good enough research.

It’s such a seductive thought, though — any research is better than no research. Surely that’s just true! But it’s not true.

No research is bad, but bad research can be worse. Bad research can distract leaders, ruin morale, waste resources, alienate customers, and lead decision-makers astray.

In my mind, this advice should come with a disclaimer:

“Professional driver on closed course. Do not attempt.”

The tools that have been created to make research faster and lighter suggest that research is something anyone can do. But the high-end tools were designed by research professionals to solve a particular problem or problems research professionals have. They never imagined an honest-to-god n00b trying to build surveys without knowing about skip patterns or matrices; they didn’t figure start-up founders with no research training would try to bend a digital ethnography tool to do user interviews.

The tools that were built for research amateurs — the more ‘accessible’, self-serve, user testing-focused tools — are built around the idea that it’s more important to do any research than to do research well. They’re not flexible or robust enough for professional researchers — though they’ll do in a pinch… if you’re a pro.

Of course Erika thinks you should talk to ten people; God help me, I do, too. But, you should know that I have dreams about interviewing people, literally doing it in my sleep. Sometimes at dinner parties, my friends accuse me of “moderating”. I once interviewed a half dozen senior executives about enterprise software after falling down a flight of stairs and spraining my ankle, my foot propped up under a bag of ice between us, the pain practically blinding me. This is literally what I do. It’s the primary tool in my toolkit as a strategist and researcher — I wouldn’t know how to give advice or develop strategies without it. Your mileage almost certainly will vary.

When you should not attempt research without the advice of a professional

If you’re the founder of an early-stage startup that’s building a product & trying to raise, do not attempt

Maybe you’re thinking about starting a company and you want to understand the experience of other people who have the problem you’re trying to solve. If it’s genuinely still just an idea, and you haven’t hired anyone, or prototyped anything, or built anything — even a pitch deck — and you don’t have any advisors or investors and aren’t meeting with any, then absolutely, go ahead and talk to 10 strangers. I would hope that most founders in that situation would be doing that anyway.

But if your job is managing the first 5 employees, keeping morale high while you get kicked in the teeth by potential advisors, investors and co-founders or strategic hires, you probably can’t — emotionally — afford to sit next to and listen to ten strangers who don’t understand what you’re trying to do, give you bad news about what you’ve built so far, or tell you that while, yeah, it’s a problem, it’s not that big of a deal.

It isn’t that this information is unimportant or would be bad to know, it’s that you shouldn’t be the one to get it first-hand. You will be bad at it. It will distract you from your goals. You will be resentful of the process, offended by what you hear, defensive with someone who might be a fairy godmother-level of early evangelist in disguise — if only you didn’t react so terribly to their feedback, which will be, according to you, unfair or ignorant or both.

So you might need someone else to gather this information for you. It doesn’t have to be someone like me (I’m pretty expensive), but it does need to be someone you trust, with experience, who can share the information with you in a way that helps you make good, hard decisions. Your research should never be ‘nice to know’ — it should always be understood as critical to building your business. It should be something your organization absolutely must do. But you should be forgiven for not being the one to conduct it yourself.

I once worked with a friend and startup founder to conduct her early stage user research. She told me listening to interviews was like nails on a chalkboard, or having her skin ripped off. She wanted the feedback, she needed the feedback, but she needed me to be the filter, to help her make sense of it, to translate it into opportunities for action.

If you’re just going to pitch people and not listen, do not attempt

It’s a subtle thing — but, seriously, never “talk to people”. That’s sales. If you’re trying to do research, you want to create the conditions for some real, raw, constructive listening. You want to be fully present in the moment, observing people’s behavior and body language, and noticing their facial expressions and tone of voice, and hearing what they have to say and how they say it. You want to be actively listening and asking follow-up questions that genuinely follow from what they just said, as opposed to racing through the five or ten (please, no more) questions you want to ask.

Chatting up people at a party or a conference or in line at Starbucks is great — as a start. These are great ways to try out questions or practice your skills. But chatting people up isn’t research. And if you’re fishing for a specific response, or tempted to sell them on your idea — or if they’re just being polite and don’t really care — listening is going to be hard, bordering on impossible.

If you don’t have a plan about who you want to ask, what you want to ask, and how you want to ask, do not attempt

The hardest parts of conducting research aren’t in the fieldwork or listening to people, they’re found in figuring out who to talk to, what to get them to talk about, and then figuring out what it all means. In other words, you need a plan.

How will you figure out who these ten strangers are? Where will you find them? How will you convince them to talk to you? How do you know they have the problem you’re trying to solve? What will you do if you discover that they have the problem you’re trying to solve but don’t care that much, or are satisfied with their current solution, or like your competitor? (Answer: you should not walk away from them! But a lot of founders and designers, agencies and brand teams do exactly that.) What do you want to learn from them? What decisions are you trying to make? Are you able to listen in an open-ended way, or are you focused and listening for something specific? Will you ask everyone the exact same questions or will you build as you learn? Are you going to be tempted to engage in a little friendly p-hacking — and stop interviewing after you hear what you want to hear, or keep interviewing until you do?

https://marketoonist.com/2017/06/extrememarketresearch.html

How do you make a plan? You should write down what you want to ask; you should write down your hypothesis about who “strangers who have the problem you want to solve” are; you should ask a colleague or advisor or partner what they think about these questions and hypotheses and how answers might help you make decisions. You should practice asking your questions to someone who’ll give you constructive feedback on the clarity of the questions and your demeanor when you ask them, so you can be clear and neutral in real life. You should arrange for someone to come with you when you have these conversations, so someone else can take notes and observe, and then you can compare your impressions afterward and refine your approach. When you’ve done all that, then you can go ‘talk to people’.

If you can’t be flexible in the moment, and adapt to the person you’re listening to, do not attempt

If you’re married to your questions and expect people to give full, descriptive answers in complete, grammatically correct sentences, you’re going to have a hard time actually listening to people. Some people struggle to articulate themselves. This doesn’t mean they have nothing to teach you — but you might have to ask them to show you examples of what they mean, to demonstrate an action for you, to draw it, or to reason by analogy. Sometimes people’s minds wander — I’ve found it’s better to let them wander for a moment to see where they go, because sometimes their free-associating is hugely informative and stimulating; and then of course you have to be able to gently yet firmly bring them back to the topic if they get lost. This takes lots of practice, and lots of patience. Do you have the time and the temperament? Yeah? Then go on and give it a go.

If you are distracted by lots of other stuff, and can’t concentrate on research, do not attempt

If you’re switching between tasks all day long — troubleshooting, debugging and customer-service fire drills in the morning, paying bills over lunch, preparing for a pitch in the afternoon and building prototypes at night — it’s going to be hard to switch to the kind of open-minded and open-hearted posture that listening calls for. Talking to 10 people will genuinely take only a couple of days, if it’s all you do for those two days. If you can find two straight days just to interview people, please do. It will be so good for you. But if you’re going to try to fit in a couple here and there between all your other responsibilities, it’ll take a week or two to get to ten people. When it’s mixed up into all the other things you’re doing, there are countless heuristic traps you’ll have to avoid falling into. You’ll be tempted to rely on the most memorable interview, or the one who agreed with you the most, or the most recent one. You have to discipline yourself to take thorough notes (or record the interaction) and read through your notes more than once (or listen to the interview again). If you’re not going to invest the time and focus into research, then just be honest about that and don’t try to half-ass it.

If you don’t have something to offer people in exchange for their time, (maybe) do not attempt

There are lots of people who will talk to you for free, and often, these are the best interviews. But if they don’t know you (because, strangers), are busy, are experts, or are remote and would need to coordinate schedules with you to create the opportunity for you to listen and learn, you should have something of value you can offer them. Maybe it’s an exchange of ideas and opinions; maybe it’s cash; maybe it’s a free trial; maybe it’s a connection to someone you know that they want to connect with; maybe it’s a cup of coffee. For quick feedback, a cup of coffee or a few bucks can be enough. But if you want to spend some deep time with people, visit their homes or workplaces, ask them to try something and give you feedback over time, or any deeper method of research, it’ll likely cost you more — sometimes a lot more. You are not entitled to other people’s time or expertise. If you’re not willing to be flexible and respectful, how much do you really want to hear what they have to say?

If you don’t plan to take time to think about what you’ve heard, and don’t have a plan for analysis and synthesis of qualitative data, do not attempt

I will never forget the email I got asking me for advice on tools for collecting and acting on feedback. This very well-meaning product manager told me she’d been sitting in on user interviews, and was taking notes on the feedback directly into a spreadsheet — one cell for each comment — and then was translating those cells into tasks in Jira for the designers and engineers to act on. She was skipping analysis and synthesis; she wasn’t tallying comments to see how common they were; she wasn’t prioritizing the feedback. Her question wasn’t whether that was a good approach to listening or developing insight; her question was whether I had software tools I liked better.

That’s not good.

As a teacher of design thinking, I have to confess that analysis and synthesis is the hardest thing to teach. It’s an epistemological problem — how do we know what we know? How will you know when you see a ‘real pattern’? Let’s start with first principles: Are you even open to any answer to your research question, and any result to your test? Clients of all experience levels struggle with listening and sense-making.

I see clients dismiss information we collect all the time with, “we’ve heard that before”. They think novelty is more important than mass — but when lots of people keep telling you the same thing, it might be a good idea to do something about it.

I see clients who cling to any remotely positive feedback as proof they’re on the right track. When research participants use vaguely positive language but can’t say anything specific about what they like about your product/campaign/design/idea, will you be capable of digging deeper or questioning just how positive they really were? For example, when someone says of an ad or a landing page, “I like it — it’s short and to the point”, you might hear good news, but what I hear is, “It’s fine, I don’t care, let’s move on.” That’s not good news.

I see clients take feedback extremely literally, not understanding the underlying framework or values of the comment. When someone says, “It’s too expensive” I know to dig in to find out if it is truly out of their reach, if being expensive makes it seem more valuable, or if it’s a problem of perceived value. Somewhere between 60 and 80% of what people say can be taken at face value because people are too lazy, most of the time, to lie when there are no clear incentives to do so; the rest of what they say isn’t a lie, it’s just not exactly what it seems. You’ll have to think about what people tell you in context, not just as words typed on a page, pulled from context.

And I see clients forcing customers into overly specific feedback. I heard a story recently about a product manager interrogating a feature request from a customer, “five-whys”-ing her into submission until she gave a specific use case that called for the feature she wanted. The PM, feeling like he’d cracked the case, built a specific feature for the specific use case; meanwhile, the customer still wants the feature she asked for. He extracted information from her, he thinks he did what she asked, but he didn’t really listen to her, or try to take her perspective.

If you don’t know what to do with the data you’re collecting, it will fall down the memory hole and never turn into something useful, or it will lead you to make some bad decisions, or it’ll make you feel like you didn’t learn anything from the exercise, causing you to doubt the validity of research altogether.

At best, you’ll have wasted a lot of people’s time.

Okay, so now what?

The hardest lesson I’ve learned in all this is:

You have to want to do it & you have to want to do it well

We made a mistake separating research from the rest of the enterprise, or making it someone else’s job. And yet this is the world and culture we’re in — most people don’t know how to do research, and not everyone doing research does it well.

The real reason that companies don’t do research is that they don’t want to do it, and they don’t want to invest in doing it well. They don’t think it’s worth it.

Maybe this is because, as a culture, we prefer auteurs and geniuses, who simply know what to do. And if we can’t rely on a charismatic visionary, we’ll find safety in numbers and throw tech at the problem — surveys and analytics, and mobile diaries, and large-number respondent pools.

Maybe it’s because, fundamentally, we don’t trust other people — especially strangers — to tell us the truth. We’ll turn to experts before we turn to just folks. Talking to humans seems so pedestrian, so normal — and yet also so risky.

Maybe we think we know what to do. We already have plans and don’t want to change them.

Maybe we think that smart people should know the answers. We’d rather spend the money on a hot-shot designer, or a famous marketer, or a brilliant developer.

Maybe we are suspicious of research, suspecting it’s there to kill our darlings, make us play it safe, reveal that we are wrong, or make us look foolish. We are suspicious of researchers because they aren’t designers, developers or marketers; we suspect they’re boring, that they like to play it safe, or that they have some bias that will lead the team astray. So we follow our guts and hope for the best.

But let’s say you’re not like that. You know research is valuable, you want to do it well… you just don’t know how.

Here’s how

If you’re an early stage startup or product team with scarce resources, who nevertheless wants and needs to learn, then suck it up, buy Just Enough Research and do what Erika tells you (or try Talking to Humans by Giff Constable, which is also great, and free!). You will not be sorry you did, and you will be better off than if you didn’t. Just remember to be gentle with yourself — because it’s hard to know ahead of time whether it’ll hurt like hell or invigorate you.

As you build the business, don’t lose sight of research — integrate it into your practices, hire researchers or make research part of the job description and the training program for your designers and product or brand managers.

Get a researcher to be an advisor to your business or put one on your board — they’ll bring a breadth of experience and exposure that they can use to help you puzzle through each new challenge.

Your job, as a startup founder, is to seek a business model that scales — so you fundamentally are already in the research business. Embrace that and build for it.

If you’re an established enterprise or late stage startup with some working capital, invest some of it in what I guess I’ll call Good Enough Research. Good Enough Research is fully integrated into your business practices — it’s part of the design and development process, it’s part of the strategic development process, it’s part of the marketing process. Hire researchers to be part of every team, train your people on research practices, scope your schedules and budgets to include research no matter what. It’s not something you throw over the fence at someone else anymore — it’s woven into everything you do.

If you can’t get headcount, get budget. Pay qualitative design researchers, and hire anthropologists and ethnographers to spend time, on an ongoing basis, with your customers. Plan for it, invest in it, take some time with it. Get your partners to spend time with outliers and fringe cases. Ask them to play with ideas and prototypes.

Put what resources you have into the research, even if you don’t do it in-house: involve your people in setting objectives and designing the study, expose everyone in the team to research as it’s happening, develop assets that bring it to life, translate it into inspiring action plans that your team can execute, build on it over time, notice what never changes and what is different this year than last year. Start deep, slow, thick — then you can maintain your learning with lighter and faster loops over time.

Regardless of the approach, the path to growth runs through getting to know your customers as human beings, considering the cultural context for their behaviors and beliefs, staying current with the competition and trends, and constantly questioning your own assumptions.

We do these things not because they are easy, but because they are hard

“Because the purpose of business is to create a customer, the business enterprise has two — and only two — basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business.”

— Peter Drucker

The toolkit that marketing and innovation share is research. If these are the essential functions of the enterprise, then why the hell are you depriving your organization of the tools it needs to perform those functions?

It’s just not something you should do fast and light. It should be one of the most valuable undertakings of the entire enterprise. Research should inform product and marketing; it should shape the business and guide the way to growth. It’s too important to half-ass it. It’s critical to your success. It’s not for amateurs.

Of course it takes time and costs money.

You should still do it, and you should invest to do it well.



Founder of The Difference Engine @DifferenceNGN. I listen to humans so I can help businesses all over the world make important brand & business decisions.