The 5 Mistakes That Are Ruining Your Survey Data

Allison Dickin
Published in Scaling Insights
Jul 28, 2020

Surveys have a bad rap. Some people think the data from surveys isn’t useful, others believe they take too long to be worthwhile, and others believe survey responses are too biased to use.

The reality is that while surveys have their challenges, they can be extremely valuable, lightning fast, and reflect the real experiences of your users… if you do them right. In this article, I’ll explain where people go wrong with surveys and provide some fixes to increase response rates and generate faster and more useful results. Follow these tips and surveys will be your new best friend.

1. Surveying users at the wrong time

What it is

For survey insights to be most valuable, participants must receive the survey while the questions you’re asking are still relevant to them. Unfortunately, this doesn’t always happen. For example, maybe you want to survey new users about their onboarding experience with your product, so you send an email survey to everyone who completed onboarding in the past 3 months. The problem? Users who onboarded 2–3 months ago are not going to have detailed memories of their experience and will provide less specific or helpful feedback, leaving you with fewer insights to act on.

Why it happens to the best of us

While it’s possible to survey the wrong users simply through poor planning, we often do it because we have limited ability to target the right users, or we have a small base of users meeting the criteria we care about. In the onboarding example, we may not have enough users completing onboarding each week to generate enough survey responses, or we may not have a tool to automate sending surveys when onboarding is completed, so we’re left with manually sending batches of survey invites as often as we can manage.

How to do it better

The best way to survey users about a specific experience or about specific features of your product is to ask them questions in-context when it’s most relevant to them, and when the memory of the experience is fresh in their minds. This leads to higher response rates and more specific and actionable feedback.

Surveying users in-context often means surveying them in-product, and this is where you will get the highest response rates (as long as you limit your survey to a few quick questions). But in-context surveys can also be administered by email if in-product isn’t an option for you.

In the onboarding example, an in-product survey would be great, but email surveys can work too, as long as the email is sent close in time to the relevant experience. Even sending onboarding surveys monthly is a big improvement over a 3-month cycle; weekly would be even better; and best of all would be triggering an email survey as soon as you log that onboarding has been completed. This can be done with tools like UserLeap, which offer automated email survey capabilities in addition to in-product surveys.
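
To make that trigger concrete, here’s a minimal sketch of what wiring a survey send to an onboarding-completed event might look like. The event handler, the send_survey_email helper, and the 48-hour freshness window are all hypothetical stand-ins for whatever your app and email tooling actually provide; the point is simply that the survey goes out right after the experience instead of in a monthly batch.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: send the survey when onboarding completes,
# not on a monthly batch schedule. send_survey_email is a stand-in
# for whatever email or survey tool you actually use.

RECENCY_WINDOW = timedelta(hours=48)  # only survey while the memory is fresh

def send_survey_email(email: str, survey_id: str) -> None:
    print(f"Sending survey {survey_id} to {email}")  # replace with a real API call

def on_onboarding_completed(user: dict) -> None:
    """Called by your app whenever a user finishes onboarding."""
    completed_at = user["onboarding_completed_at"]
    if datetime.utcnow() - completed_at <= RECENCY_WINDOW:
        send_survey_email(user["email"], survey_id="onboarding-feedback")

# Example: a user who just finished onboarding gets surveyed immediately
on_onboarding_completed({
    "email": "new.user@example.com",
    "onboarding_completed_at": datetime.utcnow(),
})
```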

2. Surveying the wrong users

What it is

We often survey the wrong users by asking our questions of a general audience, without targeting those who are most relevant to the questions we want to ask. For example, a colleague of mine recently received a survey asking him to answer questions about various features of a product he had recently signed up for, even though he hadn’t used most of those features yet. He probably didn’t respond, but if he had, his answers would have been pretty meaningless.

As a result of targeting the wrong users, you’ll often see low response rates leading to slow data collection, as users are going to be less interested in taking surveys that don’t seem specific to their personal experiences. Your results may also be less actionable if your data is muddled by responses from users who aren’t relevant to your questions.

Why it happens to the best of us

Surveying the wrong users may happen despite our best intentions, when we have a limited user base to work with (leading us to try to get as many responses as possible, even from less relevant users), limited capacity to identify the right users to survey (leading us to send to a wide pool and rely on users to opt in correctly), or limited capacity to survey frequently (leading us to squeeze a few targeted questions into a larger survey covering other topics).

How to do it better

You can minimize the risk of surveying the wrong users by carefully choosing who you send your survey to and only targeting those who are best positioned to answer your questions (e.g., those who have used the features you are asking about). If you are unable to do this and must rely on users to correctly self-report their experiences, take some time to review who actually responded, so you know which users’ perspectives you’re working with when using the results.

If issues like a small user base or limited targeting capacity come into play, then an in-product survey tool can be helpful here as well, enabling you to ask relevant questions based on users’ actual behavior. In-product survey tools can be effective with small samples because they generate much higher response rates than email surveys, giving you quick insights from the right users, even if they are in limited supply. For example, one UserLeap customer recently ran a survey focused on image-editing features. They displayed the survey in-product immediately after a user edited an image and identified multiple concrete opportunities to improve the experience in less than a day.
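
To illustrate (this is not UserLeap’s actual API, just a hypothetical sketch), behavior-based targeting can be as simple as checking recent events before deciding whether a user should see a microsurvey. The event names, the has_recent_event helper, and the survey name below are all made up for the example.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of behavior-based targeting: only show the
# image-editing survey to users who actually edited an image recently,
# and don't re-ask anyone who has already responded.

def has_recent_event(events: list[dict], name: str, within: timedelta) -> bool:
    cutoff = datetime.utcnow() - within
    return any(e["name"] == name and e["at"] >= cutoff for e in events)

def should_show_survey(user: dict) -> bool:
    if "image-editing-feedback" in user.get("responded_to", set()):
        return False  # already answered this survey
    return has_recent_event(user["events"], "image_edited", within=timedelta(minutes=5))

# Example: this user edited an image two minutes ago, so they're a relevant target
user = {
    "events": [{"name": "image_edited", "at": datetime.utcnow() - timedelta(minutes=2)}],
    "responded_to": set(),
}
print(should_show_survey(user))  # True
```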

3. Asking the wrong questions

What it is

Sometimes we ask complex or confusing questions; other times we ask questions that inadvertently bias our results. But people often find themselves believing surveys aren’t useful after they’ve asked questions that couldn’t provide the data they need to make decisions. This often happens when we ask users what they want instead of asking them what their problems are.

For example, if you ask users if they’d like you to build a certain feature and 75% of them say yes, it might seem like a pretty clear result, but it doesn’t actually tell you all that much. It doesn’t give you much insight into the value the feature would provide your users and the ROI you might expect to see if you were to build it. It would be more effective to ask users about the problem your feature is intended to solve: how big of a problem is it? And how valuable would it be to them to have that problem solved?

Why it happens to the best of us

It’s deceptively difficult to write survey questions well. Anyone can write a survey, but not everyone is a survey expert; that said, survey expertise is largely an exercise in a particular sort of common sense. Errors in survey design are often a consequence of an unconscious assumption that our users think like us and have the same information that we do. They don’t.

How to do it better

To write better survey questions, focus only on topics that users are experts in: their own experiences, pain points, needs, and goals. For more guidance, follow the 3 basic rules for writing survey questions, and review your questions to make sure you’re avoiding bias. Better questions will give you better data and more actionable, easier-to-interpret results.

4. Asking too many questions

What it is

There’s so much we want to learn, and it’s hard to know which questions are going to be useful (if we haven’t put in the effort to ask good questions; see mistake #3 above), so we end up packing everything into one long survey. The problem? Long, repetitive surveys yield much lower response rates, potentially from a biased group of respondents, and data collection will take much longer. Not only that, it’ll take much more time to sift through all the results and pull out useful insights.

Why it happens to the best of us

We often send longer surveys because limited tooling makes surveying users cumbersome, so it becomes something we can only do a few times a year, and we try to cram everything into each survey we do send.

How to do it better

The answer here should not be a shocker at this point in the article: shorter “microsurveys” with better, more targeted questions, sent to users in-context, will yield higher response rates and more actionable results. Another benefit of the microsurvey approach is the ability to iteratively build on results. Start with a few questions, analyze the results, and dig deeper with the next microsurvey, all within a matter of days.

5. Failing to fully analyze responses

What it is

In our rush to act on the results of surveys, we often don’t take the time to fully analyze the responses. Maybe you asked a few open-ended questions, but since you didn’t have time to manually tag each response, you just eyeballed it and hoped for the best. Or maybe you looked at the overall breakdown of responses, but didn’t dig into key sub-segments of users who had unique responses.

Why it happens to the best of us

This can be unavoidable without the right tooling. Time is rarely on our side when it comes to user research, and we’re often forced to sacrifice rigor in order to stay on track with deliverables.

How to do it better

Open-ended analysis in particular can be extremely valuable, opening your eyes to user needs or use cases you had no idea existed and uncovering exciting opportunities. But manual analysis of open-ends is laborious and time-intensive, unless you have tools like (you guessed it) UserLeap to do the analysis for you. Even without best-in-class tools, though, it is worthwhile to spend at least some time reviewing open-ended survey responses. If you commit to spending one hour, or even 30 minutes, reviewing the responses to each survey you run, even if you don’t code them manually, I promise you will find deeper, more valuable insights than you can get from the numbers alone.
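
Even a lightweight pass can surface themes. Here’s a rough sketch of keyword-based tagging for open-ended responses; the themes, keywords, and sample responses are invented for illustration, and real coding of responses is more nuanced, but even crude grouping like this beats eyeballing a spreadsheet.

```python
from collections import Counter

# Hypothetical sketch: bucket open-ended responses by keyword so you can
# see roughly which themes come up most often. The themes and keywords
# below are illustrative; adapt them to your own product and survey.

THEMES = {
    "pricing": ["price", "cost", "expensive", "billing"],
    "performance": ["slow", "lag", "crash", "loading"],
    "onboarding": ["confusing", "setup", "tutorial", "getting started"],
}

def tag_response(text: str) -> list[str]:
    lowered = text.lower()
    tags = [theme for theme, keywords in THEMES.items()
            if any(keyword in lowered for keyword in keywords)]
    return tags or ["other"]

responses = [
    "Setup was confusing and the tutorial didn't help",
    "Too expensive for what it does",
    "Pages are slow to load and the editor crashed once",
]

counts = Counter(tag for response in responses for tag in tag_response(response))
print(counts.most_common())
```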

This article first appeared on the UserLeap blog, where we post regularly about using customer insights to build better product experiences. UserLeap is the first real-time customer insights platform. We help software companies use rapid customer insights to build better experiences.
