A Practical Guide to Avoiding Survey Bias

Allison Dickin · Published in Scaling Insights · May 22, 2020

When speaking with people about surveys, I get more questions about preventing survey bias than anything else. While there are a lot of articles out there that name the different types of bias, I’ve noticed that few of them give practical advice on how to prevent bias from influencing survey results, so that’s what this article is intended to do.

Survey bias is the tendency for survey participants to respond inaccurately or not-exactly-truthfully to questions, often (but not always) unintentionally. There are lots of ways that bias can creep into surveys, some of which can be controlled by following best practices and some of which cannot. You can control the degree of bias in your survey by making considered choices about who you survey, how you survey them, and how you design your survey questions. Since this article is part of a series about writing surveys (see the first article here), I’ll focus on the types of bias that can be controlled through well-designed survey questions.

Rule #1: Be Switzerland

One of the most straightforward ways of preventing survey bias is to keep your question wording neutral. Be Switzerland.

Often, in an attempt to inject excitement into surveys, we accidentally nudge participants in a particular direction. While it is important for survey questions to feel human and friendly, this should not come at the expense of high-quality results. Keep question wording neutral and provide only the information participants need in order to answer.

Here are some examples of questions that could create bias, and how to fix them:

  • “We love hearing about great experiences. How would you rate your experience?” In this question, the first sentence makes it a little too clear what the questioner is hoping to hear. This approach may bias people towards giving a more positive rating, or discourage people with negative experiences from answering at all. It would be better to leave the first sentence out altogether, even if you lose some of the brand glow from the survey.
  • “How would you rate your experience on our new and improved website?” Here, the question injects bias by stating that the website is improved, a presumption that is likely to lead participants to respond more positively than they would otherwise. A better approach would be to leave the words ‘and improved’ out of the question. Even better, do a pre/post test: before releasing the new website, ask visitors to rate their experience with the current (old) website. Then, after you launch the new site, ask the exact same question to visitors on the new site and compare the results (see the sketch below).
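To make that pre/post comparison concrete, here’s a minimal sketch in Python. It assumes you’ve exported 1-5 ratings from the old and new sites as plain lists; all of the data here is made up for illustration.

```python
# A minimal pre/post comparison, assuming 1-5 ratings exported as lists.
from statistics import mean, stdev

old_site_ratings = [4, 3, 5, 2, 4, 3, 4]  # hypothetical responses, old site
new_site_ratings = [5, 4, 4, 3, 5, 4, 5]  # hypothetical responses, new site

def summarize(label, ratings):
    """Print sample size, mean, and spread for one group of ratings."""
    print(f"{label}: n={len(ratings)}, mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

summarize("Old site", old_site_ratings)
summarize("New site", new_site_ratings)

# The difference in means is your estimate of the redesign's effect.
print(f"Difference in means: {mean(new_site_ratings) - mean(old_site_ratings):+.2f}")
```

Because both groups answered the exact same question, any difference in the means is much easier to attribute to the redesign rather than to the wording.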

Rule #2: Mind Your Answer Scales

If the first way to prevent bias in surveys is by keeping your questions neutral, the second way is by keeping your answer scales neutral (or, at least, balanced). You want to make sure that your scales give as much weight to the positive side as the negative side, so that your participants will have an option available to them, regardless of their experience.

Here’s an example of what *not* to do that we at UserLeap recently came across on a very popular video streaming platform:

[Screenshot: a question asking viewers to rate their experience with an advertisement, with answer choices ranging from ‘Absolutely outstanding’ down to a single negative option.]

Setting aside the strangeness of the question (has *anyone* ever had an ‘absolutely outstanding’ experience with an advertisement?), the answer scale is extremely unbalanced, with 4 answer choices framed in the positive, and only 1 framed negatively. If you’re looking to put your thumb on the scale for your report back to leadership, then this might be a way to do it (haha, but really: don’t). However, if you want to accurately gauge the user experience, this is a terrible method of doing so.

In this case, a more traditional scale would take one of two forms:

  1. A bipolar scale would have two positive responses, two negative responses, and one neutral (e.g., Excellent, Good, Okay, Bad, Terrible)
  2. A unipolar scale would use the same word or phrase to describe the experience across the scale, covering the full range of feelings with descriptive adverbs (e.g., Extremely good, Very good, Somewhat good, Not very good, Not at all good)

To take a more typical example, if you were creating a scale for users to respond to a question about how satisfied they are with your product or service, your scale options would look like this:

  1. Bipolar: Very satisfied, Somewhat satisfied, Neutral, Somewhat dissatisfied, Very dissatisfied
  2. Unipolar: Extremely satisfied, Very satisfied, Somewhat satisfied, Not very satisfied, Not at all satisfied

Both bipolar and unipolar scales are acceptable, so it’s really up to you to choose which version to use. The one key is to use the same type of scale consistently throughout your survey (as much as possible).
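One low-tech way to enforce that consistency is to define each scale once and reuse it everywhere. Here’s a minimal sketch in Python; the scale labels come from the examples above, but the survey structure and field names are purely illustrative, not any survey tool’s actual API.

```python
# Define each balanced scale once and reuse it, so questions stay consistent.
BIPOLAR_SATISFACTION = [
    "Very satisfied", "Somewhat satisfied", "Neutral",
    "Somewhat dissatisfied", "Very dissatisfied",
]
UNIPOLAR_SATISFACTION = [
    "Extremely satisfied", "Very satisfied", "Somewhat satisfied",
    "Not very satisfied", "Not at all satisfied",
]

# A hypothetical survey: every question references a shared scale
# rather than retyping (and possibly mistyping) its own options.
survey = [
    {"text": "How satisfied are you with the product overall?",
     "options": BIPOLAR_SATISFACTION},
    {"text": "How satisfied are you with the onboarding flow?",
     "options": BIPOLAR_SATISFACTION},
]

for question in survey:
    print(question["text"], "->", " / ".join(question["options"]))
```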

Rule #3: Keep Your Cards Close to Your Vest

Another way we can unintentionally bias survey questions is by being a little *too* straightforward about what we want to learn. It’s important to provide users with enough context to answer your questions, but beyond that, the less they know about your intentions, the better. This is true for any survey question, but it’s especially relevant in two cases:

(a) for screener questions (questions you ask at the beginning of a survey to determine what questions to show later on, or whether someone qualifies for the full survey), and

(b) for surveys where you are offering participants an incentive for responding (which could encourage them to try to game the system to make sure they qualify).

A simple example of how bias can creep in by providing information is a basic awareness question. Maybe you want to know whether users are aware of a particular feature your product offers, so you ask them a simple yes/no question: “Are you aware that UserLeap automatically analyzes open-text responses for you?” On the surface, there’s nothing obviously wrong with this question, but if this is the first question in your survey, you’ve probably biased the responses.

Why? Since you’ve called out this one specific feature, you’re likely to end up with a sample that over-represents people who are either familiar with the feature or interested in it. Others are more likely to assume the survey isn’t relevant to them and move on without answering. The consequence is that you’ll probably end up with inflated awareness numbers, and a more positive picture of user sentiment about this feature than a more representative sample would provide (i.e., you’re not just biasing the results to this one question, but likely skewing the overall participant pool and biasing the rest of your results as well).

The solution to this problem is called ‘blinding,’ and simply requires you to ask about participants’ awareness of several features without giving away which one(s) you are really interested in until later in the survey. In this case, you might ask, “Which of these UserLeap features are you aware of? Select all that apply.” (Automated open text analysis, In-product surveys, Survey template gallery, Event-based survey targeting, None of these). By asking the question this way, you’ll probably get lower (and likely more realistic) awareness numbers, and your responses to other questions will be more in line with your overall user base than if you had moved ahead without blinding your question.
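Here’s a minimal sketch, in Python, of how you might score responses to a blinded multi-select question like this one. The feature names mirror the example above; the response data is invented.

```python
# Score a blinded multi-select awareness question: each response is the
# set of features a participant selected (empty set = "None of these").
from collections import Counter

FEATURES = [
    "Automated open text analysis", "In-product surveys",
    "Survey template gallery", "Event-based survey targeting",
]

responses = [
    {"In-product surveys"},
    {"In-product surveys", "Survey template gallery"},
    set(),  # chose "None of these"
    {"Automated open text analysis", "In-product surveys"},
]

counts = Counter(feature for selected in responses for feature in selected)
for feature in FEATURES:
    print(f"{feature}: {counts[feature] / len(responses):.0%} aware")
```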

Rule #4: Be Mindful of Question Order

The order in which you ask your questions can affect the responses to your survey, either by giving participants information they otherwise wouldn’t have had or by prompting them to shift their mindset.

For example, let’s say you want to know how people rate your product and what can be improved about it. If you ask them what you can do to improve before you ask for their ratings, you are likely to get lower ratings, because you’ve just prompted users to think about all the things they don’t like about your product. The same would be true in the opposite scenario: if you ask customers what they like best about your product before asking them to rate it, you’re going to get better ratings.

To prevent bias from question order, use a funnel approach, in which you ask any general/overall questions first before digging into the details with more specific questions. It’s also helpful to do a full review once you’ve drafted your survey and consider whether questions at the beginning could bias responses to questions later on.
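As a rough illustration, you can treat the funnel as a sort: tag each question with how specific it is and order the survey from general to specific. This is just a sketch; the questions and specificity scores are made up.

```python
# Funnel ordering: sort questions from general (0) to specific (higher).
questions = [
    {"text": "What could we improve about the dashboard?", "specificity": 2},
    {"text": "How satisfied are you with the dashboard?", "specificity": 1},
    {"text": "How would you rate the product overall?", "specificity": 0},
]

for q in sorted(questions, key=lambda q: q["specificity"]):
    print(q["text"])
```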

Sometimes you’ll find that whatever order you choose, you risk biasing some questions with others. In that situation, put your most critical questions first and move lower-priority questions later in the survey. Or consider breaking it into two separate surveys with different participants.

Rule #5: Check Your Assumptions

It’s critical to keep in mind that when writing survey questions, you are operating with your own biases and assumptions that you may not be consciously aware of, and these biases can easily bleed into the survey itself. For this reason, a key step when drafting surveys is to figure out whether you’ve injected your own bias into the survey and, if so, to remove it.

How do you do this? It can be difficult to simply stop what you’re doing and identify your biases, but it gets a little easier once you’ve got something down on paper. Take a look at each question you’ve written and ask yourself whether it’s making any assumptions or taking anything for granted.

For example, maybe you have a few potential new product features in mind and you want to know which to build first. You write the question, “Which of the following features would be most valuable to you? (a ‘Favorites’ option to keep track of items you love, a ‘Save for Later’ option in your shopping cart, a ‘Share’ option that lets you share items with others)”

Notice any assumptions? Unless you have data from another source (which is entirely possible), you’ve assumed that at least some of these features would be valuable to users. But what if none of them are? You’ve asked them the question, so they’re going to give you an answer, and you could take action based on the results. But you may be missing something critical.

One possible solution would be to include a ‘None of these’ option, or an option for users to select ‘Other’ and write in what they would most like you to focus on. Another solution would be to ask users to rate each feature on a scale from “Not at all valuable” to “Extremely valuable” instead. This would give you an overall ranking as well as a sense of the perceived value of each feature.
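If you go with the rating approach, deriving the ranking is straightforward. Here’s a minimal sketch, assuming each feature was rated on a 1 (‘Not at all valuable’) to 5 (‘Extremely valuable’) scale; the ratings below are invented.

```python
# Rank features by mean perceived value; the absolute means also tell you
# whether *any* of the features looks valuable, not just which ranks first.
from statistics import mean

ratings = {
    "Favorites": [4, 5, 3, 4, 2],
    "Save for Later": [3, 2, 4, 3, 3],
    "Share": [2, 1, 3, 2, 2],
}

for feature, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{feature}: mean value {mean(scores):.2f} / 5")
```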

While this is probably the most nebulous step in preventing survey bias, it is possibly the most important one, so don’t leave it out. It will get easier the more you do it, I promise!

Summing it up

I hope this article gave you some practical tips for preventing bias when you’re writing your own surveys. I’d love to hear what you thought, what other questions you have, or what else you’d like me to write about. Tell me here!

This article first appeared on the UserLeap blog, where we post regularly about using customer insights to build better product experiences. UserLeap is the first real-time customer insights platform. We help software companies use rapid customer insights to build better experiences.
