What to look for in a good survey

Natalie Herd
Mind Tap Research

--

At Mind Tap Research, our surveys are informed by the latest scientific research on survey design and behaviour change. We’re passionate about this research and keen to share our expertise. Over the years, we’ve unfortunately come across far too many poorly designed surveys, so we’ve decided to share some tips about what to look for in a good online survey.

Poor survey design not only risks producing unreliable and misleading data; at best it confuses respondents, and at worst it frustrates them and leaves a bad impression of your brand. Well-designed surveys, on the other hand, are easy and satisfying for respondents to complete, accurately measure the constructs they are intended to measure, and generate reliable data that is transparent and easy to interpret. There is no perfect generic survey; each survey needs to be customised to gather meaningful responses on the topic of focus. That said, there are best practices informed by research into optimal survey design.

Questions should not be double-barrelled.
For example, the question “How satisfied are you with the battery life and charging time of your mobile phone?” should be separated into two questions: one asking about battery life, and another asking about charging time. Similarly, the question “How often do you ride a bike to work for environmental reasons?” should be two separate questions: one about the frequency of riding to work, and another about motivations for doing so.

Avoid leading questions.
Questions should not lead respondents to answer in a particular way by providing clues as to which response option is correct or most desirable. For example, the question “Have you ever put pressure on a GP to prescribe you antibiotics?” prompts respondents to answer ‘no’ because of the negative connotations of ‘putting pressure’ on someone. Better phrasing would be: “Have you ever asked a GP to prescribe you antibiotics?”

Avoid using absolutes in questions.
For example, the question “To what extent do you agree or disagree that all children should be vaccinated?” does not take into account that a small minority of children — such as those with compromised immune systems — are unable to be immunised. As worded, it is unclear whether pro-vaccination respondents should agree or disagree. Removing the word “all” eliminates this ambiguity. Another example of using absolutes is the question “Do you floss your teeth every day?” Some respondents might feel that flossing most days is enough to answer ‘yes’, whereas others might assume that occasionally failing to floss means they have to answer ‘no’. Better wording would be: “Do you typically floss your teeth each day?”

Avoid using ambiguous or technical words.
For example, in behavioural science we often use the terms ‘enablers’ and ‘barriers’ for a given behaviour. Whilst it might be okay to use the word ‘barriers’ in a survey question, we would never use the term ‘enablers’, as its meaning is ambiguous for anyone not familiar with the field of behavioural science.

Avoid response scales with too many points (e.g., 100).
This many points is excessive and respondents typically gravitate to multiples of 10 anyway, making most of the points on a scale like this redundant. Research has shown that ratings tend to be more reliable and valid when five or seven points are offered (Krosnick & Fabrigar, 1997; Krosnick & Presser, 2010; Lissitz & Green, 1975).

Ensure each point on a response scale is labelled.
Using word labels on each point of a scale helps to ensure that response options are interpreted in the same way by all respondents. Numerical labels are relatively meaningless on their own, and scales containing both numerical and word labels create confusion, add to the cognitive burden for respondents, and do not produce superior data. When developing word labels for response scales, ensure that the words used are equally spaced in meaning.

Ensure all respondents are able to answer each of the questions they are asked.
If a particular question is not applicable, then a “Not applicable” or “Don’t know” response option should be provided, or better still, programming logic should be used so that the question is skipped for that respondent. “Don’t know” options should only be provided when the question is assessing knowledge, or when it is reasonable that the respondent might not be able to provide an opinion on a given topic (e.g., due to unfamiliarity with the topic). Questions that are potentially sensitive should include an “I’d rather not say” option so that respondents can opt out of answering them.
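To illustrate the skip-logic idea, here is a minimal sketch in Python, assuming a simple question-by-question routing function; the question IDs and routing rules are hypothetical and not tied to any particular survey platform.

    def next_question(current_id, answer):
        """Return the ID of the next question to show, skipping
        questions that do not apply to this respondent."""
        if current_id == "owns_car":
            # Only ask about driving to work if the respondent owns a car;
            # everyone is asked about cycling to work.
            return "drive_frequency" if answer == "Yes" else "bike_frequency"
        if current_id == "drive_frequency":
            return "bike_frequency"
        return None  # end of this block of questions

    # Example: a respondent without a car never sees the driving question.
    print(next_question("owns_car", "No"))  # -> bike_frequency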

Minimise acquiescence response bias by using item-specific rating scales.
Research has shown that people have a general tendency to provide affirmative responses to survey items. There are a number of reasons for this bias (Saris et al., 2010). Firstly, people are often conditioned to be polite and avoid social friction. Secondly, respondents might think that affirmative responses are the correct answers the researchers are looking for. Lastly, some respondents take shortcuts to complete the survey as quickly as possible, giving little thought to their responses or not even reading the questions properly. Agree/Disagree rating scales, rather than item-specific response options, exacerbate acquiescence response bias and are also more cognitively burdensome for respondents. For example, try answering these two different versions of the same question: 1) “To what extent do you agree or disagree that your health is excellent? (completely agree, somewhat agree, neither agree nor disagree, somewhat disagree, completely disagree)”; 2) “How would you rate your current health? (excellent, very good, good, fair, poor)”. As you can see, the second version is more straightforward to answer and generates more reliable responses.

Minimise response order effects by randomising response options where appropriate.
There are two main types of response order effects: primacy effects (choosing response options presented near the beginning of a list) and recency effects (choosing response options presented near the end of a list). When the response options are categorical (as opposed to a rating scale), primacy effects predominate for visual surveys and recency effects predominate for oral surveys (i.e., when the list of response options is read out to a respondent). Randomising the order of categorical response options across respondents counterbalances these effects, so they do not bias the aggregate data.
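As a rough illustration, here is a small Python sketch of per-respondent randomisation of categorical response options; the respondent-ID seeding and the convention of anchoring an “Other” option last are illustrative choices on our part, not requirements drawn from the research above.

    import random

    def randomise_options(options, respondent_id, anchored=("Other",)):
        """Shuffle categorical response options into a different, but
        reproducible, order for each respondent; anchored options
        (e.g., "Other") stay at the end of the list."""
        rng = random.Random(respondent_id)  # stable order per respondent
        movable = [o for o in options if o not in anchored]
        fixed = [o for o in options if o in anchored]
        rng.shuffle(movable)
        return movable + fixed

    options = ["Price", "Quality", "Brand", "Convenience", "Other"]
    print(randomise_options(options, respondent_id="R-0042"))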

Use realistic time periods for recall and future prediction questions.
When asking respondents to report on past behaviour, limit recall to two weeks for behaviours that people often perform frequently (e.g., exercise-related behaviours, driving, television viewing, etc.). For behaviours that are performed very regularly, such as eating, the recall period may need to be even shorter. Recall periods for infrequently performed behaviours, such as visiting the dentist, should be much longer (e.g., 12 months). Similarly, when asking respondents to make predictions about the future, use sensible time frames that people can realistically envisage.

Disable the “Back” button on surveys.
When designing a survey, questions should be ordered so as not to lead respondents to answer in a specific way. For example, unprompted awareness of a product or campaign should be measured before any information about that product or campaign is revealed. It follows, then, that respondents should not be allowed to go back through the survey and alter their responses. There are, of course, some surveys in which later questions do not provide clues as to how to answer earlier ones, and it is only in these surveys that the “Back” button should be enabled.

Lastly, don’t make surveys too onerous, particularly if respondents are not being compensated for their time.
You can increase engagement with your survey and minimise the drop-out rate by limiting the number of onerous tasks, including:

  • Open-ended questions;
  • Rank order questions;
  • Choice task questions — e.g., choice-based conjoint analysis;
  • Overly wordy questions or statements; and
  • Questions that require calculations or cognitively demanding recall.

Originally posted on: https://www.mindtapresearch.com/news/2017/10/11/what-to-look-for-in-a-good-survey

--


Natalie Herd
Mind Tap Research

Founder of Mind Tap Research. PhD in behavioural science. Online market and social researcher.