The nightmare of designing surveys

Design Warp
Published in Product Alpaca
5 min read · Jan 16, 2019

Recently I wanted to check my assumptions about which of the X improvements would have the most impact on the client. I intended to ‘simply’ have customers react to them and prioritise them in a survey.

One of my colleagues told me that back in the day he even had a whole course on designing surveys. I, of course, wanted to design the perfect survey. To make up for the missed opportunity to educate myself, I dug into some literature, for some ~science~.

Me, looking for the holy grail of designing a perfect survey

Well… I’m a designer, I know that we humans are unpredictable, unreasonable biological machines. Yet some of the research findings genuinely surprised me.

For example…

The ‘other’ option is a lie

It’s often a dilemma to choose between open and closed questions. On one hand, you do not want your respondents to get bored or to spend valuable effort writing out answers. On the other hand, you do not want to fall into confirmation bias and only get the answers you came up with yourself.

And then a genius came up with the best of both worlds: the holy grail, the ‘Other, namely…’ option. It was perfect: people could choose a pre-existing option or write their own answer. Problem solved!

It turns out this is generally not effective, because respondents tend to keep their answers neatly within the choices that are explicitly offered.

The recommendation is: if you do not really know what answers to offer (for example, when asking why a user abandons their shopping cart), you are better off first rolling out a preliminary, exploratory open question, then analysing and categorising the answers. Only afterwards should you change the question to a closed one with options.
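The two-step workflow above can be sketched in a few lines of Python. The answers, keywords, and category labels here are invented for illustration; in practice you would code the free-text answers by hand rather than with a keyword map.

```python
from collections import Counter

# Hypothetical free-text answers to a preliminary open question,
# e.g. "Why did you leave your shopping cart?"
open_answers = [
    "shipping was too expensive",
    "just comparing prices",
    "checkout asked me to create an account",
    "shipping costs",
    "only browsing, comparing prices",
]

# Illustrative keyword-to-category map (an assumption for this sketch;
# real coding of open answers is usually done manually).
keyword_to_category = {
    "shipping": "Shipping costs too high",
    "comparing": "Just comparing prices",
    "account": "Forced account creation",
}

def categorise(answer: str) -> str:
    """Assign a free-text answer to the first matching category."""
    for keyword, category in keyword_to_category.items():
        if keyword in answer.lower():
            return category
    return "Uncategorised"

counts = Counter(categorise(a) for a in open_answers)

# The most frequent categories become the closed options
# in the next iteration of the survey.
for category, count in counts.most_common():
    print(category, count)
```

The point is only the shape of the process: collect open answers first, let the categories emerge from the data, and only then lock the question into a closed format.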

Sometimes people say ‘yes’ to pretty much anything

Another annoying effect (with the very fancy name ‘acquiescence’) makes the lives of survey designers harder. People tend to try to be nice and polite, which means they like agreeing with statements, no matter what those statements are about. ‘Nurture is to blame for criminal behaviour’, ‘Nurture is not to blame for criminal behaviour’: ‘Agree’ to both.

In one study, researchers took averages across 10 other studies (so meta!). 52% of respondents agreed with statements, while only 42% disagreed with the opposite statements. Across seven other studies, an average of 22% of respondents agreed with both a statement and its reversal, and only 10% disagreed with both!

There has been a whole bunch of studies on this; the main conclusion is that the acquiescence effect averages about 10%. Ten percent of your data can be wrong if you use only yes/no questions!
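To make the arithmetic behind that ~10% explicit, here is the gap between the two rates from the meta-analysis mentioned above. If people answered purely on content, the agreement rate for a statement and the disagreement rate for its reversal would match; the difference is the share agreeing regardless of content.

```python
# Figures cited above: 52% agreed with the original statements,
# while only 42% disagreed with the reversed statements.
agree_with_statement = 0.52
disagree_with_reversal = 0.42

# The gap between the two rates estimates the acquiescence effect:
# respondents who say 'agree' no matter what the statement claims.
acquiescence_effect = agree_with_statement - disagree_with_reversal
print(f"{acquiescence_effect:.0%}")
```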

This happens because, with an agree/disagree, true/false, or yes/no question, the respondent first has to form an opinion, and then decide which of the two categories it more or less falls into. Unfortunately, we are very nuanced beings when it comes to opinions, and very inconsistent when judging where our ‘grey area’ opinions fall within a yes/no dichotomy. And indeed, studies show that when respondents are offered a more nuanced range to choose from (like a Likert scale), the reliability and validity of the responses increase.

It looks like it’s best to avoid yes/no questions altogether.

Me, falling down the rabbit hole of learning how irrational humans are

The primacy effect? The recency effect?

This one is pretty confusing. The primacy effect means that as people read through a list of answer options, they start thinking about the first ones and, by doing so, pay less attention to the rest. In other words, people are more likely to choose the options presented first, because they have already thought about them and they come to mind sooner. BUT. It is equally true that the options presented last are remembered best, because they are still in short-term memory, so people will tend to think of those sooner. Yep… go figure.

The general advice for both of these problems is to mix up the order of answer options between respondents. This, of course, does not apply to rating scales: randomising those would disturb the logical flow of answer options from ‘extremely disagree’ to ‘extremely agree’.
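A minimal sketch of that advice, with invented option lists: shuffle nominal options per respondent, but leave ordered rating scales untouched. Real survey tools handle this for you; this only illustrates the rule.

```python
import random

def present_options(options, is_rating_scale=False, rng=None):
    """Return answer options in presentation order for one respondent.

    Nominal options are shuffled per respondent to average out primacy
    and recency effects; ordered rating scales keep their logical order.
    """
    rng = rng or random.Random()
    if is_rating_scale:
        # Keep 'extremely disagree' ... 'extremely agree' in order.
        return list(options)
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

reasons = ["Price", "Usability", "Support", "Other, namely..."]
scale = ["Extremely disagree", "Disagree", "Neutral",
         "Agree", "Extremely agree"]

print(present_options(reasons, rng=random.Random(42)))  # shuffled per respondent
print(present_options(scale, is_rating_scale=True))     # order preserved
```

In practice a survey tool would also often pin an ‘Other, namely…’ option to the last position even while shuffling the rest; that refinement is left out here for brevity.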

The ‘don’t know’ option

You also don’t want to force people to submit an opinion when they don’t really have one. In a perfect world, people would admit that they don’t have enough information to form an opinion. Of course, they don’t. Unfortunately, adding a ‘don’t know’ option is another well-intentioned improvement that does not seem to work.

According to studies, the ‘don’t know’ option does filter out people who truly have no opinion or knowledge on the subject. However, people who do have an opinion and knowledge often end up choosing the ‘don’t know’ option as well. It also gives an easy way out when a respondent gets bored and wants to get the questionnaire over with as soon as possible (this option is often chosen towards the end of a questionnaire).

Several studies indeed confirm that approximately 3–4% of respondents who choose the ‘don’t know’ answer truly have no opinion or knowledge on the question. But another 11–12% of those who choose ‘don’t know’ turn out, when asked further about the subject, to actually have the knowledge.

A way around this problem, which also filters out the answers of respondents who picked an option without real knowledge of the subject, is to follow up with several more questions probing the respondent’s attitudes and opinions.

References

I have referred to ‘research’ and ‘the studies’ a lot. You can find a very concise, science-based summary of the topics I covered (and many more), including references to the specific studies, in Chapter 9 of the ‘Handbook of Survey Research’: https://books.google.nl/books?id=mMPDPXpTP-0C&pg=PA263
