Accurately ascertaining attitudes: designing unbiased survey questions

Charlotte Woolley
Published in Grinding Gears · 6 min read · Sep 24, 2018

It would be great if we knew exactly what our customers thought so that we could adapt our products and improve our performance. However, we can’t read our customers’ minds, so surveys are a popular way of understanding their attitudes instead.

Yet, if the answers recorded on surveys don’t truly reflect the attitudes of the customers, how can we really know how to improve? This blog post addresses that problem by focusing on the design of unbiased questions. It is the second half of a two-part series on the potential for bias in consumer-based surveys.

Querying the questions

One of the most important parts of questionnaire design is recognising exactly what we want to find out. It sounds obvious, but if a specific answer is desired, a specific question needs to be asked.

Can we question how we ask questions?

Let’s return to the scenario from my previous blog post about bias. The data science team made some dried aubergines and we were interested in improving our recipe based on the views of people who tried them. We gave them out for people to try at a company-wide FreeAgent recruitment event and asked people who tried them if they would be willing to take part in a telephone survey. Once we had some willing participants, we had to figure out the best way of obtaining people’s opinions using a survey!

There are many types of biased question that could creep into the survey, all of which we wanted to avoid. Imagine we start the survey with the following question:

How much did you enjoy the FreeAgent recruitment event and our aubergines?

This is an example of a double-barrelled question: one question that requires two separate answers¹-³. Imagine if people loved the FreeAgent event so much that they forgot to say anything about our aubergines! We could improve this question by asking a separate question about the FreeAgent event and keeping this question simple:

How much did you enjoy our aubergines?

However… This is an example of a free-text question: a question that allows an open response, which can be problematic if we require an answer in a particular format or with a certain level of detail for analysis purposes¹-³. People might use different language to describe the same feelings: enthusiastic Eddie might say the aubergines were ‘exquisitely tasty’ while serious Sally might say they were ‘very good’, which makes their answers difficult to compare. We could improve this question by offering the participants a set of scaled answers, which gives us the level of detail we require on a standardised scale:

How much did you enjoy our aubergines?

  • Somewhat disliked
  • Neither liked nor disliked
  • Somewhat liked
  • Strongly liked

However… This is an example of a leading question with unbalanced answers: a question that encourages the participant to answer in a particular way, by presuming an attitude is true before asking the question and/or by not providing balanced answer options¹-³. Although we (obviously) love the taste of aubergines, we are presuming that everyone else does too! We could improve this question by removing the assumption that the participants enjoyed our aubergines and by allowing them to give answers at both extremes of the scale:

How did you feel about our aubergines?

  • Strongly disliked
  • Somewhat disliked
  • Neither liked nor disliked
  • Somewhat liked
  • Strongly liked

However… This is an example of an ambiguous question: a question that is not clear about the exact piece of information you require and could be interpreted in different ways¹-³. Although it is clear to us that we want to know whether people enjoyed the taste of our aubergines, this question has a multitude of possible interpretations: are we asking if the aubergines were well presented, if they were served with a smile, if making them was a waste of valuable company time or if they tasted nice? We could improve this question by making it explicit that we want to know about the taste of our aubergines:


How did you feel about the taste of our aubergines?

  • Strongly disliked
  • Somewhat disliked
  • Neither liked nor disliked
  • Somewhat liked
  • Strongly liked

However… This is an example of a non-exhaustive question: a question whose possible answers fall outside the set of answers we expect¹-³. People may not wish to answer this question, they might need to hang up the phone halfway through, or they might not have got the chance to try the aubergines because they were so popular! We could improve this question by trying to consider answers that are ‘outside of the box’:

How did you feel about the taste of our aubergines?

  • Strongly disliked
  • Somewhat disliked
  • Neither liked nor disliked
  • Somewhat liked
  • Strongly liked
  • Did not answer
  • NA (Did not try the aubergines, e.g. they were eaten too fast!)
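
With the options now balanced and exhaustive, every response can be coded unambiguously for analysis. Here is a minimal sketch of how that coding might look, assuming responses are recorded as the exact strings above; the use of Python and pandas is purely illustrative and not part of the original survey.

```python
import pandas as pd

# The five substantive points of the scale, in order. 'Did not answer' and
# 'NA' are deliberately left off the scale, so non-responses are treated as
# missing values rather than as extra opinions.
SCALE = pd.CategoricalDtype(
    categories=['Strongly disliked', 'Somewhat disliked',
                'Neither liked nor disliked', 'Somewhat liked',
                'Strongly liked'],
    ordered=True,
)

# Hypothetical responses recorded during the telephone survey
responses = pd.Series(['Somewhat liked', 'Strongly liked', 'NA',
                       'Neither liked nor disliked', 'Did not answer',
                       'Somewhat liked'])

# Anything not on the scale ('Did not answer', 'NA') becomes missing (NaN)
coded = responses.astype(SCALE)

print(coded.value_counts(dropna=False))             # tally of every option
print(coded.cat.codes.where(coded.notna()).mean())  # mean position, 0 to 4
```

Keeping the non-response options off the ordered scale means summary statistics are computed only over people who actually expressed an opinion, while the tally still shows how many answers were lost to each non-response category.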

Imagine we also wanted to know how likely someone would be to tell their friends about our amazing aubergines. Having taken on board all of the pitfalls of problematic questions, we come up with the following:

How unlikely would it be for you to not recommend our aubergines to a friend?

  • Very unlikely
  • Unlikely
  • Neutral
  • Likely
  • Very likely
  • Did not answer
  • NA (Did not try the aubergines, e.g. they were eaten too fast!)

However… This is an example of a double negative question: a question that uses two negative terms, which can confuse the respondent¹-³. People might answer that they were very likely to not recommend our aubergines when in fact they meant they were very likely to recommend them! We could improve this question by removing the negative phrasing:

How likely would it be for you to recommend our aubergines to a friend?

  • Very unlikely
  • Unlikely
  • Neutral
  • Likely
  • Very likely
  • Did not answer
  • NA (Did not try the aubergines, e.g. they were eaten too fast!)

This example illustrates how easy it is to be tripped up on the journey to good survey design. Considering how our questions will be understood when we ask them is vital if we want to get answers that actually mean what we expect them to!

Analysing the answers

Now that we have well-formulated questions, it is good practice to validate the survey, which allows us to have a look at the types of answer people might give. Validation takes place before the actual survey begins and allows us to adjust the original survey design to minimise unforeseen misunderstandings, problems and biases. However, it is impossible to anticipate all of the variation in people’s answers, and even when careful survey design has taken place, there is almost always a requirement for data cleaning before analysis. If you are struggling with data cleaning, you might find my previous two blogs about common errors in survey data and techniques to deal with them useful.
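
To make that cleaning step concrete, here is a minimal sketch of the kind of tidying that free-typed survey answers usually need before they can be matched against the official options. The raw variants and the pandas approach below are illustrative assumptions, not details from the actual survey.

```python
import pandas as pd

# Hypothetical raw answers, as an interviewer might have typed them in
raw = pd.Series(['somewhat liked ', 'STRONGLY LIKED', 'Strongly  liked',
                 'did not answer', 'loved them!'])

# Normalise case and whitespace before matching against the official options
tidy = (raw.str.strip()
           .str.replace(r'\s+', ' ', regex=True)
           .str.lower())

OPTIONS = {
    'strongly disliked': 'Strongly disliked',
    'somewhat disliked': 'Somewhat disliked',
    'neither liked nor disliked': 'Neither liked nor disliked',
    'somewhat liked': 'Somewhat liked',
    'strongly liked': 'Strongly liked',
    'did not answer': 'Did not answer',
    'na': 'NA',
}

cleaned = tidy.map(OPTIONS)  # unrecognised answers become missing (NaN)...
print(raw[cleaned.isna()])   # ...and are flagged here for manual review
```

Flagging unmatched answers for review, rather than silently dropping them, keeps the cleaning decisions visible and auditable.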

References

  1. UCDenver. (n.d.). Examples of bad questions & suggestions of how to fix them. Available from: http://www.ucdenver.edu/academics/colleges/SPA/FacultyStaff/Faculty/Documents/Callie%20Rennison%20Documents/example%20of%20bad%20survey%20questions.pdf. Accessed 17th September 2018.
  2. Choi, B. C. K., & Pak, A. W. P. (2005). A Catalog of Biases in Questionnaires. Preventing Chronic Disease, 2(1), A13.
  3. Kalton, G., & Schuman, H. (1982). The Effect of the Question on Survey Responses: A Review. Journal of the Royal Statistical Society. Series A (General), 145(1), 42–73.
