Writing Good Surveys Is Harder Than You Think Pt 3

Mimi Turner
Published in SEEK blog
4 min read · Jun 6, 2016

Part 3 of 4: Constructing the questions

This is the fun part! There are several things you need to take into account when constructing questions.

Types of questions and outputs

There are open and closed questions, and different types of closed questions.

Open questions get the respondent to provide a text response. These are much harder to analyse than closed questions, so use them sparingly.

Closed questions provide the respondent with options to choose from. Closed questions include:

  • yes/no questions
  • single-choice questions (select one response that best applies)
  • multiple-choice questions (select all that apply)
  • rating scales (e.g., Likert scales — strongly disagree to strongly agree)
  • ranking questions (e.g., rank these options 1 to 5, with 1 being your favourite and 5 being your least favourite)

Use the type of question that makes the most sense for what you are trying to find out.

Most good survey tools provide a range of question types to suit what you need to ask.

Single-choice vs. Multiple-choice questions

The difference between these two might seem obvious, but they are still, surprisingly, often confused.

A single-choice question means you want respondents to only select one response. You need to ensure that multiple responses don’t realistically apply, otherwise you’ll need to make it a multiple-choice question.

Make sure the question text reflects the type of question it is, so the respondent doesn’t have to think, e.g., “(select one)” or “(select all that apply)”.

Double-barrelled questions

This is a tricky one! A double-barrelled question is one that asks two things at once. This could be as innocent as using two words rather than one to describe something. Double-barrelled questions are bad because they produce ambiguous results: respondents may have interpreted the question differently, and it’s unclear what you’re measuring.

Here are some examples:

Incorrect: Which design was the most simple and effective?
This is asking two things (simple, effective), not one.

Correct: Which design was the simplest? Which design was the most effective?

Incorrect: How did you feel about using Products A and B?
This is also asking two things. People may have felt differently about using each of the products.

Correct: How did you feel about using Product A? How did you feel about using Product B?

Incorrect: Were you able to find the article quite easily?
This qualifies the question and pre-empts the respondent’s response.

Correct: Were you able to find the article easily?
This lets the respondent qualify how they felt for themselves.

Make sure you only cover one concept in each question, without any qualifications.

Limiting preference responses

If you are asking respondents to select all the options they like or dislike, you might want to consider limiting their responses to the top three likes or dislikes. This will distill the responses down to the most meaningful ones, and make analysis easier for you.

Mandatory or not?

It’s important that the respondent knows which survey questions they must complete, and which are optional. Some survey tools will mark mandatory questions with an asterisk to make it crystal clear they are required.

Just ensure, when you design the survey, that a respondent can’t skip a bunch of important questions, otherwise it won’t be very useful.

Aspects to include

If you’re using a question with a scale, put the negative option on the left and the positive option on the right, e.g., strongly disagree through to strongly agree. This matches the way people process information.

Always provide a neutral option on a scale — a neutral response is a valid response! If you force respondents to answer positively or negatively, you are skewing the data. If a respondent consistently picks the neutral response, then they’re either not taking the survey seriously or they’re not very opinionated. Either way, it’s probably a good idea to exclude them from your analysis.

Similarly, you should consider if/when a “don’t know” or “not applicable” option is appropriate to include. Don’t assume all the respondents will fit neatly into the response options you have come up with.

Providing an ‘other — please specify’ option is also important to capture other responses you may not have thought of.

Giving respondents a text box to provide any additional feedback is always a good idea as it lets people have their say, and can reveal what’s really going on for them.

Instructional and other text

It’s important to keep instructional text succinct as respondents will lose interest if you present them with large chunks of text. At the same time, however, you need to be friendly and approachable in your writing, to encourage participation.

Make sure you use everyday language and only use jargon if you really have to (while offering a definition of said jargon). If your respondents don’t understand the questions you’re asking, this will skew the data.

Try and keep the actual questions short — this makes them easier to understand and answer.

Leading questions

Leading questions are worded in a way that can bias or limit the respondent’s thinking. For example:

Leading question: Did you find the message annoying?

Non-leading question: How did you find the message?

In this example, the non-leading question will generate a much greater diversity of responses and reveal what matters to the respondent.

Sometimes you might want to ask a leading question to meet a specific objective, but on the whole they should be avoided. Leading questions can be hard to spot, so get someone else to review your questions.

Part 4 talks about other aspects you might want to consider.
See also:
Part 1 — Making a plan
Part 2 — Structuring the survey

Highly experienced UX Researcher, ReOps Specialist and Research Coach. Loves making things work. Loves sorting, writing, singing and problem-solving.