How to Create Effective Research Surveys

Part One: The Questionnaire

CHI KT Platform
Apr 10

By Alexie Touchette

A research survey is a versatile method that can be used at many stages of the Knowledge-to-Action Framework [1]. From assessing the research-to-practice gap [2,3,4], to identifying barriers and facilitators [5], to evaluating knowledge use [6], surveys are an effective strategy through which researchers can systematically gain insight into a wide range of topics [7].

In this first of two posts, we provide key definitions, advantages and disadvantages of surveys, advice for creating questions, and some tips on the do’s and don’ts of an effective survey. In the second post, we’ll provide advice and examples for creating an effective survey layout, including the use of visual elements.


WHAT IS A SURVEY? WHAT IS A QUESTIONNAIRE?

There are many definitions and descriptions of what constitutes a survey. Surveys are used to systematically collect quantitative information from a relatively large sample, taken from an even larger population [8].

However, an important distinction must be made between a survey and a questionnaire. These two terms are often used interchangeably, but mean different things. Let’s clear it up: a survey is the overarching process of collecting, analyzing, and drawing conclusions, while a questionnaire is the set of questions used to do so [7].

Components of an effective survey question

Generating clear and straightforward questions and formatting the survey questionnaire in a way that flows and promotes participation are key.

The following section summarizes a question-and-answer process provided by de Leeuw and colleagues [8], meant to provide tips and strategies for developing effective survey questions.

IN ORDER TO ANSWER A QUESTION, A RESPONDENT MUST:

  1. Understand the question in the way the researcher intended it to be understood;
  2. Have (or be able to retrieve) the relevant information needed to answer the question;
  3. Translate that information into the appropriate format required to answer the question; and,
  4. Be willing to share the relevant information in the most accurate answer possible (for tips on reducing bias, refer to de Leeuw et al. [8]).

1. Write questions respondents can understand

Given that survey methodology consists of asking a series of standardized questions, respondents need to have the same understanding of what a question is asking. To reduce misunderstandings in questions, work to reduce ambiguity.

1.1 Use clear and simple language, avoid complex terms, acronyms, and jargon

The use of complex language and terms, as well as acronyms and jargon specific to your field, will make questions more difficult to answer.

Example:

Poor question: “In the last week, did you exercise?”

The problem: The term ‘exercise’ isn’t clearly defined and could be interpreted differently by respondents — some might consider walking to and from their car exercise, whereas others may not consider anything that doesn’t cause them to sweat or breathe heavily as exercise.

Better question: “In the last week, did you exercise or participate in any physical activity for at least 20 minutes that made you sweat and breathe hard, such as running, swimming, cycling, or similar activities?”

Another Example:

Poor question: “Have you ever been diagnosed with CHD?”

The problem: Using acronyms and technical jargon will make it difficult for people who don’t know these terms to provide reliable answers.

Better question: “Have you ever been diagnosed with coronary heart disease (CHD) — a disease in which the blood vessels that supply oxygen and blood to the heart narrow?”


1.2 Provide a time frame or reference period

Ensure respondents know the period or time frame — and therefore the context — of your questions.

Example:

Poor question: “How often are you sad?”

The problem: A lack of time frame or context in which to answer can lead respondents to interpret the question differently (i.e. answer generally or pick a random reference period).

Better question: “In the past four weeks, how often have you felt sad?” or “On average, how often do you feel sad?”

Note that this type of question would be best answered using a Likert rating scale (e.g. where 1 = never and 5 = all the time).
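As a sketch of how such a rating scale might be handled at analysis time, Likert labels are typically mapped to ordered numeric codes before aggregation. The label wording below is illustrative only; use the exact response options from your own questionnaire:

```python
# Illustrative Likert coding (1 = never ... 5 = all the time).
# These labels are hypothetical examples, not prescribed by the source.
LIKERT_CODES = {
    "never": 1,
    "rarely": 2,
    "sometimes": 3,
    "often": 4,
    "all the time": 5,
}

def code_response(label: str) -> int:
    """Convert a respondent's verbal answer to its numeric code."""
    return LIKERT_CODES[label.strip().lower()]

def mean_score(labels: list[str]) -> float:
    """Average numeric score across a list of responses."""
    codes = [code_response(label) for label in labels]
    return sum(codes) / len(codes)
```

Keeping the label-to-code mapping in one place ensures every response is coded consistently when aggregated.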


1.3 Avoid questions with an embedded assumption

Sometimes questions can contain assumptions about the respondents’ situations or how they think about things. When these assumptions are not true, the respondents may be forced to answer in a way that does not accurately represent their situation.

Example:

Poor question: “When riding in the back seat of a car, how often do you wear a seat belt?”

The problem: This assumes the respondent rides in the back seat of a car, and does not provide an option for those who never ride in the back seat (it also doesn’t provide a time frame). To answer this question, you need to first determine if the respondent has ridden in the back seat, and if so, how often they wear a seat belt.

Better question(s): 1.a) “In the past year, have you ridden in the back seat of a car?” (IF YES) → 1.b) “When riding in the back seat of a car, how often do you wear a seat belt?”


1.4 Avoid double-barreled or multiple questions

Double-barreled questions ask two things at once. To avoid ambiguity, ensure there is only one question being asked of respondents at a time.

Example:

Poor question: “How helpful were your friends and family while you were sick?”

The problem: This question asks about the helpfulness of two separate groups of people (1. friends and 2. family), so respondents will have to choose to either ignore part of the question or lump these groups together and answer it as a single question.

Better question(s): 1. “How helpful were your friends while you were sick?” and 2. “How helpful was your family while you were sick?”


1.5 Avoid double-negative questions

Double negatives pose the risk of misinterpretation, and thus present the risk of respondents answering questions in a way that is unintended (or even the opposite of what the question is actually asking).

Example:

Poor question: “Which of the following behaviours is not considered unacceptable?”

The problem: Double negatives can be confusing, and may result in unreliable responses — be as clear and unambiguous as possible.

Better question: “Which of the following behaviours is considered acceptable?”


2. Write questions for which respondents have the information needed to answer

A respondent can only answer questions for which they already have, or can retrieve, the relevant information. There are two main reasons respondents may be unable to do so:

2.1 Lack of Information

A researcher may unintentionally ask questions the respondent doesn’t have the information to answer, or ask them in a format that isn’t familiar to the respondent. For instance, a question might ask “How many miles from your home is the nearest hospital?”, though this question may be answered more easily with travel time as the unit of measure, or by asking for the locations of the respondent’s home and the nearest hospital and having the researcher calculate the distance.

Additionally, people cannot accurately and reliably report on others’ emotions, so it’s best to avoid asking such questions. For example, instead of asking “How much does your mother enjoy the activities in the nursing home?”, a better alternative would be to ask about an observable behaviour, such as, “Does your mother participate in any activities in the nursing home?”

2.2 Recall problems

A respondent may not be able to recall the information needed to answer certain questions, for various reasons. There is no complete fix for recall problems, but a few strategies can help respondents retrieve the information:

  • Make the time frame consistent with the significance of the event — the more minor the event, the shorter the time frame should be.
  • Deconstruct a large complex question into a series of smaller questions which will allow the respondent to spend more time on each element of the question.
  • Consider using retrieval cues, such as asking the respondent to think of actions associated with the event (e.g. taking prescription medications, missing work, and staying in bed can act as recall cues for visits to the doctor).

3. Write questions so that respondents can translate their answers into the required format

3.1 Be clear about the desired answer format

Respondents should be able to easily tell what kind of answer is required, what level of detail is expected, and how to categorize open-ended answers.

Example:

Poor question: “How long ago did you leave your job?”

The problem: The question doesn’t specify whether the time period should be reported in days, months, or years.

Better question: “How many months ago did you leave your job?”


3.2 Ensure response options are appropriate and obvious

Don’t make respondents work to figure out how to answer your question — the response options should be clear from the way the question is written.

Example:

Poor question: “In the past 12 months, did your doctors treat you with respect (YES/NO)?”

The problem: This question is formatted as a binary question when the reality can be variable and nuanced.

Better question: “In the past 12 months, how often did your doctors treat you with respect?”


3.3 Provide mutually exclusive and exhaustive options for closed-ended questions

In other words, there should be a response option for everyone, and each respondent should have exactly one option that fits their situation best.

Poor question: “Are you currently: married, separated, divorced, widowed, living with a partner, or have you never been married?”

The problem: There are multiple ways this question can be answered by someone who has never been married and lives with a partner, or is separated and lives with a partner, etc.

Better question(s): 1. “What is your current marital status (married, separated, divorced, widowed, or never been married)?”, and 2. “Are you currently living with a partner/your spouse?”
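One way to see why splitting the question works: storing the two answers as separate fields gives every situation exactly one valid representation. A minimal sketch, with illustrative field names of my own choosing:

```python
from dataclasses import dataclass

# Mutually exclusive, exhaustive options for the first question.
MARITAL_OPTIONS = {"married", "separated", "divorced", "widowed", "never married"}

@dataclass
class HouseholdStatus:
    """Two independent fields replace the single overlapping question."""
    marital_status: str        # exactly one of MARITAL_OPTIONS
    lives_with_partner: bool   # the second, separate question

    def __post_init__(self):
        # Reject answers outside the closed option set.
        if self.marital_status not in MARITAL_OPTIONS:
            raise ValueError(f"unknown marital status: {self.marital_status!r}")

# A respondent who has never been married and lives with a partner now has
# exactly one valid representation instead of two overlapping options:
example = HouseholdStatus(marital_status="never married", lives_with_partner=True)
```

Because cohabitation is recorded independently of marital status, the ambiguous combinations from the original question can no longer occur.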


4. Write questions respondents are willing to answer honestly and accurately

There are many reasons why a respondent may not want to answer certain questions honestly. Concerns about anonymity and confidentiality (i.e. worry that the information they provide will be shared with others), personal image (i.e. wanting to present a good image to others), and/or a desire to be properly classified (i.e. if the correct answer is seen as potentially leading to an incorrect conclusion, respondents might feel they need to distort their answer in order to receive the ‘right’ classification) can all lead respondents to purposefully or inadvertently distort their answers.

One method to mitigate these issues is to provide sufficient information and reassurance about confidentiality and anonymity through informed consent. Within the questionnaire itself, de Leeuw et al. [8] offer a few suggestions on how to provide context within your questions to ease concerns about how responses will be interpreted and classified.

4.1 Include an introduction to the question

For example, “Many people find they do not exercise as much as they want to because of other work, family, or other responsibilities. Would you say you exercise as much as you would like?”

4.2 Think carefully about how your questions are organized

For instance, including alcohol in a list of other substances that can be abused (such as marijuana, cocaine, and prescription drugs) will provoke a different reaction than including alcohol in a list of healthy behaviours (such as exercise, having regular check-ups, and brushing your teeth).

4.3 Provide inclusive response alternatives

For example, when asking about hours spent watching television per day, make the maximum response option much larger than would be expected (such as 10 hours instead of 4), so that participants don’t distort their answers to avoid being seen as ‘extreme’.

Although following these tips will help improve questionnaire design and clarity, field testing (piloting) and evaluating your questions are key steps in creating an effective survey (for more information on testing your questions, see de Leeuw et al. [8]).

For additional information on effective question-writing, refer to Dillman, Smyth & Christian’s Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method [9], or free online resources like this one: https://www.statpac.com/survey-design-guidelines.htm

In our next post, we’ll discuss how the organization and layout of questionnaires can be improved to make your survey study even more effective.


Further reading: Pros & Cons of Survey Studies

Advantages of a Survey:

  • Provides a description of participant characteristics
  • Is economical in collecting data from a large number of participants over a short time period
  • Allows responses to be easily recorded and aggregated for analysis
  • Facilitates data collection from multiple, geographically-distributed locations [7].

Disadvantages of a Survey:

  • Is based on self-reporting, and relies on participants’ ability and willingness to answer questions honestly and completely
  • Presents concerns about reliability (i.e. how dependable, stable, and consistent the questionnaire is when repeated under the same conditions — such as a questionnaire given to the same people at two different time points)*
  • Raises concerns about validity (i.e. how well the questionnaire measures what it is intended to measure — such as barriers and facilitators)*

*Though beyond the scope of this blog, psychometric properties like reliability and validity are important considerations when developing questionnaires and other survey tools. Check out this link for more info: https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/


References

1. Graham I, et al. Lost in knowledge translation: Time for a map? J Contin Educ Health Prof, 2006; 26(1):13–24.

2. Askari M, et al. Assessment of the quality of fall detection and management in primary care in the Netherlands based on the ACOVE quality indicators. Osteoporos Int, 2016; 27(2):569–576.

3. Day L, et al. Implementation of evidence-based falls prevention in clinical services for high-risk clients. J Eval Clin Pract, 2014; 20(3):255–259.

4. Sibley KM, et al. Balance assessment practices and use of standardized balance measures among Ontario physical therapists. Phys Ther, 2011; 91(11):1583.

5. Sibley KM, et al. Clinical balance assessment: Perceptions of commonly-used standardized measures and current practices among physiotherapists in Ontario, Canada. Imp Sci, 2013; 8(1).

6. Palinkas L, et al. Mixed Method Designs in Implementation Research. Admin Policy Ment Health, 2011; 38(1):44–53.

7. Liamputtong P. Research methods in health: Foundations for evidence-based practice, 3rd Ed. South Melbourne, Australia: Oxford University Press, 2017.

8. De Leeuw E, et al. International handbook of survey methodology. New York: L. Erlbaum Associates, 2008.

9. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th Ed. Hoboken, NJ: Wiley, 2014.

About the Author

Alexie Touchette is a Masters student in the Department of Community Health Sciences at the University of Manitoba.

KnowledgeNudge

Publishing bi-weekly, we focus on all things knowledge translation (KT) – synthesis, exchange, application & dissemination – from a health perspective. Topics include the science of KT, patient engagement, and media & dissemination.
