Becoming Wyzerr: How We Achieve 80–90% Completion Rates on Surveys

Hoolio · Published in Wisdom Blog · Sep 13, 2017 · 6 min read

By: Bethany D. Merillat, MS, M.Ed

This is the third entry in a series of eight articles on the research and science behind Wyzerr, an artificial intelligence company that uses playful, gamified surveys to collect feedback data and generate real-time business insights and tasks for managers in retail, food, and hospitality.

The Barkley Marathons, an endurance race held in Frozen Head State Park, TN, is considered one of the toughest races in the world. Each year, a maximum of 40 runners pay $1.60 (and, for race virgins, send in a license plate from their home state and an essay on why they should be allowed to run the Barkley) to attempt to finish the 100-mile race before the 60-hour cutoff (Mahoney, 2017). Participants climb and descend over 67,000 feet (the equivalent of climbing Mt. Everest from sea level, twice) and battle grueling changes in temperature, ranging from freezing cold to scorching hot, sometimes within the span of a few hours (Seminara, 2013). In the course of the race's history, only 15 runners, fewer than 2% of the 1,000 who have attempted it, have completed the race (Mahoney, 2017).

Survey researchers don't ask their participants to do anything nearly as challenging as the Barkley, a race where such a low completion rate is acceptable given the conditions (some might even say it is higher than expected). We are not asking participants to run 100 miles up a mountain in the dark; we are asking them to give their preference on a product or their opinion on company culture. So why is it, then, that survey completion rates are so low, and that research companies continue to accept low completion rates as the norm?

Now that online surveys have become the accepted form of data collection, researchers need to address the persistently high non-completion rates and low response rates associated with them. On the surface, high completion rates are of course desirable, but why? What is the cost of low rates? The cost is bias and, quite literally, money. Research has supported the finding that higher rates of non-response may introduce bias into survey results and generate higher overall costs for researchers and marketers (Groves & Couper, 1998; Ward et al., 2017). Research has also found that any increase in response rates, however small, can result in significant cost savings for data collection (Groves & Couper, 1998).

Parameters such as follow-ups, rewards, survey length, and the presentation of the questions have all been studied as ways to raise completion rates (Couper et al., 2001; Deutskens, De Ruyter, Wetzels, & Oosterveld, 2004; Dillman et al., 1998; Lozar Manfreda et al., 2002; Sheehan & McMillan, 1999). One of the major findings to emerge is that, across surveys, among the many variables that can be manipulated, the most effective way to increase responses is to reduce survey length: shorter is better (Deutskens et al., 2004).

Completion Rates Increase as Survey Length Decreases

In 2017, Ward, Welch, Conley, Smith, and Greby conducted a study using two national surveys, the National Immunization Survey (NIS) and the NIS-Teen survey, to examine how response rates were influenced by decreasing survey length. They found that the shorter the instrument, the higher the response rate. While this is one of the most recent findings, research over the years has consistently supported these results. Work by Curtin et al. (2005), De Leeuw and de Heer (2002), Galea and Tracy (2007), and Groves (2006) has found that across a wide range of surveys, covering myriad topics and spanning different countries, nonresponse rates are consistently linked to the perceived burden of taking the survey (Hansen, 2007).

Why is this? Because the biggest challenge an individual faces when taking a survey is reading and interpreting the question (Yan & Tourangeau, 2008). Findings from Yan and Tourangeau (2008) suggest that response times are heavily affected by question characteristics, such as the total number of clauses in the question and the number of words per clause. Simply put, the more clauses, the longer it takes to answer the question; the longer it takes to answer the question, the longer it takes to complete the survey; and the longer the survey takes, the higher the rate of non-completion.
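To make this concrete, here is a minimal Python sketch of how one might score a question's processing burden by counting clauses and words per clause, the two characteristics Yan and Tourangeau highlight. The `reading_burden` helper and its clause-splitting heuristic are our own simplifications for illustration, not the researchers' coding scheme.

```python
# Hypothetical sketch: estimate the reading burden of a survey question
# from its clause count and words per clause. Splitting on punctuation and
# common conjunctions is a rough heuristic, assumed here for illustration.
import re

CLAUSE_BOUNDARIES = re.compile(r",|;|\band\b|\bbut\b|\bwhich\b|\bthat\b")

def reading_burden(question: str) -> dict:
    clauses = [c.strip() for c in CLAUSE_BOUNDARIES.split(question) if c.strip()]
    words_per_clause = [len(c.split()) for c in clauses]
    return {
        "clauses": len(clauses),
        "mean_words_per_clause": sum(words_per_clause) / len(words_per_clause),
    }

# A short, direct question versus a multi-clause one:
print(reading_burden("How satisfied are you with the checkout process?"))
print(reading_burden(
    "Thinking about your most recent visit, considering both the staff "
    "and the store layout, how satisfied were you, overall, with your visit?"
))
```

Run on a one-clause question versus a five-clause one, the scores diverge sharply, which is exactly the burden gap the research describes.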

Further, response times to individual questions are also influenced by how the question is set up, including the number and type of response categories and the question's location within the questionnaire. As expected, the researchers also found that the more options available to the participant (e.g., more answer choices), the longer it took them to respond, even holding all other factors constant, something they attributed to reading time and increased processing burden (Yan & Tourangeau, 2008).

Time (response latency) can tell another story too. The researchers discovered that the closer participants got to the end of the survey, the more quickly they answered the questions (Yan & Tourangeau, 2008). This was confirmed by Galesic and Bosnjak (2009), who also found that as survey length increased, participants were less likely to start or complete the measure, and that answers to questions posed at the end were faster and shorter than those at the beginning. Thus, while decreasing the burden and time it takes to answer a question is desirable, the quality of responses may decline if participants are just clicking through questions to get to the end quickly.
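As an illustration of how response latency can flag this clicking-through behavior, here is a small Python sketch that compares a respondent's average latency over the last third of a survey to the first third. The `flags_speeder` function and its 0.5 ratio threshold are hypothetical choices for this example, not Wyzerr's actual pipeline.

```python
# Illustrative sketch: flag "speeders" whose per-question response times
# collapse toward the end of a survey. The threshold is an assumption.
from statistics import mean

def flags_speeder(latencies_ms: list, threshold: float = 0.5) -> bool:
    """Compare mean latency in the last third of the survey to the first third."""
    third = max(1, len(latencies_ms) // 3)
    early = mean(latencies_ms[:third])
    late = mean(latencies_ms[-third:])
    return late < threshold * early

# A respondent who starts at ~8s per question and ends at ~2s is flagged.
print(flags_speeder([8200, 7900, 8400, 6100, 5500, 4900, 2300, 2100, 1900]))  # True
```

Flagged responses near the end of a survey can then be weighted, reviewed, or discarded before analysis, preserving the quality gains of a shorter instrument.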

Finally, the researchers observed that while responses did vary with individual participant characteristics (e.g., age, education, and experience with the internet and with completing internet surveys), these respondent-level effects varied randomly across questions, indicating that, in a diverse sample, researchers can assume the data will not be biased by individual differences (Yan & Tourangeau, 2008). This leaves us free to focus on what we can control: the format of the survey and its questions.

So what does this all have to do with Wyzerr’s method?

Shorter may be better, but not if quality is compromised in the process. If we are going to make surveys shorter, we must make every question count. This demands strong psychometric analyses for all survey instruments and careful consideration of questions, responses, and response latency. Wyzerr takes all of the above findings, along with other research on survey methods, into account in designing its surveys. While every survey is different, we work with our clients to help them maximize the success of their surveys using these sound research principles.

Yan and Tourangeau (2008) found that increasing the number of clauses increases processing burden, so we emphasize clear, concise, and brief questions. They also found that participants speed up their responses toward the end of the survey, so we encourage clients to place their most important questions near the beginning and to use the many features Wyzerr offers to make those last few questions as engaging as possible, ensuring participants devote as much time to questions at the end as at the beginning.

Even when all these steps are in place, however, the best survey can still have flaws, which is why Wyzerr also places a strong emphasis on pre- and post-analyses of the questions and the data. We look for outliers, examine response times per question, and test for reliability as well as validity. How consistent are results on the survey over time, and among different and similar individuals? How well do the results align with predictions, and is the survey measuring what it is intended to measure or something else? These are only some of the questions we help our clients consider, ensuring that the survey is not only fast, fun, and easy for the participant, but that the data collected is valid, reliable, and, most importantly, useful for their goals.
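As a concrete example of one reliability check described above, here is a minimal Python sketch computing Cronbach's alpha, a standard measure of internal consistency, for a small matrix of Likert-scale responses. This illustrates the kind of analysis involved; it is not Wyzerr's tooling, and the sample data is invented for the example.

```python
# Minimal sketch: Cronbach's alpha via the standard item-variance /
# total-variance formula. High alpha means the items hang together
# as a consistent scale; values above ~0.7 are conventionally acceptable.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering three related 1-5 Likert items (made-up data).
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]])
print(round(cronbach_alpha(scores), 2))  # ~0.96: the items are consistent
```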

We don't stop there, however, because in addition to the factors addressed above, other features of the survey, such as its design and visual layout, can have a significant impact on how well the survey instrument performs. Next week, we will talk about how Wyzerr has revolutionized the survey industry by redesigning the survey platform and interface to enhance both the survey experience and the effectiveness of data collection.

About the author: Bethany D. Merillat, M.S., M.Ed., is an experimental psychologist with a passion for changing lives through research. She specializes in survey design, online data collection, and interventions to increase health and wellness, and has published a number of journal articles in these areas.
