Getting the most out of your SMS survey

Results from an experiment testing the effects of SMS survey design on response rates and patterns.

Busara Center
The Busara Blog · May 4, 2020

Nicholas Owsley (1), Chaning Jang (1), Samuel Kamande (2), Khadija Hassanali (2) and Aimee Leidich (2)


Introduction

COVID-19 has forced organizations to change how they do things. Researchers are no different: most academics and other research organizations are scrambling to move their data collection online, onto web surveys, phone surveys and SMS surveys. This means risking lower response rates and forgoing some control over factors that could affect survey responses. It is therefore important to understand how different factors affect participant responsiveness and survey data quality. In partnership with Ajua, we ran an experiment to test how some simple characteristics of an SMS survey affect response rates and response patterns among a low-income sample in Kenya. The results offer some clear guidelines on how researchers can streamline SMS survey designs, and what to expect from SMS survey data in these contexts. This short piece adds to the growing effort from the research community to share evidence and best practices on how to run remote data collection most effectively.

Takeaways

  • SMS survey data can yield valid responses. The patterns of bad data are limited.
  • Expect a completion rate of ~20% for SMS surveys with a low-income urban sample unaccustomed to SMS surveys, and ~30% if you do everything right.
  • Small incentives for completing surveys (as low as $0.25) increase your completion rate by ~5–6 pp (a ~40% relative increase).
  • Higher incentives (we tested up to $1.00) encourage completion when your survey is longer.
  • Make your surveys longer, within reason: survey completion is similar for surveys of 25 questions and surveys of 5 questions, especially when you offer incentives.
  • Tell respondents how long the survey is: this increased completion by 2 pp (and probably has long-term trust benefits).
  • People who start the survey finish it: survey completion conditional on starting is 94%.
  • Younger and more educated respondents have far higher response rates.
  • Open-text questions are possible — they yield reasonable responses and, in low numbers, don’t affect the likelihood of completing a survey.
  • Randomize your response option order if you can — it affects responses.
  • Randomize the order of your questions or modules if you can — there are order effects.

Study Overview

We present results from an experiment with 6,211 participants (3), testing i) whether we can collect valid data using an SMS survey, ii) whether features of the survey affect response and completion rates, and iii) whether features of the survey affect response patterns.

We specifically test the effects of the following features on survey completion: 1) incentives, 2) telling respondents the survey length before starting the survey, and 3) the length of the survey. We also test the effects of the following variations on response patterns: a) question order, b) response option order, and c) open-text responses compared to predetermined options.
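For readers setting up a similar design, the sketch below shows one way to cross-randomize these features per respondent. It is illustrative only: we assume a fully crossed design for simplicity, and the arm values (0/25/100 Ksh incentives, 5 or 25 questions, length disclosure on or off) are taken from the results reported below rather than from Table 1.

```python
import random

# Illustrative sketch only: assumes the three features are fully crossed.
INCENTIVES_KSH = [0, 25, 100]     # completion incentive
N_QUESTIONS = [5, 25]             # survey length
DISCLOSE_LENGTH = [True, False]   # tell respondents the length up front?

def assign_arm(rng: random.Random) -> dict:
    """Randomly assign one respondent to a treatment arm."""
    return {
        "incentive_ksh": rng.choice(INCENTIVES_KSH),
        "n_questions": rng.choice(N_QUESTIONS),
        "disclose_length": rng.choice(DISCLOSE_LENGTH),
    }

rng = random.Random(2020)  # fixed seed so assignment is reproducible
arms = [assign_arm(rng) for _ in range(6211)]
```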

Design

Table 1. Treatment Groups

Results

Incentives increase survey completion, even more so for longer surveys.

Incentives increase completion rates from 14.1% with no incentive to 22.6% with a 100 Ksh ($1.00) incentive (Figure 1). There is also a notable 6 pp increase from providing a 25 Ksh incentive — cash-strapped projects could thus still consider small incentives to make a meaningful bump in responses. The effect of incentives is even more pronounced for longer surveys — for surveys longer than 5 questions, the effect is ~7 pp and ~11 pp for a 25 Ksh and 100 Ksh incentive respectively.

Figure 1

Figure 2

Longer surveys reduce completion rates, but not by much.

The completion rate for a survey of 5 questions is 20%, compared with 17% for a survey of 25 questions — and there is actually no difference in completion rates (23%) when an incentive of 100 Ksh is offered! This strongly suggests researchers should use the window of opportunity to ask more questions — up to 25 questions — and therefore collect more data. One caveat is that there are order effects (discussed below), so response quality might be affected for later questions in long surveys.

Figure 3

Disclosing the length of the survey during consent is good practice.

Disclosing the length of the survey during consent increases completion rates by 2 pp, a 12% increase. Unsurprisingly, this effect is strongest for longer surveys, where respondents are more likely to drop off partway through. Our own research experience from in-person studies suggests that this is also just good and respectful research practice.

Figure 4

Figure 5

More educated, younger, and male respondents respond more.

Characteristics of respondents also predicted completion rates — higher education, being a man, and being younger are all strongly correlated with higher response rates. Though this is not experimental, it can help researchers set expectations about response rates given the demographics of their sample.

Table 2: Completion rates by respondent characteristics
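If you want to gauge how much your own sample's demographics predict completion, a simple logistic regression is enough. Below is a minimal sketch; the file name and column names are hypothetical, not the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per invited respondent,
# 'completed' = 1 if they finished the survey.
df = pd.read_csv("sms_survey_respondents.csv")

model = smf.logit(
    "completed ~ age + C(education_level) + C(gender)", data=df
).fit()
print(model.summary())

# Predicted completion by group can then set expectations for a new
# sample with a known demographic mix.
```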

Response patterns

Overall, the data showed very few signs of being unusable. There were no cases of any individual choosing only the first or last option, and no clear patterns suggesting ‘clicking through’. Responses were well distributed across options, with some concentration on a subset of options, as expected. Finally, and encouragingly, open-text responses were relatively legible and credible, and correlated strongly with the responses from the predefined option lists for the same questions, even though these appeared at different points in the survey. Below we summarize some specific findings on response patterns.
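Researchers running the same checks on their own data can flag ‘clicking through’ with a few lines of pandas. The sketch below is illustrative only: the data frame and column names are placeholders, not this study's dataset.

```python
import pandas as pd

# Hypothetical long-format responses: one row per respondent per question,
# option_position = 1 means the first option listed was chosen.
df = pd.DataFrame({
    "respondent_id": [1, 1, 1, 2, 2, 2],
    "question": ["q1", "q2", "q3", "q1", "q2", "q3"],
    "option_position": [1, 1, 1, 2, 4, 3],
})

# Flag respondents who used only one option position across all answers.
straightliners = (
    df.groupby("respondent_id")["option_position"]
      .nunique()
      .pipe(lambda s: s[s == 1])
)
print(straightliners.index.tolist())  # respondents to review, e.g. [1]
```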

Likert option order matters: When Likert responses start with ‘strongly agree’ (in the ‘alternative question frame’) instead of ‘strongly disagree’, they get significantly more ‘agreeing’ responses — this suggests respondents ‘stick’ slightly to earlier options. The order of the options therefore makes a significant difference and should be randomized.

Figure 6

Question order matters: There are significant order effects — when a question is asked later in a survey, respondents give slightly more ‘agreeing’ responses. This suggests that question order, or at least module order, should be randomized, and also suggests some caution in making surveys longer than 25 questions.
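The option-order and question-order recommendations above are straightforward to implement when surveys are generated programmatically. The sketch below assumes a platform that accepts an explicit per-respondent question and option order; the modules, questions and scale labels are placeholders, not the actual instrument.

```python
import random

# Sketch only: placeholder modules and questions, not the actual instrument.
MODULES = [
    ["I feel financially secure.", "I could cover an unexpected expense."],
    ["My income has changed since March.", "I have reduced my spending."],
]
LIKERT = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

def build_survey(rng: random.Random) -> list:
    """Build one respondent's survey: module order shuffled, Likert direction randomized."""
    module_order = rng.sample(MODULES, k=len(MODULES))   # randomize module order
    reverse = rng.random() < 0.5                         # half see the reversed scale
    options = list(reversed(LIKERT)) if reverse else list(LIKERT)
    return [(question, options) for module in module_order for question in module]

rng = random.Random()
survey = build_survey(rng)  # log the order actually sent, for use in analysis
```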

Open text questions can give rich data: In spite of poor spelling and the need for translation and additional data-cleaning time, open-text responses gave rich data, covered similar themes to the multiple-choice version, did not have any negative effect on survey completion rates, and gave meaningful additional information. This can therefore be a useful option where predefined lists may be limiting.

Footnotes:

(1) Busara Center for Behavioral Economics

(2) Ajua

(3) This is a distinct sub-sample — Busara’s pool of low-income respondents from Kibera, Nairobi, with limited experience with SMS surveys — out of a full sample of 9,768 respondents, which also included 3,578 respondents from Ajua’s pool who regularly take part in market-research-focused mobile surveys. This analysis focuses exclusively on the Busara pool given its particular relevance to researchers working with their own target populations with limited exposure to these data collection tools. Response rates for Ajua’s respondent pool were far higher than for the Busara pool, and less sensitive to survey design features, which could be a function of their extensive experience with these surveys — for a discussion of the full results see this previous post.

As part of our commitment to transparency, and our continued focus on fast and responsive data, we have created this repository to #StandAgainstCorona. Our aim is to provide full access to data and instruments for all COVID-related research, as well as to compile resources we have created for internal use during the crisis.

Connect with us on our social media platforms: Twitter, Facebook, Instagram and LinkedIn.


Busara is a research and advisory firm dedicated to advancing Behavioral Science in the Global South