Survey fatigue

Nastiya
Published in Psyc 406–2015
Mar 27, 2015

One thing research assistants realize early on when starting to work in a psychology lab for the first time is how hard it can be to get participants to engage in a study. Then again, how willing are you yourself to participate in an experiment or fill out surveys?

As university students, we have all been there: constantly harassed by requests to fill out surveys to improve student facilities and courses, to give our opinion on student services, to cast our vote for the various student committees, and so on. As psychology students, many of us have been on the other side as well (the survey we had to develop for this class being a recent example): pouring effort into carefully developing a survey, questionnaire, or experimental design, you name it, and then spending even more effort trying to get people to fill it out or participate in the experiment. The experience can be somewhat enlightening (although mostly frustrating). All I can say is that I personally no longer walk past poor students looking for participants (my tolerance for other types of surveys, on the other hand, is unlikely to grow any time soon).

An interesting paradox in surveying people is that, even when the surveying is done to improve their situation, many people are still reluctant to participate. Furthermore, even when an incentive is offered, it can be hard to get enough quality data (although incentives do increase overall turnout). One explanation of this phenomenon has been termed “survey fatigue”, and it is an active area of investigation. Today, thanks to electronic assessment and surveying, it is extremely easy to survey almost any target population. Response rates to such surveys have, however, steadily declined over the past years, as evidenced by studies (Adams & Umbach, 2012). Unsurprisingly, college students happen to be THE most surveyed population there is (Sax, Gilmartin, & Bryant, 2003), and therefore the most affected by survey fatigue. Not only is university an environment prone to over-surveying its students, but the student population also happens to be a favorite target of commerce and marketing companies: students are considered the “trend-setters” and “consumers of the future”, so marketers are especially eager to keep a finger on the pulse of this population.

The overuse of surveying that leads to survey fatigue is not without consequences, and serious concerns have been raised about it. Survey fatigue decreases the number of people who take a survey, making it more difficult to obtain a large enough representative sample. The people who do take the survey or end up participating in a research experiment are also likely to differ from those who passed it by. Finally, people who end up taking the survey might not engage with it very deeply, so the results might not be accurate or usable, leading to more time and effort lost discarding useless data and trying to find more participants.

Efforts to understand the phenomenon of survey fatigue point to obvious yet easy-to-ignore factors. Basically, some survey developers lack insight into what it is like to take their test or survey. Some surveys are far too long; others are unclear and hard to understand. Sometimes the survey’s purpose is never made clear, and the participant’s interest is lost. It is understandable that survey developers are engaged and excited about their research, but they often have the unrealistic expectation that survey takers feel the same.

I think it is time we establish some ethical standards regarding heavily surveyed populations, such as college students: survey less, but better. We need to build quality instruments that are efficient, provide good data, and respect the test taker and their time.

References:

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53(5), 576–591. doi:10.1007/s11162-011-9240-5

Sax, L. J., Gilmartin, S. K., & Bryant, A. N. (2003). Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, 44(4), 409–432. doi:10.1023/A:1024232915870

ID: 260584176
