Customer Feedback Surveys Should Be More Like Tinder and Snapchat

Hoolio · Published in Wisdom Blog
Oct 18, 2017 · 9 min read

By: Bethany D. Merillat, M.S., M.Ed.

Should I swipe left or right? For generations, traditional wisdom told us not to "judge a book by its cover," yet in this day and age, instant judgments color many everyday decisions. Tinder, Snapchat, and Twitter have revolutionized the way we share information, emotions, and events: how we date, how we communicate, and how we interact with our world.

Yet these platforms, and a plethora of other popular apps, all rely on one common feature: time. More specifically, the absence of it. Users on Tinder make snap judgments about whom to date by swiping left (no thanks) or right (I'm interested) on pictures of unknown individuals. Users on Snapchat have up to 10 seconds to share their story with the world, and Twitter allows a mere 140 characters to "tweet" your message.


We opened this blog series talking about the 8-second rule and why capturing attention and engaging your audience matters. Observing today's culture only underscores how vital it is to design surveys that can compete with attention-grabbing technology. Research has found that participants exposed to a picture for a mere 100 ms made the same judgments about facial appearance (e.g., attractiveness, likeability) as those who viewed the picture with no time constraints¹, suggesting that judgments about some visual stimuli are made almost instantly.

The important consideration, however, is the impact of these quick visual judgments. A growing body of research has found that judgments based on the visual appearance of a stimulus alone can significantly influence important decisions. For example, researchers who examined the outcomes of over 500 cases in Massachusetts small claims courts found that the more attractive the plaintiff, the more likely the defendant was to lose the case².

Research has also found that the outcomes of the 2000, 2002, and 2004 U.S. Congressional elections could be predicted from inferences of competence based solely on pictures of the candidates, and that individuals who appear "baby-faced" receive less severe judicial outcomes than more mature-looking individuals². In fact, a body of research suggests we rely on instant judgments in a wide range of important areas, including business and financial decisions, choice of future spouses, and, as mentioned above, political decisions.

Accordingly, if visual appearances have such a striking impact on the choices we make, and if those judgments are formed within seconds (perhaps milliseconds) of viewing the stimuli, then a survey must look engaging, exciting, and enjoyable from the first moment a participant sees it. But even that may not be enough. Once a person's attention and the ensuing positive emotion are captured, those feelings must be sustained by content that carries their attention through to the end of the survey. Thus, when designing a survey, the format must mirror not only what participants want to see (e.g., something quick and digestible, like a tweet or a Snapchat video) but what they are already engaged with. First impressions are everything.

A recent report released by Flurry found that U.S. consumers spend an average of 5 hours a day on mobile devices, and that more than half of that time, 51%, is spent on quick-moving social media (e.g., Facebook, 19%), messaging (12%), and media and entertainment (15%) apps, as well as Snapchat (2%) and YouTube (3%). Games alone accounted for another 11% of time spent in apps. This means that for a survey to capture the attention of a person glued to their mobile device, it must mimic the successful features of the apps consumers already spend most of their time using. Questionnaires must, therefore, be quick, engaging, easily digestible, and fun, with lots of color, icons, minimal words, and clear instructions.
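To make those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The percentages are the Flurry figures quoted above; the category groupings are simply our reading of them, not Flurry's own code or data.

```python
# Back-of-the-envelope conversion of Flurry's reported time split into
# minutes per day, assuming the 5-hour daily total quoted above.
DAILY_MOBILE_MINUTES = 5 * 60  # 5 hours/day, per Flurry's 2017 report

time_split_pct = {  # percent of daily mobile time, per the figures above
    "social media (e.g., Facebook)": 19,
    "messaging": 12,
    "media & entertainment": 15,
    "Snapchat": 2,
    "YouTube": 3,
    "games": 11,
}

for category, pct in time_split_pct.items():
    minutes = DAILY_MOBILE_MINUTES * pct / 100
    print(f"{category}: {minutes:.0f} min/day")

# The quick-moving categories (everything except games) sum to 51%,
# or roughly 2.5 hours a day spent in fast, visually engaging apps.
```

Under those assumptions, the quick-moving apps alone claim about two and a half hours of every consumer's day, which is the attention pool a survey has to compete in.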

But researchers don't have to reinvent the wheel; in fact, they can use America's mobile viewing habits to their advantage. Instead of asking consumers to divert time they already devote to entertainment and enjoyment toward a boring survey, why not create a survey that feels like an equal, such as a social media interaction or a game, combining pleasure for the respondent with rich insights for the researcher? Therein lies Wyzerr's overall company vision: create research technologies that are fun, engaging, and as simple to use as a mobile game or social media app.

Lumosity knew this lesson all too well and used the same methods to skyrocket to prominence. A little-known startup in 2005, Lumosity found a way to make "brain training" fun by turning it into a game. In 2013, Forbes ranked it the 66th most promising company in America, and as of 2017 it reported over 70 million users in over 180 countries. How did they do it? According to its website, Lumosity transforms "…science into delightful games." Wyzerr expanded this model into the survey and research world.

Wyzerr’s Bubble Surveys

Wyzerr's early team knew that for a survey to compete for the limited attention span of modern consumers, the layout and format of the survey itself had to be rethought and redesigned. To that end, the team created surveys that mimic what participants see in social media apps and mobile games, because engagement is everything: the more engaged a person is with a survey, the more likely they are to respond and complete it in its entirety. Existing research has found that many of the traditional design features used in survey research may actually sabotage respondents' performance. Survey designers have tried everything to capture and keep attention, from incentives to longer field periods to more follow-up emails, all of which have yielded marginal but not significant gains.

On the other hand, Wyzerr's average completion rate has held steady at 83% across 3 years of research and testing, over 100,000 surveys run, and over 1 million responses captured. In fact, a major retailer's 46-question customer satisfaction survey continues to return an 86% completion rate: respondents willingly complete all 46 questions because, on Wyzerr, the survey equates to about 3 minutes of engagement. Fun engagement, that is. Wyzerr has succeeded at collecting rich insights about the customer experience in a field that has seen only marginal improvements in response rates over the years.
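To put that completion rate in perspective, here is a rough sketch of the per-question pacing those numbers imply. It uses only the figures quoted above (46 questions, about 3 minutes), not Wyzerr's internal data.

```python
# Rough pacing implied by the retailer example above: 46 questions
# answered in roughly 3 minutes of engagement.
QUESTIONS = 46
TOTAL_SECONDS = 3 * 60  # "about 3 minutes," per the example above

seconds_per_question = TOTAL_SECONDS / QUESTIONS
print(f"~{seconds_per_question:.1f} seconds per question")  # ~3.9 s

# At under 4 seconds per question, each item sits comfortably inside
# the 8-second attention window this series opened with: answering a
# question feels closer to a swipe or a snap than to filling in a form.
```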

The key takeaway for marketers looking to collect customer experience feedback in today's instant-judgment marketplace is to throw out everything you know about structure, standards, and the "way to do things" in research. It no longer applies. The world has changed. Consumers have changed. Technology has changed behavior drastically, and consequently, the methods and types of studies that worked in the past will no longer yield high-quality results and reliable data. Take a page out of Snapchat's and Tinder's business strategy, and challenge yourself to run short, quick, frequent, fun surveys instead of the long and cumbersome studies you're used to.

About the author: Bethany D. Merillat, M.S., M.Ed., is an experimental psychologist with a passion for changing lives through research. She specializes in survey design, online data collection, and interventions to increase health and wellness, and has published a number of journal articles in these areas.

References

Blair, I. V., Judd, C. M., & Chapleau, K. M. (2004). The influence of Afrocentric facial features in criminal sentencing. Psychological Science, 15, 674–679.

Christian, L. M., & Dillman, D. A. (2004). The influence of graphical and symbolic language manipulations on responses to self-administered questions. Public Opinion Quarterly, 68(1), 57–80. doi:10.1093/poq/nfh004

Conrad, F. G., Schober, M. F., & Coiner, T. (2007). Bringing features of human dialogue to Web surveys. Applied Cognitive Psychology, 21(2), 165–187.

Couper, M. P., Tourangeau, R., & Kenyon, K. (2004). Picture this!: Exploring visual effects in web surveys. Public Opinion Quarterly, 68(2), 255–266. doi:10.1093/poq/nfh013

Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological Science, 17, 383–386.

Gorn, G. J., Jiang, Y., & Johar, G. V. (2008). Babyfaces, trait inferences, and company evaluations in a public relations crisis. Journal of Consumer Research, 35, 36–49.

Graesser, A. C., Cai, Z., Louwerse, M. M., & Daniel, F. (2006). Question Understanding Aid (QUAID) a web facility that tests question comprehensibility. Public Opinion Quarterly, 70(1), 3–22.

Groves, R. M., & Couper, M. P. (1998). How survey design features affect participation. In Nonresponse in household interview surveys. New York, NY: Wiley.

Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64(3), 299–308.

Hall, C. C., Goren, A., Chaiken, S., & Todorov, A. (2009). Shallow cues with deep effects: Trait judgments from faces and voting decisions. In E. Borgida, J. L. Sullivan, & C. M. Federico (Eds.), The political psychology of democratic citizenship (pp. 73–99). New York, NY: Oxford University Press.

Hansen, K. M. (2007). The effects of incentives, interview length, and interviewer characteristics on response rates in a CATI-study. International Journal of Public Opinion Research 19(1), 112–121.

Khalaf, S., & Kesiraju, L. (2017, March 2). U.S. Consumers Time-Spent on Mobile Crosses 5 Hours a Day. Retrieved August 30, 2017, from http://flurrymobile.tumblr.com/post/157921590345/us-consumers-time-spent-on-mobile-crosses-5

Liu, C., White, R. W., & Dumais, S. (2010, July). Understanding web browsing behaviors through Weibull analysis of dwell time. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval (pp. 379–386). ACM.

Mueller, U., & Mazur, A. (1996). Facial dominance of West Point cadets as predictor of later military rank. Social Forces, 74, 823–850.

Naylor, R. W. (2007). Nonverbal cues-based first impressions: Impression formation through exposure to static images. Marketing Letters, 18, 165–179.

Olivola, C. Y., & Todorov, A. (2009, May 5). The look of a winner. Scientific American. Retrieved from https://www.scientificamerican.com/article/the-look-of-a-winner/

Olivola, C. Y., & Todorov, A. (2010). Elected in 100 milliseconds: Appearance-based trait inferences and voting. Journal of Nonverbal Behavior, 34(2), 83–110. doi:10.1007/s10919-009-0082-1

Olivola, C. Y., & Todorov, A. (2010). Fooled by first impressions? Reexamining the diagnostic value of appearance-based inferences. Journal of Experimental Social Psychology, 46(2), 315–324. doi:10.1016/j.jesp.2009.12.002

Pope, D. G., & Sydnor, J. R. (2011). What’s in a Picture?: Evidence of Discrimination from Prosper.com. Journal of Human Resources, 46(1), 53–92. doi:10.1353/jhr.2011.0025

Ravina, E. (2008). Love & Loans: The Effect of Beauty and Personal Characteristics in Credit Markets. SSRN Electronic Journal. doi:10.2139/ssrn.1107307

Redline, C., Tourangeau, R., Couper, M., Conrad, F., & Ye, C. (2009). The effects of grouping response options in factual questions with many options. In Annual Conference of the Federal Committee on Statistical Methodology. Available at http://www.fcsm.gov/09papers/Redline_IX-B.pdf

Rule, N. O., & Ambady, N. (2008). The face of success: Inferences of personality from Chief Executive Officers’ appearance predict company profits. Psychological Science, 19, 109–111.

Schwarz, N. (1996). Cognition and communication: Judgmental biases, research methods, and the logic of conversation. Mahwah, NJ: Lawrence Erlbaum.

Schwarz, N., Grayson, C. E., & Knäuper, B. (1998). Formal features of rating scales and the interpretation of question meaning. International Journal of Public Opinion Research, 10(2), 177–183. doi:10.1093/ijpor/10.2.177

Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass Publishers.

Todorov, A., Mandisodza, A. N., Goren, A., & Hall, C. C. (2005). Inferences of competence from faces predict election outcomes. Science, 308, 1623–1626.

Tourangeau, R., Couper, M. P., & Conrad, F. G. (2004). Spacing, position, and order: Interpretive heuristics for visual features of survey questions. Public Opinion Quarterly, 68(3), 368–393.

Tourangeau, R., Conrad, F. G., Arens, Z., Fricker, S., Lee, S., & Smith, E. (2006). Everyday concepts and classification errors: Judgments of disability and residence. Journal of Official Statistics, 22(2), 385–418.

Tourangeau, R., Couper, M. P., & Conrad, F. (2007). Color, labels, and interpretive heuristics for response scales. Public Opinion Quarterly, 71(1), 91–112. doi:10.1093/poq/nfl046

Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological Science, 17(7), 592–598. doi:10.1111/j.1467-9280.2006.01750.x

Zarkadi, T., Wade, K. A., & Stewart, N. (2009). Creating fair lineups for suspects with distinctive features. Psychological Science, 20(12), 1448–1453. doi:10.1111/j.1467-9280.2009.02463.x

Zebrowitz, L. A., & McDonald, S. M. (1991). The impact of litigants’ baby-facedness and attractiveness on adjudications in small claims courts. Law and Human Behavior, 15(6), 603–623. doi:10.1007/bf01065855
