No one trusts surveys, ever. Survey results — even less.

Tzeying
3 min read · Aug 28, 2019

On what planet…

Surveys have such a horrible reputation. Just the mention of the word conjures images of scammy work-from-home ads, grossly manipulated statistics and at worst, utter dismissals by cynics.

(Psst… we are currently trying to learn more about how you do research so we can build a:)

  • Completely free database of survey questions
  • Hand-picked questions that are tested, sorted & tagged
  • Available in multiple languages for multiple countries

You’ve probably questioned survey results many times before, whether you:

  • Found them online (who exactly published this, and what was their motivation for running it?)
  • Hired an external vendor to run research for you (how much did they massage the results because I paid them?)
  • Were presented results internally by your organisation or team (those numbers fit a little too nicely into their KPIs)

The truth is that the skepticism isn’t completely unfounded. Surveying has a dark cloud hanging over it that makes it difficult for people to fully trust results: the practice of forcing surveys to validate a pre-determined narrative.

Surveys are meant to validate: they are a tool to prove or disprove assumptions, explore a new space, test ideas, and solicit feedback so that you and your team can make decisions. However, there is a very fine line between extracting insights that help you make an informed decision and curating statistics and numbers to paint the story you need.

So, how do we avoid this?

Survey creation and analysis is an art form; there is no single best way to do it. One of the hardest things is to keep everything as objective as possible.

From writing the survey on your own to listening to someone else’s results, the following are a few red flags you should look out for:

  • Writing biased or leading questions so the respondents answer the way you want
  • Excluding questions or response options you don’t want the respondents to answer
  • Giving the wrong response options (e.g. open-ended vs. closed-ended, the wrong scale type)
  • Targeting a niche group that you know will answer the way you want
  • Slicing and dicing the responses until the results look better, shrinking the sample from n = 300 to n = 50
  • Deleting the results that contradict your pre-determined ideas
  • Misconstruing a combination of responses to create an insight out of thin air

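To make the slicing-and-dicing point above concrete, here is a minimal sketch using made-up data (the satisfaction scores and sample sizes are invented for illustration). It shows how reporting only a cherry-picked subset of respondents can inflate an average score while quietly shrinking the sample:

```python
import random

random.seed(42)

# Hypothetical data: 300 satisfaction scores on a 1-5 scale.
responses = [random.randint(1, 5) for _ in range(300)]

# Honest reporting: the mean over all respondents.
overall = sum(responses) / len(responses)

# "Slicing and dicing": keep only the 50 most favourable
# responses and report that subset as if it were the result.
cherry_picked = sorted(responses, reverse=True)[:50]
sliced = sum(cherry_picked) / len(cherry_picked)

print(f"All respondents (n={len(responses)}): mean = {overall:.2f}")
print(f"Cherry-picked subset (n={len(cherry_picked)}): mean = {sliced:.2f}")
```

The filtered mean will always be at least as high as the honest one, which is exactly why a sudden drop from n = 300 to n = 50 in a report deserves a second look.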
Granted, these things do happen by accident, especially through inexperience, and even seasoned pros are guilty of them. While recognising and avoiding these issues is a skill learned through experience, we believe that everyone should have access to that knowledge.

That’s why we (Ali Grimaldi and I) are building a free and open-source knowledge base and community to help individuals and teams make surveying a credible and respectable tool within their organisation.

We want to pool together experiences that can help individuals / companies extract valuable insights around customer experiences, concept validation, product-market fit, etc.

This is part of a series of articles that we will be publishing over the next few weeks and months. We are also working on a knowledge base that we aim to launch within the next three months.

Next week we will be talking about how to write a good survey without breaking the bank.

Tzeying

UX Manager @ Shopify. Also, building bellinislushie.com — a better survey design toolkit