The Art of Surveys: The Collective Survey Best Practices

Act 1

Allison Mui
Apr 27

“Supposing is good, but finding out is better.”

- Mark Twain

Early this year, we knew we needed a way to holistically evaluate the satisfaction and health of our docs. To our surprise, no one in the industry has quite figured out the magic KPI for this yet.

Our first instinct was to talk to our users. We wanted to know just how users feel about our docs. Are they effectively solving their problems with the docs? Are they effectively learning new concepts? Are they happy or disgruntled after their visit?

That’s what led us into the deep and thorough world of survey best practices. Writing surveys turned out to be a lot of work, a surprising amount of work to us! Just like a company should deliver a pleasant and intuitive user experience, it should also deliver a pleasant and intuitive survey experience.

In this blog, we break down the best practices we learned, from survey inception to execution.

Preparation

In order to prepare for the most effective research, make sure to first:

  1. Identify specific, clear-cut, and unambiguous goals.
  2. Identify clear research questions and desired outcomes.
  3. Define a streamlined KPI that you will use across this and future surveys. For us, we used the Strimling Study to measure quality.

Creation

Next is question development — what are you actually asking your users? Here are some rules to keep in mind:

  1. Use simple concrete language.
  2. Don’t use double negatives, unfamiliar abbreviations, or jargon.
  3. Avoid ambiguous terms, such as frequent, always, or often. These terms mean different things to different people.
  4. Limit the scope of each question to one specific issue.
  5. Avoid an agree-disagree format, as it leads to acquiescence bias. A better option is offering alternative statements to choose between.
  6. Use at most 4–5 choices for closed-ended questions.
  7. Include a ‘don’t know,’ ‘not applicable,’ or ‘other’ option in multiple choice questions.
  8. Ask yourself ‘Is this question clear? Can it be more specific?’ when reviewing questions.
  9. Keep questionnaire items short — a good rule of thumb is less than 20 words.
  10. Survey length is the number one reason respondents drop out. Limit survey length by only asking questions you don’t already know the answer to.

When creating questions, think about how you ask them. There are many different types of questions, each having their own set of pros and cons.

Here are a few types of questions we considered for our survey.

Likert scales
Likert scales can be used to determine how the average user felt about X.

Tip: Use Likert scales when you already have some sense of what users are thinking.

Example: On a scale of 1–7 (1 = confusing, 7 = intuitive), please rate your experience according to the following statement: my search experience was intuitive.
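
To illustrate, a Likert item is typically summarized by its mean and its distribution. A minimal Python sketch, using a hypothetical set of 1–7 responses:

    from collections import Counter

    # Hypothetical 1-7 responses to "my search experience was intuitive"
    responses = [6, 7, 5, 6, 4, 7, 6, 3]

    print(sum(responses) / len(responses))  # 5.5 -> average rating
    print(Counter(responses))               # how answers spread across the scale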

In-App Questions
In-app questions are said to have among the highest response rates of any method. They are used to collect actionable feedback and usually consist of at most two questions.

Tip: In-app questions can be used to measure comprehension. One way to identify whether users are drawing the right conclusions from documentation is to ask if they were able to perform the specific task the page addresses.

Example: Were you able to import your data? Yes/No

Multiple choice
Use multiple choice questions when there is a limited and well known set of options for a given question.

Open-Ended Questions
Open-ended questions are great for gathering more detail and enabling users to share more about their experience. They are often used to discover new customer needs and highlight areas for improvement.

Use open-ended questions when there isn’t much existing insight. They are preferred when the answer options are extensive or unknown.

Note that open-ended questions come with several cons to consider:

  • Responses must be reviewed individually, making them more difficult to analyze. Text-analysis tooling can streamline the analysis (see the sketch after this list).
  • Open-ended questions require a lot of user effort and can be time consuming. This may lead to user-burnout or disinterest.
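
As a rough illustration of that streamlining (not the specific tool the original links to), here is a naive keyword-frequency pass over hypothetical open-ended answers in Python:

    import re
    from collections import Counter

    # Hypothetical open-ended answers; a dedicated tool would do far more
    answers = [
        "The search results were confusing",
        "Search was slow and the examples were confusing",
    ]

    words = re.findall(r"[a-z']+", " ".join(answers).lower())
    stopwords = {"the", "and", "was", "were"}
    print(Counter(w for w in words if w not in stopwords).most_common(3))
    # [('search', 2), ('confusing', 2), ('results', 1)]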

Tip: Pair closed-ended questions with open-ended questions to better understand and act on the data.

Example: If answered ‘difficult to understand,’ which aspects of the tutorial can be improved?

Ranking Questions & Matrix Questions
Use ranking scales only when you are confident respondents will be familiar with every answer option.

Matrix questions are great when asking several questions in a row that contain the same answer options. This can significantly reduce length and redundancy.

Tip: Don’t use these question types if other question types can provide the data you need; it is easy for users to misread or mis-answer them.

Tip: Watch out for how these questions are displayed on mobile devices. We learned the hard way that they are often not optimized for mobile.

Three main metrics are used to measure the performance and quality of a product or service: CSAT (customer satisfaction score), NPS (net promoter score), and CES (customer effort score).

These metrics are most effective when combined with qualitative research. Qualitative research will help identify drivers behind the scores and necessary next steps.

Customer Satisfaction Score (CSAT)
CSAT is measured with a single question: “How would you rate your overall satisfaction with X?” The question is usually accompanied by either a 1–5 or 1–10 scale.

The score is then calculated by taking the number of 4 (satisfied) and 5 (very satisfied) responses and dividing it by the total number of responses.

(Number of satisfied customers (4 and 5) / Number of survey responses) x 100 = % of satisfied customers (CSAT)

A good CSAT score is defined by the specific business or product.
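
As a minimal sketch of the arithmetic (assuming a 1–5 scale and a hypothetical list of responses), in Python:

    # CSAT: share of respondents answering 4 (satisfied) or 5 (very satisfied)
    def csat(responses):
        satisfied = sum(1 for r in responses if r >= 4)
        return satisfied / len(responses) * 100

    print(csat([5, 4, 3, 5, 2, 4, 5, 1]))  # 5 of 8 satisfied -> 62.5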

Several cons to CSAT that should be taken into consideration include:

  • Relies on self-reporting, making the measurement vulnerable to response bias.
  • Captures only a blunt measure of positivity or negativity, excluding nuance or granularity in experiences.

Net Promoter Score (NPS)
NPS is a single-question loyalty measure, typically phrased as “How likely are you to recommend X to a friend or colleague?” NPS questions are usually accompanied by a 0–10 scale.

NPS serves as a growth indicator in addition to a satisfaction metric. Takeaways might include:

  • How satisfied consumers are with your products/services.
  • How loyal they are to your brand.
  • How likely customers are to recommend your company to others.
  • The likelihood of customer churn.

Respondents are first grouped by their answers. Detractors answered 0–6; these users are unhappy and at risk of churning. Passives answered 7–8; these users like the product, but don’t love it yet. Promoters answered 9–10; these users love the product and will actively promote it.

To calculate NPS, subtract the percentage of detractors from the percentage of promoters.

Scores under 0 indicate significant underperformance, 0–30 indicates normal performance or loyalty, 30–70 indicates great performance or loyalty, and scores above 70 indicate very high performance or loyalty.
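
A minimal Python sketch of the calculation, using a hypothetical list of 0–10 responses:

    # NPS: % promoters (9-10) minus % detractors (0-6); passives (7-8)
    # count only toward the total
    def nps(responses):
        promoters = sum(1 for r in responses if r >= 9)
        detractors = sum(1 for r in responses if r <= 6)
        return (promoters - detractors) / len(responses) * 100

    print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # (4 - 2) / 8 * 100 = 25.0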

Customer Effort Score (CES)
CES is a single-question effort measure, typically phrased as “How much do you agree with the following statement: X made it easy for me to X.” CES questions are usually accompanied by a 1–5 or 1–10 scale.

This metric gauges user satisfaction through the amount of effort users must expend to interact with the product.

With respect to documentation, this metric could also be used to measure comprehension and effectiveness, uncovering how easily users complete tasks with the aid of documentation.
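
The scoring arithmetic isn’t spelled out above; one common convention (an assumption here) is to report CES as the average response. A minimal Python sketch, assuming a 1–5 scale:

    # CES reported as the average agreement score (one common convention)
    def ces(responses):
        return sum(responses) / len(responses)

    print(ces([5, 4, 4, 3, 5]))  # 4.2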

According to the Strimling study, these four attributes matter most to users when judging quality: accurate, relevant, easy to understand, and accessible.

Strimling proposes measuring quality with these four questions:

  1. Could you find the information you needed in the document?
  2. Was the information in the document accurate?
  3. Was the information in the document relevant?
  4. Was the information in the document easy to understand?

Questions should unfold in a logical order. Begin with simple questions before moving to more complex questions. Beginning the survey with simple and interesting questions will motivate users to continue further.

If running a long survey, group questions together. This will help respondents focus their thoughts and answer a series of related questions around these specific thoughts.

Tip: Avoid asking several difficult questions right after another as it will overburden respondents.

Execution

Now, it’s time for the big execution. A few key things to keep in mind are pretests, frequency, outreach, and logistics.

Before the final push, use a pilot test to detect any flaws in the questionnaire. Pilot tests will help:

  • Identify biasing effects of different questions or procedures.
  • Determine order of questions or procedures.
  • Determine if respondents are interpreting questions as intended.
  • Evaluate how people respond to the overall questionnaire and specific questions.

Run the first pilot internally with colleagues or other employees. This test-run is used to catch mistakes and improve clarity and overall quality.

Run a second pilot with potential respondents. This test-run will evaluate if the questions are effective or confusing and gather feedback on the overall survey.

When measuring change over time, the method of research and procedure should remain the same for a period of time.

Suggested cadences are quarterly, or once or twice a year.

When reaching out to users, make sure to communicate the value of the survey. In a recent study, 87% of respondents said they would participate in surveys because they felt it would help make a difference in a company’s products or services.

A personal tone in the email ask has also proven effective.

Example

At [company], there is nothing that keeps us up at night more than thinking about how we can make a better product for you. But one of the most important lessons we’ve learned over the years is that what WE think is best for the product doesn’t really matter. What matters most is the challenges our customers — that’s you — are facing, and how we can better solve them. We want [product] to be as useful and easy to use as it can possibly be. We want it to be the best [category] product on the planet. Will you help us do that?

When setting up research, be sure to state clear intentions and procedures. This will give respondents more confidence that the information they provide will be private and confidential.

In terms of compensation, if asking more than 5 questions, it’s best to add an incentive, such as a chance to win a valuable prize (an Amazon gift card or an iPad).

Phewph — that’s a lot.

That was a lot — a culmination of 15+ articles to be exact. Hopefully you have found this helpful, or maybe better yet, you feel confident enough to lead your own survey. You got this!

To see how we executed these best practices in our continuous docs research, check out the next act in our ‘The Art of Surveys’ series in which we go into execution mode.

Thanks for reading!

Product Gals

Through the eyes of a product design & product management partnership.
