Four Steps To Designing Insightful and Impactful Surveys

Colin Fraser
21 min read · Jan 13, 2017

If you want to know something about your customers or market, it seems pretty obvious that a good way to find out is to ask. And indeed, surveying is probably one of the most effective knowledge-generation activities that a company can undertake — provided that the survey is well executed. But a survey that is poorly or incompletely executed is at best expensive and of questionable value — and at worst, misleading and damaging.

Surveys come with a host of hidden costs, aside from the fees charged by vendors. First of all, the pool of survey participants is finite, and every time you survey a member of that group, you probably ought to exclude them from that pool for at least a few months. Surveys take effort on the part of the participant to complete, and marketing research points to customer effort as being a primary driver of dissatisfaction — so the act of surveying may actually adversely affect the exact scores that you are trying to measure. In particular, a long or confusing questionnaire will probably leave a bad taste in the mouth of the participant. Finally, poor survey design can lead to false insights. It is common to believe that even if the surveying process isn’t perfect, it is still worthwhile because “some data is better than no data”. But if your imperfect and limited data leads to false insights, the cost of acting on these false insights can be immeasurably greater than the alternative of not having measured anything at all.

What is a survey?

A common mistake is to believe that a survey is a thing with a bunch of questions on it that we send out to participants in order to gather information. Sometimes the survey is delivered by email or mail, sometimes by phone, and sometimes by face-to-face interviews. After collecting the responses to the survey, we may tally up the responses and try to make inferences about the group that we want to learn about. Isn’t that right?

What I’ve called a “survey” in the last paragraph is actually rightly called a questionnaire. And while a questionnaire is indeed an integral part of any survey, a survey should be seen as a holistic project which includes the act of delivering the questionnaire, but also includes a great deal of planning and design before the questionnaire is even written, along with the analysis of the responses to the questionnaire.

The distinction is important, because if you believe that a survey is a questionnaire, then you’ll believe that by having administered a questionnaire, you’ve surveyed. And since we all know that surveying is important, it is tempting to be quite satisfied with simply sending out a questionnaire and tallying up the responses. But by only completing these two relatively simple steps, you’ll run the risk of incurring all of the horrible costs that I discussed above.

So without further ado, I present to you the seven indispensable pillars of survey research; only once all seven are complete can you truthfully claim that you’ve done a survey.

The Seven Steps of Survey Research

  1. Establishing measurement objectives (MOs)
  2. Sampling design
  3. Questionnaire design
  4. Questionnaire distribution
  5. Analysis of the responses
  6. Interpretation of the results
  7. Presentation

These steps can be broadly categorized into two groups: steps 1–4 make up the design phase, while the last three make up the analysis phase. For virtually every surveying project, each step is equally important, distinct from the others, and best performed in roughly that order.

It’s a lot of work, but it’s worth it! A properly executed survey can be a reliable generator of important and actionable business intelligence, and an improperly executed survey can do the exact opposite.

In this piece I will give an overview of each of the first four steps in this seven-step methodology, which altogether comprise the design phase of the survey project. Surveys which are designed according to this process will be a great deal easier to analyze and extract value from once the responses come back.

Establishing measurement objectives

The establishment of clear measurement objectives (MOs) is probably the step that is the most often overlooked when companies do survey research, which is a real shame because this step forms a foundation for every single subsequent step. This is the step that forms the clear link between business objectives and statistical research. In effect, this step provides the answer to the question, “why are we burdening our customers with filling out a boring questionnaire?”.

An MO is a question that we would like our survey to answer. It is a business question, not a statistical question, and ideally it is aligned with the high-level strategic goals of the business. In order for a survey to have impact, the MOs should be clear, relatively closed-ended questions whose answers will have obvious business implications.

Some nice MOs

Would our existing customers pay more for our service if it had these extra features?

Has our new website design improved usability?

Does the general public’s perception of our brand match with our marketing strategy?

Note that MOs are distinct from questionnaire items, which we will discuss later. MOs are questions and questionnaire items are (often) questions, but it is unlikely that the MOs will appear directly on the questionnaire. It is probably not useful or informative, for instance, to ask our customers directly whether they would pay more for extra features. But well-crafted MOs will guide the entire rest of the surveying process all the way to the presentation of results.

Too often, folks begin the survey process by considering what they want to ask, focusing on the questionnaire items as the main aspect of survey design. By starting with MOs, we shift the focus from what we want to ask to what we want to answer. The point of survey research is not to ask questions; the point is to generate business intelligence.

Since they are non-technical, at least from a statistical perspective, crafting MOs need not be a task left to the analyst. In fact, MOs are ideally developed in concert with the most senior leaders who will act on the survey results. If you are a business leader ordering a survey, you ought to make sure that the benefit of the knowledge generated by the survey outweighs the costs, both financial and otherwise. By ensuring that a survey has a clear and well-crafted set of MOs, you do your part to make that happen.

Sampling Design

Now that you’ve established what you want to know, it’s time to figure out where you’re going to find it. Sampling design is the portion of the exercise where you’ll decide which groups should be part of the survey, and how they will be selected. Sampling is an incredibly complex topic and it is worthwhile to consult with a person who knows how to do statistics in order to get this part right, but here are some basic things to think about.

The best thing in the world would be if you could get every single member of the population you care about to take your survey. Nothing would be up to chance, and analysis of the results would simply be a matter of tallying up the responses. Unfortunately, this is not possible, which is why we use sampling theory. The basic idea of sampling is that we want to learn something about a population by examining the properties of a sample from that population. It’s all about generalization. We look at the observable sample and make inferences about the unobservable population that the sample comes from.

In order to make good inferences about the population from your sample, it is crucial for the analyst to have a very detailed understanding of how that sample is collected. All too often, this is overlooked — survey administrators don’t pay much attention to how the sample is collected, and send the results off to be analyzed. Inevitably, sloppy handling of the details of the sampling procedure will lead to incorrect generalizations, which will lead to incorrect conclusions, which may lead to costly ill-advised action.

Simple Random Sampling (SRS)

The most obvious sampling procedure is to select survey participants completely at random. More precisely, SRS is a sampling procedure in which every possible sample of a given size is equally likely to be drawn, so every member of the population has the same probability of being invited to the survey.

Simple random sampling is great. As long as the sample size is large enough, a survey that uses SRS will yield robust and readily interpretable results with small confidence intervals. That last caveat is a killer though — large sample sizes are expensive and unwieldy, and SRS designs require bigger samples than alternative designs.
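For concreteness, here is a minimal sketch of an SRS draw in Python, assuming the sampling frame is simply a file with one customer email address per line; the file name, seed, and sample size are made up for illustration:

```python
import random

# Hypothetical sampling frame: one customer email address per line.
with open("customer_emails.txt") as f:
    frame = [line.strip() for line in f if line.strip()]

sample_size = 500   # assumes the frame has at least 500 members
random.seed(42)     # fix the seed so the draw is documented and reproducible

# random.sample draws without replacement, so every customer in the frame
# has the same probability of being invited -- the defining property of SRS.
invitees = random.sample(frame, sample_size)
```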

Complex Designs

Complex design is a catch-all term that refers to sampling designs that incorporate some level of clustering and stratification. I won’t get into clustering here — see the references at the end for a thorough treatment. It is worth saying a few words about stratification, however, because stratification can often dramatically reduce the sample size required to make certain inferences compared to SRS. In fact, I’ll go out on a limb and suggest that you should almost always incorporate some form of stratification into your survey design.

Stratification is a way to incorporate knowledge that you already have about the population of interest into the sampling procedure in order to increase the power of your survey. The idea is to identify groups, which we call strata, that are internally similar, but different from each other, with respect to the variable that you are trying to measure. Then, instead of sampling at random from the entire population like an SRS design, the stratified design treats each of the strata as a subpopulation and samples from each independently, ensuring adequate representation from each group.

An example might help.

Suppose that we are conducting a survey of statistics students at a local university with the intent to gauge opinion on sampling theory. One option would be to sample at random from the entire population of statistics students — an SRS design. But suppose that we also have the following idea: maybe attitudes on sampling theory vary between undergraduate students and graduate students, so that undergraduate students find sampling theory to be dull and boring, while graduate students find it to be exciting and sexy. If we have reason to believe such a theory, then it will be a really good idea to use a stratified random sample, with each level of study (undergraduate and graduate) as a stratum.

The reason that this helps is twofold. First of all, since we’ve posited a theory — that undergraduate students and graduate students have different attitudes towards sampling theory — we would probably like to test it. But the undergraduate subpopulation exceeds the graduate subpopulation by a wide margin. If we leave the sampling completely up to chance, we may not end up capturing a large enough sample of graduate students to do that analysis. But by using a stratified design, we get to select how many of each type end up in the sample, ensuring that each stratum has enough representation for robust analysis and comparison.

The other reason is a little bit more technical but no less important. It turns out that by using a stratified random sample, we can almost always decrease the variance of our estimates, which allows us to use a smaller sample in total compared to an SRS design. The degree to which stratification helps on this front depends entirely on how good a job you do at identifying relevant strata. If the strata are very similar within and very different between, then large gains in sampling efficiency can be made.
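To make the student example concrete, here is a minimal sketch of a stratified draw in Python, assuming the sampling frame is a table with a column recording each student’s level of study; the file name, column names, and per-stratum allocations are invented for illustration:

```python
import pandas as pd

# Hypothetical sampling frame of statistics students.
# Assumed columns: student_id, email, level ("undergraduate" or "graduate").
frame = pd.read_csv("statistics_students.csv")

# Deliberately over-represent the small graduate stratum relative to its
# population share so that both groups can be analyzed on their own.
allocation = {"undergraduate": 300, "graduate": 150}

samples = []
for level, n in allocation.items():
    stratum = frame[frame["level"] == level]
    # Draw independently within each stratum, fixing the seed so that
    # the draw itself is documented and reproducible.
    samples.append(stratum.sample(n=n, random_state=1))

stratified_sample = pd.concat(samples)
```

Note that because the graduate stratum is over-represented relative to its population share, the analyst will need to know the allocation in order to weight the combined responses correctly; this is one more reason to write the design down.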

Last Thoughts on Sampling Design

An absolutely crucial part of sampling design is to make sure that your sample comes from the population that you think it comes from. Suppose we conduct the survey above, sagaciously stratifying by level of study and computing all kinds of sample statistics related to attitudes on sampling theory. We go out and publish our findings and then we discover something horrifying: the university we’ve chosen to sample from is a men’s university.

Can we still make inferences about statistics students? Well, maybe. It depends on whether average attitudes on sampling theory vary by gender — and I truly do not know the answer to that. But one thing is for sure: people would be right to question whether your results generalize to any group beyond students at men’s universities.

This is a bit of a silly example, but it happens in practice more than you might think. You want to conduct a survey of your customers, so you reach out to your marketing department to obtain a list of customer contacts. Little do you know, that list was generated by querying for all customers who made a purchase in the last month. Can you make valid inferences from your survey results? Again, maybe. It depends on whether the things your MOs ask about vary with the date of last purchase. But if you don’t know whether they do (and you probably don’t), then you have a problem.

The number one very very most important part of sampling design is to document everything. If you use a stratified design and your analyst thinks that it’s an SRS, your results will be wrong. If you sample from people who have made a purchase in the last month but your analyst thinks that it’s your entire customer base, your results will be wrong. If there is any detail of the sampling procedure that is not written down so that it can be accessed in the analysis phase, your results will be wrong.

So stratify, and write everything down.

Questionnaire Design

Finally, we arrive at the place that most people try to start. Now that we have figured out what we want to know, and who we’re going to find it out from, we can finally decide how we’re going to find it out.

Like sampling design, questionnaire design is a massive topic with multitudes of papers and books addressing it from different angles. A nice one is Design, Evaluation, and Analysis of Questionnaires for Survey Research by Saris and Gallhofer. It is not always straightforward to figure out how to get from MOs to questionnaire items — if you’ve ever filled out a survey in psychology research, you know that the material of the research project is often quite far removed from the material on the questionnaire, and the plan for achieving MOs from questionnaire items can be very sophisticated.

With that said, there are a few basic don’ts and dos that I think are pretty broadly applicable to questionnaire design.

  1. If you don’t address your MOs then you’re wasting your time.
  2. Even if you do address your MOs, you’re wasting the participant’s time.
  3. Say things so that people know what you mean.

If you don’t address your MOs then you’re wasting your time

I have often been the analyst of a survey that fails to answer some critical question. It sucks. The reason that it happens is that people jump straight into designing the questionnaire before considering the MOs (and sampling design). Then, when it comes time to analyze the responses, you discover that some crucial piece of information is missing from the puzzle, so you have no answer to some big question from the guy who ordered the survey in the first place. Depending on how critical the question is — and sometimes it can be quite critical — that guy might be really mad.

Again, questionnaire design is just a small part of the large project that is a survey, and each piece of the puzzle should serve to address the MOs. For this reason, the right approach is to tailor the questionnaire to the MOs. The idea here is to go through each MO, individually, and come up with a strategy for measuring it. Odds are, the correct strategy won’t be to repeat the MO on the questionnaire verbatim; rather, MOs will be addressed by combining multiple questionnaire items together. I’ll go into a little bit more depth on designing useful questionnaire items shortly — although, like sampling, it is far too broad a topic to fully address here. For now, the focus is simply on the process of visiting each MO individually and forming a specific plan for addressing it. At the end of this process, not only will you have a questionnaire, but for each item on the questionnaire, you’ll be able to point to the specific MOs that it addresses, and know exactly how the questions combine to make the survey worthwhile. This information can be collected along with the sampling design details and the MOs themselves into a measurement plan. This document will prove endlessly useful for the analysis phase of the survey project.
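One lightweight way to keep such a measurement plan is as a small structured document that travels with the survey. The sketch below uses Python purely as a notation; the MO, the items, and the sampling notes in it are invented examples, not a template you must follow:

```python
# An illustrative measurement plan entry. The point is that every
# questionnaire item traces back to a specific MO, and the sampling
# details are recorded alongside it for the analysis phase.
measurement_plan = {
    "sampling_design": {
        "frame": "customers with a purchase in the last 12 months",
        "method": "stratified by new vs. returning customer",
        "allocation": {"new": 400, "returning": 400},
    },
    "measurement_objectives": [
        {
            "objective": "Would existing customers pay more for the service "
                         "with these extra features?",
            "items": [
                "Q2: How satisfied are you with what you currently pay? (1-5)",
                "Q3: How useful would [feature A] be to you? (1-5)",
                "Q4: How useful would [feature B] be to you? (1-5)",
            ],
            "analysis_note": "Cross-tabulate feature usefulness against "
                             "price satisfaction, by customer segment.",
        },
    ],
}
```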

Even if you do address your MOs, you’re wasting the participant’s time.

You might find that there are some MOs which are just too hard to answer with a questionnaire, or that inferences will be too difficult given the sampling design to completely address the MOs. Often, we respond to these setbacks by hoping that “some data will be better than no data”, and power through. My advice would be different. If you can’t find a way to address the MOs using a questionnaire, then don’t do the survey. Why? Because it’s expensive and your participants (who are often your customers, or at least people who you would like to be your customers) hate it. It’s not worth paying to administer a questionnaire that will only annoy the participants and will inevitably fail to answer the questions that you want answered.

A survey is useful insofar as it can be used to generate knowledge, and is only worthwhile if the benefit of that generated knowledge outweighs the costs, in dollars and in participant effort, of sending it out. Moreover, you only have finitely many people that you can survey. If you piss them off too much by sending a questionnaire that can’t even address its MOs, you might not have any left who will be willing to answer the questionnaire three months down the road that does generate knowledge. Go back to the drawing board and make a survey that works. Never send out a survey for the hell of it.

With all of that said, there is every possibility that you will succeed in developing a sensible measurement plan and can now move on to actually authoring the questionnaire. Congratulations. However, the participants are still probably going to hate responding — it’s not fun and it takes effort. But there are some things that you can do to minimize this effort, and you should do them. Here are a few specific ones.

Don’t ask anything that doesn’t contribute to a specific MO.

Nothing is worse than a long questionnaire. At some point, some participants are bound to get bored and pick C for everything just to get to the end and be entered in the draw for the free iPad or whatever (more on free iPads later). For anything longer than four or five items this is inevitable — and if you can keep the questionnaire that short, please do. But if you can’t, you should at least minimize the length of the survey, subject to the constraint that it must address each MO. This means that if there is a particular questionnaire item on the table which doesn’t address a specific MO, but would be “nice to know”, don’t include it. The temptation can be strong to include just one more question for curiosity’s sake, and often there will be pressure from others who heard that you’re doing a survey and think, hey, while you’re at it can you ask this? Unless you’re prepared to amend the MOs and can justify adding a questionnaire item in those terms, don’t do it.

Don’t ask anything hard

Your MOs may be complex and difficult to translate back and forth into simple questionnaire items. Surveyors are often interested, for instance, in some kind of preference ranking. “What are people’s main considerations in choosing a provider of our service,” an MO might read. A typical first shot at translating this into a questionnaire item may be to list a number of considerations that you think may be important — price, customer service, quality of service, whatever — and ask respondents to rank them. Don’t do this.

Ranking things takes cognitive effort. If you have a list of ten things to rank, there are over three million possible orderings (10! = 3,628,800) — and that’s if ties aren’t allowed. That’s not to say that people consider all 10! orders when they rank a list of 10 things, but once you get past about the top 3 it can be hard to choose what the next most important thing is, and the distinction in importance between your 6th and 7th main consideration might be meaningless. What’s more, the participant will likely know that this is just busywork, and that the analyst can’t possibly be using the information that they provide about their 8th most important consideration. Annoying participants with busywork will hurt the response quality down the road, and might cause them to abandon the survey altogether.

This is clearly not the only way to put something too complicated or difficult on a survey, but it is one I see pretty commonly. But I think that the best practice is simply to ask yourself for each questionnaire item: “would this annoy me?”. If the answer is yes, see if you can find a different approach.

Say things so that people know what you mean.

This is a spectacularly difficult part, and one that questionnaire designers often miss. Your company has a great deal of internal jargon, and often your MOs will be phrased in these terms of art. An online store might wish to measure the usability of the new version of its site, an ISP might want to measure reliability or stability, or just generally you may wish to measure something like loyalty or satisfaction. A first try at measuring usability might be to pose the question, “How would you rate the usability of our new website design? (1–5)”

The trouble with this is that not everybody knows what you mean by usability, or reliability, or even loyalty or satisfaction. If you ask outright about these kinds of concepts, it may be dangerous to trust the responses. Folks might have different personal interpretations of these concepts and answer according to those interpretations.

A useful way to frame this issue is through the ideas of concepts-by-intuition and concepts-by-postulation. This way of thinking comes from the American philosopher F. S. C. Northrop and has been applied to questionnaire design in the social sciences for a long time. A concept-by-intuition is a concept which is more-or-less immediately apparent to everyone because it is perceived directly. Feelings are generally concepts-by-intuition: people know what it means to feel angry or happy, not because someone explained it to them or they memorized a definition, but because they have perceived these feelings directly. If I ask you if you like something, I generally don’t have to define “like” — you just get it, because the feeling of liking something is a concept-by-intuition.

Concepts-by-postulation are concepts which require definitions in order to be properly communicated. They are not felt or perceived directly, and can only be properly described within the confines of the system in which they live. Usability is such a concept. In order to have a clear discussion about usability, we need a clear definition. Obviously, something is usable if it is easy to use — but use for what? and by whom?

Concepts-by-intuition are easy to ask about, because the person you ask will know what you’re talking about. But to ask about a concept-by-postulation, we must first, well, postulate the concept. Concepts-by-postulation require (possibly very extensive) definitions up front, and exist in a web of all kinds of other concepts-by-postulation that make up all of the buzzwords and jargon in your industry.

The tricky thing with surveys is that we are typically interested in measuring concepts-by-postulation.

The first instinct may be to provide some definitions and mental anchors up front. You might imagine a questionnaire that says something like this:

We say that our website is usable if it is easy to use it to get information and ultimately purchase our product. How would you rate the usability of our new website design on a scale from 1 to 5?

And, to be sure, this is probably better than the initial version which leaves usability undefined. At least the respondent now has a clearer picture of what you mean.

But this is tedious, and for some concepts that you wish to measure, the definitions will be quite extensive. A better strategy is to try to find reflective indicators of the concept that you are trying to measure in terms of concepts-by-intuition. Questions about concepts-by-intuition will be easier to understand and answer, and will require less effort on the part of the respondent.

A useful trio of types of concepts-by-intuition is: feelings, judgments, and action tendencies. These have been identified in the psychology literature as capturing much of what goes into what we think of as an attitude. A nice exercise, then, is to take the concept that you wish to measure and try to think of feelings, judgments, and action tendencies that go along with it. For each, come up with a statement that captures it.

[Figure: coming up with concepts-by-intuition which are linked to “usability”]

And guess what! By the time you’re done, you’ll have all kinds of useful questionnaire items that you can include on your survey.

[Figure: questionnaire items just drop right out of the last exercise]
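For concreteness, here is one hypothetical pass at the exercise for “usability” on a redesigned e-commerce site. The specific statements are illustrative only; each could appear on the questionnaire as an agree/disagree item:

```python
# Illustrative reflective indicators of "usability", grouped by the type
# of concept-by-intuition that each statement appeals to.
usability_indicators = {
    "feelings": [
        "I felt frustrated while looking for information on the new site.",
    ],
    "judgments": [
        "It is easy to find the product I want on the new site.",
        "The checkout process on the new site is straightforward.",
    ],
    "action_tendencies": [
        "I would use the new site again the next time I need this product.",
    ],
}
```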

Now, the specific concepts identified above may not quite capture your idea of usability or exactly what you want to measure, but the process works. Sit down, think about what you want to measure, try to find associated feelings, judgments, and action tendencies that go along with it, formulate them as statements, and stick ’em on a questionnaire.

That’s certainly not all there is to questionnaire-building, but it will give you a really nice start.

Questionnaire Distribution

The distribution method is the last piece of the puzzle in the design stage. Once you have measurement objectives, a sampling design, and a questionnaire, you need to get the questionnaire into participants’ hands.

There are lots of ways to distribute a questionnaire. You can call people directly on the phone or interview them in person, you can send it in a letter or on a postcard, you can send it by email or over SMS, and there are surely many other ways that I don’t even know about. I’m going to talk mostly about distributing surveys online here, but many of the same considerations apply to the other methods.

The most important thing that you need to be aware of is that your choice of distribution method will influence your results. Minute changes in the presentation and delivery method can have surprisingly large effects on the quantity and quality of responses that you obtain. So if you are working with a vendor to deliver surveys by email or by some other online method, try to inspect the delivery interface as closely as possible. Try to fill out the survey in Chrome and then in Firefox and then in Internet Explorer and then on your phone, and pay attention to how the interface changes across these media.

Most survey vendors can give you details on how a survey was completed — you’ll want those. See if you can capture which browser the respondent used, and especially whether they completed the questionnaire on a mobile browser or on a personal computer. I’ve almost always found significant differences between responses along that dimension, and if you don’t keep track of it, it could bias your results. But if you have kept track of the delivery medium, there are methods you can use in the analysis to adjust for any bias that the medium introduces.
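As a small illustration of why this metadata matters, here is the kind of first check it makes possible, assuming a hypothetical vendor export with a device column and a numeric item score; the file and column names are invented:

```python
import pandas as pd

# Hypothetical export from the survey vendor: one row per completed
# questionnaire, with the delivery metadata kept alongside the answers.
responses = pd.read_csv("responses.csv")  # assumed columns: respondent_id, device, q3_score

# A first sanity check: do mobile and desktop respondents answer differently?
# If they do, the analyst must account for device in any comparison across
# waves or segments where the device mix differs.
print(responses.groupby("device")["q3_score"].agg(["count", "mean"]))
```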

For this same reason, surveys that are carried out over different media may not have results that are directly comparable. If you survey a population today by email and six months from now with a telephone interview, you’re likely to get very different results due to the change in delivery method. Don’t mistake those for changes in the underlying variables that you are trying to measure. If your survey plan involves the administration of multiple questionnaires across time, try to make the user experience as consistent as possible between each questionnaire.

Closing Thoughts

This piece barely scratches the surface of effective survey design, but following this process will get you a long way. The most important thing to remember is that a survey is an information gathering process. The point of a survey is not to ask the participants questions, but to create new knowledge. If you design and send out a questionnaire without deciding specifically what information you want to gather, it will be very difficult to make sense of the survey responses. For this reason, effective survey design starts with establishing clear MOs, which form the foundation for all of the rest of the work.

Other than that, the most important piece is to document everything. Why is each question part of the questionnaire? Why did you choose this sample and sampling method? How were the responses collected? The reason for this is that in the analysis stage of the project (which I’ve not covered at all here), knowing the answers to these questions goes a long way towards deriving meaningful insights. If there are biases due to sampling, selective non-response, or delivery method effects, these can often be handled by statistical methods as long as the analyst is aware of them. So write everything down.

Surveys are hard work, and that shouldn’t be underestimated. It is often taken for granted that if we want to know something “we can just send out a survey”, but that idea is doomed to fail without an effective plan for design and analysis. But by following these steps, you can ensure the greatest chance of success for your surveying project, and allow yourself the opportunity to answer important questions and uncover real insights using surveys.

References

Saris, Willem E., and Irmtraud N. Gallhofer. Design, Evaluation, and Analysis of Questionnaires for Survey Research.
This book goes into detail on the concepts-by-intuition versus concepts-by-postulation distinction, and how to use these ideas to craft questionnaire items. It is also a great reference for ideas on how to craft closed categorical sets, Likert-scale type questions, and other common survey tropes.

Heeringa, Steven, Brady T. West, and Patricia A. Berglund. Applied Survey Data Analysis.
This is a nice resource for understanding complex sampling design — how to implement it, and how to analyze the results.

Imbens, Guido, and Donald B. Rubin. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction.
This is not a resource specific to survey analysis, but is a great reference for understanding how to make causal inferences about data. If you ever intend to consider an MO of the form “does X cause Y?”, this is an indispensable read.
