Making Sense of Surveys

Surveys are widely used for data collection, but methods to analyze them aren’t nearly as popular.

Kaushik S Mohan
Data Clinic
6 min read · Nov 5, 2019

Many of us have answered surveys at some point in our lives, from giving feedback at restaurants to answering online questions about website experience. Surveys are a useful form of data collection because they can help measure people’s perceptions, opinions, and other qualitative factors that may otherwise be hard to quantify. For example, political opinion polls are essentially surveys conducted to measure public satisfaction, agreement, and alignment with elected officials, candidates, and policies, and are particularly relevant during an election year. The ability of surveys to reach a wide audience in a cost-effective manner makes them a popular form of data collection.

One such survey we’ve come across in our work is the New York City Department of Education’s (NYC DOE) School Survey. The NYC DOE conducts an annual survey that measures student, parent, and teacher opinions regarding several different aspects of their respective schools. These surveys have the potential to provide individuals, schools, and organizations with a deeper understanding of schools that goes beyond indicators of proficiency such as test performance. Robin Hood, a Data Clinic partner and NYC nonprofit seeking to fight poverty, was interested in understanding how various qualitative aspects of the school environment, including concepts such as teacher collaboration, parental involvement, and principal effectiveness, relate to school performance over time.

While this might sound straightforward, mapping different survey questions to a broader concept is by no means an easy task. For example, last year’s parent/guardian School Survey included over 40 different questions. While it is possible to analyze any single item in isolation, it is often a combination of questions that makes up a construct of interest. Some survey items may appear to be part of a cohesive group, but in reality, the process of selecting how best to cluster questions together is inherently subjective.

NYC School Survey: 2018 Parent/Guardian survey

Using factor analysis on survey data

One way of overcoming this challenge is through a statistical modeling technique known as factor analysis, a fairly popular approach for analyzing survey data. The assumption underlying this method is that responses to questions will be correlated with each other if, in fact, there is a broad construct that connects them. The stronger the association between a subset of responses, the more likely it is that they represent an overarching construct or factor. In addition to measuring similarity across responses, factor analysis also determines how strongly each question relates to the broader construct and assigns it a weight: some questions may be only tangentially related to a factor, whereas others may be more important. In practice, this takes a survey consisting of many different questions and groups its items into a smaller number of themes for analysis, using a data-driven approach that removes some of the subjectivity inherent in a manual process. We’ll walk you through an example using the School Survey to make this clearer.
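To make the mechanics concrete, here is a minimal sketch of an exploratory factor analysis in R using the psych package. Since the School Survey microdata isn’t bundled with the package, the sketch uses psych’s built-in bfi personality questionnaire (25 items) as stand-in survey data; the choice of five factors is a convention for that dataset, not something carried over from the School Survey.

```r
# Exploratory factor analysis on psych's built-in bfi questionnaire,
# used here as stand-in survey data.
library(psych)

items <- bfi[, 1:25]  # drop the demographic columns, keep the 25 items

# Maximum-likelihood factoring with an oblique rotation, which allows
# the extracted factors to be correlated with one another.
fit <- fa(items, nfactors = 5, fm = "ml", rotate = "oblimin")

# The loadings are the weights described above: how strongly each
# question relates to each factor. Suppressing small values makes the
# item groupings easier to see.
print(fit$loadings, cutoff = 0.3)

# Correlations among the factors themselves.
round(fit$Phi, 2)
```

The items that load heavily on a common factor form a candidate theme; naming that theme is still up to the analyst.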

Let’s start with the 2018 survey for parents and guardians. For this example, we are interested in responses from parents of elementary and middle school students, leading us to exclude questions 8 and 10, which are aimed at different age groups. We also exclude question 7, as its responses are categorical and cannot be coded into numerical form. Based on some preliminary exploration using the psych package in R, our factor model arrived at 9 factors from over 30 survey items. Below is the full factor diagram, with the individual questions in boxes on the left and the corresponding factors (ML1-ML9) in circles on the right. The value between a factor and a question indicates the strength of the relationship between them, while the numbers to the right of the factors represent correlations among the factors themselves. In a well-fitting model, we would hope to see low correlations among factors and a strong association between each factor and the questions it relates to.

Results of a 9-Factor model for the 2018 Parent/Guardian Survey
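For readers who want to reproduce a diagram like the one above, the sketch below shows the relevant psych calls, assuming `responses` is a data frame of the numerically coded parent/guardian items (with questions 7, 8, and 10 already dropped); the data frame name and column layout are hypothetical.

```r
# A sketch of the model behind the diagram above; `responses` is assumed
# to hold the numerically coded survey items, one column per question.
library(psych)

# Parallel analysis is one common aid for choosing the number of factors.
fa.parallel(responses, fm = "ml")

# fm = "ml" is what produces the ML1-ML9 factor labels in the diagram.
fit <- fa(responses, nfactors = 9, fm = "ml", rotate = "oblimin")

# Items in boxes on the left, factors in circles on the right, loadings
# on the arrows; weak loadings below the cutoff are hidden.
fa.diagram(fit, cut = 0.3)
```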

Now we can drill down into specific factors to see if we can identify the constructs that they might represent. ML3 is composed of the following items:

These questions are concerned with Individualized Education Programs (IEPs), which are tailored educational plans for qualified students with disabilities. The goal is to assess parent/guardian satisfaction, perception of the school’s commitment, and the range of services offered in relation to IEPs. Given that the purpose of these items is to evaluate the overall quality of IEP programming at a school, it’s not a surprise that they are grouped together, and the fact that they are makes it easy to label and describe this factor.

However, not all groupings are as obvious at first glance. For example, factor ML5 is composed of questions 1a., 1b., and 1d., and in looking at the content of these items, it appears they are about parent-school communication. But if ML5 is thematically about communication, we would also expect 1e. and 1c., which also speak to general communication, to be included.

Upon further inspection, we can see that the questions included in the factor go beyond general communication and instead focus specifically on how well the school communicates about, and provides opportunities for, parental involvement in a child’s education. Factor analysis was able to tease out these differences, which may not have been apparent when grouping questions manually. Although this consolidation approach is arguably more objective, it’s important to note that the process of describing, defining, and providing context to each factor requires deeper domain understanding; the data will only take you so far.
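If you want to inspect a single factor the way we just did for ML5, the loadings matrix from the fitted model is all you need; a small sketch, reusing the hypothetical `fit` object from above:

```r
# Pull ML5's column from the loadings matrix and keep the items with
# non-trivial loadings; the 0.4 threshold is an illustrative choice.
ml5 <- fit$loadings[, "ML5"]
sort(ml5[abs(ml5) > 0.4], decreasing = TRUE)
```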

Challenges and additional opportunities

With this example, we have illustrated how one could isolate qualitative constructs from survey responses. Interestingly, in 2016 the NYC DOE adopted a Framework for Great Schools that categorizes School Survey questions into six constituent elements, much like our analysis. However, our factor analysis revealed additional nuance that was not identified in those six broader categories, suggesting that respondents answered certain framework items differently — maybe they do not all map to the same concept? The School Survey is unique in that it maps individual items to a framework; most surveys do not have any type of categorization and could therefore benefit from factor analysis as an initial step in understanding how question responses cluster.

It’s important to note that our illustration is an exploratory one that hopefully provides a starting point for using factor analysis to study surveys. More advanced methods of confirmatory factor analysis can provide statistically robust estimates of the association between questions and their underlying constructs.
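As a sketch of what that next step could look like, the lavaan package in R is a common choice for confirmatory factor analysis. Here the two constructs and the item names (q1a, q1b, and so on) are illustrative, echoing the ML5 discussion above, not a specification we have validated:

```r
# Confirmatory factor analysis with lavaan: the analyst specifies which
# items belong to which construct, and the model tests that structure.
library(lavaan)

model <- '
  involvement   =~ q1a + q1b + q1d
  communication =~ q1c + q1e
'

fit_cfa <- cfa(model, data = responses)

# Fit indices (CFI, RMSEA, ...) and standardized loadings give
# statistically grounded estimates of the item-construct associations.
summary(fit_cfa, fit.measures = TRUE, standardized = TRUE)
```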

This type of analysis may also be helpful in making comparisons across time. For example, the School Survey changes over time as the NYC DOE refines questions to better align with measurement objectives or to ensure more accurate responses. This creates a challenge in trying to understand how schools change with respect to individual questions — how can we compare items that are different from year to year? Factor analysis works well when analyzing surveys for a given year, but gets a lot more complicated when the questions change every year. Stay tuned for an update as we work to overcome these temporal challenges to help identify trends over time.
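One hedged sketch of how shared items might be compared across years: a multi-group confirmatory model that tests whether loadings stay stable from one year to the next (a measurement-invariance check). This only applies to items that appear unchanged in both years, which is exactly what the changing School Survey makes difficult; `year` and the item names are illustrative.

```r
# `responses` is assumed to stack two or more survey years, with a
# `year` column plus the items that were asked identically in each year.
library(lavaan)

model <- 'involvement =~ q1a + q1b + q1d'

fit_free  <- cfa(model, data = responses, group = "year")
fit_equal <- cfa(model, data = responses, group = "year",
                 group.equal = "loadings")

# If constraining the loadings to be equal across years does not
# meaningfully worsen fit, the construct is measured consistently and
# its factor scores can be compared over time.
anova(fit_free, fit_equal)
```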

If you would like to run this analysis yourself, or explore further, you can check out the repo here.
