Quantifying the Unquantifiable
By Eric Heilman, Institutional Research Consultant at Mission & Data and Director of the Center for Institutional Research in Independent Schools
If you read mission statements or strategic plans for independent schools, you will come across a wide variety of aspirations. Some of these goals, like “academic excellence” or “financial sustainability,” are concrete, and schools can rely on a variety of established metrics to assess where they are and to measure progress over time. These more prosaic aims, however, are often but a minor slice of a grander vision schools have for themselves. Across the country, we see schools aspiring to foster “squishy” outcomes like collaboration, entrepreneurship, wellness, and equity, inclusion, and justice.
We do not, however, see a corresponding proliferation of metrics to measure and track these squishy outcomes, leaving school leaders at every level to rely on their gut instincts and personal perspectives to “feel” whether the institution is improving in these dimensions. When asked why they do not have more concrete metrics in place for outcomes that are of such high institutional priority, by far the most common response we hear at the Center for Institutional Research in Independent Schools (CIRIS) is a belief that the squishy aspects of school experience like justice or wellness simply can’t be quantified. At CIRIS, we take the position that developing quantitative metrics of these essential goals is not only possible but also highly valuable for schools.
The belief that squishy outcomes are impossible to model and measure quantitatively is often a corollary of the implicit or explicit belief that what we do in schools has a special, ineffable, almost magical quality to it. By extension, the growth and development we facilitate for our students and communities defy reduction to a mathematical model that merely takes input variables, applies an algorithm, and outputs conclusions. To suggest that a cold formula could reveal insights we might not have seen without the rich, complex, and human day-to-day contact with our community seems theoretically impossible and perhaps even a bit insulting to our work.
Unequivocally, the work of schools is special. It is also extraordinarily nuanced and complex. It does not, however, defy rational modeling. In fact, it is because teaching, learning, and community-building are so human that we know they are also rational. TikTok challenges aside, the human brain is a masterpiece of rationality, data collection, pattern finding, and extrapolation. When we really examine the details of our work, it is truly a long process of perceiving data about our students and families, mapping those inputs through the systems that live in our minds, and then forming a conclusion.
This dynamic is never more apparent than when we pass through a season of report cards and conferences. All over the country, through numeric scores or scale ratings, teachers are summarizing student achievement and progress in areas ranging from calculus knowledge to social-emotional learning to self-advocacy. What’s more, we know from our hours in formal and informal meetings about our students that, even in squishy domains, different teachers can consistently and independently reach similar conclusions about a student’s performance. This phenomenon demonstrates not only that it is possible to create a shared model that maps data to conclusions about amorphous concepts like wellness or equity, but that we already regularly do it in schools.
The question of whether the way our human minds map perceived data to conclusions can be approximated by a set of equations and algorithms has also already been answered outside of our schools by the fine folks at Google, Apple, and Facebook. I have no doubt that if my Mom competed against the Amazon AI in a contest of sending me gifts, my Mom would lose (sorry, Mom!). Just this week, the Spotify AI suggested I might like two EDM songs about Jedi and Marvel superheroes — an incredibly specific and perhaps disturbingly accurate recommendation from a set of equations and algorithms!
Independent schools are a long way off from maintaining sophisticated AI like the Big Data companies, but their existence in the world proves that the way in which humans translate data inputs to conclusions via algorithms is in fact replicable to some extent by quantitative methodology. The challenge then is whether schools can create basic quantitative models of the judgment processes faculty already apply.
We know it is possible because it is already happening in schools. From DEIJ dashboards to student experience models, schools are already finding creative ways to leverage data to design metrics of the squishy. At Maret School in DC (home of CIRIS), we have implemented a Thrive Index system that, through a set of formulas, combines data from all of our information systems and students themselves to create profiles of student engagement in six mission-inspired dimensions of student life. The image below of the Thrive profiles of two different students is one example of how schools can build quantitative metrics that capture squishy outcomes.
The graph below shows examples of the Thrive Index Model used at Maret for two students: the red student and the blue student. Each spoke represents a dimension of student life, and stronger results appear further out from the center of the wheel. In this example, both students are performing similarly in terms of grades (Academic Achievement), but the red student has higher ratings in areas like time management, self-advocacy, and sense of academic growth/progress and satisfaction (Academic Efficacy), as well as engagement in the arts and social/community life. The blue student is above average in terms of athletic engagement and physical and emotional health.
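Maret’s actual Thrive Index formulas are not reproduced in this article, so the sketch below is purely illustrative: the dimension names, indicator names, and equal-weight averaging are assumptions, not the real system. It shows the general shape of such a model — normalized indicators from various information systems are aggregated into one score per dimension, which can then be plotted on a radar chart like the one described above.

```python
# Hypothetical sketch of a Thrive-style index. Dimension and indicator names
# are illustrative stand-ins; a real implementation would pull these values
# from the school's information systems and surveys, with tuned weights.
from statistics import mean

# Each dimension aggregates several indicators, each normalized to 0-1.
DIMENSIONS = {
    "Academic Achievement": ["gpa_norm", "course_rigor"],
    "Academic Efficacy": ["time_mgmt", "self_advocacy", "growth_sense"],
    "Arts Engagement": ["arts_courses", "arts_activities"],
    "Athletic Engagement": ["teams", "pe_participation"],
    "Social/Community Life": ["clubs", "peer_survey"],
    "Health": ["wellness_survey", "counselor_checkins"],
}

def thrive_profile(indicators: dict[str, float]) -> dict[str, float]:
    """Average each dimension's normalized indicators into one 0-1 score."""
    return {
        dim: mean(indicators[key] for key in keys)
        for dim, keys in DIMENSIONS.items()
    }

# A "red student"-like profile: strong academics, efficacy, arts, and
# community life; lower athletic engagement.
red = thrive_profile({
    "gpa_norm": 0.90, "course_rigor": 0.80,
    "time_mgmt": 0.90, "self_advocacy": 0.88, "growth_sense": 0.92,
    "arts_courses": 0.75, "arts_activities": 0.85,
    "teams": 0.30, "pe_participation": 0.40,
    "clubs": 0.85, "peer_survey": 0.80,
    "wellness_survey": 0.50, "counselor_checkins": 0.55,
})
```

The six resulting scores become the spokes of the radar chart; comparing two students’ profiles is then a matter of overlaying two such dictionaries.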
Is the Juice Worth the Squeeze?
Even if we accept that it is possible to create quantitative measures of “unquantifiable” squishy outcomes, it is not clear that the investment in doing so is worth the effort. After all, if these models are truly just approximating the processes faculty and staff already accomplish, then what is their value added?
Both humans and mathematical models draw conclusions based on two factors: source data and algorithm. Without a doubt, the faculty and staff at a school each absorb a much deeper and richer set of source data about the school experience than a mathematical model could capture. We know through bitter experience, however, that even when we do our best to coordinate our knowledge, we miss warning signs and indicators we shouldn’t. Whether the failure concerns an individual student who ends up in crisis or an entire community, as many schools experienced on a broader level during the Black At movement, the fundamentally human habit of parsing the huge amount of data we ingest every day through the lens of known patterns and reified anecdotes creates dangerous blind spots and biases.
While poorly designed and monitored mathematical models have the potential to perpetuate bias, they also provide a function schools desperately need: objective counternarratives. Because they apply a consistent algorithm to a much broader yet shallower set of source data than humans do, they reach different conclusions and can flag patterns and warning signs we might otherwise fail to notice. In an ideal scenario, the quantitative model’s divergent conclusion might be accurate and help us catch a problematic situation earlier. Even if the model misidentifies an issue, having to go through the process of exploring the situation to refute the model gives us the incredibly valuable opportunity to stop and re-examine whether the human conclusion has fallen prey to shortcut thinking. After a crisis unfolds, the most common and toughest pain educators feel is the wish that we had “noticed something sooner.” The potential quantitative models have to give us that chance to avoid rather than repair suffering among the students, families, and colleagues entrusted to us is alone convincing evidence of their value.
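The counternarrative idea can be made concrete with a minimal sketch. The field names, scales, and threshold below are all assumptions for illustration: the point is simply that when a model’s score and a human rating disagree sharply, the student is flagged for review rather than either judgment silently winning.

```python
# Illustrative sketch (hypothetical names and threshold): surface students
# where a model's engagement score diverges sharply from the advisor's
# human rating, prompting a closer look at the discrepancy.

def divergence_flags(students: list[dict], threshold: float = 0.3) -> list[str]:
    """Return ids of students whose model score and advisor rating
    (both on a 0-1 scale) disagree by more than `threshold`."""
    return [
        s["id"]
        for s in students
        if abs(s["model_score"] - s["advisor_rating"]) > threshold
    ]

roster = [
    # Model sees low engagement the advisor doesn't -> worth a conversation.
    {"id": "A", "model_score": 0.35, "advisor_rating": 0.80},
    # Model and advisor broadly agree -> no flag.
    {"id": "B", "model_score": 0.70, "advisor_rating": 0.75},
]
```

Whether the model or the advisor turns out to be right, the flag forces the re-examination the paragraph above describes.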
This is not the extent of their value, however. Mathematical models of squishy outcomes can improve operational efficiency and financial position. A quantitative model of student experience gives Admissions staff additional material to substantiate the claims about campus life they make to potential applicants and their families. Quantitative models of the squishy concept of “fit” can inform and improve who we bring into the community through admissions and hiring offers and optimize the use of our financial aid and salary budgets. Similarly, they can help us manage student attrition and faculty turnover by helping us notice institutional frictions we might otherwise miss. Given the costs associated with finding, recruiting, and replacing faculty and families, the financial value of mathematical models ranks a close second to their value in supporting community experience.
Overcoming Capacity Constraints
Actually creating a quantitative model of the squishier aspects of school operations can be complex, and school leaders can understandably feel daunted by the undertaking. More often than not, however, schools are closer to the goal than they think. At CIRIS, we believe that excellent institutional research work requires functional capacity in four key areas:
- Institutional Knowledge
- Data Architecture
- Data Analysis
- Data Culture
Between their leadership teams, technology offices, and STEM departments, schools often have a deeper well of the skills needed to tackle this kind of project than they might think. Where there are gaps, organizations like CIRIS exist precisely to help schools that are looking to build these capacities so that they can sustain institutional research initiatives internally. We provide a catalog of projects completed by other schools across the country, host online networking events, share training opportunities around statistical skills and software tools, and fund an annual Summer Fellows Lab to incubate and accelerate major institutional research projects for a cohort of independent school data enthusiasts. For more information about CIRIS resources, please visit us at ciris.maret.org and follow us on Twitter @CIRIS_Maret.
There are many successful models for how schools can accomplish this kind of exciting and valuable institutional research work, few of which require hiring a designated, full-time Institutional Researcher (though if you can do so, you should!). No matter how you bring together your expertise in the core competencies, the impetus to do so is clear. We are living in an era of data. The future of effective education will be a product of how well we can strategically collect, store, and leverage data to augment and assist the people in our communities and continue to push the boundary of what we consider unquantifiable.
Mission & Data, LLC specializes in supporting independent schools in developing or strengthening their mission-driven, data-informed decision making capacity. We coach and consult with school leaders and boards of trustees to make sense of the input, process, outcome, and satisfaction data that is available through existing channels and generate rich and generative new data sources such as stakeholder interviews and surveys. If you would like to know more about how Mission & Data, LLC can help your school, please contact us at firstname.lastname@example.org
Eric Heilman is Mission & Data’s Institutional Research Consultant. Eric also serves as the Director of Institutional Research at Maret School in Washington, DC, where he develops and implements data models supporting enrollment management, faculty development, DEIJ initiatives, and student wellness, and as the Executive Director of Maret’s Center for Institutional Research in Independent Schools (CIRIS). A winner of an E.E. Ford Foundation Educational Leadership grant in 2020, CIRIS supports the professional development of institutional researchers in independent schools by supplying resources, technical training, and mentorship to data analysts and school leaders across the country. After finishing an undergraduate degree in International Economics at Georgetown University, Eric went on to graduate school in Economics at the University of Chicago, where he earned a Master’s degree. While at the University of Chicago, Eric worked on the research team of Nobel Prize winner James Heckman, focusing on microeconomic models of educational attainment.