How can educational researchers better communicate the value of our work to the people we study?

Photo: “Study” by Moyan Brenn (CC BY 2.0).

This month, the National Academy of Education hosted experts from educational research, policy, schools, advocacy, and industry to discuss emerging challenges around using large datasets for educational research. While acknowledging that the U.S. government has kept statistical records of public schools since the mid-1800s, the workshop focused on balancing research needs with student privacy.

As we gathered to discuss mutual expectations and concerns, it became obvious that two different conversations were occurring. From the perspective of educational researchers, there’s confusion about what the problem even is: despite the perceived novelty of data collection, they have been collecting, storing, safeguarding, and using student data since long before large datasets were recognized as ‘big data.’ They take research ethics seriously, can explain the history of IRBs, and know of no instances of breaches of educational research data (examples of breaches of general schooling data can be found here). They do this research in the interest of the public good, and their work has led to positive change.

From the perspective of legal scholars, parent groups, and privacy advocacy groups, there are a few concerning trends in the collection and use of educational data:

— The main problem is that researchers have done a poor job explaining the value of collecting and using student data for educational research. Many of the contributions of research to practice are invisible, especially when they are successful. There are no signposts to flag why early education becomes a priority, why a school starts serving breakfast, or why early-career teachers are paired with veteran mentors. But if there were, the signs might say this is brought to you by research from xyz, or student data from over 100,000 kids in 45 school districts over a 5-year period have informed this new practice.

— A second problem is our failure to engage parents and teachers in the process. What several failed technology rollouts have taught us (e.g., inBloom, Los Angeles Unified’s laptop rollout) is that a lack of transparency breeds mistrust and misinformation. Currently, a handful of advocacy groups challenge the use of student data, sometimes using arguments that conflate national testing with all educational research, any student data collection with third-party sharing for advertising purposes, and big data with existing practices of student tracking and infinite remediation loops.

— A third problem is that the student — and here I’m talking mostly about the K-12 student — gets forgotten in discussions of student data. Educational researchers need to show the value and benefit of this research to students. Beyond that, instead of thinking of students as data points, researchers engaging in primary collection could, where appropriate, treat it as an opportunity to talk with students and teachers about data — their data, their lives converted to data points — and use it as a teachable moment to inspire an interest in research and perhaps provide data literacy training.

Where do the tech companies fit? A struggle I have with using data from learning platforms, whether they are MOOCs or personalized learning systems, is that the technology developers for the most part decide what will be measured, leaving educational researchers to adapt and adjust their research questions to what the tech allows. This is something researchers should question and push back on. Often in development cycles, teachers and educational researchers are brought in at the end, as an afterthought, rather than meaningfully engaged at the outset. There’s a disconnect when the tech vendors decide what can and should be measured. Before digital learning platforms, when was time on task, or when a reading was accessed, ever a sufficient predictor of performance or a focus for intervention? These are the data we’re analyzing simply because they are available.

During the workshop, experts spoke quite a bit about laws and legislation. I don’t think we need new laws, and I worry that proposed laws will restrict beneficial research without evidence that doing so will improve student data privacy. We’re legislating against fears, against risks of risks, but not really against any clear, evidence-based harms in data used for educational research purposes.

So, where do we go next? First, I’d like to say that very, very few people understand the kinds of data being used in educational research (e.g., administrative vs. behavioral). What non-researchers care about is whether data are personally identifiable and whether, at any point in their collection or use, the process will negatively impact their kids. And are underrepresented populations disproportionately vulnerable? For example, how might disciplinary records used for research purposes affect future opportunities?

Next steps:
· Educational researchers need to do a better job communicating the value of their research, prioritizing outreach and public-facing outputs.
· Academic departments need to provide training for researchers in communicating the value of their research.
· Educational researchers need best-practice guidelines for engaging schools, along with evaluation criteria for what success looks like in safeguarding student privacy.

Researchers need to challenge myths and misunderstandings around data used for educational research by discussing the long and productive history of student data in educational research and by describing what is and isn’t done with student data. Publications such as Elana Zeide’s 19 Times Data Analysis Empowered Students and Schools and the Data Quality Campaign’s Turning Data Into Information: The Vital Role of Research in Improving Education are promising examples.
