Context and Implementation Matter
It’s complicated: With so many edtech options to evaluate, schools shouldn’t have to go it alone.
There are thousands of edtech options for schools, but not enough information about how they will help teachers support students effectively. Educators need, and are asking for, more resources to help them evaluate and identify technology tools that can work in their classrooms and for their students.
At the Chan Zuckerberg Initiative, we envision a world where everyone has equitable opportunities to learn, earn, and contribute. To help educators get our students there, we must help provide models and tools that enable transformative learning and teaching. However, it’s complicated.
Educators are continually presented with a wide array of new education technology tools. With spending on K–12 education technology exceeding $13 billion per year, decision-making about how to help learners and teachers in schools has grown ever more complicated. The sheer volume of edtech options and their marketing claims can be difficult to navigate. Rightfully, educators have shared their frustration with the lack of independent information, professional support, and time to evaluate all the options.
Decision-makers need better tools for evaluating edtech options — starting with understanding the research and evidence base used to develop them, and then generating and evaluating evidence about what works in their own context. With better information about approach and outcomes, they can reward solutions that are designed to adjust to their school or classroom context.
There are no magic bullets: regardless of the quality of any learning tool, there will never be a single, foolproof, edtech-driven solution that will work out of the box for every student and teacher across the nation. The evidence shows: Context matters. Implementation matters. Technology will always be only one element of an instructional intervention, which will also include instructor practices, student experiences, and many other community and family contextual factors. Over time, too, any intervention (technology-mediated or not) affects and is affected by multiple aspects of student development, not just academic skills, but cognitive skills, social and emotional skills, identity, and even physical and mental health status.
Because of this complexity, we need to pay more attention to how learning actually works (and doesn’t), rather than how we wish it worked. For example, however engaging video might be, it is not a complete instructional intervention. Most learners need to practice tasks, get feedback, and practice again in order to build expertise. So, an offering filled with terrific video but no practice guidance for teachers or learners is, at best, incomplete.
Supporting school leaders and purchasers in their work to make a difference for kids in their districts requires going beyond the marketing brochures. Evaluating these tools takes knowledge about how learning works and which interventions make the best use of that knowledge. It also requires more detail on the quality, efficacy, and fit for their own local context, teachers, and students.
So, how can we help schools do just this?
Technology has the power to support the transformation of teaching and learning, but only when decision-makers have the support and resources they need to know what works in what contexts. Encouraging a mindset of continuous improvement for a given context rather than a single declaration of efficacy allows educators and students to get the most from technology. Just as students learn complex subjects over time, we can begin building our body of evidence about which tools are effective in diverse contexts and school environments.
As I concluded my work on the U.S. Department of Education’s technical working group for the EdTech Rapid Cycle Evaluation (RCE) Coach, a free, openly licensed tool, it became clear that repeated use of tools like the RCE Coach can help decision-makers become more fluent at this kind of continuous improvement: have a look at the case studies of its use. The RCE Coach helps educators analyze and evaluate the information generated by the edtech tools they use, so that they ask the right questions about how to improve learner and teacher environments, how to think about the sequence of evidence needed to inform practice or purchasing, and how to gather and analyze that evidence in practical ways.
Becoming comfortable with tools like the RCE Coach, and with others such as Digital Promise’s Ed-Tech Pilot Framework and EdSurge’s Guide to Choosing, Vetting and Purchasing Edtech Products, supports exactly the kind of decision-making culture that signals to the provider ecosystem that solutions must be ready to fit each context of use, and that evidence will be used to judge and adjust that fit over time. This, in turn, will push more products and services to come “ready to be fitted” to context, rather than “ready to force you to fit them.”
Bror Saxberg, Vice President, Learning Science, Chan Zuckerberg Initiative