Efficacy vs. Impact in EdTech Research
Industry experts have been pushing for efficacy research as the gold standard of evidence that a product “works.” Few disagree with the goal, but such research is difficult to pull off. Let’s lay out some of the challenges and explore why impact may or may not be a suitable stepping stone along the spectrum of EdTech research.
Problem One: In order for efficacy research to be conducted, a randomized controlled trial (RCT) must be implemented in schools.
“The control group students are acting as guinea pigs, an idea that most parents wouldn’t be too fond of”
Basically, an RCT demands that one group of students has access to the EdTech product while another group does not. Then, over an allotted period of time, the two groups are compared to determine the effectiveness of the product being tested. Undoubtedly, there are several problems with this design. For one, these trials are almost impossible to execute properly in schools: for the comparison to be statistically sound, the two groups would need to be closely matched on prior achievement, aptitude for learning, and related demographics. That is hard to guarantee when, in practice, the trial drifts closer to an observational study than a controlled experiment. The other issue inherent in the design is that the control group students are acting as guinea pigs, an idea that most parents wouldn’t be too fond of.
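To make the comparison concrete, here is a minimal sketch of the arithmetic behind an RCT’s headline numbers, using entirely hypothetical post-test scores: the effect estimate is the difference in mean outcomes between the treatment and control groups, and a simple permutation test gives a rough sense of whether that difference could have arisen by chance. This is an illustration of the general method, not any specific study’s analysis.

```python
import random
import statistics

def rct_effect_estimate(treatment_scores, control_scores):
    """Effect estimate: difference in mean post-test scores between the
    group that used the product (treatment) and the group that did not."""
    return statistics.mean(treatment_scores) - statistics.mean(control_scores)

def permutation_p_value(treatment_scores, control_scores,
                        n_permutations=10_000, seed=0):
    """Rough significance check: shuffle the group labels many times and
    count how often a mean difference at least as large as the observed
    one appears by chance alone."""
    observed = abs(rct_effect_estimate(treatment_scores, control_scores))
    pooled = list(treatment_scores) + list(control_scores)
    n_treat = len(treatment_scores)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_treat])
                   - statistics.mean(pooled[n_treat:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical post-test scores for two small classrooms (made-up data)
treatment = [78, 85, 82, 90, 74, 88]
control = [72, 80, 75, 70, 79, 77]

effect = rct_effect_estimate(treatment, control)
p = permutation_p_value(treatment, control)
```

Even this toy version hints at the scale problem: with only a handful of students per group, the estimate is noisy, which is why real trials need large, well-matched cohorts and long timelines.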
Problem Two: The trials can prove both costly and time-consuming for schools, investors, and developers alike.
“The best-researched EdTech products are often made obsolete by the pace of innovation”
At the EdTech Efficacy Research Academic Symposium, professors described the elaborate, multi-year process they had to navigate in order to conduct these trials effectively. The trials would be worth it if they consistently produced quality results. In most cases, however, large portions of the data turn out to be flawed or inconclusive. Even worse, there is a general understanding that, because the research takes so long to conduct, the best-researched EdTech products are often made obsolete by the pace of innovation.
Problem Three: Efficacy research is not something that teachers even look for when determining whether to purchase an EdTech product.
Unfortunately for those few companies and educators plodding through the arduous efficacy research process, the vast majority of educators do not even care about efficacy research. According to Dr. Michael Kennedy, speaking at the aforementioned EdTech Efficacy Research Academic Symposium, an astounding 90 percent of teachers do not demand that the products they buy be backed by significant efficacy research. Thus, the time-consuming, costly process detailed above can seem pointless.
Is there any other way to evaluate EdTech?
Recently, a shorter-cycle metric for evaluating EdTech products called Impact has gained some traction.
What is impact?
“Impact can provide some indication as to whether or not the product will work in the given environment”
Well, there is no concrete definition; the term’s meaning varies from product to product. Loosely, though, impact can be defined as an intended short-term result of using a given product. Each product has its own measures of impact based on what it claims to do, and the degree of impact reached can provide some indication as to whether or not the product will work in a given environment.
But does impact solve the problems that plagued efficacy?
Unlike efficacy research, impact-focused evaluations do not require as much time. As a result, more products can be tested with less overhead, creating a fluid feedback loop that gives companies the information they need to refine their products. The shortened evaluation timeline is also far less costly for all parties involved.
“Impact evaluations are not the most scientific form of evidence and research in EdTech”
Still, it must be said that impact evaluations are not the most scientific form of evidence in EdTech research. The short window in which these evaluations operate generally yields data that is less statistically robust than what efficacy research offers. Impact remains, in the end, a loosely defined metric, less grounded in evidence and data.
But despite these noticeable trade-offs, our team believes that helping companies measure the impact of their products will enable them to start collecting meaningful feedback earlier in their iterative product development process. Eventually, they will have the stable groundwork, network, and resources necessary to conduct research that is more substantial.
“Along the spectrum of research, which is most suitable to my needs and resources at this point in time?”
The day will come when the infrastructure is mature enough that evaluating products for efficacy is as useful and efficient as it is in other industries, such as pharmaceuticals, and that day will be welcomed by every EdTech stakeholder. Until then, it’s important that EdTech providers and school district decision-makers answer the question: “Along the spectrum of research, which is most suitable to my needs and resources at this point in time?”