A Math EdTech Mixed Methods Study: Literature Review (Part 1 of 6)

Image credit: Lockerdome

Technology use in schools continues to expand, with the goals of increasing student achievement and improving teacher effectiveness. But the nation is still asking what works, and the Silicon Valley Education Foundation seeks to answer that question. Experts in the field of education have analyzed this phenomenon and have come to a wide range of conclusions.

Silicon Valley Education Foundation and WestEd are working to contribute to the conversation. During the 2017–18 school year, we partnered with two school districts in the region to conduct a mixed-methods study on which math EdTech products are in use in 7th grade, how they are being used, and whether they have an impact on student achievement.

This review precedes two case studies that we will publish in early 2019. We seek to highlight a small portion of existing research, focusing on computer-aided instruction in math, technology in middle school, and specific EdTech products widely used in US schools.

Technology can Support Mathematics Education

Investment in improving technology for student learning continues to increase, as does the quest to identify what works. In a 2016 literature review of nearly 150 studies on education technology used in mathematics instruction, WestEd and NewSchools Venture Fund (NSVF) suggested that EdTech can support evidence-based practices for middle school mathematics. Most broadly, technologies have the potential to support “connections between learners and resources, between learners, and between learners and teachers” (WestEd, 2016).

The National Council of Teachers of Mathematics also takes the position that “Effective teachers optimize the potential of technology to develop students’ understanding, stimulate their interest, and increase their proficiency in mathematics. When teachers use technology strategically, they can provide greater access to mathematics for all students” (Aldon, 2017).

Specifically, regarding mathematics education, WestEd writes, “One affordance of education technology is the use of highly graphic and interactive modes to promote more frequent integration of visual and verbal information during instruction.” Technology can help students see or interact with visual graphs, diagrams, or symbolic expressions, combined with auditory language, to enhance understanding. The two EdTech products in our study use videos to explain concepts visually, and they display graphs and shapes that can be manipulated. We observed students engaging with these features — and in general, students had a positive reaction to elements of the products they could control. Research suggests that being able to stop and rewind videos and to manipulate objects on the screen are important design features.

EdTech can also embed formative assessment into the learning environment. “Research suggests that quizzing, questioning, and assessment activities enhance student learning because they prompt students to recall information, reflect on the state of their knowledge and understanding, and offer opportunities to transfer knowledge to new problems or situations” (WestEd 2016). The products in the WestEd-SVEF study allow teachers to see, at any time, where students are struggling as they complete individual lessons and assessments within the platform. However, we rarely observed this feature in use — teachers in our study used the products’ assessment features during lesson planning and lesson review, not during class time.

Lastly, WestEd and NSVF suggest that EdTech can support math learning that allows students to “formulate and test their ideas with other students and to frequently assess how an activity is helping them gain math understanding” (WestEd 2016). Many students in our focus groups shared that they find group work effective and enjoyable, but the two products in our study were designed primarily for individual practice. What we saw in classrooms was that teachers used multiple strategies — digital and analog — to address learning needs, rather than relying on a single tool to provide every pedagogy.

Overall Evidence Continues to be Promising

For decades, researchers have studied computer-aided instruction and computer-aided learning for math, and throughout that time they have found promising evidence.

A randomized study by Lisa Barrow, Lisa Markman, and Cecilia Elena Rouse focused specifically on whether computer-aided instruction (CAI) helps students in math, and they found that CAI can significantly enhance student mathematics achievement in middle and high school. They reference a 2002 study by Wang, Wang, and Ye, which found that multimedia and calculating aids showed a strong correlation with math achievement but did not have a large effect on any other subject. They also point to a 1998 study by Wenglinsky, who found that the effectiveness of computer use was related to the amount of professional development the teacher received on how to use the computer or program correctly. He also found that if the computer or program was used for higher-order thinking rather than drill and practice, there was a significant increase in student achievement (Wenglinsky 1998). Further supporting CAI’s benefit in math classrooms, Barrow and her colleagues found that students who learned pre-algebra and algebra through CAI were 26% farther ahead than their classmates in traditional “chalk and talk” classrooms after one year (Barrow et al., 2008).

In qualitative research, Robin Kay (2011) examined expectations for and use of EdTech in middle school math and science classrooms with over 400 students. In alignment with what WestEd and NSVF suggested, students in Kay’s study appreciated the visual supports offered by web-based learning tools, as well as their ease of use and interactivity. They expressed frustration when tools were poorly organized or did not meet their expectations for academic challenge and support.

These are but a few examples of the generally positive evidence for the effectiveness of digital or multimedia learning tools since computers became available in schools decades ago. However, the story is still being written by educators, companies, and researchers alike. The landscape of programs and options that constitute “computer-aided instruction” or “computer-aided learning” has changed dramatically in recent years. EdTech companies received $9.5 billion in funding in 2017, with 13% going to PreK-12 products (Forbes 2018), and there is no shortage of platforms or apps available to educators looking for computer-aided materials. As products develop, so must research. Educators and partners need to build on past evaluations and continue to surface new evidence.

The National Bureau of Economic Research (NBER) issued a meta-analysis in 2017 of randomized controlled trials and regression discontinuity studies on technology-based approaches in education. Within a select body of 29 studies on computer-assisted learning, the team highlighted two promising examples. They suggest that math products can improve student achievement when they provide “customized practice,” including immediate feedback to the student and/or the teacher as a student works through a problem (Escueta et al., 2017). One of the review’s authors shared in a blog post about the work that “CAI was most effective when used as an in-class tool or as mandatory homework support, essentially providing personalized tutoring on an individual level” (Quan 2017).

The study completed by WestEd and SVEF in 2017–18 focuses on two platforms that were not available in the classrooms Barrow and her colleagues studied but that exhibit some of the pedagogies the NBER review found effective: Khan Academy and i-Ready Math.

Product-specific Research is Limited

The What Works Clearinghouse, accessed November 2018, did not contain any publications specifically focused on Khan Academy or i-Ready Math.

Both Khan Academy and Curriculum Associates (i-Ready Math), like many EdTech companies, laudably seek and share research that helps identify where and when their products improve student learning. The evidence they cite spans the continuum from user testimonials to third-party research, contracted by the company or by other stakeholders. Khan Academy’s website (accessed November 2018) highlights a handful of impact reports from the US and abroad across K-12 and higher education. Curriculum Associates has focused its highlights on research that supports ESSA Level 3, as well as on studies that link the i-Ready Diagnostic to state standardized tests.

Despite efforts by the companies and by third-party organizations pursuing EdTech research, like SVEF and partners of the Learning Assembly, product-specific reports are few.

One noted challenge is the sheer scope required to accurately represent a product developed for kindergarten through twelfth grade and used in public and private educational settings across the world. Research on any one grade, in any one context, is limited. Of course, findings from one study can often be applied to other settings. However, the availability of evidence for EdTech products across their full range of use would help schools make targeted decisions.

i-Ready

Curriculum Associates, the parent company of i-Ready, has undertaken both internal data analysis and contracted research to explore the impact of the product. Predominantly, these evaluations discuss the connections between the i-Ready diagnostic test and standardized formative and summative assessment measures.

Three studies commissioned or conducted by the company suggest a relationship between student performance on the i-Ready platform and achievement on standardized tests. In school years 2014–2015 and 2015–2016, the Educational Research Institute of America (ERIA) analyzed data from thousands of students across multiple states who took both the i-Ready diagnostic exam and the Smarter Balanced Assessment Consortium (SBAC) standardized test. In each analysis, ERIA concluded that i-Ready Diagnostic scores were strongly correlated with scores on the SBAC administered to students in 2015 or 2016; correlations were in the 0.80–0.90 range for math in all grades 3–8 in both years. Furthermore, they report that i-Ready Diagnostic scores accurately predict students’ proficiency on the SBAC assessments. In 2015, ERIA found that i-Ready’s designation of proficiency for individual students accurately reflected the students’ SBAC proficiency 84% of the time. Lastly, Curriculum Associates completed an analysis of its own data on over 200,000 students. Scores on the i-Ready diagnostic in the fall and the spring were compared between students who completed at least 45 minutes of i-Ready instruction per week for 18 weeks and those who had not. Students who had used the instructional platform experienced score gains 38% greater than those not receiving i-Ready instruction (Curriculum Associates 2017). Our study will also explore the relationship between these assessments, as well as the potential impact of the i-Ready tutorials.
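For readers who want a concrete sense of what these figures measure, the sketch below shows how the three kinds of statistics reported above (a score correlation, a proficiency-prediction agreement rate, and a fall-to-spring gain comparison) could be computed from a hypothetical student-level dataset. It is a minimal illustration in Python with pandas; the column names and the summarize_linking_stats function are our own inventions for illustration and do not represent ERIA's or Curriculum Associates' actual data or methods.

```python
# Minimal illustrative sketch, NOT the methodology used by ERIA or Curriculum Associates.
# All column names and data are hypothetical.
import pandas as pd

def summarize_linking_stats(df: pd.DataFrame) -> dict:
    """Expects hypothetical columns:
    iready_fall, iready_spring, sbac_score (numeric scores),
    iready_proficient, sbac_proficient, used_45min_per_week (booleans)."""
    # 1. Pearson correlation between diagnostic and state-test scores
    #    (the studies report values in the 0.80-0.90 range for math).
    correlation = df["iready_spring"].corr(df["sbac_score"])

    # 2. How often the diagnostic's proficiency call agrees with the SBAC result
    #    (analogous to the reported 84% agreement).
    proficiency_accuracy = (df["iready_proficient"] == df["sbac_proficient"]).mean()

    # 3. Fall-to-spring gains for students meeting the usage threshold vs. others
    #    (analogous to the reported 38% larger gains).
    gains = df["iready_spring"] - df["iready_fall"]
    gain_users = gains[df["used_45min_per_week"]].mean()
    gain_others = gains[~df["used_45min_per_week"]].mean()
    relative_gain = (gain_users - gain_others) / gain_others

    return {
        "correlation": correlation,
        "proficiency_accuracy": proficiency_accuracy,
        "relative_gain": relative_gain,
    }
```

A real linking study would, of course, also account for grade level, test timing, and student characteristics; the sketch only shows what the headline numbers describe.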

Research fellows at Johns Hopkins University offered a more nuanced reading of the ERIA studies (Bjorklund-Young & Borkoski, 2016). They explain that the analysis done by ERIA addresses the “construct validity” of the i-Ready assessments, meaning “the extent to which the test measures what it purports to measure,” confirming that there appears to be a relationship between the i-Ready Diagnostic and the SBAC. But they believe more evidence is needed to establish content validity, or an “alignment between the test questions and the content it is intended to assess.” Furthermore, they found the platform to be accurate in predicting a student’s specific level on the exam, out of four possible levels, only about two-thirds of the time. They point out that no research exists on accuracy at the sub-item or standards level. The fellows share this as an example of where additional peer-reviewed research is needed.

Khan Academy

Khan Academy’s website, accessed November 2018, features impact reports across a variety of contexts. They include three examples about core academic content in U.S. K-12 classrooms, and others on math in post-secondary education, SAT preparation, and secondary school internationally.

Educators have submitted evidence and use cases for Khan Academy’s impact on student learning, particularly in math. For example, Oakland Unity charter school used the platform specifically to address student accountability in assessments through random question order and free response instead of multiple choice. The school’s ninth-grade algebra scores had already been on the rise, but staff believe that introducing Khan Academy also contributed to the subsequent jump in performance, from the 76th to the 94th percentile. While no subgroup analysis was reported and there was no control group for comparison, it is a documented case of Khan Academy being part of a school’s successful effort to address low math performance (McIntosh 2012). In contrast, a teacher at a charter school in North Carolina found no statistically significant differences in student scores on a common assessment, relative to a comparison group, after students used Khan Academy for four weeks (Kelly & Rutherford, 2017).

The research powerhouse SRI Education did not settle the score. Their implementation study looked across nine sites in California in 2011–12 and 2012–13 (Murphy et al., 2014). They found variety in the ways teachers and students engaged with Khan Academy. SRI did not examine summative assessment scores in this study but reported that, overall, Khan Academy played the greatest role in supporting classroom instruction by providing students with practice opportunities (82%) and allowing teachers to provide small-group instruction to some students while others used the program (67%). About one-fifth of teachers indicated that they used Khan Academy to introduce new concepts within a lesson. Khan Academy use represented less than 10% of scheduled math instructional time at the study sites; of that time, more than 85% was allocated to working on problem sets (Murphy et al., 2014). Students expressed positive views about using Khan Academy. SRI posited that students were motivated to complete problem sets by the platform’s badges and energy points, and that they appreciated how the immediate feedback and access to videos let them keep trying for success even when the content became challenging. These observations are generally consistent with our findings six years later. Students in our study reported using Khan Academy for math practice and said that flexible access to the platform’s resources helped them pursue that practice.

Use of Khan Academy globally is out of scope for our study and literature review, but two studies are worth mentioning for their parallels with our observations in Silicon Valley. First, students in Chile were observed to struggle with videos on Khan Academy because of a Spanish language barrier; watching the videos on YouTube required students to read subtitles while also watching the narrator draw out the math equations (Light et al., 2014). Our observations also included students who did not speak English as their first language struggling with the platform. At times, students who struggled with English did not have headphones, so they resorted to reading the subtitles; it was difficult for them to follow along, which made the math learning more challenging. Second, a study of ninth graders in Australia found that self-initiated use of platforms, including Khan Academy, was limited to reviewing concepts covered in class that the student found difficult (Muir 2014). In our study, students also reported little experience of learning new content on the platform; instead, both teachers and students saw Khan Academy as a tool for reinforcement and practice.

In short

General research and evaluation on the use of technology in schools have surfaced evidence for how EdTech can support best practices in student learning in mathematics. This includes multimedia and visual aids as well as timely feedback on problem sets and assessments. Product-specific research and evaluation that is publicly available, even on products with a large footprint in our nation’s schools, remains limited.

Interested parties can and should take a stronger role in researching the effectiveness and impact of the EdTech products being used in our schools. Product companies can run studies appropriate for the stage of development of the intervention (see the Learning Assembly Evaluation Taxonomy rubric for inspiration). VCs can invest in studies to justify a product’s readiness for scale. District staff can work with data providers (such as the vendor LearnPlatform or DIY platforms like the USDOE RCE Coach) to support evidence-based decisions and share those findings with the field. We look forward to reading your work!

(Limitations of this review)

Studies or reports reviewed were limited to those that are publicly available or accessible through the Stanford Library. This review focused primarily on technology use in United States middle school math and on two EdTech products widely used in those grade levels. It did not attempt to be an exhaustive discussion of all education technology literature.

Silicon Valley Education Foundation is the largest educational nonprofit in Silicon Valley. We are guided by the belief that all students are capable of pursuing higher education and boosting their future economic mobility, regardless of their background. SVEF has an established legacy of providing proven STEM programs and is profoundly committed to empowering students to graduate from high school college- and career-ready.

Works cited

Aldon, G., et al. (Eds.). (2017). Mathematics and Technology. Advances in Mathematics Education. doi:10.1007/978-3-319-51380-5_1

Barrow, L., Markman, L., & Rouse, C. (2008). Technology’s Edge: The Educational Benefits of Computer-Aided Instruction. doi:10.3386/w14240

Barshay, J. (2017, September 25). 3 Lessons Learned From Education Technology Research: Computers alone aren’t making kids smarter, but some educational software does work. The Hechinger Report / U.S. News. Retrieved from https://www.usnews.com/news/education-news/articles/2017-09-25/3-lessons-learned-from-education-technology-research

Bjorklund-Young, A., & Borkoski, C. (2016). Do Formative Assessments Influence Student Learning?: Research on i-Ready and MAP. Johns Hopkins School of Education: Institute for Education Policy.

Curriculum Associates. (2017). I-Ready Efficacy: Research on i-Ready Program Impact [Brochure]. Author. Retrieved November 26, 2018, from https://www.curriculumassociates.com/-/media/Curriculum-Associates/Files/i-Ready/Brochures/iready-essa-research-brochure.pdf?la=en&hash=499E658FD178A098543E7ECC056CE2E0806F7DD0

Educational Research Institute of America (ERIA). (2015). i-Ready and the Smarter Balanced Assessments: Findings from Independent Research Linking the i-Ready Diagnostic and Smarter Balanced Assessments. Bloomington, IN: Curriculum Associates, LLC.

Educational Research Institute of America (ERIA). (2016a). i-Ready and the Smarter Balanced Assessments: Findings from Independent Research Linking the i-Ready Diagnostic and Smarter Balanced Assessments. Bloomington, IN: Curriculum Associates, LLC.

Educational Research Institute of America (ERIA). (2016b). i-Ready Diagnostic and Smarter Balanced Assessment Consortium (SBAC) Linking Study Overview. Bloomington, IN: Curriculum Associates, LLC.

Kay, R. H. (2011). Exploring the impact of web-based learning tools in middle school mathematics and science classrooms. Journal of Computers in Mathematics and Science Teaching, 30(2), 141–162.

Kelly, D. P., & Rutherford, T. (2017). Khan Academy as Supplemental Instruction: A Controlled Study of a Computer-Based Mathematics Intervention. The International Review of Research in Open and Distributed Learning, 18(4). doi:10.19173/irrodl.v18i4.2984

Light, D. (2016). Increasing Student Engagement In Math: The Study Of Khan Academy Program In Chile. ICERI2016 Proceedings. doi:10.21125/iceri.2016.0209

McIntosh, P. (2012, September). Oakland Unity reaches 94th percentile in 9th grade algebra among California high schools to become most improved school in California over 2 years. Retrieved November 26, 2018.

Muir, T. (2014). Google, Mathletics and Khan Academy: Students’ self-initiated use of online mathematical resources. Mathematics Education Research Journal, 26(4), 833–852. doi:10.1007/s13394-014-0128-5

Murphy, R., Gallagher, L., Krumm, A., Mislevy, J., & Hafter, A. (2014). Research on the Use of Khan Academy in Schools. Menlo Park, CA: SRI Education.

Quan, V. (2017, September 5). Exploring the promise of education technology. Retrieved from https://www.povertyactionlab.org/blog/9-5-17/exploring-promise-education-technology

Wenglinsky, H. (1998). Does It Compute? The Relationship Between Educational Technology and Student Achievement in Mathematics. ETS Policy Information Center, 1–38. Retrieved December 14, 2018, from https://www.ets.org/Media/Research/pdf/PICTECHNOLOG.pdf.

WestEd. (2016, September). NewSchools Ignite Middle & High School Math Challenge [Literature Review]. Submitted to NewSchools Venture Fund.


This is the first of six posts we will be releasing on our research. Each week, we will release a new post, so please follow us and continue to check back. If you are interested in hearing more, email ihub@svefoundation.org for additional information.