What you need to know about AI in the classroom
In this Q&A, MIT researchers discuss the proper use of AI in schools and policy recommendations for effective learning.
By Marisa Demers
The launch of ChatGPT in 2022 ignited a firestorm regarding AI’s influence on education and learning. Yet school districts had begun using AI tools several years earlier, for tasks ranging from optimizing school bus routes to assessing students’ knowledge to determining who can take honors classes.
Despite the frequent use of AI tools, the federal government has offered little guidance on when or how to integrate AI in the classroom. In a recent paper, “How Policy Can Help Ensure the Proper Use of AI in K-12 Education,” graduate student Daniella DiPaola, research scientist Andrés Felipe Salazar-Gómez, and other MIT researchers suggest ways that federal policy can create AI guardrails that advance learning while eliminating harm and bias.
DiPaola and Salazar-Gómez’s work is part of a collection of briefs that examines AI in education. The effort is overseen by the MIT RAISE Initiative (Responsible AI for Social Empowerment and Education), a collaboration between the MIT Media Lab, MIT Schwarzman College of Computing, and MIT Open Learning.
MIT Open Learning spoke with DiPaola and Salazar-Gómez about their policy brief and what parents need to know about AI in the classroom.
Q: What are the major unresolved challenges surrounding AI and education?
Salazar-Gómez: One issue is data collection and who keeps this data. These AI tools use different modalities, including voice, text, and images. Who owns the data? Where does it go? What do we think about private companies owning data on children? There are a lot of gray areas that haven’t even been fully considered yet.
DiPaola: We also don’t know the long-term outcomes of AI use in the classroom nor the appropriate time to introduce these tools. For instance, there’s a lot of conversation about ChatGPT and other large language models offering writing assistance. At what point have students developed enough as writers that these tools enhance, not weaken, their skills? There needs to be a continuous effort to evaluate and monitor progress in schools.
To be clear, we do not have a doom-and-gloom approach to AI. We think it can be a highly beneficial tool. Our policy brief proposes a comprehensive ecosystem that promotes evidence-based research with a special focus on eliminating bias and harm, AI literacy, and teacher training.
Q: In what ways could AI create a new digital divide?
DiPaola: It depends on the context, but poorer communities may not have the resources to gain access to AI tools that require yearly subscriptions per student. Some people believe that we can help under-resourced and understaffed schools by providing students with AI tutors. That is a harmful use of the technology. AI systems should work with teachers, not replace them. Also, algorithms can be biased, and if we leave it up to these systems to decide who can take an honors class, that can be detrimental to students who are already at a disadvantage.
Q: How can federal policy help?
DiPaola: We’re not advocating for the federal government to make all the guidelines for each state or school district, but we do believe there are efficiencies and advantages in having the federal government take the lead. The federal government is more likely to have the wherewithal, both financial and human, to fund research, set technical standards, and assist in procurement.
Salazar-Gómez: Several states have created their own guidelines on how to use AI in the classroom, and that’s great. The issue is that these guidelines are not aligned from state to state. Some states say AI literacy should only be taught in computer science classes, while others require teaching AI in all classes. Those are completely different approaches. For the benefit of children, we need to create a federal baseline that’s informed by evidence-based research.
Q: With a new school year underway, and no federal policy in place, what should families do?
Salazar-Gómez: Sometimes parents without a technical background think they cannot ask questions, but you don’t need any specialized knowledge to ask how data is collected, how these algorithms are being evaluated for accuracy and safety, and whether there are issues related to fairness and bias. Society and families should be empowered to ask these questions and to be included in the process. One might argue that if teachers and parents aren’t involved in refining the AI technology used in the classroom, then it’s probably not a very good tool.
Authors of “How Policy Can Help Ensure the Proper Use of AI in K-12 Education” include: Daniella DiPaola; Andrés F. Salazar-Gómez; Hal Abelson, professor of computer science and engineering in the MIT Department of Electrical Engineering and Computer Science; Eric Klopfer, director of the Scheller Teacher Education Program and The Education Arcade at MIT; David Goldston, director of the MIT Washington Office; and Cynthia Breazeal, professor of media arts and sciences and dean for digital learning at MIT Open Learning.