Integrating EEG Technology into Educational Systems: Challenges and Future Possibilities

NeuroTechX Content Lab
12 min read · Jan 2, 2024

It seems intuitive that by understanding the brain, the organ that acquires the knowledge teachers impart, teachers could improve how students learn. In practice, that idea offers little help when deciding how best to design classroom instruction to maximize learning. In his iconic 1997 essay, Education and the Brain: A Bridge Too Far, John Bruer challenged whether neuroscience could ever really improve classroom instruction. This ‘bridge’ — using knowledge from neuroscience to shape lesson planning — has since been called the prescriptive bridge, and differences in scope between the fields of neuroscience and education make it practically impossible to cross. For example, you wouldn’t ask a physicist to develop a method for removing a cavity in a tooth, and likewise a neuroscientist wouldn’t be asked to help a teacher add new material to a math lesson. In the case of dentistry, however, physics changed the way cavities are diagnosed by enabling the X-ray machine. It is via this other, diagnostic, bridge that technology-enhanced learning systems, sometimes called ‘innovative technology enhanced learning systems’, or ITELS, promise to usher neuroscientists into the classroom, this time by allowing brainwaves, recorded as electroencephalography (EEG) data, to be added to the mix of data that instructors can use to assess student learning.

Similar to how Jeff Goldblum’s character in Jurassic Park criticized others’ endeavor to clone dinosaurs, adding EEG biometrics to ITELS could easily become a “we only asked whether we could, never whether we should”-type situation, with irreversible detrimental effects on society. With that being said, the goals of using new technology to enhance learning are ever present in the minds of influential people: educators, administrators, governments, parents, scientists, entrepreneurs, and students themselves. And as commercial mobile EEG technology becomes increasingly user-friendly, convenient, and affordable, we will inevitably start to see it in more classrooms. This essay explores the possible beneficial integration of EEG into ITELS, discussing potential applications, cautions, mechanisms, challenges, and appropriate use cases.

The Need

When Bruer doubted we could design a math curriculum based on a study of event-related EEG potentials (ERPs) in response to words, he might have been right. What we can attempt, however, is to integrate EEG-enhanced ITELS into teachers’ existing methods, making the benefits of student-centered education easier for teachers to achieve — whether in person or at a distance. Ideally, provided students use the systems correctly, ITELS would give time back to the teacher while providing a mechanism for teachers and assistants to pinpoint gaps in students’ individual and collective skill sets, and to respond by offering supplemental materials and altering lesson plans. Think Spotify Wrapped, but for each student’s content or skill learning. Teachers already monitor student progress — so could neurotechnology really help?

A selling point of EEG and other neurophysiological measurements, like fNIRS and eye-tracking, is that, provided they are physically and ‘socially’ comfortable for the user, they can offer insight into perception and experience implicitly — with minimal interruption to normal behavior. New technologies, like wireless ear-bud- and headphone-based EEG electrodes, may be plausible non-invasive options for students to wear both in the classroom and during distance learning. While teachers already take advantage of a range of implicit cues to understand student progress and engagement (observing body language and eye contact, noting mistakes, calling on students who appear distracted, and so on), there are limits to what they can detect. For example, instructors often have to turn their backs to students during instruction. Technology, particularly in the fields of computer vision and natural language processing, has made it possible to overcome those blind spots — for example, by automatically labeling pictures and handwriting, or by parsing and transcribing recorded speech and identifying individual speakers, all while detecting where students are looking.

Sometimes it is difficult for instructors to tell whether a student is paying attention, even as the student stares intently at the material or maintains eye contact during a lecture. This is one place where EEG has recently proven useful. Metrics of EEG synchrony between teachers and students can be used to identify the times of day when students are most attentive, as well as the particular lessons that hold their attention. To estimate a student’s experience with specific content, these methods could, in theory, be tweaked to examine whether students attended to particular paragraphs and/or sentences.
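One common way to quantify the kind of teacher-student synchrony described above is windowed inter-subject correlation of EEG signals. The sketch below is illustrative only: it assumes two pre-cleaned single-channel recordings and uses plain Pearson correlation per window, which is a simplification of published synchrony metrics.

```python
import numpy as np

def synchrony(teacher: np.ndarray, student: np.ndarray,
              fs: int, win_s: float = 5.0) -> np.ndarray:
    """Windowed Pearson correlation between two EEG channels,
    a crude proxy for teacher-student 'synchrony' over time."""
    win = int(fs * win_s)
    n = min(teacher.size, student.size) // win
    scores = []
    for i in range(n):
        a = teacher[i * win:(i + 1) * win]
        b = student[i * win:(i + 1) * win]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Synthetic demo: both signals share a 10 Hz component plus independent noise.
fs = 256
t = np.arange(0, 60, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(0)
teacher_eeg = shared + 0.5 * rng.standard_normal(t.size)
student_eeg = shared + 0.5 * rng.standard_normal(t.size)
scores = synchrony(teacher_eeg, student_eeg, fs)
```

Windows with high scores would mark moments when the two brains responded to the lesson in a similar way; a real system would also need artifact rejection and a baseline for comparison.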

Overt feedback from tests, quizzes, and worksheets often has a probabilistic element, sometimes covering a topic with a single representative question, or allowing guessing to succeed on multiple-choice answers or word banks. A more comprehensive record of the student experience could pick up on elements that tests and worksheets miss. Furthermore, using test data to track student knowledge in a particular subject requires massive teacher effort. Even without EEG, large language models (like those behind ChatGPT or Google’s Bard) could be leveraged to cluster test questions into themes and topics, which could in turn be used to synthesize detailed individual student reports. If EEG could be integrated into ITELS, it might one day be possible to eliminate tests altogether. What’s holding us back? And what could go wrong?
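To make the question-clustering idea concrete without invoking any particular LLM API, here is a deliberately tiny sketch that groups test questions by vocabulary overlap using bag-of-words cosine similarity. An LLM-based pipeline would replace the word counts with learned embeddings, but the aggregation logic would look similar. The threshold and example questions are made up for illustration.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_questions(questions, threshold=0.25):
    """Greedy single-pass clustering: join the first cluster whose
    running centroid is similar enough, else start a new cluster."""
    bags = [Counter(q.lower().split()) for q in questions]
    clusters = []  # list of (centroid_bag, [question indices])
    for i, bag in enumerate(bags):
        for centroid, members in clusters:
            if cosine(bag, centroid) >= threshold:
                members.append(i)
                centroid.update(bag)
                break
        else:
            clusters.append((Counter(bag), [i]))
    return [members for _, members in clusters]

questions = [
    "What is the area of a triangle with base 4 and height 3?",
    "Find the area of a rectangle with sides 2 and 5.",
    "Who wrote the play Hamlet?",
]
groups = cluster_questions(questions)  # geometry questions group together
```

Grouped indices like these could then be rolled up into the per-topic student reports described above.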

Challenges and Future Goals

Ethical Hurdles

Large-scale monitoring brings up ethical considerations. For EEG to be used in instruction, stakeholders need to ensure that the use would not devolve into an episode of the oft-dystopian television series Black Mirror. Imagine a scenario in which a crime is committed and police request ITELS data during the investigation — could ITELS data be used to paint the accused as the class daydreamer? Protecting the identity of data subjects while also allowing teachers to track individual progress seems paradoxical, but may be vital for such a tool to be accepted. Pseudonymisation methods, such as assigning each student a random code, provide protection against misuse of personal data while still allowing teachers to know whose data is whose. Without ethical safeguards, schools may expose sensitive data to the public or to governments — willingly or unwillingly. The complexity of ITELS data, however, could be a saving grace for students concerned about their privacy. Words and image tags might each be encoded in ways that only a temporary key can decode, and the synthesized output could be displayed in a way that reveals little if it were leaked. This remains an eternal cat-and-mouse game.
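A minimal sketch of the pseudonymisation idea mentioned above, assuming the simplest possible design: a lookup table, held only by the school, that maps names to random codes. Downstream ITELS dashboards would see only the codes; the class names and code format here are invented for illustration.

```python
import secrets

class PseudonymRegistry:
    """Maps student names to random codes. The registry itself stays
    with the school; ITELS data is stored and shared under the codes."""

    def __init__(self):
        self._name_to_code = {}
        self._code_to_name = {}

    def code_for(self, name: str) -> str:
        """Return a stable random pseudonym for a student."""
        if name not in self._name_to_code:
            code = "S-" + secrets.token_hex(4)  # e.g. 'S-9f86d081'
            self._name_to_code[name] = code
            self._code_to_name[code] = name
        return self._name_to_code[name]

    def resolve(self, code: str) -> str:
        """Only holders of the registry (e.g. the teacher) can re-identify."""
        return self._code_to_name[code]

registry = PseudonymRegistry()
code = registry.code_for("Connie")  # Connie's data is tagged only with this code
```

Note that pseudonymisation is weaker than anonymisation: anyone who obtains the registry can re-identify students, which is exactly the cat-and-mouse dynamic the paragraph describes.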

Second, buy-in from teachers is vital. Many regions within the United States are facing teacher shortages, and the last thing needed is to make life harder for those who still teach. In addition to the work required to learn new technologies, the modern teacher already faces pressure from parents, governments, students, administrators, and novel technologies such as artificial intelligence (AI). A seemingly innocuous image shown in the classroom can stir controversy or even get a teacher fired; life is hard enough for teachers and students alike without their every word, action, and even neural activity being recorded. Some experimental methods even require measuring both teacher and student brainwaves in tandem, raising privacy concerns for teachers as well.

The possibility that teachers or students might be reprimanded or rewarded based on ITELS data must be avoided, lest teachers and students begin intentionally hacking the system, interfering with it, or suffering under it. For example, using ITELS to root out cheating, as some researchers have envisioned, would be like forcing students to take a test hooked up to a lie detector, and might increase test anxiety. Similarly, if students were graded according to their brain activity during class, they might feel constant anxiety — as if the secret loci of their attention were under constant scrutiny and a momentary daydream could become a failing grade. What occurs offline, during homework and studying, can be as important to becoming a successful learner as what occurs online, in the classroom. If students are only rewarded for what they achieve in class, their style of learning may change in unexpected ways.

Another ethical concern is whether such a system can live up to the hype. Can the effectiveness of ITELS overcome monetary costs and privacy fears? Trust will be required before teachers begin adopting these systems. If they don’t work at first, that trust may take decades to build, even as next-generation systems improve. As explored in the next section, this type of ITELS remains largely untested. Solving these problems will be key to building trust among teachers and the administrators who may one day purchase such systems.

More hurdles

The ability to predict learning from brain waves and other data is a holy grail for human-computer interaction (HCI). By sorting the results of a learner’s interactions into categories or domains, such metrics could provide implicit markers of expertise in those domains. Unfortunately, the data models that could one day do this are currently insufficient: to work, they require a template for how the brain of someone who knows something would respond upon hearing or seeing that something mentioned. Current neurocognitive models break apart when tested against difficult conditions in the wild. For perspective, ERPs — brain responses evoked by stimuli such as words — become harder to predict as the context preceding the stimulus becomes more complex, such as a word stimulus preceded by an in-depth lecture. This is because different contexts can change the brain’s response to the same stimulus — in some cases even making the responses to different stimuli look the same.

For example, if you know of the Acropolis (the famous ancient Greek citadel in Athens), you would find it odd to see the word Acropolis at the end of the sentence “Connie finally went to Egypt to see the Acropolis,” but not in the sentence “Connie finally went to Greece to see the Acropolis.” The strength of the associations between Greece, Egypt, and the Acropolis can be estimated by observing the brain’s response when various words are presented together in context. Conversely, if Acropolis were to appear earlier in the sentence — “Connie finally saw the Acropolis in Egypt/Greece” — the brain response evoked by Acropolis would be the same in each case, but those produced by Greece and Egypt would differ. In theory, such a process could indicate the degree to which a listener is familiar with a word’s meaning, or with a series of words within the same domain. If, according to the neural data, an incorrect suggestion that the Acropolis is in Egypt flies under the radar, or if Greece and Acropolis show no association, it could be concluded that the person does not know what the Acropolis is. If the Parthenon falls flat too, the person might lack knowledge of Greek antiquity in general. Such a process of aggregation could be used to gauge domain learning.
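The aggregation step in the Acropolis example can be sketched in a few lines. Everything below is hypothetical: it assumes per-pairing “surprise” scores have already been extracted from ERP amplitudes (large = the brain registered the pairing as odd, near zero = treated as expected), and all numbers are invented. The idea is that a familiar domain shows a large spread between expected and anomalous pairings, while an unfamiliar domain looks flat.

```python
import statistics

# Hypothetical per-pairing 'surprise' scores derived from ERP amplitudes.
# All values are made up for illustration.
observations = {
    "greek_antiquity": {
        ("Greece", "Acropolis"): 0.2,  # expected pairing: low surprise
        ("Egypt", "Acropolis"): 2.1,   # anomalous pairing: high surprise
        ("Greece", "Parthenon"): 0.3,
    },
    "roman_antiquity": {
        ("Rome", "Colosseum"): 1.9,    # even the expected pairing looks odd,
        ("Paris", "Colosseum"): 1.8,   # and the anomaly goes unnoticed: flat signal
    },
}

def domain_familiarity(pairs: dict) -> float:
    """Crude familiarity index: the spread of surprise scores.
    A knowledgeable listener separates expected from anomalous pairings."""
    scores = list(pairs.values())
    return statistics.pstdev(scores) if len(scores) > 1 else 0.0

report = {domain: round(domain_familiarity(p), 2)
          for domain, p in observations.items()}
```

A real system would need many repetitions per pairing and careful baselining; the point here is only how pairwise responses could be rolled up into a domain-level estimate.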

Other ERP techniques, such as response similarity, can show which parts of a scene people find interesting. And alpha waves (EEG oscillations between 8 and 12 Hz) and theta waves (between 3 and 7 Hz) can be used to estimate when people are daydreaming or concentrating, potentially catching information that a student might have missed (though it should be noted that this technique is sometimes harder to untangle).
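The alpha/theta idea reduces to comparing power in two frequency bands. Here is a minimal sketch using a plain FFT periodogram on a synthetic signal; the mapping from a band-power ratio to “daydreaming” versus “concentrating” is far more contested than this toy suggests, and real pipelines typically use Welch’s method on cleaned, epoched data.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Total spectral power in the [lo, hi] Hz band (FFT periodogram)."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum())

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic signal with a strong 10 Hz (alpha-band) component plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # 8-12 Hz
theta = band_power(eeg, fs, 3, 7)    # 3-7 Hz
ratio = alpha / theta                # alpha-dominant here, by construction
```

An ITELS would track such ratios over time per student, flagging sustained shifts rather than reacting to any single window.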

Modeling the brain’s response to oddness or expectation to measure a person’s familiarity with the meaning of words is an imperfect game fraught with interacting effects. To tell which changes to an ERP or EEG frequency indicate familiarity or engagement with a stimulus, and which are due to the oddness of a stimulus in a particular context, models need to incorporate other factors. These include contextual factors within a lecture, a drawing seen in a book or classroom, or speech heard from a neighboring student (words, images, word lengths, grammar, sentence lengths, narrative structure, cohesiveness of the writing, and genre), as well as individual factors like the listener’s prior knowledge and abilities — not to mention cognitive-state factors like emotion and attention in that particular moment. Incorporating such a multitude of factors can overload a predictive model. This happens even with models built on data from participants in lab-based studies — typically highly simplified environments — let alone in the buzzing tumult of the average classroom, where additional variables include the social relationships between students and teachers, and even how well each student can see or hear each stimulus!

As you can imagine, recording and tabulating all of these factors for each student would take enormous computational power. However, this should be no deterrent: increases in processing power may already allow such a system to process the data and provide feedback to the teacher within seconds. The steps necessary to achieve this vision are discussed below.

The Framework

How can the seemingly impossible challenge to illuminate the faint ‘learning signal’ be overcome, even as it is obscured by chaos? The framework can be described as an effort to create ways to:

  • Monitor contextual awareness,
  • Search for implicit markers of expertise in knowledge areas,
  • Process these markers to inform individualized models of student knowledge which can be queried in order to synthesize feedback to instructors and students.
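The third step above, individualized models of student knowledge that can be queried for feedback, can be illustrated with a toy data structure. This is a sketch under invented assumptions: expertise markers are taken to be scores in [0, 1] already extracted upstream, and the student code, domain names, and scores are all made up.

```python
from collections import defaultdict

class StudentModel:
    """Toy individualized knowledge model: accumulates implicit
    'expertise markers' per domain and can be queried for feedback."""

    def __init__(self, student_code: str):
        self.student_code = student_code          # pseudonymised ID
        self._markers = defaultdict(list)         # domain -> list of scores

    def add_marker(self, domain: str, score: float) -> None:
        """Record one implicit marker (0 = no sign of expertise, 1 = strong)."""
        self._markers[domain].append(score)

    def weakest_domains(self, n: int = 1) -> list:
        """Domains with the lowest average marker: candidates for
        supplemental material on a teacher dashboard."""
        averages = {d: sum(s) / len(s) for d, s in self._markers.items()}
        return sorted(averages, key=averages.get)[:n]

model = StudentModel("S-7f3a")
model.add_marker("fractions", 0.9)
model.add_marker("fractions", 0.8)
model.add_marker("greek_antiquity", 0.2)
weak = model.weakest_domains()
```

A deployed system would replace the running averages with a proper learner model and uncertainty estimates, but the query pattern, "which domains need attention for this student?", would be the same.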

While some of the technical and theoretical barriers to ITELS can be worked on in a laboratory, classroom applications will necessitate experimentation in naturalistic, real-world settings — sometimes referred to as ‘in the wild’. This is no small feat, but there is already some push for it in education and in other settings, by both scientists and entrepreneurs. Provided that ethical considerations can be accounted for, it’s possible that researchers will soon be able to design studies in classrooms by forging partnerships with schools where real-world data is collected, perhaps in exchange for teaching the students about neuroscience or providing some other benefit. On the flipside, aspects of the classroom can be recreated in a more controlled way in the laboratory. For example, participants in an experiment can use a system one at a time to learn in a confined space or in virtual reality (VR). Other intermediate applications might be single-user consumer systems designed to aid studying for a big test, like an entrance exam.

This yin and yang of real-world and controlled, reductionist experiments will hopefully propel the advancement of ITELS. Intermediate advances might include ITELS for foundational skills such as reading, for single content lessons in a subject area, or for entire curricula that slowly expand to new contexts. Ideally these systems would integrate into any lesson plan by monitoring the classroom and materials and adapting accordingly. Current systems that monitor the low-hanging fruit of passive cognitive states such as attention and emotion lack the sophistication to make inferences about learning or comprehension. While they may indicate expert-style focus during some high-stakes activities, they intuitively invite punishing students for daydreaming, as some infamous classroom deployments have done in the recent past. Instead, we should refine and expand our means of creating contextual awareness: our ability to monitor the experienced environment.

To make systems with high enough contextual awareness to measure student responses to particular events, such as a sentence, a picture, a number, or a word, we need to triangulate audio, video, and biosensing data through processes such as automatic video parsing, speech parsing and transcription, and forced alignment. Although computer scientists trained in psycholinguistics may soon create methods for monitoring and synchronizing real-world context in recordings, effectively utilizing this data requires computational models of learning, along with refined methods for analyzing and querying learner experiences. As these partnerships grow, research in cognition and learning, mathematical skill, first and second language literacy and vocabulary acquisition, perceptual expertise, discourse comprehension, and other fields that utilize biosignals can all be applied to discover implicit measurements of incremental expertise, and to test hypotheses in more real-world contexts.
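Once a forced aligner has produced event timestamps (say, word onsets from a lecture recording), tying them to the biosignal is the standard epoching step from ERP analysis: cut a fixed window around each event at the EEG sampling rate. A minimal sketch, with placeholder data and invented onset times:

```python
import numpy as np

def extract_epochs(eeg: np.ndarray, fs: int, onsets_s,
                   pre_s: float = 0.1, post_s: float = 0.8) -> np.ndarray:
    """Cut fixed-length EEG epochs around event onsets (in seconds),
    e.g. word onsets from a forced-alignment transcript."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets_s:
        idx = int(onset * fs)                     # seconds -> sample index
        if idx - pre >= 0 and idx + post <= eeg.size:
            epochs.append(eeg[idx - pre: idx + post])
    return np.stack(epochs)

fs = 256
eeg = np.zeros(fs * 10)            # 10 s of one flat channel (placeholder)
word_onsets = [1.0, 2.5, 4.2]      # seconds, as a forced aligner might report
epochs = extract_epochs(eeg, fs, word_onsets)
```

The hard part in the classroom is not this slicing but getting the onsets right: clock drift between microphone and EEG amplifier of even tens of milliseconds can smear an ERP beyond recognition.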

Of utmost importance: for integration of EEG with ITELS to be maximally useful, a system needs both to outperform a teacher’s ability to assess individual student progress and to be flexible enough to accommodate teachers’ lesson plans and philosophies. It also needs to follow the nuance of the teacher-student interactions that make classroom learning so human and worthwhile.

The end goal of this R&D framework is to use brain-enhanced ITELS to provide teachers with dashboards highlighting the areas of knowledge that each student does or does not understand, so that individualized assistance can be offered. Gaps in our methods for estimating knowledge from brain responses ‘in the wild’ can be addressed with classroom experiments. While this might not lead to a dramatic change in curricula, it could be a compelling means for neurotechnology to create meaningful value for teachers and students alike.

Written by Ben Rickles, edited by Chiara Notaro and Lars Olsen, with original design by Han Cat Nguyen.

Ben Rickles is a recent PhD in Neuroscience and Cognitive Science from the University of Maryland, where he developed his expertise in EEG, ERP, and eye-tracking data acquisition, processing, and analysis. He is passionate about using technology to advance human health.

Chiara Notaro is a PhD-level researcher and Scientific Associate at the Graduate School of Systemic Neurosciences — LMU Munich. She works with data from a variety of imaging modalities for pre-surgical imaging in epilepsy patients.

Lars Olsen is a regulatory medical writer. He works in the pharmaceutical industry writing submission documents, and has additional experience with medical devices. He has a biology background and is interested in AI, AGI/ASI, and BCI/HCI.

Han Cat Nguyen is a neurotech enthusiast and a PhD student at McGill University. Her passion is brain-computer interfaces, especially brain-controlled robots.


NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.