When are machines better teachers?

Discussions with cognitive psychologists and neuroscientists, together with a review of emerging research and technology, lead us to the following ideas about machine teaching agents.

Bethanie Maples
6 min read · Jun 21, 2018
OG Tutor: “Alexander Visits the Sage Plato in his Mountain Cave”. Folio from a Khamsa (Quintet) of Amir Khusrau Dihlavi

Almost 50% of learners experience math anxiety that keeps them from advancing in mathematics or working with data at a deep level in a professional capacity as adults. Tutoring has proven to be the best intervention for this issue and general learning remediation as a whole, but factors like time and money make scaling personalized human tutoring to meet the needs of the next generation impossible. We have to look at ways machine agents can fill the gap.

I discussed this with some experts: Dr. Roy Pea and Dr. Bruce McCandliss from Stanford on cognitive psychology and educational neuroscience, respectively; Dr. Steve Fiore at NASA on distributed cognition; and Dr. Kelly Gola from U.C. Berkeley on the neuroscience of empathy. Here is a summary of the emerging science:

Tutoring is the Gold Standard: There is broad evidence that tutoring is the gold standard for learning remediation. Tutoring enhances brain connectivity, long-term working memory, and top-down modulation of cognition in children, regardless of psychological, biological, or motivational differences.

Why, Specifically? Motivation and feedback are the most rigorously proven reasons tutoring is effective. Effective tutors intervene to resolve gaps in knowledge, especially when a student’s tone or body language communicates uncertainty. This requires empathy from the tutor.

On the Neuroscience of Empathy:

Findings from the field of social cognition and affective neuroscience (SCAN) suggest that learning, memory, attention, and decision-making are highly interdependent with social and emotional processes. A major reason why human agents have been more effective than machine agents in tutoring may be that humans are able to demonstrate empathy toward learners — i.e. facial cues induce adaptation in tutoring activity in terms of scaffolding, feedback, and biomimicry.

But, Machines Have Empathy: There is a link between brain activation and the degree of ‘humanness’ of an agent. Reliable detection of human emotions is possible, though algorithmically intensive. Practitioners in robotics, brain-computer interfaces (BCI), and human-computer interaction (HCI) are working on empathy technology; see, for example, Soul Machines and SimSensei from USC’s Institute for Creative Technologies.
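The intervention trigger behind such systems can be sketched in miniature. The snippet below is a toy, rule-based stand-in for the richer facial and vocal analysis systems like SimSensei perform: it flags learner uncertainty from hesitation language and response latency. All markers, weights, and thresholds are illustrative assumptions, not any real system’s parameters.

```python
# Toy sketch: flag learner uncertainty from simple behavioral signals,
# standing in for the facial/vocal cue analysis real empathy tech uses.
# Markers, weights, and thresholds below are illustrative assumptions.

HESITATION_MARKERS = {"um", "uh", "maybe", "i think", "not sure"}

def uncertainty_score(answer: str, response_seconds: float) -> float:
    """Combine hesitation language and response latency into a 0-1 score."""
    text = answer.lower()
    marker_hits = sum(1 for m in HESITATION_MARKERS if m in text)
    # Cap each component so neither dominates the combined score.
    language = min(marker_hits / 2, 1.0)
    latency = min(response_seconds / 30.0, 1.0)  # 30s+ treated as maximal
    return 0.6 * language + 0.4 * latency

def should_intervene(answer: str, response_seconds: float,
                     threshold: float = 0.5) -> bool:
    """An empathetic tutor intervenes when uncertainty runs high."""
    return uncertainty_score(answer, response_seconds) >= threshold
```

A hesitant, slow answer like “um, maybe it’s 12? not sure” after 25 seconds would trigger intervention, while a quick, confident answer would not.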

Where Human Agents Excel: Human tutors have excelled in a few specific areas in the past: finishing, scaffolding, and feedback. Human tutors almost always get learners to finish a problem correctly; machines have been less successful. Scaffolding, or ‘guided prompting’, is also common in human tutoring. A critical element of scaffolding is gradually transferring the burden of performing a skill from the teacher to the student, and it is highly effective. This nuanced interaction requires sophisticated but not impossible programming for machine agents. The lack of scaffolding integration may explain why human tutoring has outperformed computer tutoring in the recent past.
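The fading dynamic at the heart of scaffolding can be sketched as a tiny policy: support starts heavy and is withdrawn as the student succeeds, so the burden of the skill shifts to the learner. The levels and update rule below are illustrative assumptions, not any tutoring system’s actual design.

```python
# Toy sketch of scaffolding-as-fading: start with maximum support and
# withdraw it as the student succeeds, transferring the burden of the
# skill to the learner. Levels and rules are illustrative assumptions.

HINT_LEVELS = [
    "worked example",          # tutor carries most of the burden
    "step-by-step prompts",
    "targeted hint",
    "no support",              # student performs the skill alone
]

class Scaffold:
    def __init__(self):
        self.level = 0  # begin with the heaviest support

    def next_support(self, last_answer_correct: bool) -> str:
        # Fade support after a success; restore one level after a failure.
        if last_answer_correct and self.level < len(HINT_LEVELS) - 1:
            self.level += 1
        elif not last_answer_correct and self.level > 0:
            self.level -= 1
        return HINT_LEVELS[self.level]
```

Two correct answers in a row fade the tutor from worked examples down to targeted hints; a stumble restores one level of support rather than starting over.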

Where Human Agents Fail: Human tutors miss many opportunities to help students learn. When we discuss tutoring, we’re often thinking about the top 10–20% of tutors, not the large majority of novice tutors who are not examples of excellence. It takes years for humans to improve their performance, and critical feedback on that performance is not systematically provided in education today.

Also, humans have a biological and psychological response to other humans. In tutoring, this means learners can be scared of tutors, can perceive the tutor as biased or as having an agenda, or can even doubt the tutor’s truthfulness.

Advances in machine learning have recently enabled a rapid evolution cycle, not only for the machine agent but also for the practice of tutoring everywhere. In fact, some believe that fixing the pedagogical mistakes of existing tutoring systems (human and machine) and re-engineering tutor–student interactions ‘may produce a 2 sigma effect size’ (VanLehn, Jones & Chi, 1992).

Where Machine Agents Excel:

  • Tutor domain expertise, not experience, matters most. Deep analysis of tutoring outcomes finds no relationship between a tutor’s experience and effectiveness, as long as the tutor is a domain expert.
  • No Turing test needed. Machine tutors don’t need to be conversational. Studies find that learner control of dialogue during tutoring is quite low. This largely removes the requirement for a fully conversational agent that passes a low-level Turing test, a notional requirement that many strive for but which studies indicate does not dictate better learning outcomes.
  • A machine agent that demonstrates empathy, but is ‘without a human agenda’, may be perceived as more truthful, less biased, and less scary by students. From a neuroscience perspective, machine agents might be the solution to scaling math teaching for anxious math learners who experience a heightened amygdala response.

The Future is Empathy, Neuroscience, and Learning:

Education research has focused on cognition and behavior over emotion, even though compelling evidence of neuropsychological connections exists. Prior research on the genetic and psychological factors affecting tutoring outcomes has largely not translated into learning technology. By incorporating SCAN into tutoring, we may be able not only to remediate learning deficiencies but also to sustain a long-term growth mindset and learning curiosity (which translates into success on standardized tests).

As empathy technology, social cognition and affective neuroscience converge, the question of learner response to agent embodiment is thrown into the spotlight.

Breakthroughs in technology and data processing will soon allow for human-level empathy display. Cues previously observable only by the highest-functioning tutors may be detected easily by digital agents, allowing machine agents to affect cognition through adaptive feedback and even emotion display.

Empathy capabilities open up many possibilities for machine agents in previously ‘human only’ learning scenarios. Take USC’s SimSensei virtual reality (VR) agent, Ellie: with funding from the Defense Advanced Research Projects Agency (DARPA), USC researchers found that using a VR agent increased sharing of personal information by patients with depression or post-traumatic stress disorder (PTSD). This came after a 2009 national opinion survey by General Electric found that 28% of these patients lied to or omitted facts about themselves with health care professionals.

We hypothesize that the more sophisticated the embodiment, the higher the learner’s arousal, and the greater the neural change and learning. Fuller embodiment will aid observational learning wherever mirroring an action is needed; for example, observational learning is critical in music, building, military operations, and mechanical assembly, where psychomotor tasks are at play. Humans were previously more effective at making learners focus because of the social and biological motivational pressure a human agent exerts. But with empathy display, machine teaching may be brought on par with human teaching.

This all leads us to the following two opinions, worth considering when replacing humans with machine teachers:

  • The degree of agent embodiment affects learner response to tutoring. Embodiment, for the purposes of this discussion, means the degree to which a teaching agent resembles a human: a humanoid robot with a name is most human; a program with no personality has the least embodiment. Specifically, learner motivation changes with agent embodiment, and the differences are tied to personality traits like introversion/extroversion and to neurological characteristics.
  • Learners with math anxiety may perform better with less-human agents… because they will not associate personality or emotion with feedback. The abnormal amygdala responses associated with math anxiety may be decreased for this group.

Summary:

While many pieces of human-machine interface and empathy technology are being developed for psychiatric or commercial retail applications, few of the technologies found at the time this proposal was written focus on education. The ability of software and robots to detect, mirror, and maybe one day feel empathy for learners opens up a new world for educational experiences, tutoring, and learning distribution.

Thanks: Many thanks to the experts who have contributed to this article: Dr. Roy Pea, Dr. Kelly Gola, Dr. Steven Fiore, Dr. Bruce McCandliss, Petr Johanes, Daria Clark, and Elise Gonzalez.

Keywords: Dialog-Based Intelligent Tutoring Systems, Human-to-Human Tutoring, Machine Tutoring, Educational Neuroscience, Dyslexia, Dyscalculia, Math Anxiety, Machine Agent, Computer Aided Instruction (CAI), Computer-Based Instruction, Computer Aided Learning, Computer-Based Training, Cognitive Tutor, Social Cognition and Affective Neuroscience, Tutoring Feedback, Learner Motivation.

I wrote a grant proposal on this topic — happy to supply the doc with complete references to those interested. Just message me directly. The VanLehn, Jones & Chi quote is from their 1992 paper titled “A Model of the Self-Explanation Effect.”

