AI and the Future of Teaching and Learning: New Interactions, New Choices

[Image: sticky notes with technology symbols, including a thought bubble with question marks, a magnifying glass, two gears inside a lightbulb, the word “AI”, and a computer.]

Key Points:

  • Artificial intelligence will enable students and teachers to interact with technology in human-like ways.
  • Individuals will find it difficult to make choices that balance benefits and risks.
  • Creating policies can strengthen how people make decisions about artificial intelligence in education.

The first blog post discussed how artificial intelligence (AI) will lead to educational technology products with more independent agency. This post adds another dimension: AI will allow students and teachers to interact with computers in more natural ways. Individuals will find it difficult to make choices that balance the attractiveness of natural interaction with the potential risks.

Changing interactions with technology

In classic educational technology platforms, the ways in which teachers and students interact with computers are limited. Teachers and students may choose items from a menu or answer a multiple-choice question. They may type short answers. They may drag objects on the screen or use touch gestures. The computer provides output to students and teachers through text, graphics, and multimedia. Although these forms of input and output are versatile, no one would mistake this style of interaction for how two people interact with each other; it is specific to human-computer interaction. With AI, interactions with computers are likely to become more like human-to-human interactions (see Figure 1). A teacher may speak to an AI assistant, and it may speak back. A student may make a drawing, and the computer may highlight a portion of the drawing. A teacher or student may start to write something, and the computer may finish their sentence, as when today’s email programs complete our thoughts faster than we can type them.

Additionally, the possibilities for automated actions that AI tools can execute are expanding. Current personalization tools may automatically adjust the sequence, pace, hints, or trajectory through learning experiences. Future actions might include an automated agent that helps a student with homework, or a teaching assistant that reduces a teacher’s workload by recommending lesson plans that fit the teacher’s needs and resemble lesson plans the teacher previously liked. Further, an AI agent may appear as an additional “partner” in a small group of students working together on a collaborative assignment. An AI agent may also help teachers with complex classroom routines, for example, orchestrating the movement of students from a full-class discussion into small groups and making sure each group has the materials needed to start its work.

These new forms of interaction will likely be attractive, but they also bring new risks, which require our attention.

Risk 1: People overestimate AI systems

Early in the history of AI, Massachusetts Institute of Technology professor Joseph Weizenbaum observed that when a computing device notices associations and automates actions, the people who interact with the device describe it as “human-like” or “intelligent” (hence, “artificial intelligence”). For example, the 1960s program called “ELIZA” emulated a psychotherapist by detecting simple patterns in what people said and producing a scripted next phrase.1 A person might say, “I have difficulties with my daughter,” and the computer might type back, “Tell me more about your daughter.” In this example, ELIZA didn’t understand what the person said. ELIZA just looked for a noun (“daughter”) and plugged the noun into a template response (“Tell me more about ____.”). This may seem “intelligent” or “human-like” to us even though the underlying process is unlike how people understand language. While ELIZA was not a “real” therapist, some people treated the program as if it were one. If a person needed the personal and professional attention of a therapist, this could prove very problematic.
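To see how shallow this mechanism is, consider the minimal Python sketch below of ELIZA-style keyword-and-template matching. The patterns and responses here are simplified illustrations for this post, not Weizenbaum’s original script.

    import re

    # Illustrative keyword-and-template rules in the spirit of ELIZA's script.
    # These examples are simplified for exposition, not the original program.
    RULES = [
        (re.compile(r"difficulties with my (\w+)", re.IGNORECASE),
         "Tell me more about your {}."),
        (re.compile(r"i feel (\w+)", re.IGNORECASE),
         "Why do you feel {}?"),
    ]

    def eliza_reply(utterance):
        # Match a keyword pattern and plug the captured word into a template.
        # No understanding of meaning is involved.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1))
        # Generic fallback when nothing matches, much as ELIZA used stock prompts.
        return "Please go on."

    print(eliza_reply("I have difficulties with my daughter"))
    # Prints: Tell me more about your daughter.

The reply can feel attentive, yet the program does nothing more than copy a word from the input into a canned sentence.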

Overestimating what the computer can do becomes more pronounced as the form of interaction becomes more like human interaction. When people attribute intelligence to the computer, they may be more willing to accept recommendations that are not particularly smart or well tested.

Risk 2: AI systems will collect more personal data

As computers interact in new ways with teachers and students, they are also collecting new forms of data. Collecting a student’s voice or likeness (whether in a video or a photo) is different from collecting their answer to a multiple-choice question. These forms of data contain more personal information than what was collected by older forms of educational technology, which can create risks in terms of identity and privacy.

Risk 3: AI may produce unwanted or “fake” outputs

On the automation side, there are risks as well. Computers may be able to produce a story that is newly created and responsive to a student’s interests. Yet, the same technology may make it simpler to automatically modify information in ways that distort the learning process. The same problems with falsifying images that are appearing as “deep fakes”2 in public life may occur in classrooms. As technology makes it easier to provide different learning activities to students in the same classroom, the sense of shared community in a classroom may be undermined. Without human supervision, AI systems may make it easier to change what students see or do in ways that inject unwanted levels of controversy into teaching and learning settings.

Risk 4: AI systems may not be visible or obvious

When we visualize AI as an agent, it implies we will know when we are using AI (because we know when we are interacting with an agent). However, this may not be the case. AI may not be immediately visible or obvious in certain applications for teaching and learning, for instance, when AI influences what happens next in a lesson or even what the lesson looks like. Also, interactions that seemed surprising at first may become second nature and then go unnoticed. Risks will be multiplied because more natural interactions tend to feel less risky to people than older-style human-computer interactions.

Creating policies can help individuals make good choices

Parents and educators should be trusted to make good decisions about the future products they use for teaching and learning. However, it is also important to recognize that analyzing AI systems within school technologies will be complicated, and informed decisions will be hard for individuals to make on their own. Already, people too readily give up personal data (e.g., on their phones, signing terms of service without reading them) to gain access to convenient, interactive features. In schools, the more natural interactions enabled by AI systems will be attractive, and it will be difficult for decision-makers to balance desirable new features against the attendant risks. Students in math class may prefer sharing their handwriting to typing math in an awkward computer syntax. Interacting by voice with a classroom assistant may be so convenient for teachers that they decide to allow devices to listen in to their classrooms. Being able to answer an assignment by submitting a voice or video recording instead of typing may be very useful to students. Consequently, it is unlikely that trying to solve these problems by banning AI systems will work. It is also risky to leave it to individual teachers, parents, and caregivers to each make choices, because it may be hard for individuals to assemble enough information to make an adequate decision. Instead, there is a clear federal role in supporting constituents of educational technology ecosystems as they create shared policies that provide a clear basis for sound educational decision-making.

1 Weizenbaum, J. (1966). ELIZA: A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://dl.acm.org/doi/10.1145/365153.365168

2 Biggs, T., & Moran, R. (2021, June 2). What is a deep fake? The Sydney Morning Herald. https://www.smh.com.au/technology/what-is-the-difference-between-a-fake-and-a-deepfake-20200729-p55ghi.html
