Finding a Fit with Learning Technology

Hari Subramonyam and students investigate interfaces that optimize what humans and technology each do best

The first computer mouse was nothing more than a block of wood with a single red button. Douglas Engelbart demonstrated the revolutionary device at SRI International in 1968 to make navigating a computer screen more intuitive — a big leap from the inscrutable adding machines of the day. In the process, Engelbart’s laser focus on intuitive design birthed a new field, human-computer interaction (HCI). In its highest form, HCI produces a frictionless relationship between people and machines, the kind of symbiotic flow state that sci-fi writers and tech visionaries have hinted at for over half a century. Naturally, many of these early technologies were applied to learning and collaboration, and the education technologies common in classrooms today are in many ways products of Engelbart’s vision.

Dr. Hari Subramonyam builds on this tradition. As Assistant Professor at the GSE and, by courtesy, of Computer Science, and as a Faculty Fellow at Stanford’s Institute for Human-Centered AI (HAI), Dr. Subramonyam considers how intuitive digital design makes learning happen. His HCI work focuses on augmenting critical human tasks with AI by incorporating principles from cognitive psychology. More specifically, he aims to balance automation with other cognitive goals such as learning, creativity, and sense-making. “The focus of intelligent systems,” posits Dr. Subramonyam, “is on ‘making things easy’ through automation. However, there is such a thing as too easy or too automated.”

Hari Subramonyam, Assistant Professor

Inviting exploration

Dr. Subramonyam conveys these HCI principles throughout his course, EDUC 432: Designing Explorable Explanations for Learning. The objective is to teach students how to design digital learning experiences that feel more attuned to how humans think and feel, such as using text as an environment to think in. Throughout the course, students apply concepts from visualization theory and instructional design. Guest experts also share their insights into design, data visualization, and representations of research. By the end of the quarter, students create their own explorable explanations.

With HCI design principles in mind, students’ prototypes address an array of challenges in education. “Into the Chaparral: A Fire Experience,” an author-driven tool developed by GSE student Vicky Z. Chan, introduces the learner to the California chaparral ecosystem and how its plants are adapted to wildfires. “I really appreciated learning about instructional design and information visualization, discussing examples of explorables, and getting to create my own explorable,” says Vicky. “I can see these concepts informing the digital science communication work I want to do!”

“Into the Chaparral: A Fire Experience,” an app developed by GSE student Vicky Z. Chan, utilizes HCI principles to more intuitively teach ecology through technology.

In another project, “Holding onto our Best Defenders of Student Learning,” GSE student Helen Higgins prompts the learner to deeply engage with issues affecting teacher retention as well as the impact on students. “As someone without a technical background, this class helped me break down core aspects of interactive learning,” explains Helen. “Since this class, I’ve returned to some of these tools, like metaphors, focal points, and hypothesis-building when designing in-person and digital learning experiences.”

“Holding onto our Best Defenders of Student Learning,” an app developed by GSE student Helen Higgins, communicates issues that teachers face through interactive visual narration with technology.

Augmenting learning alongside AI

In addition to the course, Dr. Subramonyam’s research has explored a wide range of learner-centered interfaces, including with AI. “Today, there are many things that AI can do,” he notes. “However, shaping the right learning experience requires a multidisciplinary perspective to set expectations about what AI can do — to predict when it might fail, to align human values with artificial intelligence, and, most importantly, to maintain human agency in learning.”

Towards this effort, Dr. Subramonyam has developed prototypes to intelligently support learning activities such as annotation and visual sense-making. With texSketch, Dr. Subramonyam and his collaborators created a prototype that makes it easier to produce diagrams while engaging in active reading strategies through the use of AI and natural language processing. Another application, VideoSticker, supports visual note-taking on video content through a process that enables object detection and links with the transcript.

VideoSticker allows students to extract moments from videos and connect concepts. In the spirit of HCI, the interface reduces the friction between media and sense-making for the learner.

As part of his research, Dr. Subramonyam has introduced three strategies for combining AI with human effort. The first strategy, automation after human effort, involves a learner receiving automation support only after they have demonstrated learning. This is seen in texSketch, where important relationships from a reading are automatically visualized only once the system senses that the user has learned key concepts.

A second strategy, automation as a complement to human effort, also provides automation support, but emphasizes complementarity so that the human remains in charge of the process. This is evidenced in TakeToons, a tool that helps to design animations by automatically aligning spoken dialogue to a script and enabling flexible edits through speech-based commands.

TakeToons narrows the distance between user and computer through a built-in process that invites iteration alongside an AI.

The final strategy, automation as last-mile optimization, considers automation support to help humans make sense of large and complex data sources. AffinityLens, a computer vision-based tool, embodies this well; the application reduces the effort of diagramming by providing overlaid data insights on top of sticky notes and extending human-made clusters with additional recommendations.

Building out and learning across

As more learning moments occur on digital interfaces, HCI will play an increasingly significant role in the lives of educators and students. Traditionally, HCI integrates expertise in cognitive psychology, software engineering, and design. But weaving students and educators into design and development, as in EDUC 432, will be important going forward for inclusive design that centers as much on how the learner learns as on what the technology can do. “The co-design processes, design representations, principles, and tools that I am developing in my research enable teachers, students, engineers, ethicists — and, of course, designers — to collaborate across expertise boundaries,” adds Dr. Subramonyam.

Moreover, building education technologies now means acknowledging how humans think, feel, and move around a screen. The key is connecting minds, technologies, and classrooms — and spreading those intentional practices to everyone building and designing technology. HCI offers a toolkit for these connections and a way to calibrate the digital boundaries of today’s learners. “Ultimately,” says Dr. Subramonyam, “by centering people in technology design, HCI can drive the creation of ethical, inclusive, and usable AI experiences.”

This article was co-written by Josh Weiss and Miroslav Suzara, and published by The Office of Innovation and Technology at Stanford Graduate School of Education. For more information, contact josh.weiss@stanford.edu.

