Victor Lee Takes a Deeper Look at How Learners Notice

Custom tool developed alongside GSE IT visualizes and networks video-based impressions moment-to-moment

Gaining knowledge often requires us to notice moments deeply. This is especially true during training: a medical student must focus on instantaneous shifts in symptoms, a developing musician must track split-second changes in pitch, and a budding teacher must follow, moment to moment, how students make sense of new ideas. Typically, we are trained in the art of noticing through one-on-one interactions in which a peer, parent, or instructor points out where something important is happening and how to make sense of it. This process, however, is time- and resource-intensive, requiring as much energy from the teacher as from the learner.

Dr. Victor Lee, Associate Professor in the Graduate School of Education, is keenly aware of this challenge — namely, that teaching others how to develop good habits of noticing is hard to do and hard to scale. To address this, Dr. Lee has been developing new ways for students to help each other notice and understand important moments through a first-of-its-kind course in the Stanford Teacher Education Program (STEP). The aim of the course is to empower up-and-coming teachers to incorporate important ideas about data, and from data science, into any subject matter. This involves training new teachers to notice moments of student input relevant to quantitative data, and then to use those moments to support and amplify students’ ideas. “In education and cognitive research, we talk a lot about how effective instruction really builds on what students already know related to the topic being taught,” notes Dr. Lee. “One important step in developing excellent teachers is getting them to see when students are mentioning good ideas and making decisions about how to build on those until a learning goal is met.”

Dr. Lee, Associate Professor at the GSE, teaches up-and-coming teachers to incorporate important ideas about data, and from data science, into any subject matter.

Planning with intent

Inspired by the Transforming Learning Accelerator, a campus-wide initiative anchored at the GSE, Dr. Lee reconsidered the utility of video footage, originally collected for research, for instructional purposes. Dr. Lee approached Josh Weiss, Director of Digital Learning Solutions at GSE IT, to consider how the skill of noticing might be taught via this video footage. The two ideated around unique properties of digital video, including its capacity to crowdsource and network patterns of noticing across a class of students. Dr. Lee and Josh sketched out an initial paper prototype, applied for a grant, and went to work assembling a roadmap for the project.

An initial sketch of a data visualization to aggregate noticings across a group of students. The design would go through multiple iterations to match learners’ needs.

Project goals were laid out. First, the technology would optimally leverage the unique many-to-many sharing properties of digital media. Second, the platform would be learner-centered and promote organic learning moments in the classroom. Third, project development needed to move quickly, so rapid prototyping and iterative development would be key.

Building and iterating

As development began, Josh pulled in Jonathan Lai, a software engineer at GSE IT, to start researching web technologies that supported media annotation. One platform, Frame.io, stood out for its stability and rich data-sharing features. Commonly used by production studios, Frame.io allows multiple contributors to comment, upvote, and even doodle on a video frame by frame. Just as importantly, the platform also exports its data in a way that can be remixed and analyzed on the fly.
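
As a rough illustration of what remixing such an export could involve, the sketch below loads a hypothetical comment export into simple records ready for charting. The CSV format and field names (start_sec, duration_sec, text) are placeholders assumed for this example, not Frame.io's documented export schema.

```python
import csv
from dataclasses import dataclass

@dataclass
class Noticing:
    """One student comment anchored to a point (or span) in the video."""
    start_sec: float      # where in the video the comment begins
    duration_sec: float   # 0 for point comments, > 0 for ranged comments
    text: str

def load_noticings(path: str) -> list[Noticing]:
    """Read a hypothetical exported comment file into Noticing records."""
    noticings = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            noticings.append(Noticing(
                start_sec=float(row["start_sec"]),
                duration_sec=float(row.get("duration_sec") or 0),
                text=row["text"],
            ))
    return noticings
```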

Although Frame.io supported video-based group work well, the tool did not crowdsource noticings in the manner that Dr. Lee had envisioned. While noticings were collected efficiently, the off-the-shelf data visualizations in Frame.io were ineffectual; the data presentation was not conducive to the discussions Dr. Lee had hoped to foster among students around what and how they were collectively noticing.

Despite a rich interface to annotate and share noticings, Frame.io (above) does not aggregate responses in a layered, learner-centered way. As a result, the team decided to develop an alternative visualization tool.

To address this shortcoming, Jonathan and Josh developed a custom data visualization. The initial goal was to visualize all noticings at once while also conveying the depth and frequency of noticings in particular parts of the video. For version 1.0, a vertically stacked chart of all comments across a horizontal timeline gave a sense of the pacing of comments. During class, comments were manually exported from Frame.io and imported into the visualization tool, and students were able to see an aggregate representation of their comments within only a few minutes of starting the activity.
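
To give a concrete sense of the idea behind that first chart, here is a minimal sketch in Python with matplotlib. It is an illustration under assumptions rather than the team's actual implementation; the comment data is represented simply as (start, duration) pairs in seconds.

```python
import matplotlib.pyplot as plt

def plot_stacked_timeline(comments, video_length_sec):
    """Version-1.0-style view: each comment becomes one horizontal bar,
    stacked in its own row against the shared video timeline.

    `comments` is a list of (start_sec, duration_sec) pairs."""
    fig, ax = plt.subplots(figsize=(10, 4))
    for row, (start, duration) in enumerate(sorted(comments)):
        width = max(duration, 1.0)  # give point comments a visible width
        ax.broken_barh([(start, width)], (row, 0.8))
    ax.set_xlim(0, video_length_sec)
    ax.set_xlabel("video time (seconds)")
    ax.set_ylabel("comments, one per row")
    ax.set_title("All noticings across the video timeline")
    plt.tight_layout()
    plt.show()

# Example: a short clip with a cluster of comments around the two-minute mark.
plot_stacked_timeline([(12, 0), (115, 20), (118, 0), (121, 35), (300, 5)], 360)
```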

However, comments spanning long stretches of the video overlapped one another, hindering the legibility of the graph. As a result, version 2.0 of the data visualization emphasized compactness; the chart became more compressed vertically and aggregated data at intervals to structure discrete sections of the video. Version 2.0 also integrated data via an API back-end that allowed professors to refresh the data visualizations in real time.

Version 2.0 of the custom tool visualizes the frequency of group-wide noticings and aggregates in-video comments by segments of the video.
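
The interval aggregation at the heart of version 2.0 can be sketched just as briefly. Again, this is an illustrative Python example rather than the production web tool, and the 30-second segment length is an assumption, not a documented design choice.

```python
from collections import Counter
import matplotlib.pyplot as plt

def aggregate_by_segment(comment_starts_sec, video_length_sec, segment_sec=30):
    """Version-2.0-style view: collapse individual comments into counts per
    fixed-length segment of the video, so the class sees where noticing
    clustered rather than every overlapping bar."""
    n_segments = -(-int(video_length_sec) // segment_sec)  # ceiling division
    counts = Counter(min(int(t // segment_sec), n_segments - 1)
                     for t in comment_starts_sec)
    return [counts.get(i, 0) for i in range(n_segments)]

def plot_segment_frequencies(counts, segment_sec=30):
    """Bar chart of noticing frequency per video segment."""
    edges = [i * segment_sec for i in range(len(counts))]
    plt.bar(edges, counts, width=segment_sec, align="edge", edgecolor="white")
    plt.xlabel("video time (seconds)")
    plt.ylabel("noticings per segment")
    plt.title("Where the group's attention clustered")
    plt.tight_layout()
    plt.show()

# Example: the cluster of comments around the two-minute mark now reads
# as a single tall segment instead of several overlapping bars.
plot_segment_frequencies(aggregate_by_segment([12, 115, 118, 121, 300], 360))
```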

Still learning

With the essential technology developed, the team performed user testing to map out the full learning experience. Alpha and beta testing was initially performed with colleagues to evaluate rudimentary learning pathways and delivery models, then expanded to students and professors to fit their needs. Conceal-and-reveal methodologies were tested, along with prediction, analysis, and reflection techniques. With some groups, specific prompts and deliberate pacing were employed; with others, more organic discussions evolved from open-ended prompts and looser time constraints.

Based on these deployments, Dr. Lee submitted a report on this effort for publication in the Proceedings of the 2022 Annual Meeting of the International Society of the Learning Sciences. “In some ways,” reports Dr. Lee, “crowdsourcing noticing is a natural extension to what we already are capable of doing. But we haven’t seen it done in this way, and sharing what GSE IT has helped develop with the broader academic community is a great way to inspire and spread this innovation.”

As Dr. Lee looks forward, he sees additional opportunities with noticing, data, and digital media. Paring down features to key functionalities for the classroom could streamline the experience for learners. Richer data visualizations could foster deeper and more revelatory discussion among participants about how they take note of classroom phenomena. Garnering more annotation data over time opens the possibility for comparing patterns across cohorts of learners. “A lot of the technological innovations we appreciate now — such as recommender systems — rely on large crowds sharing what they find to be interesting and important,” Dr. Lee notes. “There is a big opportunity here to leverage this crowdsourcing for education, and it is really exciting to explore what sort of crowdsourced noticing could be possible.”

--

Stanford GSE Office of Innovation and Technology

Designing and delivering digital learning solutions for Stanford Graduate School of Education