Safely showing students how others see their work

Andy Matuschak
Khan Academy Early Product Development
4 min read · Aug 28, 2018

We’re exploring a student-driven engine for supporting open-ended problem solving. A three-step loop is the theme underlying many variations.

The third step — students responding to each other’s work — is particularly tricky because, as we all know, people are awful on the internet. Even without anonymity, bullying is a problem in schools. If we’re only asking students to grade each other’s work, we can handle misbehavior by looking for students whose peer grades rarely agree with others’ (a check sketched below)… but we’d like students to produce open-ended responses to their peers’ open-ended work! Ideally, those responses won’t have to be evaluative: other formats may better stimulate further thought.
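That agreement check is easy to sketch in code. Here’s a minimal, illustrative version — the data shape and names are hypothetical, assuming each response collects several numeric peer grades:

```python
from statistics import median

def flag_outlier_graders(grades, threshold=1.5):
    """Flag graders whose scores routinely diverge from the consensus.

    `grades` maps response_id -> {grader_id: score}. All names here are
    illustrative; scores are assumed to be on a small numeric scale.
    """
    deviations = {}  # grader_id -> list of |score - consensus|
    for scores in grades.values():
        consensus = median(scores.values())
        for grader, score in scores.items():
            deviations.setdefault(grader, []).append(abs(score - consensus))
    # A grader is suspect if they are, on average, far from the consensus.
    return {grader for grader, ds in deviations.items()
            if sum(ds) / len(ds) > threshold}
```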

How might we help students benefit from rich reactions to their work while avoiding abuse?

One approach is to make the communication channel much narrower. For instance, we could reflect a high-level emoji reaction back to students from their peers. Interaction can still be fairly rich this way, as seen in Journey’s wordless collaborations.

We could impose moderation or involve teachers in approving students’ communications, but that would slow down the feedback loop substantially. Civil partially shifts moderation responsibility to users, asking internet commenters to rate the civility of others’ responses — and their own — before allowing them to leave a comment.

Is there a middle ground? A channel wide enough to inspire plenty of follow-up thought, but narrow enough to need less moderation? We’ve discussed a few ideas so far.

Watching a student extend your work

In math, we might show a student a peer’s strategy, then ask them to solve a new variant of the problem. In the humanities, we might show a student a peer’s essay beside their own, then ask them to draw lines between all the places they and their peer were making the same argument.

Then we can show that work back to the original peer. It may even be interesting to see the literal replay. Seeing another student understand and extend your work may itself prompt new thoughts.

Did your peer deploy your strategy exactly the way you did? Maybe they took a shortcut you didn’t think of!

Did your peer have a different angle on the same argument you were making? Did they arrange the arguments in a different order? Does that order flow better? What about the arguments they used which you didn’t use?

Relationships and connections

Arrangements and rearrangements can supply rich fodder for follow-up thought over a channel narrow enough to avoid moderation risk.

For instance, we’ve written about asking students how similar two peers’ responses are to each other. We can establish clusters of student work from that data or by directly asking students to sort each other’s work into clusters.
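To make that concrete: one plausible mechanism is ordinary hierarchical clustering over a matrix of aggregated similarity ratings. This is a sketch under assumptions — that the pairwise judgments have already been averaged into 0–1 similarities — not a description of our actual pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_responses(similarity, n_clusters=3):
    """Group responses given a symmetric matrix of 0-1 similarity scores."""
    distance = 1.0 - np.asarray(similarity, dtype=float)
    np.fill_diagonal(distance, 0.0)
    condensed = squareform(distance, checks=False)  # condensed distance form
    tree = linkage(condensed, method="average")     # agglomerative clustering
    return fcluster(tree, t=n_clusters, criterion="maxclust")  # cluster labels
```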

Once we have those clusters, we can show students their work as arranged in the context of peers’ work. Do they see why others have marked certain work as similar to their own? Do they disagree with the arrangements? Can they see subgroups in the clusters?

We can also ask students to establish orderings within their peers’ work. For instance, we might jumble up a student essay’s sentences and ask a peer to arrange them sensibly. What does the original author think of the new arrangement? We can even make the system ask that question only if the peer’s arrangement is different from the original.
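A minimal sketch of that flow, assuming essays arrive already split into sentences (everything here is illustrative):

```python
import random

def jumble(sentences, seed=0):
    """Shuffle an essay's sentences for a peer to rearrange."""
    shuffled = sentences[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled

def should_ask_author(original, peer_arrangement):
    """Only bother the author if the peer's arrangement actually differs."""
    return peer_arrangement != original

essay = ["First, a claim.", "Then, supporting evidence.", "Finally, a conclusion."]
peer_arrangement = jumble(essay, seed=42)  # stand-in for a real peer's ordering
if should_ask_author(essay, peer_arrangement):
    print("Show the author the peer's arrangement and ask what they think.")
```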

Or we could erase all the quotes from a history essay but leave the surrounding text… then ask a student to find the passages in the primary sources which might belong in the quoted regions. What does the original author think of the substitutions? Do they provide another angle on the same argument?
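The masking step itself is simple to illustrate. A toy sketch, assuming straight double quotes mark the quoted passages (a real essay would need smarter quote detection):

```python
import re

QUOTE = re.compile(r'"([^"]+)"')  # toy assumption: straight double quotes

def blank_out_quotes(essay):
    """Replace each quoted passage with a numbered blank; keep the prose."""
    quotes = []
    def blank(match):
        quotes.append(match.group(1))
        return f"[quote {len(quotes)}: ____]"
    return QUOTE.sub(blank, essay), quotes

masked, originals = blank_out_quotes(
    'Jefferson wrote "all men are created equal" in the Declaration.'
)
print(masked)  # Jefferson wrote [quote 1: ____] in the Declaration.
```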

Contextual reactions

In addition to allowing students to apply emoji to an entire piece of student work, we could give them a curated supply of emoji stickers to place within the student’s work, wherever they feel one applies. Maybe a particularly strong argument gets a 🔥 sticker. Or in a math proof, students could put the 😎 sticker on a specific step. Besides being interesting feedback for students, data at this granularity could also help us drive more interesting rich tasks in the future.
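A record for one of these contextual reactions might look something like the sketch below — the field names are hypothetical, and the anchor could be a character range in an essay or a step index in a proof:

```python
from dataclasses import dataclass

@dataclass
class StickerReaction:
    """One emoji sticker placed at a specific spot in a peer's work."""
    work_id: str     # which piece of student work
    reactor_id: str  # which peer placed the sticker
    emoji: str       # drawn from the curated set, e.g. "🔥" or "😎"
    anchor: tuple    # e.g. (start, end) character offsets, or a proof step

# A strong argument in an essay earns a 🔥 on characters 128-214:
reaction = StickerReaction("essay-17", "student-42", "🔥", (128, 214))
```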

We’ve also explored giving students a set of highlighters where each color means something specific. For instance, we might ask a student to highlight an essay in blue wherever evidence is being used, in purple wherever an opinion is stated, and in yellow wherever an opposing argument is rebutted. The original author might be surprised to see a surplus of purple or a lack of yellow — a great opportunity for revision.
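Summarizing those highlights back to the author is a small computation. An illustrative sketch, with made-up highlight records:

```python
from collections import Counter

MEANINGS = {"blue": "evidence", "purple": "opinion", "yellow": "rebuttal"}

# Made-up highlight records: (color, start, end) spans over one essay.
highlights = [("blue", 0, 40), ("purple", 41, 90), ("purple", 91, 130)]

# Tally passages by meaning; a surplus of "opinion" or an absence of
# "rebuttal" is a cue for the author to revise.
counts = Counter(color for color, _, _ in highlights)
for color, meaning in MEANINGS.items():
    print(f"{meaning}: {counts.get(color, 0)} passage(s)")
```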

We’re continuing to generate more ideas here, but we’re also narrowing down to a small subset of our concepts for an upcoming live prototype. Time to start building!

Discuss this post on Reddit.

Originally published at klr.tumblr.com.
