Peer evaluation in open online courses
Approaches to designing effective assessments at scale
By Jessica Sandland, PhD, Lecturer and MITx Digital Learning Lab Scientist, MIT Department of Materials Science and Engineering
Open-ended questions pose some of the biggest challenges when designing assessments in open online courses. The most effective and engaging activities and assignments are often also the hardest to assess in an open online environment. We don’t only want our learners to be able to make calculations; we also want them to be able to explain scientific phenomena, evaluate various solutions to technical problems, and apply their knowledge in a wide variety of contexts.
To me, one of the most exciting things about developing courses for MITx is that we get to make a huge variety of low- and no-cost courses available to virtually any interested learner in the world. To date, MITx has already served over five and a half million online learners, and that number continues to grow every day. However, to reach learners at this sort of scale, we need to be creative in how we assess the open-ended assignments in our courses.
Formative and Summative Assessment
Let’s take a step back to think a little bit more about what we’re trying to achieve with our course assessments. In general, we divide our assessments into two broad categories: formative assessments and summative assessments.
Formative assessments are, first and foremost, a learning activity for students. They're designed to help learners develop their skills and, perhaps most importantly, to help students understand whether they are successfully mastering the topics covered in class. They also allow instructors to monitor learners' understanding of the course material and give instructors a chance to offer feedback. The aim of summative assessment, on the other hand, is evaluation. Summative assessment tells us what our students have learned after a significant segment of a course has been completed. A cumulative final examination is the prototypical summative assessment.
Short-Answer Peer Review
One of my favorite approaches to open-ended response assessment is the peer-reviewed short-answer question. Here, we ask learners to give a short (1–3 sentence) response or explanation that will later be evaluated by their peers. Different course developers may take different approaches, but I choose questions with clearly right or wrong answers so that I can provide a straightforward rubric. The peer evaluator then simply decides whether the learner has identified a small number of essential points in their answer.
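To make this concrete, here is a minimal sketch of how such a checklist-style rubric might be scored. The sample question, criteria, point values, and function names are hypothetical illustrations of the general idea, not the actual edX rubric format.

```python
# A minimal sketch of a checklist-style rubric for a short-answer
# peer-review question. All content here is a hypothetical example.

from dataclasses import dataclass

@dataclass
class Criterion:
    description: str   # the essential point the answer must contain
    points: int        # credit awarded if the peer grader checks it

# Hypothetical rubric for: "In 1-3 sentences, explain why the bandgap
# of a semiconductor decreases as temperature increases."
RUBRIC = [
    Criterion("Notes that the lattice expands as temperature rises", 1),
    Criterion("Connects larger atomic spacing to a smaller bandgap", 2),
]

def score(checked: list[bool]) -> int:
    """Total the points for each criterion the peer grader marked as met."""
    return sum(c.points for c, met in zip(RUBRIC, checked) if met)

# A peer grader who found the first essential point but not the second:
print(score([True, False]))  # -> 1
```

Because every judgment is a simple yes-or-no check against an essential point, even a non-expert grader can apply the rubric consistently.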
One implementation uses the edX Open Response Assessment (ORA) tool. Here, learners upload their answers and then anonymously grade a small number of their peers' submissions. This allows us to assign a grade to the learners' work while also giving the peer evaluator an opportunity to review important learning points in the class. Peer graders can usually grade these questions relatively quickly and accurately, making them useful in both the summative assessments and (the typically more numerous) formative assessments in the course.
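For readers curious about the mechanics of anonymous matching, here is one simple way each submission could be routed to several peers. This rotation scheme is my own illustration of the idea, not how the ORA tool actually allocates reviews.

```python
# A minimal sketch of assigning each submission to k anonymous peer
# graders, so that no one grades their own work and everyone carries
# the same review load. Illustrative only; not the ORA algorithm.

import random

def assign_peer_reviews(learner_ids: list[str], k: int = 3) -> dict[str, list[str]]:
    """Map each grader to the k submissions (by author id) they review."""
    order = learner_ids[:]
    random.shuffle(order)  # randomize so pairings differ per assignment
    n = len(order)
    assignments = {}
    for i, grader in enumerate(order):
        # Each grader reviews the next k learners around the shuffled
        # circle, which guarantees no self-review (for k < n) and gives
        # every submission exactly k reviewers.
        assignments[grader] = [order[(i + j) % n] for j in range(1, k + 1)]
    return assignments

reviews = assign_peer_reviews(["ana", "ben", "chen", "dev", "eva"], k=2)
for grader, authors in reviews.items():
    print(grader, "->", authors)
```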
Peer-Reviewed Essays and Projects
We also use peer review to evaluate larger-scale writing assignments and projects. This can be a little trickier than the short-answer peer review, both because it requires a more significant time investment from the peer reviewers, and because it requires them to make more judgment calls about the quality of an assignment in a field where they are likely not experts. For these reasons, we often limit these larger-scale peer reviews to the summative course assessments.
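One practical way to soften the effect of non-expert judgment calls is in how the individual peer scores are combined into a grade. Here is a minimal sketch, assuming each reviewer returns a single numeric score; taking the median rather than the mean blunts the impact of one unusually harsh or generous reviewer. The function and scale are hypothetical, not drawn from any particular platform.

```python
# A minimal sketch of aggregating several peer scores into one grade.
# The median resists outlier judgments from individual non-expert
# reviewers better than the mean does. Illustrative only.

from statistics import median

def aggregate_peer_scores(scores: list[float]) -> float:
    """Combine peer scores into a single grade using the median."""
    if not scores:
        raise ValueError("no peer reviews submitted yet")
    return median(scores)

print(aggregate_peer_scores([82, 85, 88]))      # -> 85
print(aggregate_peer_scores([10, 84, 86, 88]))  # -> 85.0, despite one
                                                #    very harsh reviewer
```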
Another example uses a peer review tool developed by FeedbackFruits, which embeds the peer review grading process within a wider discussion board where learners and reviewers can interact, provide feedback, and deepen their discussions. Although peers can't provide expert feedback, they do have a wide variety of professional, academic, and personal experiences that can bring depth to their discussions with their fellow learners.
These examples touch upon only a few of the issues and approaches related to peer evaluation in an open online environment. There is a growing body of research that examines peer assessment in greater depth, especially in open online courses. And if you are interested in seeing more examples of peer evaluation in practice, I encourage you to explore some of the courses available from MIT on the MITx Open Learning Library, MITx Online, and edX.