Improving Online Peer Review for Design Education

(This post summarizes our CSCW 2018 paper “Increasing Quality and Involvement in Online Peer Feedback Exchange” by Sneha Krishna Kumaran, Deana McDonagh, and Brian P. Bailey)


Many readers might be familiar with the design process — an iterative process of defining a problem, ideating and prototyping potential solutions, and building and testing the final solution. The unspoken, but integral, element of the design process is getting and incorporating feedback from others. Sometimes this is feedback from a client, supervisor, or instructor. However, feedback from peers is equally important.

This is the design process for the “Magnetic Cup” project. This product was intended to help reduce water use in the future. The team proposed to use a magnetic field to create utensils that would not require washing.

In design classes, instructors commonly ask students to present their work in class and invite their peers to critique it. This process of peer review allows students to learn from each other's work and gives them a valuable opportunity to improve their own. However, design classes are growing larger, which makes traditional face-to-face critique difficult. Instructors are turning to online peer feedback strategies to address the problem of scale, but these platforms have limitations in how they can be used.

We had the opportunity to test our own peer review platform, which incorporated features of peer mentorship and showing design history (context) while the provider was writing feedback. These features were intended to address some of the limitations of existing platforms. We tested them in a product design class at the University of Illinois Urbana-Champaign. This class, called Human Centered Design, involved a team-based, semester-long project. Students were asked “to create a product that would be useful in the year 2041 to solve a major societal problem.” This challenging futuristic design brief produced many novel ideas that required feedback to succeed.

Now, if you don't wish to read any further, here is what you need to know about this project and its implications for online peer feedback exchange. First, students in their role as designers wrote longer responses to the feedback provided by mentors. When assigned as mentors, students reported being more receptive to the feedback they received compared to students who wrote feedback for randomly assigned projects. Second, our results show that feedback exchanged at the late design stages was of higher perceived quality when the context was shown. Showing the context at earlier stages resulted in feedback that was perceived to be of lower quality, indicating the context was more of a distraction than an aid at these stages.

Read on to find out more about our contributions. In summary, we had three main contributions. First, our results contribute deeper empirical understanding of how peer mentorship and showing context of a prototype affects peer feedback exchange in a project-based design course. Second, we provide guidelines for design instructors to determine when to use these features (e.g., show context only at the later stages of a project). Third, our results have implications for the design of peer feedback platforms (e.g., offer more flexible peer assignment strategies and allow students to decide what information to share with peer reviewers).

Assumptions of peer feedback platforms

Our work challenges two assumptions of peer feedback platforms. First, many widely-available peer review platforms assign students to give feedback randomly to different projects by default. While students can perceive this as fair, it can reduce the social connections that are formed in face-to-face critique. Second, these peer review platforms also assume that only the current submission is important. But the history of the project can also be important, especially in design projects. While these design decisions aren’t an issue for all courses, they are not ideal for courses that involve a multi-stage design project.

To challenge these assumptions, we implemented two features of our own: peer mentorship and showing design history (or context, as we call it below). Peer mentors were assigned to one project for the entire design process. This feature was designed to make the provider more invested in writing quality feedback for the design. For context, we showed the provider the previous iteration of the design, the feedback it received, and the designer's response to that feedback. This showed the feedback provider the effort the designers had put into their work.

These features were further inspired by Social Bond Theory, which holds that a lack of relation between an individual and a community results in undesirable behavior. Since social bond usually develops through face-to-face interaction, students in online environments may not feel the same bond to their peers. We designed our factors of mentorship and providing context to influence the aspects of social bond: attachment to peers, commitment to the peer feedback activity, involvement in the activity, and belief that their feedback would have an effect on the project.

Experimental Structure

We tested these two features in a product design class with 59 students. Students worked in teams of 2–3 and got feedback from their peers 4 times during the semester. This feedback was received at the end of the concept, low fidelity, medium fidelity, and high fidelity stages.

At each stage, there was one in-class feedback session. Students uploaded their prototypes before class. During class, they provided online peer feedback and completed a questionnaire measuring their social bond. After class, they rated and responded to the online peer feedback they received. At the end of the semester, students took a survey about their experience and an optional interview.


Our experiment had a 2×2×3 factorial design, where the factors were:
- Assignment: Mentors vs Random
- Context: Shown vs Not-Shown
- Stage: Low, medium, and high fidelity
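For concreteness, the between-subjects part of this assignment (Assignment × Context; Stage is within-subjects, since every student passes through all three stages) could be sketched as follows. This is a hypothetical illustration — the `assign_conditions` helper and its seed are our invention, not the study's actual tooling:

```python
import random

def assign_conditions(students, seed=42):
    """Hypothetical sketch: split students into the four between-subjects
    cells (mentor/random x context shown/not-shown). Stage is a
    within-subjects factor, so it does not appear here."""
    rng = random.Random(seed)
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    mentors, randoms = shuffled[:half], shuffled[half:]
    cells = {}
    for group, assignment in ((mentors, "mentor"), (randoms, "random")):
        for i, student in enumerate(group):
            # The first half of each group sees the design history (context).
            context = "shown" if i < len(group) // 2 else "not-shown"
            cells[student] = (assignment, context)
    return cells
```

With a class-size divisible by four, this yields equal-sized cells; in practice a real class (like the 59 students here) leaves some cells one student larger.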

Mentors
Half the students (randomly chosen) were assigned as mentors to projects, and the other half were randomly assigned to a different project at each stage. Mentors provided feedback to one project for the duration of the semester. In contrast, students in the random condition never saw the same project twice.

Our hypothesis was that, as a mentor, a student would feel a stronger bond with the designer of the project they reviewed.

Context
Half of the mentors and half the non-mentors were further shown what we called context before they were allowed to see the current prototype. This context included the prior prototype, the peer feedback that this prototype received and the designer’s response to the feedback. In contrast, providers who were in the no context condition were only shown the current prototype while they were writing feedback.

We believed that showing this prior context would make the feedback provider more committed to writing feedback, as it increases the transparency of the design process.

In the context condition, our platform displays the prototype(s) from the prior design stage, the peer feedback received on that prototype, and the designer’s response to that feedback (context — right image). Students in this condition can review the context when writing feedback for the current prototype (left image). The current prototype and context were shown on separate pages in the actual implementation.

Measures
Throughout the study, we measure the characteristics of the feedback written by the feedback providers (length, topic, and sentiment) as well as the feedback providers’ social bond. We also measure the perceived quality of the feedback from the designer’s perspective, the length of the designer’s response to the feedback, and the action the designer took on the feedback.

Quality of Peer Feedback

We found that Context and Stage had an interaction effect on the perceived quality of the feedback, and the two most interesting stages were low fidelity and high fidelity. At the high-fidelity stage, students who were shown context wrote feedback of higher perceived quality (which we expected). At the low-fidelity stage, however, we saw the opposite pattern: students who were not shown context wrote feedback of higher perceived quality. One possible explanation is that designs change rapidly at early stages, so the context may have distracted the provider. This finding indicates that showing the context of a design is beneficial for feedback composition, but only for prototypes at the late stage.

Feedback Length

Students who were not shown context wrote ~20 more words than students who were shown the context (that’s half a paragraph more!). A possible explanation is that students allocate an attention budget for the feedback task, and effort invested in evaluating the context of the prototype is subtracted from writing the feedback.

We also noticed that students wrote less feedback as the term progressed, possibly indicating course fatigue.


Types and Sentiment of Feedback

Overall, judgment (37%), recommendation (20%), investigation (16%), and interpretation (11%) were the most referenced categories (from “Critiquing Critiques” by Deanna Dannels). The other categories were referenced much less: process-oriented (8%), brainstorming (3%), comparison (2%), and identity invoking (<1%); free association did not appear at all. A chi-squared test found no difference in the distribution of categories between the Assignment and Context conditions or between the stages.

In the dataset, 35% of the idea units were labeled as critical, 22% as neutral, 36% as positive, and 7% as indeterminate. Our results showed that mentors wrote less critical feedback (31%) than those who were randomly assigned (39%; χ2=9.31, p=0.002). Perhaps mentors, being aware that they were going to review the same design at multiple stages, intentionally wrote feedback with a less negative tone. Context did not affect sentiment.
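The sentiment comparison above rests on a Pearson chi-squared test over a contingency table of condition by sentiment. As an illustration of the statistic, here is a minimal self-contained 2×2 version — the function and the example counts below are ours, not the paper's raw data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]], e.g. rows = mentor/random,
    columns = critical/non-critical idea units."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected count for each cell = (row total * column total) / grand total
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

In practice one would use a library routine such as `scipy.stats.chi2_contingency`, which also returns the p-value; the hand-rolled version above just makes the arithmetic behind a reported χ2 value visible.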

Actions taken on the feedback

Designers reported ignoring 12%, considering 58%, and implementing 30% of the feedback (percentages are across all designers). Neither Assignment nor Context affected the designers' reported actions on the feedback. However, our results show that designers who were themselves mentors considered or implemented more feedback than students who were not mentors (χ2=4.78, p=0.028). Non-mentors ignored twice as much feedback (15%) as mentors did (8%).

One explanation is that mentors were more aware of the effort taken on the part of the feedback provider, which made them more receptive to the feedback they received on their own projects. This finding is consistent with mentors providing more feedback with positive sentiment.

Responses to the feedback

Students wrote more in response to feedback from mentors who were not shown context (μ=66.9, SE=5.4) compared to students in the other conditions (μ=45.1, SE=1.8). Although mentors did not write feedback of higher perceived quality, they wrote feedback that prompted a longer response. Results also showed that designers wrote shorter responses to providers who were shown context (μ=43.4, SE=2.0) compared to providers who were not (μ=56.5, SE=3.1; F(1,58)=10.3, p<0.01, η2=0.20). As discussed earlier, providers who were shown context wrote less feedback, which may have prompted the designer to write shorter responses.

Social Bond

The experimental factors did not have a statistically significant effect on Attachment, Commitment, or perceived Involvement. The high values on the measures of social bond (the means were at the upper end of the scales) indicate that social bond may have been facilitated more by the in-class activities organized by the instructor than by the online feedback exchange. No other significant effects were found.

However, students in the Mentor condition (μ=16.6, SE=0.52; the scale ranges from 4 to 28) scored lower on the belief index than students in the Random condition (μ=17.1, SE=0.50; F(1,58)=3.48, p=0.065, η2=0.34). One possibility is that because mentors were able to review the progression of the project, they could notice when their feedback was or was not used. If their feedback was not used, they may not have believed the designer valued it. Regardless of this belief, however, our data showed that providers continued to write feedback of similar quality. This experience may also explain why mentors, in their role as designers, considered and implemented more feedback than the randomly assigned students.


We also conducted a mediation analysis between our experimental factors and the characteristics of the feedback with social bond as a mediator variable. This analysis indicated that neither Assignment (b=-6.3, t=-1.0, p=0.34) nor Context (b=-3.3, t=-0.48, p=0.63) correlated with social bond. This result indicates that social bond did not mediate the effect of Assignment and Context on perceived quality or on the length of the feedback.

What instructors can do in their classes

To implement peer mentorship in project-based courses, instructors can create or adapt a platform similar to the one used in our experiment or implement feedback exchange in the online discussion forums available in many instructional technologies. Ideally the instructor can configure the forums for anonymous exchange and limit access to only the designer and the assigned mentors. As this forum also acts as a record of prior comments and design stages, feedback providers can also review a project’s progression if needed. Instructors should also consider assigning mentors to two or more projects so students can contribute to the progression of a project, while also benefiting from the exposure to different projects and solution approaches.

Conclusion

Design educators are increasingly using online peer review in their courses. This is generating tremendous research interest in how to promote quality feedback exchange online. This paper contributes empirical knowledge of how a strategy of repeat peer assignment (i.e., being a peer mentor) and presenting additional context for a prototype (e.g., the prototype at the prior stage and its feedback) affect online peer feedback exchange. Our findings show that being assigned as a peer mentor makes students more receptive to the feedback received on their own projects and that designers write more in response to feedback from mentors. Students who were assigned as a peer mentor enjoyed their role, while many students who were assigned randomly to projects reported they would have preferred to have been assigned as mentors. Our findings also show that providing additional context for a prototype is most helpful during the late (refinement) stages of a project. Our contributions add to the literature for features that can be implemented to improve online peer review and provide instructors with practical guidelines for when to leverage these features in project-based courses.

Researcher in Human Computer Interaction, programmer by nature. Dabbles in design, politics, and the general welfare of the world.
