Ask Me or Tell Me?

Enhancing the Effectiveness of Crowdsourced Design Feedback

Fritz Lekschas
Harvard HCI
4 min read · Apr 20, 2021


This blog post summarizes our paper by Fritz Lekschas, Spyridon Ampanavos, Pao Siangliulue, Hanspeter Pfister, and Krzysztof Z. Gajos, which was accepted to the 2021 ACM Conference on Human Factors in Computing Systems (CHI ‘21).

Traditional feedback can be ineffective when it is overly negative or positive. Our approach is to have the feedback provider pose an open-ended question before the feedback statement, to promote reflection and mitigate the adverse effects of negative feedback. (Sketch by the wonderful Pao Siangliulue.)

Feedback can help us do better design work, but not all feedback is equally useful. If the feedback is mostly positive, it doesn’t motivate us to make significant changes. If the feedback is negative, it can inspire good revisions, but it may also sting so hard that we choose to ignore it. How should we deliver feedback so that people accept it and it helps them to improve their work? We tested a simple twist on traditional feedback in the context of graphic design. Instead of immediately telling the designer what we think about certain aspects of their design, we first asked an open-ended question to get them to reflect on their design choices.

We found that when people were presented first with an open-ended question (e.g., “What made you stick to a single hue for the flyer?”) followed by a related statement (e.g., “The colors are too monochromatic. There is little contrast and the headline does not pop out.”), they made better revisions to their work than people who received feedback as either just questions or just statements. Below is an actual example from our user study.

Two example feedback items for a flyer. Each feedback item consists of an open-ended question followed by a traditional statement. Although the related questions and statements target the same aspects of the flyer design, the questions carry less sentiment than the statements.
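To make the pairing concrete, here is a minimal sketch (our illustration, not code from the paper or its study system) of how such a combined feedback item could be represented and rendered, using the example from above:

    from dataclasses import dataclass

    @dataclass
    class FeedbackItem:
        # A combined feedback item: an open-ended question shown
        # before the related traditional statement.
        target: str     # design aspect addressed, e.g., "color"
        question: str   # open-ended, reflection-promoting question
        statement: str  # traditional feedback statement

        def render(self) -> str:
            # Show the question first to promote reflection before
            # the (possibly negative) statement is revealed.
            return f"{self.question}\n\n{self.statement}"

    # Example built from the flyer feedback quoted above.
    item = FeedbackItem(
        target="color",
        question="What made you stick to a single hue for the flyer?",
        statement="The colors are too monochromatic. There is little "
                  "contrast and the headline does not pop out.",
    )
    print(item.render())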

Here’s why we think this small change was so effective. First of all, people rarely ask open-ended questions that are overly positive or negative, as that goes against the nature of open-ended questions. For instance, in our study, we found that the questions were almost always neutral in tone, while the tone of the related statements varied greatly. Starting with a neutral question can be particularly important for hard-to-accept negative feedback. Second, we think that open-ended questions can promote reflection even if the related statements are not acceptable or useful on their own. This is especially useful for overly positive statements, which might only present praise instead of ideas for improvement.
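To illustrate this tone difference, here is a small sketch that scores the example question and statement from above with an off-the-shelf sentiment analyzer (VADER via NLTK). This is purely illustrative; the study did not necessarily measure sentiment this way:

    # Compare the tone of a feedback question and its related statement
    # with VADER, a rule-based sentiment analyzer shipped with NLTK.
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
    sia = SentimentIntensityAnalyzer()

    question = "What made you stick to a single hue for the flyer?"
    statement = ("The colors are too monochromatic. There is little "
                 "contrast and the headline does not pop out.")

    # `compound` ranges from -1 (most negative) to +1 (most positive);
    # values near 0 indicate a neutral tone.
    for label, text in [("question", question), ("statement", statement)]:
        score = sia.polarity_scores(text)["compound"]
        print(f"{label:>9}: compound sentiment = {score:+.3f}")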

But questioning is certainly not a new approach. In particular, in teaching, it has been used and studied for a long time. So what is the novelty here? In this project, we were specifically interested in the usefulness of questions in the context of crowdsourced feedback. Crowdsourcing is an efficient method for collecting large amounts of feedback quickly. However, in contrast to other ways of collecting feedback (e.g., from teachers or colleagues), crowd workers might not have been trained to assess and evaluate one’s work. As a consequence, crowdsourced feedback can be superficial and contain strong sentiment.

Research Questions

A large body of work has studied how structuring and scaffolding the process of collecting feedback from the crowd can overcome some of these challenges. Building on these findings, our research question was twofold:

  1. Can we successfully elicit open-ended question-based feedback from the crowd, and does this feedback carry more neutral sentiment?
  2. Is the combination of feedback questions and statements more acceptable and effective than statements or questions alone?

Two Controlled User Studies

To shed light on these two questions, we conducted two controlled user studies. In the first study, we elicited feedback questions by asking crowd workers to first provide traditional feedback statements and then rephrase those statements into open-ended questions. Overall, 85% of the crowdsourced feedback questions were open-ended and thought-provoking. In our second user study with 36 non-professional designers, we compared the effectiveness of feedback statements, questions, and a combination of both. As shown in the following figure, our main finding is that:

Presenting feedback questions followed by their related statements led to better design revisions than showing statements or questions alone!

Left: The design revisions’ improvement (from 1 “worsened significantly” to 7 “significant improvement”). Higher values are better! Statistically significant differences are highlighted with an asterisk (*). Right: Distribution of the feedback’s thought-provokingness and usefulness (from 1 “no, not at all” to 5 “yes, very much”) and its tone (from 1 “very negative” to 5 “very positive”).
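For readers curious about the elicitation step from the first study, here is a hypothetical sketch of the two-step task flow. The prompt wording is ours, not the exact instructions used in the study:

    # Hypothetical sketch of the two-step elicitation procedure: workers
    # first write a traditional feedback statement, then rephrase it into
    # an open-ended question. The prompt wording is illustrative only.
    def build_elicitation_steps(design_url: str) -> list:
        return [
            {
                "step": 1,
                "field": "statement",
                "prompt": (f"Look at the design at {design_url} and write "
                           "one specific feedback statement about an aspect "
                           "you would change or keep."),
            },
            {
                "step": 2,
                "field": "question",
                "prompt": ("Now rephrase your statement as an open-ended "
                           "question that prompts the designer to reflect "
                           "on that design choice, without revealing your "
                           "opinion."),
            },
        ]

    for step in build_elicitation_steps("https://example.com/flyer.png"):
        print(f"Step {step['step']} ({step['field']}): {step['prompt']}")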

One aspect that does not seem to be influenced by how the feedback is framed (statement or question) is the overall perception of the feedback. We asked the designers to rate the feedback’s thought-provokingness, usefulness, and tone, but we did not find any meaningful differences between feedback questions, statements, and a combination of both. Maybe (and this is just a hypothesis) it’s not the designers’ perception that changes but their focus. As previous work by Sargeant et al. (2008) has shown, one reason why negative sentiment can adversely affect people’s emotional state is that they perceive the feedback as targeting themselves rather than the design task. Hence, an interesting future research question is whether questions can shift the recipients’ perception such that they perceive the feedback as targeting the design task.

Conclusion & Outlook

We showed that a simple twist of posing an open-ended question prior to presenting traditional feedback can improve design revisions. Since including such questions in the feedback process complements other strategies for enhancing the feedback’s effectiveness (like structuring or scaffolding), our method can be integrated into existing design feedback systems. Moreover, we believe that our findings generalize to feedback on other kinds of work, particularly creative work with similar ideation processes.

Finally, it would be interesting to evaluate which language aspects influence the quality of question-based feedback. And given that designers with varying expertise make sense of and provide feedback differently, future work could study whether question-based feedback is perceived differently by non-professional and professional designers.

Resources

Full paper: https://vcg.seas.harvard.edu/pubs/ask-or-tell-me

