Feedback: How to Ask For It, Evaluate It, and Use It

Joshua Smith
Nightingale
10 min read · Jun 28, 2019

I recently wrote a piece about how we can give better feedback by describing our experience and identifying what elements contributed to that experience — allowing the creators to evaluate how successfully their choices are executing their intended design strategy.

But, the responsibility of good feedback isn’t just on the evaluator; we can structure our ask to provide our evaluators with the context that leads to better feedback. And, ultimately, it’s up to us to interpret and integrate whatever feedback we receive.

Asking: Describing your goals for better feedback

I’ll often see a viz posted on Twitter with a short plug/description and the quick note: “feedback appreciated”. After several perspective pieces on the appropriate time and place for feedback, the community has increasingly adopted the practice of signaling whether or not feedback is wanted. However, this simple notation gives the evaluator no guidance on how to give that feedback.

For example, let’s take the commonly criticized choice of red and green. Assume I’ve chosen to use them in a viz because my audience is in retail, where red and green are commonly accepted indicators (and often so culturally ingrained, e.g. “We’re in the red”, that users absolutely demand them). If I post this viz and ask for feedback, I will almost certainly be told I shouldn’t use those colors because of accessibility issues.

But, as I stated above, I chose the colors for a specific reason. There are plenty of ways to use red and green that get around accessibility issues, such as tweaking the saturation and luminosity so that the colors are still distinguishable for red-green color vision deficiencies. But, without providing my evaluators more context, I’m putting the onus on them to reverse-engineer my goals.
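
To make the “tweaking saturation and luminosity” point concrete, here’s a minimal sketch in plain Python using the WCAG 2.x relative-luminance and contrast-ratio formulas. The hex values are illustrative picks, not a recommended palette, and luminance contrast is only a rough proxy for how distinguishable a pair remains under red-green color vision deficiency:

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB color like '#FF0000'."""
    def linearize(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(a: str, b: str) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    hi, lo = sorted((relative_luminance(a), relative_luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Pure red vs. pure green: similar luminance, so the pair leans on hue alone.
print(round(contrast_ratio("#FF0000", "#00FF00"), 1))  # prints 2.9

# Dark red vs. light green: the luminance gap carries the distinction
# even for viewers who can't separate the hues.
print(round(contrast_ratio("#8B0000", "#90EE90"), 1))  # prints 7.1
```

A pair like the second one keeps the culturally ingrained red/green meaning while no longer depending on hue perception alone.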

But what if I had asked the community, “Hey — my users are demanding I use red and green to indicate good and bad, and those colors carry an organizational cultural meaning that is immediately recognizable to users, and the culture will be difficult to change. What do you recommend?”

The additional context helps my evaluators solve the right problems. By giving them some insight into my design strategy, they can provide feedback that gets me to my end goal.

We can find some well-structured asks for feedback in the Data Visualization Society’s Slack channel devoted to feedback, #share-critique. There’s a common form used to ask for feedback, which you can see Chris Love (Tableau Zen Master) using below:

Notice how much context is provided. While this doesn’t capture Chris’s entire design strategy, it includes the key elements: the goals and the audience begin to describe the experience he wants his piece to create. Add in the request for specific feedback, and the stage the project is in, and you’ve got a much better starting point for more refined feedback.

While I really love this format because it’s short and simple, I think there are some additional elements we can provide when asking for feedback that can further refine the feedback we get, and even draw out new ideas from our evaluators. There are three things that can help you describe your design strategy to potential evaluators:

  1. Describe your target audience. If you’re looking for a broad audience, you might need to work from a lower-level data literacy. However, if it’s a chart for the American Statistical Association, your users are going to be much more familiar with a variety of visualization techniques and advanced methodology. This information can help evaluators refine their thoughts to fit your users.
  2. Describe the experience you’re hoping to create. What do you want people to feel when they engage with your viz? Is this a topic that you want to stir excitement? Is it a morose topic that should generate compassion? Or perhaps you want to create a sense of outrage over an injustice. The emotive reaction to our work is an important, but under-considered, aspect of data visualization.
  3. Describe the key takeaways — the “story”, if that’s the term you prefer. What do you want them to learn? Should this drive some action on the part of the users? What opposing views are you competing against? What should your users remember when they recall your visualization?

While these three items don’t completely describe your design strategy, they offer a great starting point. This information allows evaluators to ensure their recommendations drive toward your goals, rather than toward generic best practices.

However, even if we provide this information, we shouldn’t blindly apply feedback (even from DataViz celebrities); we need to evaluate the evaluation.

Evaluating: Interpreting feedback as a description of an experience

Perhaps this sounds ungrateful, but I don’t think we should take feedback at face value. In fact, I rarely apply feedback directly as suggested. Having said that, I do also think there’s something useful in all feedback, even if it doesn’t initially look helpful.

Everyone’s feedback is, at least to an extent, describing an experience. Even directives based on best practices, e.g. “don’t use a pie chart”, can be read as an experience. While the evaluator may simply be reciting a “rule”, they are still describing their reaction: the pie chart isn’t working and is generating a negative experience for this person, even if that experience is derived from an ingrained buy-in to certain teachings on pie charts. Or, perhaps they provide even more detail: “it’s hard for me to compare slices”. Comparing slices may not have been your goal, but you’ve now learned that the user wanted to compare slices — an interesting detail about the user experience.

If you’re anti-pie, humor me for a moment and assume that I had good reasons to use a pie chart. I shouldn’t discard the above advice, but I should recognize that the chart choice isn’t working for at least one user.

I might keep the pie chart, but emphasize certain elements with annotations and highlighting, or I might add another chart and create some interactivity, or perhaps I decide that my good reasons for using a pie chart are now outweighed by the new information that users might want to compare slices.

Regardless, it’s important we take the time to evaluate the feedback. When I receive feedback, I think through several things to identify what might be useful:

  1. How well does this person represent my target audience? You can imagine a sort of weighting system here, where feedback from individuals who are representative gets more weight. However, even feedback from less representative individuals can still be helpful, as, to an extent, we are all describing a human experience.
  2. What was their experience? It may take some effort, but we can search all our feedback — even directives, such as “do this”, or “don’t do that” — for hints at what kind of experience our piece created. For example, criticisms of low contrast might indicate frustration and a slower pace as users struggle to read the text.
  3. What did the user take away from the viz? This one can be the hardest to search for, but it’s often there. In our pie chart example above, we can see that a takeaway must have been in the comparison between the slices. Perhaps this wasn’t a part of our “story” — consequently, we might consider some redesigns to emphasize different elements and distract the user from comparisons.
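
The weighting idea in the first point can be sketched very roughly. This is purely illustrative, not a formal method: the weights are judgment calls about how representative each evaluator is of the target audience, and recurring themes are scored by total weight rather than raw count:

```python
from collections import defaultdict

# Each entry: (how representative the evaluator is of my target audience,
# the theme their feedback points at). Weights are subjective estimates.
feedback = [
    (0.9, "hard to compare slices"),
    (0.9, "key insight unclear"),
    (0.3, "don't use a pie chart"),
    (0.2, "hard to compare slices"),
]

scores = defaultdict(float)
for weight, theme in feedback:
    scores[theme] += weight

# Themes raised by representative users float to the top.
for theme, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {theme}")
```

Nothing here is precise; the point is simply that a “rule” recited by someone far from your audience need not outweigh an experience described by someone squarely inside it.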

Evaluating and interpreting feedback doesn’t mean grading feedback as good or bad, worthwhile or not. It means that we should always be looking at the experience behind the feedback. If the evaluator doesn’t describe it, we should compare their feedback to our designs and our goals, and identify the choices that we made that aren’t (or are!) working for the evaluator.

Using: integrating feedback into your design

When I was in my creative writing courses, I had a professor who emphasized the revision process by breaking down the word itself: “re” and “vision”. Revision is the act of seeing our work with a fresh perspective.

Feedback does exactly that: based on others’ experiences, our work should look different to us. Our choices should carry different meaning because we’re seeing it through someone else’s eyes. With our new understanding of how our choices impact the experience we’re creating, we can return to our goals and determine how effective our choices have been.

At some point in my artistic / writing background, I learned something incredibly valuable pertaining to feedback: I don’t need to incorporate the feedback I receive, and I certainly don’t need to implement it directly as stated. I’ve learned that someone may say, “your use of white space feels inconsistent, there’s a lot more in this particular section”, and I may take that as an indication that my choices are successful. Perhaps I’m hoping to use a disproportional amount of empty space to emphasize a particular moment in my piece — in that case, I may decide my feedback is an indication my choices are having the desired effect (although perhaps I should make the effect more apparent so it doesn’t look like a design flaw).

This is particularly helpful when you have conflicting feedback. What if one user tells you that you use too much jargon, while another user suggests more jargon to be more specific and precise in your language? The interpretation of the feedback is similar for both: your text was not clear to the audience. If your design strategy is to create a more scientific, analytic experience, you might work your way up to jargon by adding in some definitions along the way. Alternatively, perhaps your targeted audience is a specific group of knowledgeable scientists; in that case, you might not incorporate the first feedback, but focus instead on identifying the scientific terms that introduce clarity.

When I’m thinking about how to integrate feedback, I again think through three things, similar to the two sets of three above:

  1. How can I better reach my target audience? I’m often designing for people that are more familiar with the subject matter than I am, but less familiar with analytics. I often find that I’ve both over- and under-explained certain parts of the visualizations, simply because my own perspective is biased. Feedback helps me refine my choices into a design that I might not have personally chosen, but something that works for my users.
  2. How can I better shape the experience to what I intend? I often look at feedback for things that I need to “turn up”. This is especially true in rule-breaking, where I may have inverted a y-axis or weirdly arranged my visuals for a particular effect, but found that I was too subtle and so it looked like a design flaw; in this case, I need to make the effect stronger to create the desired experience.
  3. How can I further emphasize the key takeaways? One of the most common pieces of feedback I see/hear, both for my work and others, is “so what?”. Insights may seem obvious to us when we look at charts, but looking at charts is what we do for a living. Our users may need the key insights to be further emphasized, and likely even some interpretation provided for them. I can use feedback to identify and improve areas where things that stood out to me aren’t as apparent to my users.

Again, I don’t have to incorporate all feedback I receive (although it’s polite to acknowledge and thank the evaluator). But if I’ve carefully interpreted all the feedback I’ve received, I should return to the act of creation and look for the places where my choices aren’t as effective as I’d hoped.

Epilogue: defining “done”

Feedback should be iterative. With each new version, I can get new feedback on how my new choices are or aren’t effective.

But the danger of iterative feedback is a loop without an exit. We can always find someone to provide feedback, no matter how close we thought we were to being “done”. Consequently, the burden of defining “done” falls on us as the creators, and us alone. When we release our “finished” work to the public, it’s likely we’ll still receive criticism, as any artist does (hence art critique). If data visualization is starting to play in the arts, we should expect to be treated as artists, and so we should expect critiques from our versions of art and film critics.

So, how do we decide something is done? For me, it boils down to a question of marginal returns. I’ve found that each new iteration has a diminishing impact on the experience I want to create. If my users are describing the experience I want to create, then I know I’m at or close to “done”.

This, however, requires me to keep practicing the steps above: as I get closer to “done”, I have to refine my asks further, interpret the feedback more carefully (later feedback often becomes detail-oriented or “nit-picky”), and watch for ever subtler changes in my choices. Consequently, the burden of better feedback falls heavier and heavier on my shoulders as the creator.

Getting to — and recognizing — “done” is on me, as the creator. There’s always more feedback to be had, but so long as I’m focusing on my target audience, the experience I want to create, and the key takeaways I want to deliver, I can keep an eye on where my piece is vs. where I want it to be.

I am a user experience researcher, a data scientist, and a public folklorist.