Critiquing Design Critique

Eric Cipra
May 11, 2017

Striving to make our design critiques more effective, we tried out three new formats. This is what we learned.

If you’re reading this, you’re probably already familiar with human-centered design. If not, don’t worry — I’ve got you! Take a few minutes to watch a popular video explaining the basics before reading on.

“Observe, ideate, prototype, test” — or, as we at Pivotal Labs like to say, “think, make, check” — is a wonderful loop that lets us constantly iterate on design work to suss out the best solution for a given user problem. What does our design assume will work that actually doesn’t? Where might our users get tripped up, lost, or confused? Could an actual human understand what we’ve made?

Our design team believes so strongly in this process that we devote one hour per week to design critique, a session in which (a) two project teams take turns presenting a problem and their proposed solutions while (b) the remaining designers give constructive feedback that the project teams can use to improve their designs. Our fundamental format, Dots & Cards, is beautifully described in this classic blog post. That format — involving printed designs, sticky dots, silent critique, notecards full of feedback, and a group share out — is by far our most frequently used as it is flexible enough to support a wide variety of feedback (interaction, flow, visual design, etc.).

In recent months, though, the NYC office design team noticed that our design critiques started to feel a little stale. Sure, we always left the room with solid next steps, but we’re a team of iterative thinkers and we started wondering: how can we make our design critique more effective? Might we be able to tailor the approach to suit the desired feedback?

With this in mind, we took it upon ourselves to try three new formats over the course of several weeks, taking care to ask participants to critique the experience using pluses (i.e. “what worked?”) and deltas (i.e. “what can be improved?”) after each session.

Here are our findings.


Format #1: “Next-pectations”

What is it?

A project team reveals printed screens one by one to the design team. The team silently notes feedback (initial reactions, usability issues, etc.) along with an expectation of which screen they’d see next and why (hence the name “Next-pectations”). When all screens have been revealed, each person verbally shares their thoughts with the group.

30-minute agenda:

  • 5 min — Introduction of project with name, 1–2 sentences of context, user, and user goal
  • 10 min — Screen reveal exercise — 5 screens max & 2 min per screen
  • 10 min — Feedback discussion
  • 5 min — Outro

Supplies:

  • Notecards
  • Sharpies
  • Mock-ups in a slideshow format (e.g. Marvel, InVision, etc.) labeled with a numerical indicator (1a, 1b, 2, 3, 4a, 4b)

Why do it this way?

This approach functions as a proxy for first-time use since it approximates a user’s first flow through an application. It has the benefit of making the design team articulate their assumptions before revealing the reality; this offers the project team unvarnished feedback. Accordingly, it works best when the greater design team is not familiar with the screens in question and can bring fresh eyes to the user flow.

What did we learn?

Pluses

From a design team perspective, people clearly loved the “one screen at a time” concept: it increased our focus, made the anticipation palpable, and left people feeling engaged. The project team that devised this approach also felt strongly that they’d gotten valuable feedback on the user flow, identifying several screens that felt unexpected and leaving with specific areas to address.

Deltas

We neglected to label each screen (as suggested above), making it very difficult for the design team to connect feedback with a specific screen. The session felt disjointed and took longer than expected as a result.

We took this labeling lesson to heart and were better prepared for format number two …


Format #2: Digital Click-Through

What is it?

A project team shares a link to an interactive prototype (e.g. InVision, Marvel, etc.) with the design team. The team silently clicks through and leaves comments on each screen over a short period of time. The group reconvenes at the end for a live, screen-by-screen discussion of the feedback in round-robin fashion.

30-minute agenda:

  • 5 min — Introduction of project with name, 1–2 sentences of context, user, and user goal
  • 10 min — Silent design team click-through
  • 10 min — Project team-led feedback discussion
  • 5 min — Outro

Supplies:

  • Projector or TV w/ cable
  • Interactive prototype URL with commenting ability enabled and screens labeled (e.g. 1a, 1b, 2)
  • 1 laptop per person

Why do it this way?

Paper is inescapably flat. This approach lets the design team giving feedback cover more ground by clicking forward and backward through a screen flow, yielding a more holistic experience of the design being critiqued (“I tried this and X happened; here’s how I felt about it” vs. “I see this and think X is what would happen”). In addition, the project team gets excellent feedback on its designs, and the greater design team gets to train their eyes to interpret design patterns — especially with regard to flow and the use of (hopefully intuitive) interactions.

What did we learn?

Pluses

We loved interacting with a prototype vs. looking at paper. This difference really let us understand how the workflow felt and gave each person the chance to go at their own pace. Not surprisingly, we think this format is particularly appropriate for interaction design (especially if the project team uses a robust prototyping tool) and user flow feedback.

Deltas

Time management proved to be the largest downside of this format — we ran a bit long, thanks to ambiguous expectation-setting at the outset and some logistical kinks. To combat this, we suggest that the presenting team:

  • Give a clear task statement
  • Be cognizant of the scope to cover, selecting roughly 4 (or fewer) linked screens if they are heavily interactive (e.g. many different states, modals, etc.) or roughly 8 (or fewer) linked screens if they are relatively simple/flat
  • Email the prototype’s URL out to the group before the session

This brings us to the third and final new format …


Format #3: Live Information Architecture (IA)

What is it?

This format is akin to a card sort: the project team provides the design team with printed cut-outs of content elements for a particular screen (or screens) and asks the group to organize them into an intuitive hierarchy. Little to no visual polish is included or necessary.

This exercise can be done as individuals, pairs, or groups of three depending on the scope, amount of preparation time, and number of attendees.

30-minute agenda:

  • 5 min — Introduction of project with name, 1–2 sentences of context, user, and user goal
  • 20 min — IA exercise — as individuals, pairs, or groups of three
      • 10 min — Work time (create the hierarchy by reordering, removing, or adding elements)
      • 10 min — Review outcomes (2 minutes per person/team)
  • 5 min — Outro

Supplies:

  • Notecards
  • Paper
  • Post-its
  • Sharpies
  • Printed content elements
  • Printed prompt for context

Why do it this way?

The project team gets a diverse set of detailed input on what an intuitive content hierarchy might look like for a particular set (or subset) of possible screens. Meanwhile, the design team gets to practice their content-hierarchy skills, which is especially useful given how easy it is to focus too heavily on the visual aspects of a design.

What did we learn?

Pluses:

Once again, we loved trying something new. It was refreshing to ignore the visual polish in order to make sure the skeleton of the application in question was solid. As a result, the team felt this was a great exercise for gathering input on information architecture and content hierarchy.

Many people also found it exciting to work in pairs, though this was not universally lauded given the limited overall time. (Note: we tried a version with two rounds of IA, but we don’t recommend it; it dilutes focus and takes much longer to facilitate.)

Deltas

On the flip side, we felt the scope was too wide, which left us feeling rushed through the exercise. As thoughtful people, we naturally wanted to give measured feedback; this proved quite difficult, so we wouldn’t recommend more than one extended round of critique with this approach. Additionally, as before, setting clear expectations up front would have been a boon to the team; several people noted that they were a bit confused about what to do and when.


Bottom Line

Our new formats, while not universally successful, offered us a great opportunity to learn and improve our practice. Specifically, we learned that:

  • New approaches are refreshing!
  • A desired type of feedback (e.g. IA & content hierarchy) now has a correlated format (e.g. Live IA)
  • Clear project context and activity directions are a must
  • Sufficient time is crucial for success, so we need to make the timeline as obvious as possible
  • Prioritization is critical because we can only critique so much in 30 minutes

We will continue to fine-tune our formats, holding a mini-retro after each instance so we can learn and improve in an iterative fashion. Feel free to try them yourselves — be sure to let us know how it goes in the comments — and tell us about any other style of critique that you’ve found valuable!

Product Labs

Product and Design advice and stories from the folks at Pivotal Labs

Eric Cipra

Designer at @pivotallabs; product person; k-12 education fan.