Torpedoes, stars, and the art of the blitz

Design critique for distributed teams

Liam Greig
Designing Atlassian

--

Over the past couple of years, Bitbucket design has evolved its rituals in an effort to tailor them to the multi-city, multi-timezone nature of our team. One such ritual that we’ve evolved significantly over the past year is design sparring. So much so, that at a certain point we realized we’d stumbled onto something entirely new — we call it the design blitz.

*This post will explore design critique through the lens of two different team types: Co-located teams work together in a single location, while distributed teams work from different locations.

But, why?

A quick primer: design sparring is the most common form of design critique at Atlassian. Gathered around the team’s design wall and armed with a stack of post-it notes and sharpies, co-located teams use sparring to collaborate and offer feedback on each other’s work. If Atlassian design sparring is new to you, I’d recommend a couple of quick posts to learn more.

Before jumping into the improvements we’ve made on our journey from design sparring to blitzing, let’s explore the challenges of applying more traditional design sparring to a distributed team:

#1: VC-based feedback can be… awkward

Sparring with a co-located design team gathered around a design wall is a great way to ensure a high-energy session with a natural rhythm that feels neither rushed nor forced. When the team is split across 4+ locations on a video conference, however, things begin to feel too structured and ‘formal’. Each location takes a turn, rarely building off one another, as the designer patiently listens and responds until their time slot mercifully comes to an end.

#2: Capturing and sharing feedback is hard too

On a physical wall, feedback is captured and organized on the design itself. Context is obvious, and the designer can simply return to their design wall after each session to review the notes. Without a wall to attach the feedback to, though, the conversation tends to lose its context or, worse, doesn’t get captured at all. What’s said in one location never finds its way to another, and teammates feel left out of the loop. All of this can add up to lower engagement, less efficient sessions, and more post-session overhead for the designer as they attempt to chase down context or missed items. Not fun.

#3: Our sessions were too large to manage

To avoid meetingitis and increase visibility across locations, we’ve always maintained a single session for the global Bitbucket design team. Combined with the fact that we invite non-designers (PM, dev, QE) to join us, sessions often include 8 or more participants. The challenge of managing such a large group is exacerbated by the distributed nature of our team. Sessions felt rushed, quieter participants struggled to contribute, and feedback lacked a cohesive direction. As a result, designers were hesitant to share and session attendance went down.

OK, who’s got the mic?

So where did we go from there?

A series of small improvements

As a riff on the more established practice of design sparring at Atlassian, the blitz still shares a lot in common with its predecessor. The session begins with a quick overview of the ground rules for giving and receiving feedback as well as a summary of the session topics. The designer walks the room through their problem statement, use cases, and the design they’d like feedback on — pretty standard stuff.

The blitz forks from sparring in its approach to giving and capturing feedback. Let’s take a look at some of the changes we’ve made as a series of small improvements:

Improvement #1: One ‘star’ to rule them all

Given our group size, it was no longer feasible (or desirable) for every participant to go through their feedback item by item. In addition, we needed to introduce some prioritization to the session. We introduced two simple changes to our format:

  1. Each participant was asked to stack rank their own feedback
  2. Each participant was asked to star one, and only one, piece of feedback

We’d then go around the room as usual, but we’d only review and discuss the stars. The star introduced a small but powerful change to our existing process. With it, we gained some much needed prioritization and gave everybody a chance to speak. In turn, however, we created a couple of new problems (surprise!):

  • Un-starred feedback was not getting captured. We asked everyone to add their remaining feedback to the page post-session, but few did.
  • Not all ‘stars’ were created equal. Some stars were MASSIVE, while others were paper cuts.
  • The feedback quality was often dependent on one’s familiarity with the project.
  • We were still awkwardly going around the room, one location at a time.

So we continued to iterate…

Improvement #2: Tell me everything, but don’t say a word

To recapture missing feedback and give the session a different flow, our next improvement embodied the true spirit of distributed teamwork. Once the designer has finished walking through their progress, the next 5–10 minutes are spent in silence. Each participant spends this time adding feedback directly to the mockups or design spec, as if they were alone at their desk. It was from this modified feedback loop that the design blitz was born.

At first, it’s a bit strange to silently add feedback while the designer is sitting right there staring at you. Once you get past that, though, the session actually has a nice flow to it. Since everything gets captured in the right context, it’s easier to catch up and chime in on the conversation if you weren’t able to attend.

Be warned, though: The byproduct of a blitz is a shit ton of feedback:

OK, this is a bit of an extreme example — but do expect a lot of feedback!

Following the silent-feedback session, we continued to go around the room using ‘stars’ as a conversation guide. We’d successfully found a way to capture everything in the right context so we turned our attention to conversation flow and better feedback categorization.

Improvement #3: Torpedoes, potholes, and stars

Our latest experiment introduced three tiers of feedback:

  • Torpedoes: Something that could really sink the ship and needs to be addressed ASAP
  • Potholes: A minor usability issue or inconsistency that should be addressed but doesn’t necessarily need to be covered during the blitzing session
  • Stars: Something that’s working really well (Because we can say nice things too!)

At the end of the silent feedback session we simply ask: ‘Any torpedoes?’ and tackle those first. Maybe that’s all we get through, which is perfectly fine since these are the most important bits. Once we’re through the torpedoes, we move on to the potholes and stars, which get captured on the blitz session page for future reference. Introducing torpedoes, potholes, and stars to our sessions delivered a few new benefits:

  • Even better feedback prioritization that begins with only the most critical bits
  • An easy way to capture, organize, and communicate the sentiment of the session
  • A more natural session flow than simply ‘going around the room’

Typical output using Torpedoes, Potholes, and Stars

Recap

The design blitz is a modified design feedback session that’s tailored for the distributed design team. It forks from traditional design sparring by introducing silent feedback followed by a discussion around only the highest priority items. Let’s recap the basic flow:

  1. Session intro: Go over the ground rules for giving and receiving feedback.
  2. Session topics: The designer takes the floor and presents their work.
  3. Silent feedback: Feedback is added directly to the design spec during 5–10 minutes of silence.
  4. Categorize: Optionally label feedback as torpedoes, potholes or stars.
  5. Discuss: Resume the conversation, beginning with any torpedoes then moving onto the potholes and stars.
  6. Post-session: Iterate on the design and keep the conversation going on the spec itself.

When to blitz

Many months in, the blitz has become a regular part of our weekly rituals. That said, it’s no silver bullet, and we’ve come to understand it should be introduced in addition to, not as a replacement for, sparring. The blitz works wonders when sweating the details, catching inconsistencies, getting an early signal on a new idea or concept, or when you simply want lots of feedback from a range of viewpoints. Sparring is still preferred for going deep on a gnarly design problem or when exploring a complex model or system. Moving forward, we’ll be using a mix of both sparring and blitzing, depending on the design problem and the type of feedback we’re after.

What’s next?

As you’re all painfully aware, process is never truly ‘done’, and we’re continuing to look for ways to improve and explore this ritual. As always, thoughts and questions are more than welcome, and if your team is doing something similar, we’d love to hear about it in the comments!

Happy blitzing!

Did you enjoy this post? Why not give it some ❤️ or a share? Want more of the same? Consider following Designing Atlassian.
