Introducing Bug Bashes: How to Eliminate Bugs and Release a High-Quality Product

Mark Piana
Published in wholeprodteam · 10 min read · Mar 8, 2021

Finding bugs immediately after launch is the worst. One minute, you’re celebrating with the team and watching user traffic start to trickle to your feature. The next minute, the support team files your first bug and bursts your bubble. While no product ever launches without issues, how can you minimize the risk that bugs slip through the cracks and be confident that you’re releasing a high-quality product?

Let me introduce you to your new best friend — bug bashes.

What is a bug bash?

Despite its name, a bug bash is not a time to play bug whack-a-mole and actually fix known problems. Instead, it is a session during which you search for and record issues with your product. Ron Patton coined the term in his book Software Testing and describes it as:

A bug bash is a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people, put aside their regular day-to-day duties and “pound on the product” — that is, each exercises the product in every way they can think of (2001).

The point is that internal stakeholders from across the organization test the product as an end user would and attempt to pinpoint issues with it before launch. Afterwards, the team can fix any identified problems and then release the feature with more confidence.

Why should I use bug bashes?

If you’re a Product Manager (PM) like me, your first reaction may be to ask why you need to bother with a bug bash, especially when your engineering team has already created a robust automated testing suite. You have so many other responsibilities leading up to launch, right?

The problem is that too often no one stops to test the feature as an end user would before release. Does this scenario sound familiar? As a launch deadline approaches, the PM is juggling various marketing-, sales-, and support-related tasks and relies upon their engineers and designers to ferret out bugs. However, the engineers are racing to finish development and have to deprioritize writing tests. If they can allocate some time to testing, they focus only on their own component of the product, not the whole thing. At the same time, the designers have already been pulled onto a new project and don’t have the bandwidth to validate what has been implemented. In the end, no one thoroughly vets the feature. The user experience ends up not matching what was intended, and bugs slip through the cracks. Worst of all, you don’t find out about these problems until the feature is already in customers’ hands, and your product’s chance at a strong first impression is dashed.

As the PM, you are responsible for guiding the development and delivery of quality products, so this kind of buggy release ultimately points back to you. Therefore, you should be heavily invested in pressure testing your features before release and not falling victim to these oversights. By forcing the whole team to step back and use the feature as an end user would, bug bashes can help to mitigate these issues and to ensure that your product is truly ready for launch.

What does a bug bash actually look like?

Enough pontificating. Let’s make this idea a little more concrete by giving you an example of how we have utilized bug bashes at Klaviyo.

My Data Science pod recently released a new feature called Benchmarks, which allows users to compare their company’s performance to their industry’s and peers’ results. Prior to launch, we conducted multiple bug bashes to root out any issues and followed three key steps to turn this concept into reality.

First, create a bug bash guide

Before even thinking about scheduling our first bug bash, we built a testing guide that helped us to structure our sessions and consolidate feedback. While we could have let our coworkers hammer away at the feature as they saw fit, creating this plan allowed us to direct our testers to areas where we needed the most input and to leave each session with the feedback most valuable to the team.

We built our guide as a spreadsheet with a few key pieces of information — test accounts, testing assignments, and test cases.

In the first tab, we provided a series of test accounts that depicted the range of possible users — from the ideal account to one with multiple edge cases. During our bug bashes, we had people use this mixture of accounts to ensure that we tested the product under a variety of real-world scenarios, instead of just on the happy path.

Screenshot of our Test Accounts tab

The second tab, test assignments, included a list of all the user journeys we wanted to evaluate along with a space where we could assign a tester to each case. This sheet made it easier to track which cases had been tested and by whom after each bug bash.

Screenshot of our Test Assignments tab

Lastly, we configured a distinct tab for every test case, or user journey. In each tab, we delineated the sequence of steps a user would follow to complete a task within Benchmarks, with each step on its own row. Then we added two columns for every test account: one stating the expected result of the steps and one that the tester would fill in with the actual result. While these details may seem excessive, they helped to forestall questions about what was and wasn’t a bug during the bug bash and allowed us to highlight known issues in advance so that we could eliminate unnecessary feedback from testers.

Screenshot of an example test case
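One way to picture the guide’s structure: each test case is a sequence of steps, and every step carries an expected and an actual result per test account. A minimal Python sketch of that shape (all names and example data here are hypothetical illustrations, not Klaviyo’s actual spreadsheet):

```python
from dataclasses import dataclass, field

@dataclass
class StepResult:
    expected: str      # what the guide says should happen
    actual: str = ""   # filled in by the tester during the bash

    @property
    def passed(self) -> bool:
        # A step passes once the recorded actual result matches the expected one
        return self.actual == self.expected

@dataclass
class TestCase:
    name: str                # e.g. "Navigate to the Overview page"
    assignee: str = ""       # tester assigned in the Test Assignments tab
    # steps[i][account] mirrors the per-account expected/actual columns
    steps: list[dict[str, StepResult]] = field(default_factory=list)

    def failures(self) -> list[tuple[int, str]]:
        """Return (step index, account) pairs where actual != expected."""
        return [
            (i, account)
            for i, accounts in enumerate(self.steps)
            for account, result in accounts.items()
            if result.actual and not result.passed
        ]

# Hypothetical example: one step, two test accounts, one mismatch
case = TestCase(
    name="Navigate to the Overview page",
    assignee="Alex",
    steps=[{
        "ideal_account": StepResult("Overview page loads", "Overview page loads"),
        "edge_case_account": StepResult("Overview page loads", "Blank chart shown"),
    }],
)
print(case.failures())  # → [(0, 'edge_case_account')]
```

The point of the expected/actual split is exactly what the spreadsheet gave us: a tester never has to guess whether a surprising result is a bug, because the expected behavior is written down next to where they record what they saw.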

Second, schedule the bug bash

Once our test plan was ready, it was time to schedule our bug bashes. We decided to book hour-long slots two to three days before the end of each sprint because this timing allowed the group to assess most of the updates from the current sprint while also giving us enough time to evaluate the session’s findings and prioritize any work before the beginning of the next sprint.

For each bug bash, we invited the project team — the engineers, data scientists, designers, and product manager (i.e., me) for the feature — plus a variety of stakeholders from across the organization, including the broader Data Science, Design, Product Management, and Product Marketing teams. It would have been even better to incorporate individuals from the Customer Success, Support, and Sales organizations as well to garner feedback from entirely different perspectives.

Third, host the bug bash

Bug bash day was finally here! We had a solid test plan and a group of excited testers. Now what?

To start, we provided an overview of the feature and the test plan. Generally, the lead data scientist or I would spend five minutes summarizing the Benchmarks feature, the various tabs in our spreadsheet, and our expectations for how people would document their findings.

Then we assigned people to the test cases. We would pull up the second tab in our spreadsheet and divvy up the test cases across everyone present.

Next, we let the horde loose! Once everyone knew what to test, we let them start pounding the product in search of issues. We encouraged everyone to complete their test cases and then broaden their search to the feature as a whole. Oftentimes, people going off script led to the biggest rewards, as they threw scenarios at the product that we would never have considered. We tried to keep the group focused and to ensure that we got what we needed from the bug bash while also letting the team be creative and have fun too!

Lastly, we compiled, reviewed, and prioritized the feedback. Once we captured the insights from our bug bashing crew, we combed through everything to pinpoint high priority issues, documented them in our sprint planning tool, and queued them up for our next sprint.
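The triage step can be sketched as a simple sort over the recorded findings, assuming each one is tagged with a severity when it’s filed (the severity scale and field names below are illustrative assumptions, not any particular tool’s schema):

```python
# Each finding captured during the bash: a description plus a severity tag.
# Lower number = higher priority (the scale itself is an assumption).
SEVERITY = {"blocker": 0, "major": 1, "minor": 2, "polish": 3}

findings = [
    {"desc": "Dropdown menu is hard to discover", "severity": "major"},
    {"desc": "Chart tooltip misaligned by 1px",   "severity": "polish"},
    {"desc": "Page errors for empty accounts",    "severity": "blocker"},
]

# Sort by severity so the worst issues land at the top of the next sprint
triaged = sorted(findings, key=lambda f: SEVERITY[f["severity"]])

for f in triaged:
    print(f["severity"], "-", f["desc"])
```

In practice the “sprint planning tool” does this for you once issues are filed with a priority; the sketch just makes the review-then-queue step concrete.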

Our testers found more bugs than our customers did!

How bug bashes improved Benchmarks

Following this process irrefutably helped to refine Benchmarks leading up to launch. In each bug bash, the fresh eyes on the feature quickly spotted numerous potential user experience improvements and bugs. As a result, the team was able to release a few minor but highly impactful updates before we put the product in customers’ hands.

To give a simple but concrete example, the bug bashes informed us that we needed to change our navigation menu to simplify discovering and visiting additional pages within Benchmarks. Heading into our first bug bash, the main Benchmarks page looked like this design.

Original Benchmarks overview page

Can you figure out how you would navigate to other pages within the Benchmarks tab? To everyone on the development team, the functionality was quite apparent. A user would notice the down caret next to Overview, click on Overview, and find a dropdown list of additional pages.

Close-up of the original navigation dropdown

However, in our first session, a significant number of people from outside the team could not find this dropdown menu and would even forget how to discover it after we told them. Clearly, the development team had gotten too close to the feature and could no longer notice potential issues for first-time users.

To rectify this problem, the team quickly iterated on a few design options and settled on a new version with tabs in place of the dropdown. After implementing this change, we were eager to see its impact in our next bug bash and were delighted when people immediately noticed the tabs. Our navigation issues evaporated with this update, and we became more confident that real users would be able to find each of these additional pages without assistance and, thus, would discover the full value of the feature.

Benchmarks Overview page

How bug bashes have helped us going forward

Our bug bashes obviously allowed us to release a more robust version of Benchmarks, but these sessions have had a bigger payoff than just identifying bugs. Since we started hosting bug bashes, there have been a variety of supplemental benefits to the team:

  1. Bug bashes keep us close to the product. They force us to step back from our day-to-day tasks, assess the current state of the product, and truly feel our users’ pain points. In turn, the experience helps us to reprioritize feature requests and bugs by thoroughly understanding the areas that need the most improvement.
  2. Bug bashes allow us to identify issues earlier. While creating a new feature, it is easy for a team to become too familiar with the product, adapt to its quirks, and become unable to see its flaws as a new user would. Our bug bashes bring fresh eyes to the product and allow us to witness the experience for first-time users. These new perspectives help us to identify usability issues and bugs early in development, so we can adapt well before release.
  3. Bug bashes instill a quality mindset in the team. Building a quality product does not begin with testing immediately before release. Instead, each team member has to take ownership of the product and feel accountable for quality throughout the process from design to implementation to launch. Bug bashes have taught our team to speak up early about problems they find, have forced them to feel the consequences of their work, and have helped them to get feedback and improve going forward. As a PM, I have learned to write clearer and more robust requirements. The engineers have developed a knack for spotting edge cases more easily. This feedback loop has created a higher functioning team that delivers better quality work faster.
  4. Bug bashes encourage cross-team communication. By bringing people from different teams together, bug bashes have helped us identify dependencies on other teams’ work earlier in the process. They also have enabled stakeholders to provide input and to feel heard throughout the project. As a result, they have prevented those last-minute blockers and complaints about development happening in a silo. Plus, hanging out with teammates for an hour or two has strengthened relationships across the organization and been pretty fun too!

Now go try it yourself!

Bug bashing is only one tool at your disposal for testing a product and obviously is not bulletproof. However, it undeniably is my favorite approach for evaluating the product from an end user’s perspective and finding issues (especially those pesky small ones) before launch.

Now that you’ve seen how we have leveraged it at Klaviyo, go give it a whirl for yourself! Adapt our approach as you see fit for your team. Remember that the goal is not to adhere strictly to the process but to find the method that works best for you and your team so that ultimately you can deliver a quality product that provides value to your end users.
