(originally written for and published in Testing Trapeze)
Wikipedia defines a bug bash as: “… a procedure where all the developers, testers, program managers, usability researchers, designers, documentation folks, and even sometimes marketing people, put aside their regular day-to-day duties and ‘pound on the product’ — that is, each exercises the product in every way they can think of.”

I first learned to do a bug bash after hearing the term about eight years ago. I taught myself a crash course from a Google search, and then devised my own flavour of bug bash (which continues to evolve every time I run one).
The term “bug bash” seemingly first appears in Ron Patton’s book “Software Testing” (first edition published in 2000). As with many such terms, it does not surprise me that it has been around far longer than I have known it.
In this article I will describe why and how I run a bug bash these days — and hope you too find value in it, as I have since my Google searches years ago.
Bug bashes add value whether or not there is a dedicated tester on a team. They should be additional to regular testing activities, manual or automated. They are a cognitive process, manual or semi-manual but never pre-scripted. They are always optional and complementary to other testing. To prevent “testing blindness” they should not be run too often.
To echo what Michael Bolton said on my Rapid Software Testing course,
“a bug is not a thing, it is a relationship between someone who matters and the product.”
I really like this definition. It emphasises the human aspect of software testing and sets it apart from the now outdated idea that software development, and identifying bugs, can be done via a mechanised production-line process. The word “relationship” also points to the social aspects of testing.
Testing is a social activity. It is not performed in isolation on an application thrown over the wall by developers. It requires interaction with product owners, designers, developers, business analysts, systems engineers, end users and other stakeholders.
Sitting in an ivory tower and passing judgement on what passed or failed is an outdated and unworkable expectation of a tester. You are neither the gatekeeper of software releases, nor judge or jury of software crimes. You are an instrument to assist with feedback on software quality. Which is why, incidentally, I call “QA” Quality Assistance or Quality Analysis, never Quality Assurance. Quality assurance is an unachievable absolute and a fallacy.
Quality is subjective based on the assessment of those who matter.
Through your interaction with the software and those who matter, you help identify bugs.
Anyone whose bacon is on the line in any way (the “pigs” in the “Chicken and the Pig” fable), anyone committed to the software delivery, matters, especially in the context of their role and specialisation.
Product owners matter. Their relationship with the product matters to you. If they do not see functionality they expected to see or if functionality is not behaving as they expected, they will say so. Or they may have questions or concerns which you cannot answer.
Developers matter. Developers really matter when it comes to technical integrity of the application. If they are raising concerns about the consequences of using technical platforms, architecture, dependencies or implementations, you should care.
Recently we witnessed the 2016 online census bungle in Australia. Here the Australian Bureau of Statistics matters. They paid for and are the public face of the census, and one can argue they matter the most. But it is clear that the vendors in question matter too. Some matter more than others. But if your bacon is on the line, you matter. Ownership extends beyond the people who pay the bills.
As you embrace the social nature of testing, you develop a ‘quality confidence radar’ which detects when those who matter have low confidence in the system under test. Testers learn how to use this confidence radar in making decisions, including when to call a bug bash.
A bug bash is warranted when anyone who matters expresses or demonstrates notable uncertainty about completeness, or a lack of confidence in quality, that you cannot disprove; or simply when a sanity check is preferred before releasing a product. Your confidence radar should beep on this blip. This is not black and white and may be too fuzzy for some, but believe me: testers do develop a kind of spidey sense when they are plugged in to the subtle messages of the stakeholders and the application.
As far as I am concerned, the definition of “bash” in bug bash could be either definition of “bash”: of the striking nature, or the party nature. I suspect the original intention was for us to use the former definition, but in my bug bashes I have combined both meanings to have fun in striking the application with much focus and determination to find new information about the application under test.
Consequently, I run a bug bash as a game and competition with the highest point earners winning a famous Melbourne coffee, sponsored by me, the tester. I encourage playfulness and count on the competitive nature of my colleagues to make it fun and rewarding to find new bugs. As a team member I have no ego about anything I may have missed during testing. I encourage team ownership of our product; and want our team to use the bug bash to discover, remedy and learn from what WE may have missed.
I truly treasure the feedback garnered from a timely bug bash.
So without further ado, here is the format I follow in my latest iteration of bug bashing:
- The objective is to find the most, and the highest-priority, in-scope bugs in a time-boxed period.
- All core team members (“pigs”, not “chickens”) are absolutely required, i.e. everyone directly involved in delivering the software: designer, BA, product owner, developers, systems admin, etc. — more or less a two-pizza-sized group of 5–9 members.
- Tester and Product Owner attend but are excluded from the competition:
- Tester runs the session,
- Product Owner awards the points.
- People are paired up by the Tester into teams, e.g. one person can drive, and one can think of scenarios to test and write down the bugs. Ideally these pairs will consist of one technical and one non-technical person — people who do not pair on a daily basis.
- The Tester declares the scope at the start of the session. This is usually to test the product’s latest iteration but while using devices and platforms and browsers not routinely used in development; my current team tends to be blinkered on OSX Chrome on our local developer machines.
- The time-boxed session is roughly:
- 10 minutes for setup (pairing done, environments ready, passwords on hand, and so forth),
- 20 minutes for bug finding (session begins with the Tester declaring the start, with words to the effect of “Go!” and ends with “Tools down” or “Time’s up”),
- 15 minutes for demonstration of bugs found at each pair’s workstation and awarding points (this is a group activity, where most of the banter usually takes place),
- 10 minutes for capturing the validated bugs by each bashing pair, onto the tracking tool of choice — the Tester is not the “bug admin person”.
- When a pair finds a bug, they hail the Tester and Product Owner pair who are roving.
- The Product Owner validates bugs during bug-finding but awards points at the end of the session.
- Once a bug is validated by the Product Owner, the pair captures it on their notepad, paper sticky note, or digital notepad, and keep testing until the time-box ends.
- In the spirit of play, the Tester may warn of the approaching end to testing time, with “5 minutes left”, “2 minutes left”, etc.
- At the end of the session at tools down, all pairs immediately stop testing, cut their losses and attend a show and tell of bugs found by other pairs. A walking tour is undertaken to each pair’s workstation to witness demonstration of bugs found and hear the points awarded. As mentioned, banter is usually at its loudest here.
- To be countable, a bug must be in-scope, the relevant story/requirement should be ready for testing and deployed to a testing environment, validated by the Product Owner as a bug, and cannot be a current known bug. In the event of duplicate findings the first pair to find the bug earns the points.
- Points awarded by Product Owner for in-scope bugs found:
- 3 — High Priority,
- 2 — Medium Priority,
- 1 — Low Priority,
- Out-of-scope bugs earn no points unless the Product Owner considers them showstoppers for the next release.
- The Tester tallies the points after all pairs have demonstrated their bugs. The pair with the most points is declared the winning team.
- In the event of a tie, a bug-off will occur and the first team to find another bug (validated by the Product Owner), of any priority, will win.
- All validated bugs are captured (in the tracking tool of choice, e.g. JIRA or sticky notes or index cards on the wall) by the team that found it and placed on the virtual or physical wall ready to be prioritised and/or played.
- The winning team claim their coffees at a time that suits them.
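The scoring and tie-break rules above are simple enough to sketch in a few lines of code. This is purely my own illustration of the arithmetic (the pair names and function names are hypothetical, not part of the format):

```python
# Hypothetical sketch of the bug-bash scoring rules: 3/2/1 points for
# high/medium/low priority bugs, with ties sent to a bug-off.
from collections import Counter

# Points per priority, as awarded by the Product Owner.
POINTS = {"high": 3, "medium": 2, "low": 1}

def tally(validated_bugs):
    """Sum points per pair from (pair_name, priority) findings.

    Only bugs already validated by the Product Owner belong here;
    duplicates should have been resolved in favour of the first finder.
    """
    scores = Counter()
    for pair, priority in validated_bugs:
        scores[pair] += POINTS[priority]
    return scores

def winners(scores):
    """Return all pairs sharing the top score; more than one means a bug-off."""
    top = max(scores.values())
    return sorted(pair for pair, points in scores.items() if points == top)

# Example session: two pairs, four validated bugs.
bugs = [
    ("Alice & Bob", "high"),
    ("Alice & Bob", "low"),
    ("Carol & Dev", "medium"),
    ("Carol & Dev", "medium"),
]
scores = tally(bugs)
print(dict(scores))     # both pairs on 4 points
print(winners(scores))  # two names returned -> a bug-off decides it
```

In practice the Tester does this tally on a whiteboard or sticky note during the show and tell; the point of the sketch is only to show that the rules are unambiguous.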
Bug bashing is always done after most or all planned testing has already occurred. It may discover showstoppers, or may only find a few niggles. The improved confidence comes from knowing more about the quality, not from a feeling of relief at not finding anything.
I do not think that I have ever run what could be considered an unsuccessful bug bash. My approach has slightly shifted as I have matured, learned more about people and testing and gained confidence in my role, but valuable feedback always presented itself during a bug bash, even in the earlier days.
To conclude, a bug bash is a group activity in the social context of software development, which assists in providing feedback to those who matter. It is a tool and not an end. My advice is to employ it contextually.
I hope you try this and I look forward to hearing your feedback on how your bug bash (or your tweak of it) has fared.