The way organizations produce working software has changed dramatically over the past 15 years. Waterfall is mostly thought of as a mythical prehistoric process that our Taylorist ancestors employed because, until then, physical manufacturing was all they knew. Basic agile engineering practices are now considered table stakes at most companies. The emergence of DevOps has knocked down the wall between Development and Operations (or at least, knocked a few feet off the top). While all this has sharply reduced the number of specialized roles required for a team to ship code, one role has remained more or less intact: The Tester.
Make no mistake — the Tester role has evolved significantly. Stacks of detailed test plans have been replaced with practices like Test Driven Development. Endless Excel files full of manual test steps have given way to Selenium and Capybara. “Manual Testers” have become “Software Engineers In Test.” And everything is baked into the CI pipeline. Bonk a button in your version control system, and the whole thing kicks into gear.
Despite all this, there is one thing that hasn’t changed much: For many teams, “Quality” still belongs to someone. They might hold the role of “QA Analyst”, or “Software Engineer in Test” or simply “Tester”. My favourite is “Agile QA Analyst.” And so, on countless teams, a hero is tasked with saving the team from themselves. The Tester writes the tests. The Developer might help (if she has time), but 9 times out of 10, the Tester will end up writing the tests (automated or otherwise), making sure they pass, and adding them to the regression suite.
Agile and Lean
The tenets of Agile tell us in no uncertain terms: The TEAM owns quality. So if the TEAM owns quality, why would we only ask ONE person on the team to do the testing? Many teams have made strides on this front by asking their testers to bring the rest of the team on board with testing. This certainly helps to share the ownership of quality throughout the team, and if your teams haven’t done this yet, I’d strongly recommend it.
EventMobi is, by far, the leanest engineering team I’ve ever been a part of. We are a 100% bootstrapped company, and that means our contempt for waste is widespread. Taking the shared ownership of quality one bold step further, we don’t hire Testers. To use an unfairly strong metaphor:
If you want to encourage people to produce less trash, take away the garbage collectors.
Culture + Process
When I first joined EventMobi, the lack of Testers was quite jarring on a very personal level. I come from a QA background. In fact, my job used to be to hire and manage full-time Testers. But as I’ve spent time with the teams here, something has become clear: Developers actually care about quality. No one wants to disappoint their customers. Developers take pride in solving problems, not creating them. So to that end, they will do whatever is in their power to not ship shabby code. The trick is in giving them that power.
Quality within Engineering Culture
On the engineering team at EventMobi, here are a few ways we motivate developers to keep the bar high (aside from their intrinsic motivation to keep customers happy, of course):
- Our Definition of Done includes “Code is deployed to and tested in production.” This is an expectation we all set for ourselves from the outset.
- Code Review is a foundational element of our engineering culture. In some cases, we’ve gone so far as to employ a mob programming approach to code reviews, so that everyone understands the codebase, and can have an impact on its quality.
- We do not employ an Ops team. If corners are being cut, the only people who will feel the pain (besides the customer!) are the developers themselves. The thought of being woken at 3:30AM to deal with an outage that you caused, unsurprisingly, is a pretty strong motivator to do the right thing.
Quality within Product Development Process
On the product development process side, we bake quality into each phase:
- Our product managers consider all possible edge cases as part of defining the product. They make these clear as part of the acceptance criteria for each story, which they define with their teams.
- Our developers write tests at all appropriate levels, extending all the way up to automated, end-to-end UI tests. Our Tech Leads keep tabs on the test infrastructure, and if they notice things are getting slow or flaky, they make sure that the team sets time aside to address those issues.
- Our amazing support team is always pre-seeded with new product releases, as part of our process for launching new products. This is as close as we get to a traditional ‘manual testing phase,’ and in our opinion, it’s much more effective. Our support team has developed an incredible level of empathy for our customers, given that they spend most of their days understanding and solving their problems. This makes them much more motivated to push the product to its limits. Manual QA teams are rarely able to build up that level of empathy.
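To make “tests at all appropriate levels” concrete, here is a minimal sketch using Python’s built-in unittest module. The `discount_price` function and its acceptance criteria are hypothetical examples, not EventMobi’s actual code:

```python
import unittest

def discount_price(price, percent):
    """Apply a percentage discount; hypothetical product logic."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountUnitTests(unittest.TestCase):
    # Fast unit tests form the bulk of the suite, covering the edge
    # cases the product manager spelled out in the acceptance criteria.
    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_and_full_discount(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)
        self.assertEqual(discount_price(80.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(50.0, 150)

# Above this layer would sit a much thinner set of end-to-end UI
# tests (e.g. driving a browser with Selenium), checking only the
# critical user journeys rather than every edge case.
```

Run it with `python -m unittest path/to/test_file.py`; the idea is that this fast layer runs on every commit, with the slower browser-driven tests reserved for the few flows that matter most.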
Challenges We’ve Faced
If this all sounds like music to your ears, that is great! But like any approach, it’s not all rainbows and unicorns. There certainly are challenges that we’ve faced and continue to encounter.
Bugs will slip through — be prepared
When you stop fretting about manually testing everything all the time, bugs WILL slip through. Anyone who tells you different is probably selling some kind of snake oil. This is a calculated risk though. Think the 80/20 rule, only in this case it’s probably more like 95/5. That is, automated testing (5% of the cost, compared to an army of manual Testers) will probably catch 95% of the problems.
We have a couple of systems in place to combat those cases where something DOES slip through.
- We find out about it quickly. Our support team is constantly dogfooding our app (as mentioned earlier), and we have enough monitoring in place to usually find out about a bug before a customer comes across it.
- We’ve invested heavily in building a quick deployment pipeline. From the time a fix is identified and committed, an engineer has the tools and autonomy to get their fix rolled out to production on their own within 60 minutes.
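As a sketch, a pipeline with that “fix to production within the hour” property might look like the following hypothetical CI configuration. The stage names and `make` targets are illustrative assumptions, not EventMobi’s actual setup:

```yaml
# Hypothetical CI/CD pipeline: commit -> tests -> deploy,
# fast enough that a fix can reach production within the hour.
stages:
  - test
  - deploy

run_tests:
  stage: test
  script:
    - make unit-tests         # fast feedback first
    - make integration-tests  # slower, but still fully automated

deploy_production:
  stage: deploy
  script:
    - make deploy ENV=production  # one command, no handoffs
  when: manual                    # the engineer triggers it themselves
```

The key design choice is autonomy: the same engineer who committed the fix can promote it, without waiting on a release manager or an Ops team.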
Infrastructure and Flaky Tests
“Flaky Tests” will happen. Things will fail for no obvious reason. As I alluded to, our Tech Leads keep tabs on this, but there are times when the pipeline will grind to a halt.
We’ve taken a two-pronged approach to this. Firstly, each team has a rotation of ‘build masters’ who are expected to do any babysitting required to get code deployed.
Secondly, we have a small dev tools team focused on making it easy for anyone to see how things are going, and what happened when things went wrong. Among other things, engineers can get an overview of build health, and quickly drill down into a failing test.
Would this approach work for you?
At our current size and level of product maturity — it’s been a very successful approach for us. Would it work for you? It’s certainly worth considering! As we scale, and as our product continues to mature, our inspect and adapt cycles will tell us if the process is breaking down. If we feel that the bar is starting to slip, we will change.
Finally, if solving problems like this seems up your alley, we’re hiring! Check out our current open positions at http://www.eventmobi.com/careers/
Matt Dominici is an Agile Coach and Professional Scrum Trainer (PST — scrum.org) based in Toronto, Canada. He works with all sorts of companies, helping them to improve their ability to deliver customer value. He has worked with bootstrapped technology start-ups, VC-backed hyper-growth companies and 150-year-old incumbent Financial Institutions. You can email him at email@example.com or find him on LinkedIn, Twitter or his website.