When I start testing

Nick Quaranto · Published in The Greenhouse · Feb 27, 2017

I’ve been working on a new feature for Agrilyst that helps growers keep track of what’s important at their facility. We’re calling it KPIs (Key Performance Indicators), and you may have heard of them before in important meetings or on good clip art.

Also a graph of our test suite duration

We’re just about ready to start our QA process, which includes several team members trying to break the feature and a quick pass with some engaged growers to make sure we’re on target for what they need. I took a quick glance at the list of TODOs left and noticed a pattern:

Footballs are for punting features.

Right in the middle, a crucial moment appears: Tests!!!!

For this brand new feature, my process was clear: first, I went digging in the mines of our back-end code to lay some new tunnels of capability. I excavated towards our front-end after discovering the treasures of new abstractions to extract. Other issues emerged as I was tunneling, and they got tacked onto the list.

Smack-dab in the center of this process was an important baseline: writing tests after figuring out some of the deeper back-end code gave me greater confidence and speed when dealing with the issues I unearthed. I was able to get through the issues found during the initial feature exploration quicker because the tests had my back.

Whoa! Why didn’t I write the tests first? I stopped to consider why I chose to write the tests when I did, and how we all got here.

Tiny features growing next to Hessian Lake during our last company retreat

Tests allow us to fall back and make sure code is still working while exploring new development territory. Fresh features introduced to a codebase are full of risks and paths best left unexplored. Tests are an anchor, a parachute, an escape rope: you don’t always need one, but you’re better off with one.

I could pretend to write tests first 100% of the time, but it’s just not going to happen. I’d rather be honest and just say “I’ll test it later” than feel guilty about not writing any up front.

We want to instill confidence into our code. Our job should be to remove the guilt, anxiety, or worry that it might not work.

Here’s my approach for when to write tests:

  1. Test-Before: I’ve got a known problem or bug to deal with. Write a test first. Watch it fail. Make it pass. Follow Test-Driven Development style (see the sketch after this list).
  2. Test-During: I’m exploring new ground. Strike out on a new path, then backtrack and layer down a thick coat of confidence before finishing.
  3. Test-After: The task is done, and it’s important enough to warrant some automated testing for all of the usual reasons: to keep business value intact, make refactoring possible, prove your code works.
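To make the Test-Before rhythm concrete, here’s a minimal sketch in Python with pytest-style tests. It is not Agrilyst’s real code: the kpi_average helper and both tests are hypothetical, invented for this post. The point is only the flow: write the failing test first, then the code that makes it pass.

```python
# A minimal Test-Before (TDD) sketch. kpi_average and its behavior are
# hypothetical, invented for illustration: write the test first, watch it
# fail, then write just enough code to make it pass.

def kpi_average(readings):
    """Average a list of KPI readings, ignoring missing (None) values."""
    values = [r for r in readings if r is not None]
    if not values:
        return 0.0
    return sum(values) / len(values)


def test_kpi_average_ignores_missing_readings():
    # Red: this test exists before kpi_average does.
    # Green: the implementation above makes it pass.
    assert kpi_average([10, None, 20]) == 15.0


def test_kpi_average_of_no_readings_is_zero():
    assert kpi_average([None, None]) == 0.0
```

Run it with pytest; delete the kpi_average body to see the “red” step, then restore it for “green.” Test-During and Test-After use the same mechanics, just shifted later in the work.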

I’m curious about when you start writing tests. Is it before your app’s implementation is even cracked open? Is it after your users tell you a bug exists? Or — I hope — is it somewhere in between? The next time you’re faced with this dilemma about when to start testing, try asking yourself why you’re leaning that way, and how you can increase confidence that your code works.

At Agrilyst we’re not just growing software, but helping grow plants too. If you’ve got a home garden or plans for one this Spring, get data-serious. Our hobby plan is just $50/month and you can try it out for 2 weeks for free.
