Why do startups believe they don’t need testers?
Three TDD assumptions that show why startups do need testers
Part of the problem is a software development methodology called Test Driven Development (TDD). The theory behind this methodology appears quite simple and obvious.
If you write automated tests before or as you write the code for your product, your code will be so clean that you won’t need anyone else to test it.
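The cycle behind that theory can be sketched in a few lines. This is a minimal, hypothetical example (the function and numbers are invented for illustration, not taken from any real project): the test is written first and fails, then just enough code is written to make it pass.

```python
# Step 1 ("red"): write the test first, before the function exists.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

# Step 2 ("green"): write just enough code to make the test pass.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price - price * percent / 100

test_apply_discount()  # passes once the implementation exists
```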
TDD works on a few basic assumptions that time has shown me aren’t really true.
- Developers understand how and what to test in their own code
- Developers will write tests before code
- Automated tests find all the problems in code
In the interest of full disclosure: I am a software tester by nature and by career. I can write code, but I don’t. Why? Because like many other testers, I don’t find it fun, comfortable, or the best use of my skills. I believe that for most developers, the opposite is true — they don’t test as much as is needed because it isn’t fun, comfortable, or the best use of their skills.
Assumption 1: Developers understand how and what to test
Developers and testers have very different skill sets. That doesn’t mean they can’t do each other’s jobs. Nor does it mean that coders can’t test and testers can’t code. What it does mean is that the people who do the two jobs are good at doing different things.
Developers are good at finding and creating solutions. In order to do that, they have to be good at figuring out how to do things. They believe that the solutions they come up with are the right solutions. Coders generally believe that the way they code those solutions is correct. By definition, a coder looks at every bug as the last bug. (It’s called programmer’s optimism.)
If developers didn’t feel this way, they would have problems getting anything deployed, delivered, or completed. Most of them believe that the world (and their products) can be improved, but that what they have done is the best that could have been done at that time.
Testers look at the world in a different way. Testers tend to see the broken things in the code. Testers are detail-oriented people, much more than developers are. Testers don’t try the normal path through the code first; they try the error path without even thinking about it.
Assumption 2: Developers will write tests before code
If you ask a tester to design a piece of code, the first thing they will look at will be the different cases — the different paths through the code — the ways the errors will be handled. A developer will look at the correct path first — the path without errors — the path most people will take.
Developers don’t naturally look for what will go wrong in a piece of code. Testers don’t naturally look for what will go right.
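That difference shows up directly in the tests each would write first. A hypothetical sketch (the function, its limits, and the inputs are all invented for illustration):

```python
def parse_age(text):
    """Parse a non-negative human age from a string, or raise ValueError."""
    value = int(text.strip())            # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

# The test a developer reaches for first: the correct, expected path.
assert parse_age("42") == 42

# The tests a tester reaches for first: the error paths.
for bad_input in ["", "-1", "999", "forty-two"]:
    try:
        parse_age(bad_input)
    except ValueError:
        pass  # expected: every one of these should be rejected
    else:
        raise AssertionError(f"accepted bad input: {bad_input!r}")
```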
When developers are asked to write test cases before they start writing the code, they will do anything to avoid writing the tests. That’s not a shortcoming in developers; it is human nature. Humans do what they like to do first and what they don’t like to do last.
On the other hand… Ask testers to jump in and code up anything without thinking about the potential problems with the code, and you get the same result. The testers (myself included) will do almost anything in order to get back to figuring out what can go wrong.
To put this another way: If you ask a developer to design something, they will design the correct and expected paths through the code. If you ask a tester to design something, they will design the error paths and the unexpected paths through the code. It’s human nature.
Assumption 3: Automated tests will find all the problems in any given piece of code
Here we get to the crux of the matter. Automated tests are usually written to make sure an error doesn’t happen again.
Once a problem is found, automated tests are great at making sure that problem never happens again. It’s the nature of the beast. Computers aren’t able to do independent thought (yet). But independent thought is needed to find bugs that are new.
When you ask a developer to write an automated test that finds new problems, they (in general) won’t have much luck at it. If the only tests you want cover the problems that have already been found, have a developer write them. Developers are good at writing tests that verify their own idea of what the code should be doing.
If you want tests to be written that find new problems — problems due to unexpected actions — ask a tester to work with the design and the product. Because testers look at the world differently, they will find problems other people won’t find. If you can’t afford testers, at least have developers write tests for each other’s code. This gives a small amount of independence — a second set of eyes to see things differently from the original developer.
Once a problem is found, definitely put together an automated test for the problem. That will make sure future changes won’t break the code. It will make sure that if the problem exists somewhere else in the product it will be caught. After all, that’s what automated tests are good at.
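A regression test of that kind might look like this. The bug and the function are hypothetical, invented only to show the shape: an input that once crashed the code is pinned down so future changes can’t quietly reintroduce the problem.

```python
def split_name(full_name):
    """Split a full name into (first, last); last is "" if absent."""
    parts = full_name.strip().split(None, 1)
    if not parts:
        return ("", "")
    if len(parts) == 1:
        return (parts[0], "")   # the case that used to crash before the fix
    return (parts[0], parts[1])

# Regression tests: make sure the once-broken inputs stay fixed.
assert split_name("Ada Lovelace") == ("Ada", "Lovelace")
assert split_name("Madonna") == ("Madonna", "")   # single name: the old bug
assert split_name("   ") == ("", "")              # whitespace only
```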
Is TDD a totally failed methodology?
No. TDD is a great idea. But it needs a tweak. Instead of having developers build the test cases, have testers build them.
This isn’t a popular approach in most startups. After all, it means hiring people who think differently. It means hiring people who currently aren’t considered “necessary” in most startups. People you won’t find under the normal development hiring process.
Another option for fixing TDD as it is implemented today is to train developers to look at the world a different way. Train them to look for problems in each other’s code. Go back to the simple solution of stricter code inspections. I don’t mean looking at the code to see if it is written the best way. I mean implementing the type of inspection that looks at the code from the outside and checks whether errors are handled in a customer-friendly manner.
But that’s a discussion for Part 2: Why software startups actually do need good independent test teams!