Manage your biases as a tester — Part 3/4

Stéphane Colson
Published in Blog articles
Oct 26, 2016


In the first part of this series dedicated to biases, we looked at biases caused by “Too much information”. Then in the second, we covered some cognitive biases caused by “Not enough meaning”. If you haven’t read them yet, I suggest you start with those two articles before this one. In this third part, we’ll see that the “Need to act fast” can also lead to some biases.

Risk compensation

Theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected.

As a software tester, you may feel more protected if you know that a lot of unit tests, integration tests and end-to-end tests are running on each build (and that they are green and enabled), but that doesn’t mean there is no risk to evaluate in the new version, particularly if the new developments have poor unit testing, useless integration tests and no end-to-end tests. You may have more checks, yet at the same time you may have to be more aware of, and more cautious with, what is a candidate for release. Please try not to compensate for the risk.

Appeal to novelty

Fallacy in which one prematurely claims that an idea or proposal is correct or superior, exclusively because it is new and modern.

Let’s face it, we all love novelty and don’t want to spend our lives with old methods and old tools. Some prefer to change nothing and live in the past (see “Status quo bias”), but I’m sure they are not reading this blog or any other one; they are not interested in finding new ways to improve their craft, and to them, novelty is to be avoided because it has always worked like this.

On the other hand, being attracted by everything that is new is dangerous too. You will see that with developers who are able to find a new library almost every day. As a tester, you will also frequently come across new tools and new techniques, and it is always interesting to try them.

You should first try to answer this question: “What problem am I trying to solve?”. If you don’t have any real problem to solve, you may carry on working in the same way. If the problem you have can be solved with something new, of course you should give it a try, but be careful not to spend too much time on it only because you found something shiny and appealing.

Sunk cost fallacy

The phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.

If running this project costs 10k€ each month, after one year the bill will be 120k€. If you continue, assuming you’re heading straight into a wall, another 10k€ will be lost each month…again, until it stops.

I have seen this with components that were not good enough and were full of issues. We spent a lot of time integrating them into our product, then trying to fix them. It is always a hard decision to throw away one’s own work (also due to the “Ikea effect”), because the sunk cost fallacy tries to convince us that we shouldn’t give up on any hard work.

As a tester, the information you provide about the quality of a component that may need to be abandoned is very precious: you can help decision makers avoid being duped by the sunk cost fallacy and help them understand what the real cost is.

Ikea effect

Cognitive bias in which consumers place a disproportionately high value on products they partially created.

I guess you know Ikea, the company which gives you the joy of spending two hours in a furniture store, then waiting half an hour to retrieve the items you want to buy, then going home and spending another hour assembling the furniture, and sometimes having to go back to the store for a missing screw. What happens when you’re finally done? You’re so proud of yourself that you probably take a picture and share it on Facebook.

This is the same effect you have to fight when a developer doesn’t want to get rid of a piece of software that is clearly useless, or at least unusable. People who partially created it cannot see its real value; they overestimate it, and as a tester it is always hard to be the one who sets the record straight. You have to be aware of this one, and you have to adapt your communication so as not to upset other team members who were more involved in the creation of the product and who made the decisions you are attempting to criticize.

As a tester, you may also be fooled by this effect, for example with the automated checks you have written. If one of them fails, you will blame the Software Under Test first. Most of the time, though, a change in the code of the product simply means the check needs to be updated; or perhaps the check is too flaky and fails randomly, or it was badly written and needs to be rewritten to be more stable and sustainable.

At this point, it is easy to understand how this “Ikea effect” contributes to the “Sunk cost fallacy”, as in the example of a manager or decision maker who keeps pouring resources into a now useless project on which they have spent a lot of time and energy.

Status quo bias

The tendency to like things to stay relatively the same.

Unlike the “Appeal to novelty” fallacy, the “Status quo bias” tends to lead people to prefer what already exists and to block any new idea. Not everyone is subject to this one, and it is never easy to find the balance between doing nothing and chasing the new stuff. Just remember to ask yourself whether you have a problem to fix or not.

If your automation framework fails 50% of the time, then you should look at what can be done about it and not let it keep failing. But if it fails 2% of the time with no easy-to-fix reason, and if you don’t have the time or the brainpower to work on it, it is probably more profitable to let it fail, relaunch it a second time and pray for it to be successful.
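As a rough illustration of that “relaunch it once” compromise (this sketch is mine, not from the original article), here is a small Python wrapper that reruns a failing suite a second time before declaring the build red. The `pytest tests/` command and the `run_with_retry` helper are purely illustrative assumptions; substitute whatever launches your own automation.

```python
import subprocess
import sys


def run_with_retry(command, max_attempts=2):
    """Run a test command; retry if it fails.

    A crude workaround for rare, not-yet-diagnosed flakiness,
    not a substitute for actually fixing the flaky checks.
    """
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(command)
        if result.returncode == 0:
            print(f"Attempt {attempt}: passed")
            return True
        print(f"Attempt {attempt}: failed (exit code {result.returncode})")
    return False


if __name__ == "__main__":
    # Hypothetical example: rerun the whole suite once before giving up.
    success = run_with_retry(["pytest", "tests/"], max_attempts=2)
    sys.exit(0 if success else 1)
```

The obvious trade-off is that every rerun hides a little more information about how often the suite really fails, so keep an eye on how often the second attempt is needed.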

Information bias

An example of information bias is believing that the more information that can be acquired to make a decision, the better, even if that extra information is irrelevant to the decision.

We have to work with lots of data, and we always want more to help us make the right decision. This “information bias” concerns the wish to always have more information, which may, however, be totally useless or irrelevant. You may want to know who discovered the issue you are retesting, who fixed a specific bug, who did the code review, which tests have been executed so far, which unit tests have been written, etc. In five words: you want to know everything. Most of that information is useful, but some of it is without any doubt useless, and probably even dangerous if you remember the “Halo effect” or the “Illusion of validity”.

Try to gather only relevant information and try to avoid the rest. Stay focused on important things.

That’s it for this third article of the series about cognitive biases. The last one will be about the category “What should we remember” in the article “Manage your biases as a tester — Part 4/4”. Meanwhile, don’t hesitate to leave a comment.

References
Buster Benson: “Cognitive bias cheat sheet — Because thinking is hard”
Michael Bolton: “Critical thinking for testers”
Maaike Brinkhof: “Mapping biases to testing”
Wikipedia: “List of cognitive biases”
Daniel Kahneman: “Thinking, Fast and Slow”

Originally published at www.lyontesting.fr.
