In the first part of this series dedicated to biases, we saw a list of biases due to “Too much information“. I suggest reading it first if you haven’t yet. In this second part, we’ll see that “Not enough meaning” can also lead to some biases.
Not enough meaning
Illusion of validity
A person overestimates his or her ability to interpret and predict accurately the outcome when analyzing a set of data, in particular when the data analyzed show a very consistent pattern—that is, when the data “tell” a coherent story.
You are probably a victim of the “Illusion of validity” when you test a feature with a limited number of environments. If you couldn’t find any issue while testing the 10 major environments, the pattern is that the feature works with all 10 of them; the story told by those 10 results is that there is no problem with that piece of software. But those 10 environments are only a fraction of all the possibilities, and if you stop there you may miss something serious. Testing can be endless and you will hardly be able to test everything, but it’s your job as a tester to communicate that you couldn’t test all environments and to give a good overview of the remaining risks.
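To make that gap concrete, here is a minimal sketch. The environment dimensions and counts below are invented for illustration, not taken from any real test matrix:

```python
from itertools import product

# Hypothetical environment dimensions -- the lists are illustrative assumptions.
oses = ["Windows", "macOS", "Linux", "Android", "iOS"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
locales = ["en_US", "fr_FR", "de_DE", "ru_RU", "ja_JP", "ar_SA"]

# Every combination of OS, browser and locale is a distinct environment.
environments = list(product(oses, browsers, locales))
print(len(environments))  # 5 * 4 * 6 = 120 combinations

# Testing only the "10 major environments" covers a small slice of the matrix.
coverage = 10 / len(environments)
print(f"{coverage:.1%}")  # 8.3%
```

Three small dimensions already produce 120 combinations; add screen sizes, OS versions or network conditions and the matrix explodes, which is exactly why a “coherent story” from 10 green results proves much less than it feels like it does.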
Authority bias
Tendency to attribute greater accuracy to the opinion of an authority figure (unrelated to its content) and be more influenced by that opinion.
Your boss comes to you with a piece of software and this argument: “Test case A is tested and is OK, it works. Could you please check that Test case B (derivative of Test case A) is OK too?”. Well, that is your boss; he tells you that Test case A is OK, so why would you check? But as a tester you shouldn’t take anything for granted. Who tested Test case A? Was it a good tester or someone else? What exactly was said to your boss? Maybe just “Yes, I couldn’t find any issue so far”, from someone who didn’t have time to test it deeply.
In this case, maybe you should have a talk with the tester of Test case A, and depending on what you learn about their testing, it might be a good idea to also have a look at Test case A yourself. Your boss won’t blame you for finding something wrong in this part, but he might blame you for missing something wrong in it. Beware of this Authority bias.
Automation bias
Propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.
Automated checks are green, but testers found issues. One tester runs the same scenario as one of the green automated checks, and sadly it fails for him. If the other members of the team don’t have much time to invest in checking this, and if they are victims of the “Automation bias”, they will probably think that the problem is in the chair, not in the automation (PICNIC: Problem In Chair, Not In Computer). The probability that they are right is not zero, but it definitely has to be evaluated.
Halo effect
An observer’s overall impression of a person, company, brand, or product influences the observer’s feelings and thoughts about that entity’s character or properties.
Some people can sell you anything; that is a talent. Some people with a big ego can influence your feelings. Be careful: you shouldn’t trust anyone more than anyone else. Even the best developers in the world, when they tell you that a change has no impact, can be wrong and break a part of the product.
Furthermore, as a highly experienced and respected tester in your team, don’t play with this “Halo effect” to try to convince other members of the team if you can’t be sure of what you say.
If you are wrong, you may say farewell to your reputation.
Cross-race effect
Tendency to more easily recognize members of one’s own race.
Discussions between developers and testers often tend to pit one “race” against the other. You will be more willing to believe what another tester says than what a developer or a user tells you. The same is true for a developer, who will trust other developers more than testers. As a software tester, you should be aware of that and not give more credit to the person most like you; you must stay unbiased.
Murphy’s law
Anything that can go wrong, will go wrong.
Being a software tester should not be about being pessimistic. In real life, anything that can go wrong can also go very well; things break sometimes, and sometimes they don’t. So finding an issue when filling a field with a Cyrillic string of exactly 255 characters does not mean that one day a user will do that. Even if one does, maybe the 150,000 other users won’t. That can be fine: let this bug live in your product and relax.
“Irreproducible bugs become highly reproducible right after delivery to the customer.” – Michael Stahl’s derivative of Murphy’s Law
— Lyon Testing (@TestingLyon) September 20, 2016
Brooks’ law
Brooks’ law is a claim about software project management according to which “adding manpower to a late software project makes it later”.
For testers, let’s say you estimated that you have 15 days remaining to finish one testing task (test a Release Candidate, test one feature with all environments, …). Your boss thinks it is too long; he wants to release in 5 days, so he suggests adding 2 more testers to the task. However, those testers don’t know this component, so you will have to train them. They will find small issues that are already known, so you will spend extra time finding the corresponding tickets in the issue tracker for them. Since they are also just discovering this module, they will naturally be slower than 2 clones of yourself; they are not aware of the tools you use to support your testing, and they carry their own cognitive biases. If they are subject to the “Information bias”, they will just pollute your next week.
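The arithmetic above can be sketched as a toy model. Every number here (training time, trainee productivity, the half-day of mentoring per trainee per day) is an invented assumption chosen to illustrate the effect, not a measurement:

```python
# A toy model of Brooks' law -- all parameters are illustrative assumptions.
remaining_work = 15.0  # task-days left for one experienced tester working alone


def days_to_finish(extra_testers, training_days=5, trainee_productivity=0.2):
    """Estimate calendar days to finish the task.

    During the training period the experienced tester loses half a day per
    trainee per day and the trainees produce nothing; afterwards each
    trainee contributes only a fraction of a full tester's output.
    """
    work = remaining_work
    day = 0
    while work > 0:
        day += 1
        if day <= training_days:
            # Mentoring eats 0.5 day per trainee; trainees are not productive yet.
            work -= max(0.0, 1.0 - 0.5 * extra_testers)
        else:
            work -= 1.0 + trainee_productivity * extra_testers
    return day


print(days_to_finish(0))  # 15 days working alone
print(days_to_finish(2))  # 16 days with 2 trainees -- later, not earlier
```

Under these (made-up) assumptions, adding two testers turns a 15-day task into a 16-day one, and either way the 5-day deadline is pure fiction.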
Don’t forget in this case to send him one or two copies of “The Mythical Man-Month” by Frederick P. Brooks Jr.
I bought my boss two copies of The Mythical Man Month so he could read it twice as fast
— Randall Koutnik (@rkoutnik) April 21, 2016
That’s it for this second article of the series about cognitive biases. Next time we will cover more biases, due to the “Need to act fast”, in “Manage your biases as a tester – Part 3/4“. Meanwhile, don’t hesitate to leave a comment.
Sources
Buster Benson: “Cognitive bias cheat sheet – Because thinking is hard”
Michael Bolton: “Critical thinking for testers”
Maaike Brinkhof: “Mapping biases to testing”
Wikipedia: “List of cognitive biases”
Daniel Kahneman: “Thinking, Fast and Slow”