Manage your biases as a tester – Part 1/4

I have wanted to write about cognitive biases for a long time. I was first made aware of them while reading the fascinating book “Thinking, Fast and Slow” by Daniel Kahneman (no newbie, he won a Nobel Prize), and then through several blog posts and articles (references at the end). A list of all cognitive biases is available on Wikipedia, but it is such a huge, disorganized “tangled mess” that the task of writing about biases in testing frightened me, and I have to admit that I procrastinated. Then a blog post by Buster Benson (Product at SlackHQ) saved me. Thanks to his paternity leave, he decided to organize the mess and identified four problems that biases help us to address: “Too much information”, “Not enough meaning”, “Need to act fast” and “What should we remember”. Now everything is almost crystal clear… at least if you go and read it.

An awesome visualization has been done by John Manoogian III and has been since added to the wikipedia page.

[Image: Cognitive Bias Codex – 180 biases, designed by John Manoogian III (jm3)]

 

I will use this classification. I have selected a few biases and fallacies, which I will illustrate with situations software testers may experience. Some examples involve software developers, because it is important to understand not only our own biases but also the biases of other team members (and management).

Most of the quoted definitions in italics come from Wikipedia.

Here is the first part of this series, with some biases due to “Too much information”.

 


Too much information

Availability bias

The tendency to overestimate the likelihood of events with greater “availability” in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be.

Software testers must evaluate risks: the risk of a software glitch, of data loss, of an incomprehensible user interface, of angry users, of bad performance, of a broken interface in some localization, of data injection or hacking, etc. If you spend two months testing and improving the performance of the product and release a brand new, quicker version, the performance topic will still be fresh in your memory and you will be very aware of it while working on the next version. Properly managed, that is not a problem, but you really should not forget all the other aspects of the product: your testing must not be driven by previous experience, or only a little, and deliberately.

Also, when a very bad experience occurs, you should be able to move forward and not be haunted by old memories. For example, suppose you missed a big issue in version 2.0 that went to production: the interface was reduced to a blank page in all browsers localized in zh (Chinese). Do you really have to test every localization in all following versions of the product? The fix has probably hardened that functionality, hopefully some unit or integration tests have been added, and checking just two localizations may give you a sufficient level of confidence.

 

Anchoring effect

Common human tendency to rely too heavily on the first piece of information offered (the “anchor”) when making decisions.

The anchoring effect is everywhere. In a negotiation over the price of a home, the seller makes the first move by setting a price, and this is a big advantage in the negotiation. Unless that price is complete nonsense, you will suggest a lower price based on the anchor the seller defined.

Another story: your cunning manager wants you to estimate the time needed to test version n. He comes and says: “Hey, beloved tester, version n only fixes 4 major issues, and our developers fixed them in 4 days, so what about testing it in 2 days? Seems legit!”…

OK, stay calm and relax. Even if you think that 2 days is nowhere near enough, because of the anchoring effect you will hardly answer with a much bigger number… So maybe you will quickly suggest a shy “4 days”, or “6 days” if you are kind of a rebel. But when you think about it later, you will discover that these 4 major issues impact the database, so you will have to test all database upgrades as well as new installations. You will also realize that the UI is heavily impacted, which means your end-to-end checks will probably fail and need adjustments. Finally, since you cannot run automated checks on mobile, you will have to test on several Android devices, some iOS and Windows Phone ones, and not only smartphones but also tablets and phablets.

Moreover, one of your testers is off and the other one is a junior. We all know that testing cannot easily be estimated (read my misconceptions about testing), but after thinking through the issues in version n you will conclude that your team needs at least 10 days. That is just an example, but if you are fooled by the anchoring effect, you will later have the discomfort of asking for extra testing days, and that will be hard to justify.
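One practical way to resist the anchor is to build your estimate bottom-up from the impacted areas before ever looking at the number you were given. Here is a minimal sketch of that idea; all task names and day counts below are hypothetical, not taken from any real project.

```python
# Bottom-up estimate: derive the testing effort from the impacted
# areas instead of from the anchor offered by the manager.
# Task names and day counts are hypothetical.
tasks = {
    "database upgrade paths": 2.0,
    "fresh installations": 1.0,
    "UI regression + end-to-end adjustments": 3.0,
    "manual mobile pass (Android/iOS/Windows Phone, tablets)": 2.5,
    "exploratory testing of the 4 fixed issues": 1.5,
}

anchor = 2.0  # the manager's "2 days" suggestion
estimate = sum(tasks.values())

print(f"anchored suggestion: {anchor} days")
print(f"bottom-up estimate:  {estimate} days")
```

The point is not the exact numbers but the order of operations: write the breakdown first, compare with the anchor last.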

 

Confirmation bias

The tendency to pay more attention to results that confirm one’s existing opinion.

Let’s say you have a big sheet of test results. Before consulting it, you probably already have an opinion about what could be wrong and what should be OK, based on other biases (see “Halo effect” or “Murphy’s law”, for example). Because of confirmation bias, you will focus more on the results you were expecting and less on other results that may be more interesting. You will treat the results that confirm your opinion first and neglect the rest.

All results should carry the same weight before any deep analysis. Your opinion as a tester should only form once you have a good enough understanding of the whole picture… but also beware of the “information bias”, which we will see later.
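A small trick to give every result equal weight is to review them in a random order, so you cannot jump straight to the outcomes you expected. This is a toy sketch of that habit; the result records and field names are made up for illustration.

```python
import random

# Hypothetical test results; in practice these would come from
# your test report or CI system.
results = [
    {"test": "login", "status": "FAIL"},
    {"test": "checkout", "status": "PASS"},
    {"test": "search", "status": "FAIL"},
    {"test": "profile", "status": "PASS"},
]

# Shuffling removes the ordering you (or the tool) imposed, so each
# result gets the same chance of being examined with fresh eyes.
random.shuffle(results)
for r in results:
    print(f"{r['test']}: {r['status']}")
```

The shuffle does not analyze anything for you; it only counters the urge to start with the results that confirm what you already believe.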

 

Naive realism

The human tendency to believe that we see the world around us objectively, and that people who disagree with us must be uninformed, irrational, or biased.

Everyone has their own point of view. Developers have their own understanding and interests; product owners and testers have theirs too. With bad communication between them, everyone will probably think they alone own the truth. To avoid naive realism, communicate: try to understand developers, their issues and difficulties, and try to understand the architecture of the product. At the same time, try to understand the business: what the users expect, what their needs are, and what is planned for the upcoming 6 months, 1 year or 3 years… You will make good decisions, and you will help product owners with your information, if you understand the whole environment well, and not only the software issues.

 

This is the end of the first part. Next time we will see some other biases due to “Not enough meaning” in the article “Manage your biases as a tester – Part 2/4”. Meanwhile, don’t hesitate to leave a comment.

 

References
Buster Benson: “Cognitive bias cheat sheet – Because thinking is hard”
Michael Bolton: “Critical thinking for testers”
Maaike Brinkhof: “Mapping biases to testing”
Wikipedia: “List of cognitive biases”
Daniel Kahneman: “Thinking, Fast and Slow”
