This is the fourth and last article in the series about cognitive biases. If you haven’t read it yet, please start with the first article: Manage your biases as a tester Part 1. In this last one, we’ll look at some biases from the category “What should we remember” of Buster Benson’s categorization, according to his article.
What should we remember?
Negativity bias

Something very positive will generally have less of an impact on a person’s behaviour and cognition than something equally emotional but negative.
Be careful not to be too negative about someone else’s work, because negative words have a greater impact on people than the equivalent positive ones. Because of the “IKEA effect”, everyone is very sensitive about their own work and wants it to be appreciated, not criticized. We, software testers, have a responsibility to be good communicators in all circumstances; for good news, but also for bad news. Also, showing some empathy will always be rewarded.
Here are a few reads we recommend this week:
Do I need to write an automated check, or should I test manually (with or without tools)? Alan Richardson (Eviltester to his friends) shares his point of view in this article: http://blog.eviltester.com/2016/10/q-when-do-we-prefer-manual-testing-over.html
No testing in DevOps, or testing everywhere. Dan Ashby has an answer here: https://danashby.co.uk/2016/10/19/continuous-testing-in-devops/
Who are you, testers? The Good, the Bad, the Ugly: Teamwork for Software Testers.
Interested in feedback about working with remote software testing students? Read this story: https://medium.com/linagora-engineering/working-with-software-testing-students-b8fddccb53b7
In the first part of this series dedicated to biases, we saw a list of biases due to “Too much information”. Then in the second, some cognitive biases due to “Not enough meaning”. If you haven’t read them yet, I suggest you start with those two articles before this one. In this third part, we’ll see that the “Need to act fast” can also lead to some biases.
Need to act fast
Risk compensation

A theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful where they feel more protected.
As a software tester, you may feel more protected if you know that a lot of unit tests, integration tests and end-to-end tests run on each build (and that they are green and enabled), but that doesn’t mean there is no risk to evaluate in the new version, in particular if the new developments have poor unit testing, useless integration tests and no end-to-end tests. You may have more checks and at the same time need to be more aware of and more cautious with what is a candidate for release. Try not to compensate for the perceived reduction in risk.
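To make this concrete, here is a minimal hypothetical Python sketch (the function and test names are invented for illustration, not taken from any real project): a test suite can be entirely green while a branch of the new code is never exercised, so a green build alone is a weak reason to feel protected.

```python
def apply_discount(price: float, is_member: bool) -> float:
    """Hypothetical new feature: members get 10% off."""
    if is_member:
        return price * 0.9
    # Branch added in this release, never exercised by the suite below.
    return price

def test_member_discount():
    # The only check in the suite: it passes, so the build is green.
    assert apply_discount(100.0, True) == 90.0

test_member_discount()
print("suite green")  # green build, yet the non-member path was never tested
```

The suite passing says nothing about the untested non-member path; that residual risk still has to be evaluated by a human.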
In the first part of this series dedicated to biases, we saw a list of biases due to “Too much information”. I suggest you read it first if you haven’t already. In this second part, we’ll see that “Not enough meaning” can also lead to some biases.
Not enough meaning
Illusion of validity
A person overestimates his or her ability to interpret and predict accurately the outcome when analyzing a set of data, in particular when the data analyzed show a very consistent pattern—that is, when the data “tell” a coherent story.
You are probably a victim of the “Illusion of validity” when you test a feature on a limited number of environments. If you couldn’t find any issue when testing on the 10 major environments, the pattern is that the feature works on all 10 of them; the story told by those 10 results is that there is no problem with that piece of software. But those 10 environments are only a fraction of all the possibilities, and if you stop there you may miss something very troublesome. Testing can be infinite and you will hardly ever be able to test everything, but it’s your job as a tester to communicate that you couldn’t test all environments and to give a good overview of the remaining risks.
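As a rough illustration (the environment dimensions below are invented for the sake of the example, not from the article), a quick Python count shows how small a sample those 10 environments can be once you combine even a few dimensions:

```python
from itertools import product

# Hypothetical environment dimensions for a web application.
browsers = ["Chrome", "Firefox", "Safari", "Edge", "Opera"]
operating_systems = ["Windows", "macOS", "Linux", "Android", "iOS"]
screen_sizes = ["mobile", "tablet", "desktop", "wide"]

# Every combination of browser, OS and screen size.
all_environments = list(product(browsers, operating_systems, screen_sizes))
tested = 10  # the "10 major environments" actually tested

print(len(all_environments))                    # 100 combinations
print(f"{tested / len(all_environments):.0%}")  # 10% of the space covered
```

Ten green results over a hundred possible combinations still tell a coherent story, which is exactly why the illusion of validity is so tempting.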
“Software testing conferences are so expensive that the best way to attend one is to be selected as a speaker.” Those have been my words for the last few years. Having been selected as a speaker, I headed to Tallinn, the capital of Estonia, in June 2016, for a promising two-day conference that had been recommended to me by one of my previous managers as one of the best he had ever attended in Europe.
If you have read my first article about this conference as an attendee, you may already be convinced that it is an awesome conference to attend! You may, however, still need to be persuaded that it is also a really good experience to speak there; in that case, here are a few points detailing my feedback as a presenter. …
Here are a few reads we recommend this week:
- Some tips to influence developers: https://mysoftwarequality.wordpress.com/2016/10/04/5-easy-steps-for-testers-to-influence-developers/
- A good automation strategy is mandatory; read this to help you choose which test cases to automate: http://automation-beyond.com/2016/10/10/how-do-you-choose-which-test-cases-to-automate/
- The evolution of the testing pyramid: http://james-willett.com/2016/09/the-evolution-of-the-testing-pyramid/
- Interesting feedback about remote working in a podcast (blog article linked inside): http://testinginthepub.co.uk/testinginthepub/testing/testing-pub-episode-36-part-remote-team/
- You don’t know how to write test code? You don’t have the time for it? Here are some useful capture-and-playback tools available online: https://medium.com/@bcjordan/using-sweet-robots-to-test-your-websites-%EF%B8%8F-without-writing-code-3ccdb2ae9c67#.opbjfi4vt
- Your users are your testers 🙂 : https://medium.com/swizec-a-geek-with-a-hat/you-dont-need-tests-6bf64b442072#.qqld6zhdc
I have wanted to write about cognitive biases for a long time. I first became aware of them while reading the fascinating book “Thinking, Fast and Slow” by Daniel Kahneman (no newbie, he won a Nobel prize), and then through several blog posts and articles (references at the end). A list of all cognitive biases is available on Wikipedia, but it is such a huge, disorganized “tangled mess” that the task of writing about biases in testing frightened me, and I have to admit that I procrastinated. And then a blog post from Buster Benson (Product at SlackHQ) saved me. Thanks to his paternity leave, he decided to organize the mess and found four problems that biases help us address: “Too much information”, “Not enough meaning”, “Need to act fast” and “What should we remember”. And now everything is almost crystal clear… at least if you go and read his post.
An awesome visualization was created by John Manoogian III and has since been added to the Wikipedia page.