What is a Thinking Tester?
A Thinking Tester is someone who actively improves their skills, questions the status quo and continues to learn about their profession.
Every legitimately context-driven tester is a Thinking Tester. However, Thinking Testers can also be found in traditional testing projects following prescribed or “best-practice” processes. These testers are aware that their IEEE 829 test plans are almost identical between releases and are out-of-date immediately after achieving sign-off. They’ve probably heard about exploratory testing, but aren’t sure whether it would work for their particular project…
I’ve come up with some tips to assist you with identifying these Thinking Testers. With a little encouragement these testers could join the online global testing community, contributing to the ongoing improvement of testing methods. Ideally the terms Thinking Tester, Competent Tester and Sapient Tester will become unnecessary and obsolete.

Non-Thinking Testers
When asked about their testing approach, Non-Thinking Testers start talking about the ISTQB-prescribed Software Development Life Cycle phases. They feel strongly that test automation is somebody else’s problem, and that they shouldn’t be forced to learn to use a new tool. When these testers require new skills they expect to be trained on company time. They buy into the underlying assumption of traditional testing approaches: that anyone off the street should be able to come in and execute test cases with minimal training, and that this would constitute ‘testing’.
The “Diary of a Fake Software Tester” section in this magazine is an excellent portrayal of a Non-Thinking Tester. As you’re currently reading a magazine about testing I’d likely class you as a Thinking Tester.

Recognizing a Thinking Tester
I talk passionately about the software testing profession, and I get an interesting variety of responses from testers.
The responses from Non-Thinking Testers hardly vary: “No, I haven’t heard of James Bach.”
“You go to tester meetups after work?!”
“Oh right, that’s good…”
On the other hand, responses such as these ones give me hope:
“How will I know what to test without test cases?”
“But how do you know when you’re finished testing?”
“How would we report on the progress of testing?”
“How did you find out about the tester meetup?”
Each of these responses indicates that the tester is engaged and would like to know more. Their tone may often sound defensive, because they’re confused or fear change. This is a natural reaction considering the time and effort which they’ve put into their existing test cases and documentation. I look at this defensiveness as a positive step. The tester is considering my views, and the potential impact that adopting a new approach would have on their role and on product quality.

Are you a Thinking Tester?
Let’s assume for the moment that you’re not following a context-driven approach to software testing.

Using test cases as an example:
– Test cases are unwieldy.
– It’s time consuming – if not impossible – to maintain detailed test cases which are precise and accurate.
– The number of tests grows with every product release, as does the amount of time needed to execute these tests.
Faced with these facts, some Thinking Testers look for minor ways to improve their processes. Here I’m calling these ‘compromises’. Compromises are ways to continue following traditional testing methods, while allowing for the fact that those methods have begun to slow down testing. In software testing, we regularly make compromises, which isn’t a bad thing, until we forget that we’ve made a compromise and treat the current process as the “best way”.
One example of compromise is to write one-line test cases, rather than step-by-step test cases with expected results for each step. This eliminates the need to write “Login as a valid user” as step 1 for most test cases, with the expected result of “User is logged in”.
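As a hypothetical illustration (the data structures, test names, and steps here are my own inventions, not from any specific test management tool), the difference between the two styles might look like this:

```python
# Hypothetical sketch contrasting a step-by-step test case with its
# one-line equivalent. Field names and content are invented for illustration.

# Step-by-step style: every case repeats boilerplate such as logging in,
# and each step carries its own expected result.
step_by_step_case = {
    "title": "Edit profile name",
    "steps": [
        {"action": "Login as a valid user", "expected": "User is logged in"},
        {"action": "Open the profile page", "expected": "Profile page is shown"},
        {"action": "Change the display name and save", "expected": "New name is shown"},
    ],
}

# One-line style: the same testing intent in a single sentence, trusting
# the tester to know how to log in and navigate the product.
one_line_case = "A logged-in user can change their display name on the profile page"

# The one-line form carries the same intent with far less text to maintain.
print(len(str(step_by_step_case)), "characters vs", len(one_line_case))
```

The design point is that the one-line case documents *what* to verify, while the tester supplies the *how* from product knowledge, which is exactly the knowledge the step-by-step boilerplate was duplicating.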

Does this look familiar?

Having one-line test cases frees up testers’ time from copying and pasting vast quantities of information. Note that “freeing up testers’ time” is a euphemism. What it really means is “stop wasting the company’s money”.

Another compromise is to also test around the area of each test case during execution, rather than trying to capture every discrete test in writing. If this is combined with a screen capturing tool or session report, you’ll be moving towards a structured exploratory testing approach.
At some point, usually when the test team is under pressure, I’ve noticed a tendency to forget that these compromises are in place for a reason – that it’s impossible to capture all relevant information in test cases. The team reverts to believing that the test cases are the complete record of everything to be tested for the product release. The most notable sign of this is the “Percentage of tests completed” metric, found on daily and weekly status reports. To the Non-Thinking Tester, this metric really means something: it shows that there is a finite amount of testing to be done; exactly how much has been completed so far; and that testing will be finished as soon as the green bar reaches 100%.
To the Thinking Tester, this metric will be frustrating and annoying. At best, it’s a vague representation to management of approximately how much test effort is remaining for the project. At worst, it will be used as the sole measure of test progress and as an indication of product quality. This “frustration with the status quo” is one heuristic for identifying Thinking Testers. It should not be confused with the “complaining often” heuristic for identifying Non-Thinking Testers! It takes practice to detect the difference.
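To make the Thinking Tester’s frustration concrete, here is a toy sketch (all numbers and names are invented) of everything the “percentage of tests completed” metric actually computes – a ratio of counts:

```python
# Toy illustration of the "percentage of tests completed" metric.
# The figures are invented; the point is what the calculation does NOT capture.

executed = 180  # test cases marked as executed
total = 200     # test cases in the plan

percent_complete = 100 * executed / total
print(f"{percent_complete:.0f}% complete")

# The metric treats every test case as equal in size and risk, and treats
# the plan as exhaustive. It cannot reflect risk areas with no test cases
# at all, tests not yet written, or the severity of bugs found so far.
```

Viewed this way, the green bar is arithmetic over an inventory, not a statement about product quality.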

Context-Driven Testing (CDT)
Some context-driven testers – including myself – used to follow traditional methods of testing. As Thinking Testers we were increasingly frustrated while using documentation-heavy processes. We are testers who continue to ask questions and learn, and who are genuinely concerned with the quality of our products and the quality of our own work.
To overcome frustration Thinking Testers need to research and read more widely about their profession. To excel at any profession, a commitment is required which goes above and beyond your day job. All testers with a computer, tablet or smartphone at home could be reading about various approaches to testing, and practising those test methods.

The CDT Tipping Point
I’ve identified a few ways for Thinking Testers to reach the CDT tipping point, i.e. the moment where testers realize that there are fundamental flaws in the traditional “one size fits all” approach to software testing, and become receptive to considering CDT methods.
The most effective method is to work with a coach or mentor. A coach can guide testers through this period of professional growth, providing resources and support. The Association for Software Testing has the details of a number of test leaders who are willing to provide coaching services. Many more AST members are also willing to provide online coaching sessions if approached by email.
Reading books about context-driven and exploratory testing can help Thinking Testers to reach the tipping point in their own time. As a starting point, I recommend reading Lessons Learned in Software Testing by Kaner, Bach & Pettichord and Explore It! by Elisabeth Hendrickson.
Finally, increased exposure to experience reports of professional testing approaches is an excellent way to be inspired and motivated. For example, attending conferences on software testing, viewing past conference talks on YouTube, reading testing magazines and subscribing to blog sites such as the Software Testing Club.

This article was published in our October 2014 edition.

Kim Engel

Kim Engel is a pragmatic software test manager focussed on user experience, risk and fostering communication between stakeholders. She has a passion for learning; is an avid reader of testing blogs; posts occasionally on her blog; and tweets as @kengel100. Kim is a member of the ISST and is based in Auckland, New Zealand.
