Michael Bolton

Organisation – DevelopSense

Role/Designation – Principal

Location – Toronto, Ontario

Michael Bolton is a consulting software tester and testing teacher.  He is the co-author (with senior author James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure.  Michael is a leader in the context-driven software testing movement, with 20 years of experience testing, developing, managing, and writing about software. Currently, he leads DevelopSense, a Toronto-based consultancy.  Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company’s flagship products and directed project and testing teams both in-house and around the world.

1. How did you become a software tester? How long have you been associated with software testing?

I became a software tester when, back in high school, a friend provided some instructions for entering a lunar lander program on an HP49C calculator. I entered the instructions correctly—or at least I thought I had. When I tried the program, it didn't work. I had to investigate why.

I became a software tester when I got a job that involved querying a database for information—and then writing programs to do that more effectively. If you want to write programs and have them do what you'd like them to do (and not do what you'd prefer they not do), then you have to test.

I took my first job with the title "tester" at Quarterdeck in 1994. I had been a technical and sales support person for the company, and the test manager thought I'd make a useful contribution to the test team and that I'd enjoy it. He was right on the second count, and I hope he was right on the first.

2. How different is software testing today from the day you started software testing?

I don’t know, really.  What’s “software testing today”?  Is there a single craft or discipline of software testing today—or was there one then?  I don’t get a big enough picture of the field to give a good answer.  I can only tell you about the bits that I see.  I’m happy that some people have recognized that testing is more than checking to see that a product can produce some expected output given a predicted input.  But I don’t believe that group is large enough or influential enough that that idea has taken hold across the wide world, as much as I would like it to.

3. ISTQB has survived despite stiff opposition from people like you. Do you see ISTQB ending in the next five years? What are your views on the Certified Agile Tester certification, which is debated in testing circles?

I don’t predict the future. I don’t know how to do it. I’d like to see people stop paying an employment tax to the ISTQB, a tax levied without a shred of representation. The sad thing is that the ISTQB’s marketing preys on people’s fears: employers’ fears that they’ll hire an unqualified tester (the ISTQB provides certifications, not qualifications), and testers’ fears that they’ll be seen as unqualified without paying the pound of silver for the piece of paper. And, for many kinds of marketing, fear works.

The “Certified Agile Tester” is yet another attempt by a company that produces training programs to market what is almost certainly—and at best—a perfectly ordinary four-day testing class as something grander than it is. Here’s one of the claims: “During the four day training, you acquire all the skills required to test agile successfully.” I shouldn’t have to say anything about that at all; no person who takes testing seriously would take a claim like that seriously. Here’s another: “‘Certified Agile Tester’ provides you with all skills and competencies required to run agile projects efficiently.” Wow—not just the skills to test on Agile projects, but to run them.

And who are the people who developed the certification? Can they be trusted? Do they have a known body of work that we could look at and discuss? If you go to the site of the “Institute” that promotes the certification, you’ll find nothing about the qualifications of the people who designed it; you won’t even find out who they are. They might as well be sock puppets, for all I know.

4. Tell us something about test framing.

Test framing is a skill that I believe all testers need. It’s the skill of constructing and describing a chain of logic that links a test and its result all the way up to the overall testing mission. I believe that’s a critical skill.

In our travels and in our online coaching, James Bach and I have both observed that many testers seem to be remarkably unprepared to present the framing of their tests. You perform some action, you observe some result, and you determine that there’s a problem. What, specifically, suggests that there’s a problem here? What was the risk? For whom would that be a problem? Why did you choose that test? Why did you not choose some other test? How do your choices meet the mission? We’ve found that testers often stumble when we ask these kinds of questions. I’ve written a good deal about test framing here: http://www.developsense.com/blog/2010/09/test-framing/

5. We read your experience report of testing an in-flight entertainment system on Pradeep’s blog long ago, and we know you travel a lot. Has the airline taken any action since?

I haven’t been on that airline for a while, but a year or so after I went through that exercise, I was able to reproduce the problems I found.

6. You have a long association with the Association for Software Testing (AST) and have contributed to the BBST courses. Are you still active in its activities, and how do you promote the organisation from your own perspective?

I admire the work that the Association has been doing—and I especially admire Cem Kaner and all of the people who have volunteered to deliver the Black Box Software Testing courses.  Those courses have my highest recommendation; I refer to them in my classes, in my talks, in my blog, and so forth. I attend the CAST conference regularly, and I was conference chair back in 2008.  I have contributed plenty of feedback to the courses, and especially to the Test Design course.  All that said, I’m not a core organizer of the Association for Software Testing.

7. I meet teams where exploratory testing is vigorously practiced on both traditional and agile developments. They tell me that if they don’t introduce a high level of automation into their development cycle, they have less time left to spend on exploratory testing. You believe that automation is no more than checking, which is contrary to my thinking. I believe that automation requires a great degree of intuition to build the right tests, so that they can be run multiple times to gain confidence in the application without having to run them manually. You have contrasting views. What advice do you have for these testers?

Ian Mitroff says that when there are enough problems, you’ve got a mess. I’d have to start by saying that you’re describing a mess of problems with the way I prefer to think about testing. It’s difficult for me to figure out where to start here.

Let’s start with the idea that exploratory testing is something that you might not have enough time for. Exploratory testing is not a thing you spend time on, but a way to approach all of your work. It’s not a thing that you do; it’s a way that you think and a way that you work. It’s not a task to be performed after you’ve done all of your other testing (whatever that might be). Exploratory testing is characterized by the extent to which the information you’re learning feeds back into your test design and your test execution. I’ve written most recently about that here: http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-2-after-everything-else-testing/

A related problem is the idea that test automation is also something that you do. But like exploratory testing, automation-assisted testing is an approach, not an activity or a technique. The two approaches—automation-assisted and exploratory—are not incompatible. You can do exploratory testing that uses a great degree of automation, and you can use automation in a very exploratory or in a very scripted way. I’ve written about that here: http://www.developsense.com/blog/2011/12/what-exploratory-testing-is-not-part-3-tool-free-testing/

You’re making a claim that I’ve said that automation is no more than checking. That claim is simply incorrect, an issue also addressed in that same blog post. But it’s true that automated checks are no more than checking. Each check produces a single bit: true or false, one or zero, yes or no, green or red, pass or fail. Testing isn’t about passing and failing a check, though—and it’s not even about passing or failing a whole bunch of checks. Testing is about investigating the product and looking actively for problems in it.

Every check uses a single oracle, a single principle by which we’d recognize a problem. At the unit level, it makes a good deal of sense to develop those kinds of checks. At low levels, it’s easier to anticipate the kinds of problems that checks can help to identify. You’re closer to the code, the intended behaviour of the code is easier to spot, and the potential problems are more atomic. Plus, for the programmers, the feedback loops are faster. At higher levels, you have two choices: you can get humans to interact with the product in as many ways as they can (often with the assistance of tools); or you can try to anticipate the kinds of problems that checks can find, and program as many checks as you can. But why bother with the second approach when programming the checks is much harder, and when the machinery is incapable of recognizing new risks and new problems?

The next problem is the dangling motivation for throwing enormous amounts of time and effort into automated checks: “if they don’t introduce a high level of automation into their development cycle, they have less time left to spend on exploratory testing.” Well, what problem is the high level of automation supposed to solve? What’s the risk that people want to address here? The problem that programmers might make mistakes because they’re working too quickly? Is there evidence that mistakes or regression problems happen a lot? If there are a lot of regression problems, or if people are worried about the possibility, I’d like you to consider the idea that you already have a test result: the speed of development may be more than your programmers and managers can handle confidently. I’d further suggest that if that’s the case, and people actually care about addressing the risk, the programmers should work less quickly, or work more carefully, or check each other’s work more thoroughly.

More high-level checks seem to me a really poor way to address lower-level problems. I have no problem with the programmers developing automated checks for their own work, and collaborating with testers on ideas about risk. But there’s an assumption that because we should have lots of automated, low-level functional checks, we should apply automated checks to the same degree at higher levels of the program, and that testers should be programming those checks. To me, those conclusions don’t follow.

Yet another problem: the idea that testing (or worse, automated checking) is there to provide confidence in the program. To me, that’s not the point of testing at all. The point of testing is not to build confidence, but to demolish false confidence and identify risk that’s still there. If you want confidence, ask a programmer. Most of them have plenty of confidence. How do they acquire that confidence? Many of them acquire it by not testing at all, or by testing in a fairly shallow way. As a tester, it’s not my job to build confidence, but to question it.
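The distinction between a check (a single machine-decidable bit) and investigative testing can be sketched in a few lines of illustrative Python. The function names here are hypothetical, chosen for the example, and not drawn from any real test framework:

```python
def add(a, b):
    return a + b

def check_add():
    # A "check" in this sense: a single oracle applied mechanically,
    # compressing the interaction into one bit -- pass (True) or fail (False).
    return add(2, 2) == 4

def probe_add(cases):
    # An investigative probe, by contrast, records observations rather than
    # issuing verdicts; a human studies the results, looking for problems
    # that no single pre-programmed oracle anticipated.
    observations = []
    for a, b in cases:
        observations.append((a, b, add(a, b)))
    return observations

print(check_add())  # one bit of information
print(probe_add([(0.1, 0.2), (10**18, 1), ("a", "b")]))
```

Running the probe on varied inputs surfaces things a pass/fail check would never report, such as floating-point surprises or unexpected behaviour with strings, which a human can then evaluate as problems or non-problems.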

8. I meet teams who claim to be doing exploratory testing during each sprint but they do not record the sessions. Instead they find the problems and build tests in their automated suite. What message do you have for these testers?

Consider cost versus value versus risk in each test activity. Not recording the sessions has a very low cost—nil—but is there a risk there? Is there value in recording some aspects of the session? Maybe so—but maybe not. That’s up to the team to work out; that much I will say.

9. Where do you see Software Testing in next five years/ten years?

I don’t make predictions like that.  They give little but a million chances to be wrong.  I’m okay with the world turning as it will, and responding to it.  (Predicting the future is an unsustainable management approach too, by the way, especially for testing.)

10. Michael Bolton minus tester is ______.

A question I’m not going to answer.

11. A different question – What is your opinion of Indian software testers?

That’s like the question about software testing today, way above.  I’ve met some Indian testers—the Weekend Testers and the people from Moolya come to mind—who are interested in the investigative, learning-oriented approach to testing.  I’ve met others who believe that testing is about showing that the product works, which is a very weak kind of testing indeed.  Those testers and their teachers learned that point of view mostly from big, dumb, Western organizations, so far as I can tell.  I hope that comes to an end.

12. Complete this sentence – “I use twitter because -”

As someone who works in a virtual office, it provides me with a virtual break room where I can have quick chats with my colleagues and find out what’s interesting to them.

13. Last question – Do you read Testing Circus? If yes, what is your opinion about this magazine?

I don’t read Testing Circus as much as I’d like. I’ve got a huge backlog these days. I’ll look forward to the next issue, though.


Ajoy Kumar Singha

Ajoy is the founder and editor of Testing Circus, a magazine read and subscribed to by thousands of professional testers around the world. He is a founding member of testing forums such as the NCR Testers Monthly Meet. Follow Ajoy on Twitter.