The Way to Involve Stakeholders in Testing
When I took my first testing course, eighteen years ago now, I was taught that a typical testing project includes a test strategy, a test plan, test scripts, executing the test cases and reporting the outcome. And I was taught that testing is a profession and that you need professional testers to do good testing.
But sometimes you are in a situation where this is just not the best way to test. In this article I’ll describe a project in which the testing was done differently, without professional testers. By now I know some people call this ‘bug hunts’, and to this day I am convinced bug hunts are the best way to test in this particular situation.
The context: organization, project and system
The organization where I did this project is a governmental organization. It initiates, funds and coordinates all kinds of research projects and public campaigns: a relatively small, dedicated and enthusiastic group of specialists. But they had no knowledge of or experience with IT projects in general or testing in particular, and the organization’s IT department was very small.
The aim of the project was to build a web application where the public could get personal advice on energy savings. The advice was based on a complex questionnaire that visitors needed to fill in, and there was quite a lot of functionality in both the questionnaire and the advice. The development of the application was outsourced to a software house. In the project I was requirements analyst, quality assurance officer and tester at the same time. Furthermore, a project manager, half a dozen users (content experts) and two employees from the IT department were involved. The application was built in iterations; the requirements were elicited and documented up front.
The problem with testing
The iterations generally took two or three weeks. During each iteration we discussed the requirements with the developers, and we did several workshops with users and developers to refine them as well. Making a test strategy, essentially setting priorities for the iteration, was no problem. The problem was that I was only involved part time in the project and simply didn’t have the time to write test cases and test scripts. There were no other professional testers in the organization and there was no budget to hire one. So there was a lot of test work to be done; the project evolved and the test work was piling up.
During a discussion about this situation, the project manager suggested letting the users test: “After all, it is a user acceptance test and they have domain knowledge!” At first I wasn’t very enthusiastic about this idea; they were not professional testers and, after all, testing is a profession. On the other hand, the users were definitely enthusiastic, open-minded people with a critical mindset and a lot of business knowledge. And they had time. So I came to the conclusion we had to give it a try.
We started with a workshop in which I told the users what my expectations were and how we were going to work. We decided to hold a test preparation workshop after each prioritization workshop. We called it a test specification workshop, but we didn’t write the test specifications we know from script-based testing. Instead, we listed everything we could test for the parts of the system built in the current iteration. The output was not a test script; it was more like a checklist. Today I call this checklist a set of test ideas or test points: short statements about what to test, but not in detail how to test it. These test points were divided over different test charters.
We decided to execute the tests in test sessions on Friday mornings, after the software was delivered. This way I was able to monitor the test activities, and we were able to work together on the project. The tests were mostly done in pairs. One of the advantages, besides having four eyes instead of two looking at the screen, was that the tester who wasn’t hitting the buttons could document the defects.
The first test session was, to be honest, not a great success. I asked the testers to bring their laptops for testing the software. That didn’t work out. The first problem was that I couldn’t install the software on every machine due to limited user rights. The second problem was that the system didn’t work properly on some machines due to specific settings. At about 11 am I was completely over-stressed and most testers were just drinking coffee. We did not do a lot of testing and I learned (again) that the test environment is an essential and complex part of a test project and that you need to prepare and pre-test your test environment.
During the second test session we used separate laptops on which I had installed the software and performed an intake test prior to the session. This time we had four users, an IT employee and the project manager, working in three pairs. I walked around, discussed defects, requirements and specific screens, and saw it work out very well. During the last half hour we discussed the most important defects together, and after the session I handed the defects over to the software house. I was more than satisfied!
The testing only got better
From that day on we did test sessions, or bug hunts as I call them now, every week. And they only got better organized and managed, mainly because we repeatedly evaluated the test sessions. I once heard someone say ‘If you are allowed to implement only one single agile practice, it should be the retrospective.’ I fully agree: thanks to these evaluations the test sessions continually improved.
After the first session we decided to hold the test sessions every Friday morning. Everybody involved in the project knew about it. Not all users were present every week, but we invited others to join the sessions as well; even the managing director joined a test session once! This way stakeholders got involved in the project and influenced the final result. Acceptance was not an issue: it was built into the approach!
During the project we noticed a growing enthusiasm for the test sessions. The sessions were not only useful, they were fun as well. I took care of cookies (the kind you can eat) and candy bars. Sometimes I had a little gift for the best defect found that morning. And I always tried to boost the energy! Testing was fun, and the stakeholders were really involved.
To manage the test sessions I made test charters, each with a specific scope, test goal and test points. Every pair got a test charter, which was enough work for about one hour. Per session the pairs had about two hours of test time, not counting instruction, breaks and the debrief. The extra hour was used for exploratory testing (we called it free testing), preferably within the charter’s scope and goal. Note taking wasn’t very good, but at least I got the testers to log defects well.
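The charter structure described above can be sketched in code. This is purely my illustration, not an artifact from the project: the field names and the example charter for the energy-advice questionnaire are assumed from the description in this article.

```python
from dataclasses import dataclass, field

@dataclass
class TestCharter:
    """A hypothetical rendering of a bug-hunt test charter: scope, goal
    and a checklist of test points (what to test, not how)."""
    scope: str                 # which part of the system this charter covers
    goal: str                  # what the pair should find out
    test_points: list = field(default_factory=list)  # short 'what to test' statements

    def debrief_header(self):
        """One-line summary a pair could use when reporting back."""
        return f"{self.scope}: {self.goal} ({len(self.test_points)} test points)"

# Example charter (invented) for the questionnaire of the energy-advice application
charter = TestCharter(
    scope="Questionnaire, house characteristics",
    goal="Check that every answer combination leads to a valid advice",
    test_points=[
        "Switching house type resets dependent questions",
        "Mandatory fields block the Next button",
        "Back button preserves earlier answers",
    ],
)
print(charter.debrief_header())
```

One charter like this was roughly an hour of work for a pair, leaving the rest of the session for free testing around the same scope and goal.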
Talking about defects: the test sessions led to loads of defects, which for me meant a lot of managing. Most of the testers reported their bugs pretty well; that wasn’t a problem. Prioritizing the bugs and discussing them with the software house was. We started with a prioritization meeting, but this took too much time, so in the course of the project the project manager and I prioritized the bugs ourselves, based on suggestions by the testers. In practice this meant downscaling the priority. With the software house we discussed a lot about whether an issue was a defect or an additional user requirement. So that part was business as usual.
Of course not everything was bright and shiny. Not every (pair of) tester(s) was disciplined enough to stay within the goal, scope and test points of a specific test charter. And we sometimes missed defects, especially in the more complex parts of the software; to compensate, I tested some parts of the system myself using global test scripts. Another issue was that most testers were not very motivated to re-test solved defects.
Every organization, every system and every project is different, so every test project requires a unique approach. Situational testing offers various forms of testing and allows you to complete your testing projects flexibly, at the lowest possible cost and in the shortest possible time. Among the forms of testing recognized within Situational testing are factory-based testing, global scripting, session-based testing, bug hunts, test tours and freestyle exploratory testing. Testers choose one or more of these test forms, depending on the organization, project and system. This way, Situational testing encourages you to approach testing in a flexible, structured and pragmatic fashion. With Situational testing you achieve better results than with any single fixed approach.
The value of bug hunts
There is no one way of testing that always fits best; whether a specific way of testing is effective depends on the context. But in the context described above, bug hunts worked very well. We implemented them in a trial-and-error way, but in the end we had an approach that fitted the context. I’m sure bug hunts, or a variation on them, can be successful in other projects as well, and I’m convinced a lot of test professionals don’t even consider them.
Personally, I think test professionals should master different ways of testing and should be able to judge which way fits best in a specific situation. To help test professionals do this, at SYSQA B.V. we developed Situational testing. You can find more information about it in the box highlighted above.
So for me this leads to two conclusions. The first is that successful testing with non-testers is possible. Not for every organization, project or system, but at the end of the day bug hunts are a form of structured testing with specific conditions and advantages. The second conclusion is that successful test professionals should master different ways of testing; Situational testing helps them achieve that goal. Personally, I not only learned a lot from this project, we also had a lot of fun. We did a good job, and that feels good.