When hiring managers are asked about the skills they look for in new hires, soft skills are increasingly rated as being as important as knowledge of key technologies. As software teams become more agile, valuing the interactions between team members and their stakeholders, this trend is likely to continue.

The phrase soft skills is really code for a group of related skills and abilities that enable people to interact effectively with other people. The skill most likely to sit at, or near, the top of that list is communication. Communication skills apply especially to verbal and non-verbal personal communication, but we should not forget that this soft skill extends to our electronic and written communications as well.

For software testers this includes our test documentation, which may cover a broad spectrum of artifacts: written test strategies and test plans, bug and test reports, performance analysis documents, supporting documentation and walkthroughs, wiki and knowledge base articles, and even our email and chat correspondence.
The most talented team of software developers and testers will be doomed to failure if they lack the ability to communicate their needs, questions, and concerns effectively to their teammates and project leadership. Because testing is so often crunched for time, effective communication is perhaps even more critical for software testers, as key decisions about what to fix, and when, hinge upon it. So how do we as software testers improve our communication skills so that the information we hope to convey is passed on accurately and in a way that is well received by the teams we support?

If we had to pick one area of our communication portfolio in which to grow our craft first, where should we start? We might choose whatever we feel is our primary tool. For some of us, email is the most-used tool for reporting; for others it is a bug tracking system designed to capture metadata about the issue we have discovered and to detail the questionable behavior we find in the software. The mechanisms we choose may even extend to wiki articles designed to pass knowledge on to the rest of the team, to customer documentation on running a complex process, or to a procedure designed to help the team quickly deploy and move on to the next area to test.

Because of the varying modes of communication used, the context of our environment will greatly impact what is called for in our messages to our team. For many of us, the test or bug reports we author are our primary means of communication, and they have long-lasting impact and consequences – especially when report follow-up may lag into subsequent development iterations. Thus, we must sharpen our craft of writing just as much as we hone our skills as testers. For the sake of this article, let us focus on bug reporting.

When testing, I like to start with the five Ws plus an H and a T: Who, What, When, Where, Why, How, and To What Extent. While digging into issues we encounter during testing, these questions help guide our inquiry into the software. Likewise, the answers to these questions form the beginnings of the record of our testing discoveries, and they help us craft our content for presentation to our stakeholders. Just as good software development has its intended customer and audience, good writing – whether a fantasy novel, a song lyric, an article in a news journal, or our latest entry in the issue tracker – has its own intended consumers, whom we must consider as we begin to put words to the screen.
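As a rough sketch of how that checklist can shape a report, the Python snippet below (a hypothetical helper; the draft fields are invented for illustration) flags which of the questions a draft bug report has not yet answered:

```python
# A minimal sketch: check which of the five Ws (plus How and To What Extent)
# a draft bug report has answered. The field names are illustrative only.
QUESTIONS = ["who", "what", "when", "where", "why", "how", "to_what_extent"]

def unanswered(report: dict) -> list:
    """Return the questions the draft report leaves blank or missing."""
    return [q for q in QUESTIONS if not report.get(q)]

draft = {
    "who": "Any logged-in user",
    "what": "Profile image fails to load",
    "where": "Account settings page",
    "when": "After uploading a new avatar",
}
print(unanswered(draft))  # -> ['why', 'how', 'to_what_extent']
```

A checklist like this makes a useful pre-submission pass: anything still unanswered is either worth another test session or worth noting explicitly as unknown.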

All reports start with a ‘who’: the audience who will view them. There may also be a ‘who’ in terms of the type of user, or the role a particular user has in the system, when the issue is found. But in general I start by considering who will be reading this bug report. It is important to identify the primary audience of the report. Will it be the developers you work alongside every day? Will it be reported up the chain to higher management, who are more interested in high-level design concerns than in extreme detail? Will technical writers use your report as the basis for a knowledge base article or FAQ that helps customers better use the system? Would you write the report differently if it were going directly to the project manager or your test manager? If so, it is useful to ask why. Communication with all of these people can be classified into three groups: upwards, downwards, and lateral.
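To make the three directions concrete, here is a small illustrative sketch (the guidance strings are my own paraphrase, not a prescribed standard) mapping each direction to the level of detail it typically calls for:

```python
# Illustrative mapping: each communication direction and the detail it calls for.
AUDIENCE_GUIDANCE = {
    "upwards": "Concise summary: what the issue is and what it will take to resolve it.",
    "downwards": "Workarounds and procedures that writers and support staff can relay to customers.",
    "lateral": "Full technical detail: reproduction steps, stack traces, environment, observations.",
}

def guidance_for(audience: str) -> str:
    """Look up reporting guidance for a communication direction."""
    return AUDIENCE_GUIDANCE.get(
        audience.lower(), "Unknown audience; start by asking who will read the report."
    )

print(guidance_for("Upwards"))
```

The fallback case matters as much as the lookup: when you cannot place your reader in one of the three groups, the first task is to find out who they are.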

The first group, the upwards, are those for whom our reports may be aggregated and rolled up into one smaller, concise report. In large organizations, reports of many kinds are often collected from direct team leads and managers and passed on to higher-level managers, product owners, and beyond, depending on how deep the chain runs toward the C-suite. Direct managers may want detailed information about what you are reporting, but it may also be that they need just enough information to understand what the issue is and what is necessary to see it resolved.

We must remember that while many managers cut their teeth at the same level where we currently work, some managers are particularly gifted at managing the people aspects of a project and may not have had the chance to gain the technical depth of the average team member. So it is important to know your direct manager, so that you can give the right level of detail and be as technical, or as simplified, in your descriptions as necessary to convey the report accurately. The risk of not taking the time to get to know your managers is wasted effort: either compiling extra information that loses much of its context and meaning when it gets rolled upwards, or being sent back to dig for more information and report again.

Turning to the second group, the downwards, let’s discuss the people who may consume what you’ve written downstream. In software there are often multiple ways to perform a particular operation, and in some cases the most obvious mechanism for triggering an action may not be working in the build you are testing. It may be that you can find a workaround, through an alternate menu or key combination, to perform the operation under test.

There may still be a bug because of the missing trigger mechanism, but you may still be able to test your software and deliver value downstream: to a technical writer who can include the workaround in the help documentation, or in a troubleshooting guide or knowledge base that helps the help desk better assist customers. In this case the bug may come down to something as simple as a key binding, and because of that, fixing it may not be a high priority. Downstream stakeholders need sufficient detail about possible workarounds so they can effectively communicate the alternate procedure to customers.

The key-binding case is just one example where a downstream stakeholder might need detailed data. Another would be an issue with how an API was deployed. You might discover a discrepancy in the public documentation for that API. It might even be an easy fix, but one that you and your team lack direct access to make. The fix may have to be applied to some artifact file, with a change request put in for a deployment engineer to roll it out to the content delivery network when the next deployment window opens for the production system. In this case it is important to be clear about what the defect in the documentation is, and to communicate the suggested changes that would clarify and close the gap.

It may also be necessary to include instructions on how to roll the change back, in the event something unexpected happens during that deployment step. The latter may be better suited for someone else on your team to write up, but as a tester you may find yourself asked to take ownership of the fix if doing so removes a distraction from the rest of the team.

Finally, the third group: the lateral recipients could be any number of teammates, including but not limited to fellow testers, operations engineers, and a wide variety of developers. The challenge here is communicating with people who hold differing viewpoints of the software (tester versus developer, for example).

For starters, developer, as a title, is rather generic and could apply to everyone from application coders, user experience designers, and database experts to network and administrative team members. Developers need enough detail to reproduce a bug, and sufficient detail about the observations made around it, to understand the problem and begin working toward the root cause. This often includes the smallest set of steps necessary to trigger the errant behavior. It may include the stack trace, if one is given, or the responses that come from an application service. More could be necessary in a bug report, but knowing your product under development is crucial to knowing just what makes sense to report.
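As one illustration of that minimal set, here is a hedged sketch in Python (the helper function and its example data are invented for this article, not taken from any real tracker) that assembles the core elements of a developer-facing report:

```python
def format_bug_report(title, steps, expected, actual, environment, stack_trace=None):
    """Assemble a developer-facing bug report: the smallest set of repro steps,
    expected vs. actual behavior, environment details, and an optional stack trace."""
    lines = [f"Title: {title}", "", "Steps to reproduce:"]
    lines += [f"  {i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += ["", f"Expected: {expected}", f"Actual:   {actual}",
              f"Environment: {environment}"]
    if stack_trace:
        lines += ["", "Stack trace:", stack_trace]
    return "\n".join(lines)

# A made-up example, just to show the shape of the output.
report = format_bug_report(
    title="Save button unresponsive on settings page",
    steps=["Log in as a standard user",
           "Open Account > Settings",
           "Change the display name and click Save"],
    expected="Settings are saved and a confirmation appears",
    actual="Nothing happens; no network request is sent",
    environment="Build 2.3.1, Firefox 28, Windows 7",
)
print(report)
```

Whatever tool you use, the same skeleton applies: steps first, then the gap between expected and actual, then the environment that makes the failure reproducible.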

Let’s consider the kinds of details that might be important. As part of testing a website, you might find yourself verifying the look and feel of important images. How would you tell the developers that an image was wrong? One way would be to indicate what page it was on and describe how it appears in relation to other elements on the page. That would tell readers where to look, and maybe tell the graphic artist what was wrong, but more detail may still be required.

By using inspector tools, like Firebug in Mozilla Firefox, you can discover properties of the web elements around the defect. These might include an element’s id, name, and CSS class, and in the case of an image, the path or file name set in its source (src) attribute. However, the image could be loaded from a style sheet and thus not set in that manner. Another option is to right-click on the image and save it to get the default file name – information the developers could then trace to a particular file. Additionally, a quick review of the style sheets used by the page, if applicable, might reveal whether that file is applied by an appropriate style rule. All of these are pieces of a puzzle that could be just the detail a developer needs.
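Those same identifying properties can also be collected programmatically. As a rough sketch (the HTML fragment below is invented for illustration), Python’s standard-library HTML parser can pull an image’s id, class, and src so they can be pasted directly into a report:

```python
from html.parser import HTMLParser

class ImageAttributeCollector(HTMLParser):
    """Collect identifying attributes (id, class, src) of <img> tags --
    the same details an inspector tool like Firebug would show."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs arrives as (name, value) pairs; keep them as a dict.
            self.images.append(dict(attrs))

# Invented page fragment, standing in for the page under test.
page = '<div class="banner"><img id="hero" class="promo" src="/img/sale_v2.png"></div>'
collector = ImageAttributeCollector()
collector.feed(page)
print(collector.images)  # -> [{'id': 'hero', 'class': 'promo', 'src': '/img/sale_v2.png'}]
```

This only catches images set in the markup; as noted above, an image applied from a style sheet would need the style rules inspected instead.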

That would give the developers a basic idea of where to look to potentially fix the issue, but what if the image itself isn’t what’s causing the problem? It might then be necessary to describe the steps you used to get to the page and any pertinent data entered along the way, and to include a screen or video capture whenever possible. Developers really appreciate details like screen captures, which can show them just enough to get their own investigation started.

As demonstrated, the detail required can vary greatly depending on what is actually going on where you find a bug. It may also be necessary to be as familiar as possible with the underlying technologies: in the case of a web page, browser tools like Firebug or the built-in developer tools, and even how CSS and HTML work. It is not necessary to be an expert in an underlying technology to be a tester, but the more you learn and explore, the more effective you will be at reporting bugs.

Knowing all the detail necessary for an adequate bug report depends largely on our understanding of the technology in use, just as it is important to know the people who will be responsible for triaging the bug and recommending a fix. The errant image on a web page could be as involved as described, or much simpler. If the image in question was a prototype for a new advertisement, a simple screen capture or description of how it appeared might be enough for the graphic artist or user experience engineer to update and fix. Clearly, the more complex the underlying issue, the more detail you can expect to report.

There is one caveat to this example: it is possible, as a tester, to dig too far in determining the nature of a bug. Trust your instincts, and consult with your team or manager when you suspect you have gone far enough. Your role on the team may also be defined well enough to indicate the boundaries of your responsibility. In the web page example, digging into the style sheets may be easy enough, but digging into the application’s source code might be a sign that you’ve crossed the boundary to where the developer is better positioned to continue the root cause analysis. Either way, knowing the team’s composition and responsibilities will help you find where you can provide the most value.

No work of software appears on its own out of the void of electronic ones and zeroes. It is instead crafted, fashioned with intent of design and purpose of interaction. That software is meant to solve some problem for us humans, its intended audience. As we grow ever more dependent on the electronic media we consume on our devices and computers, its purpose must be to make our lives better.

That mission cannot be achieved without the dedication of the software teams who work to carry these designs and their purpose from the realm of pure ideas into reality. While software developers and engineers work hard at choosing patterns of design and methods of implementation, we who are software testers seek to assist that creative journey from its beginning through its transformation into the applications and services that touch all of our lives.

Just as the software engineer works hard to craft the structure, detail, and function of the lines of code they string together, so too must we as software testers be engaged in crafting the structure and detail of the support that we provide to our teams. No testing journal, bug report, or supporting documentation comes about by accident. These artifacts that we produce as software quality advocates are intended to provide valuable communication about the software we help grow into maturity.

Because of the varying modes of communication used, the context of our environment will greatly impact our delivery. Knowing our team as the audience for our many test reports is critical to communicating our findings effectively and in a timely way to our stakeholders. Software testers must remain diligent, as our reporting artifacts have long-lasting impact and consequences. Thus, we must continue to sharpen our craft of writing and discover the audience that is our team, just as much as we hone our specific technical skills as software testers.

This article was published in our April 2014 edition.


Timothy Western

Timothy Western is a software test professional of 11 years and holds a Bachelors of Science Degree in Computer Engineering from West Virginia University, Morgantown, West Virginia, USA. In addition to being an Eagle Scout, Timothy is a member of the Association for Software Testing and a student in the Miagi-Do School of Software Testing. Timothy is involved with the Let's Code Blacksburg movement, and hopes to continue to spread good testing practices in the New River Valley. Timothy currently serves as a Software Tester for Harmonia Holdings, LLC (http://www.harmonia.com/) in Blacksburg, Virginia. In the past, he has worked as a Software Developer, Tester, and Software Developer in Test for Rackspace, ManTech International, and Stenovations. Timothy brings a passion and enthusiasm to testing and has worked on a wide variety of projects focused on e-Learning, collaboration, computer aided transcription, software tool automation, business processes, security checking, e-mail, application development, and enterprise integration. He sees software testing, itself, as a service, and seeks ways to effectively apply good test practices along with test automation at the right level and scope in supporting his teams.
