In most projects, testers are the keepers of quality. Sharing the vision of quality with the entire team helps everyone involved in a project play a more active role in determining the state of quality in a product. In this column, Jeff Patton shares several innovative ideas he's seen in practice lately that have helped an entire team take ownership of the quality of its software.
Testers in software development generally have it rough. Testing is often the last major effort completed before software ships, and since the terms "software" and "late" have become almost synonymous, it's the testers who draw the scrutiny of the entire company as they work to test the software at the eleventh hour. It's the testers who get pressured to work faster and then to declare the software "releasable," often before they've had enough time to be really confident that it is. Then, to add insult to injury, if issues are found with the software after it ships, everyone looks back to the testers and asks, "Why didn't you find those bugs?" It wasn't the testers who created the bugs, but, unfortunately, they have to shoulder some of the blame for the ones that get released.
I've been working in agile development for many years now, and, to some degree, testers there have it even rougher. Agile development divides the construction of software into short development cycles; one to four weeks is common. At the end of each cycle, the software built during that cycle should be tested and at a quality high enough to ship, even though it usually doesn't ship. This means that a tester in an agile context may feel end-of-project pressure not every few months but every few weeks. It's out of that agile pressure cooker that I've recently seen some interesting and innovative practices emerge.
The first thing an enlightened team needs to realize is that the whole team is responsible for the quality of the software. Second, for the whole team to begin taking responsibility for quality, it needs to understand quality. It's often the testers who have the best idea of what quality means for the product and who are in the best position to keep the team informed about it.
Will, the senior tester on an agile team I'm currently working with, has taken to heart the responsibility of keeping quality visible and has come up with some innovative and useful ways to do so. Will arrived at these strategies partly on his own and partly as a result of working with James Bach when he visited the company recently. Using the strategies Will has arrived at, we might end up with something like the figure below. Will's actual work is considerably higher quality.
Report Test Depth and Subjective Quality
Will knows that it's not as simple as saying the quality is high or low. For example, if his team only had time to do a shallow first pass at testing a particular product area and found no bugs, was the quality high or low? Of course, it's impossible to say. So Will and his team have chosen to report both pieces of information: the "depth" of testing they've been able to complete and their subjective assessment of quality.
To report depth of testing, try using a number between one and five, where one means a shallow first pass and five means thorough testing of all aspects of the software, including boundaries and extreme failure conditions. If you're testing as a team, ask each team member to think of a number between one and five representing how deeply they feel the software has been tested. Then, when everyone has a number, ask everyone to show it at the same time by raising that many fingers on one hand. This feels a bit like playing rock-paper-scissors, but no one loses.

If you find that not everyone agrees, which you probably will, discuss the outliers as a team. For instance, if I choose one and you choose three, we can discuss why you believe the depth of testing was moderate and why I believe it was shallow. After some discussion, we can vote again to see where we all are. If after some number of votes we still don't agree, we'll choose the most pessimistic assessment.
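If it helps to see the decision rule written down, here's a minimal sketch in Python. The function name, the sample votes, and the three-round limit are my own assumptions; the one rule carried over from the exercise above is that a team that can't converge falls back to the most pessimistic (lowest) score.

    def settle_depth_vote(rounds, max_rounds=3):
        """Resolve successive rounds of 1-5 depth votes into a single score."""
        for round_number, votes in enumerate(rounds, start=1):
            if len(set(votes)) == 1:        # consensus: everyone showed the same number
                return votes[0]
            if round_number == max_rounds:  # still split after the last round:
                return min(votes)           # take the most pessimistic assessment
        return min(rounds[-1])              # voting stopped early without consensus

    # Two rounds of discussion narrow the spread; the third still disagrees,
    # so the pessimistic score (2) wins.
    print(settle_depth_vote([[1, 3, 4], [2, 3, 3], [2, 3, 3]]))  # -> 2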
Use a similar approach to report quality, this time using your thumb: thumbs up for high quality, down for low, and sideways for moderate.
This quick, collaborative exercise also has the benefit of leaving everyone on the testing team with a common understanding of depth and quality.
Report the pair of assessments to the team as a one-to-five score for depth and high, medium, or low for your testing team's quality assessment. Will uses some number of stars for depth and a happy face, neutral face, or frown face for quality.
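If you'd rather generate those little reports than draw them, here's a tiny sketch. The star-and-face format is borrowed from Will's reports, but the function and its names are hypothetical:

    # Hypothetical renderer for a (depth, quality) pair in a star-and-face style.
    FACES = {"high": ":-)", "medium": ":-|", "low": ":-("}

    def render_assessment(depth, quality):
        """Show depth as one-to-five stars and quality as a face."""
        return "*" * depth + " " + FACES[quality]

    print(render_assessment(4, "high"))    # **** :-)
    print(render_assessment(2, "medium"))  # ** :-|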
At Cycle's End, Report Quality of New Features
It's common in agile projects to stage a demo of the product at the end of a development cycle. During the demo, each new feature is demonstrated for the entire team to see. It's here that Will reports the depth of testing and quality ratings individually for each feature of the product.
In the past, the testing team may have complained that it didn't have enough time to test thoroughly. Now no complaint is necessary: the depth of testing is clearly reported, and it's up to the team as a whole to decide what to do about it. After watching this practice in action, I've seen developers agree that they need to finish coding earlier in the cycle to give testers more time. Really, I'm not kidding.
Report Cumulative Quality on the Product as a Whole
As development continues, more and more features are added to the product. Testers keep working hard, testing both the new features and the product as a whole. They're interested in how all these features fit together and in increasing their depth of testing across the product. To keep the team informed of the quality of the product as they see it, the testers assess the entire product at the end of every cycle.
The product being built is big, so a single assessment for the entire product wouldn't be very valuable. Fortunately, the product is divided into a couple dozen functional areas, and for each one the test team offers a depth and quality assessment. The result is a "report card" of sorts, composed of depth ratings and quality assessments. The goal of the entire team is to push both depth and quality as high as possible.
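Here's a rough sketch of what such a report card might look like as a small script; the functional areas and their ratings are invented for illustration:

    # Invented functional areas and (depth, quality) ratings, for illustration only.
    report_card = {
        "Accounts":  (4, "high"),
        "Search":    (2, "medium"),
        "Reporting": (1, "low"),
    }

    for area, (depth, quality) in report_card.items():
        stars = "*" * depth + "." * (5 - depth)  # depth of testing, out of five
        print(f"{area:<12} {stars}  quality: {quality}")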
Keep Open Bug Counts Visible

The number of bugs found isn't by itself an assessment of quality. For example, low testing depth may turn up a low number of bugs, whereas high testing depth will likely turn up more. And even a low bug count can come with a low subjective quality assessment: "I didn't test much, but everything I tested was broken."
The team should know about all bugs and do their best to address them as quickly as they're found.
Will keeps the number of open bugs visible all the time by using a chart that looks similar to an agile burn chart: the number of known bugs on the vertical axis and time on the horizontal axis. He publishes a new version of the chart every day and posts it on the wall in the team room. The chart shows a curve that creeps up slowly over time and lurches down every once in a while when the development team digs in to fix bugs. Keeping the total low has become a point of pride for the developers, who now reserve a couple of days at the end of each development cycle to work on bugs. They don't want the cycle to end with a high bug count, since they know the number is visible to all and will affect the testing team's quality assessment.
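A chart like this takes only a few lines to produce. This sketch assumes you've already collected daily snapshots of the open bug count; the data and the file name below are made up:

    import matplotlib.pyplot as plt

    # Hypothetical daily snapshots of the open bug count over one cycle.
    open_bugs = [2, 3, 5, 6, 6, 8, 9, 4, 5, 7, 8, 3]
    days = range(1, len(open_bugs) + 1)

    plt.plot(days, open_bugs)
    plt.xlabel("Day of cycle")
    plt.ylabel("Known open bugs")
    plt.title("Open bug count over time")
    plt.savefig("bug_count.png")  # post a fresh copy on the team-room wall each day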
At release time, the product manager ultimately decides to ship or not, but at least the whole team is aware of the quality of the software being shipped. The decision to ship a product with a low level of testing and moderate quality would be a risky one, but now that risk is visible to all.