Exploratory Testing in an Agile World: An Interview with Matt Barcomb

[interview]
Summary:

Matt Barcomb works at LeanDog, where he is passionate about building collaborative, cross-functional teams and finding ways to make the business-software universe a better place to work, play, and do business. Heather Shanholtzer talked to Matt about exploratory testing on an agile project.

Matt Barcomb works at LeanDog, where he keeps busy with organizational transformations. He is passionate about building collaborative, cross-functional teams, providing holistic, sustainable solutions to organizations trying to improve, and finding interesting ways to make the business-software universe a better place to work, play, and do business. I had the opportunity to talk to Matt about the challenges of exploratory testing on an agile project.

Heather Shanholtzer: How do the challenges of exploratory testing on an agile project differ from those of a traditional development project?

Matt Barcomb: The challenges mostly revolve around how to incorporate both exploratory testing activities and a testing mindset into an agile workflow. For example, one of the most notable changes is involving testers up front during requirements analysis, applying a good testing mindset to gaps and edge cases before they become problems. Another challenge is how to execute exploratory testing incrementally, in small batches, so that it is not all left until the end. The key to incorporating exploratory testing into an agile workflow is small batches of execution and fast feedback cycles to the team and customer.

Heather Shanholtzer: Are there any benefits to exploratory testing on an agile project that you don’t get on a traditional dev project?

Matt Barcomb: Sort of. While the testing itself would probably find similar issues, the tightness of the feedback loop helps teams validate quality assumptions more quickly and gives the customer more frequent data points when deciding whether to ship a product at a given level of quality.

Heather Shanholtzer: Why is “visualizing” exploratory test results important? What exactly does that mean, and what are some techniques you recommend?

Matt Barcomb: It’s more about visualizing the possible exploratory activities than the results, although it’s definitely important to share the results from those activities with the team, too. This matters because it helps decouple product quality from quality assessment activities. The whole team owns quality activities, usually led or signed off by testers, but it’s important that customers own product quality. To do this, customers need to understand what activities have been performed, and could be performed, to give them more information about product quality. Ultimately, the decision to ship should be the customer’s, along with decisions about features, quality, date, and cost.

My favorite way so far to visualize these possible activities is to add a backlog for quality initiatives and a separate swim lane for them on a team’s card wall or kanban board.

Heather Shanholtzer: Your tutorial description mentions testing “quality considerations beyond the story cards” and “employ[ing] exploratory testing at and beyond the story level.” For those new to agile, please name some of the quality considerations and explain how exploratory testing differs at different levels.

Matt Barcomb: So, in agile, a user story is a small, specific feature that an end-user would find useful. A story card is simply a placeholder for the detailed specification of that feature. Many times, dev teams (even agile ones) only test the functional specification with regard to that feature. The problem is that there are many considerations for quality other than just the functional spec. One easy example is functional integration testing—making sure all the features work well together—or systems integration testing. Many of these can be automated, but they should also be explored with varying contexts and events.

However, there are broader concerns. What about legal concerns? Or usability issues? Or interaction? Or maintainability of the code? Or the build/deploy process? Or performance? The problem is that quality can mean so many different things. That is why it's important for a customer to determine what facets of quality are key drivers for a minimally marketable feature set!

Heather Shanholtzer: For those who weren't at the conference, what was the major takeaway you wanted people to gain from your tutorial?

Matt Barcomb: Whole team ownership of quality. Visualize the work to be done, including quality activities. Use small batches and fast feedback cycles to inform the team and customer about quality.

Matt co-led the tutorial Brewing Up Exploratory Testing in an Agile World at the Better Software and Agile Development Conference West, June 10-15, 2012.
