Nate Oster is giving two sessions at the upcoming STARCANADA 2013 that deal with empowering developers and testers through acceptance test-driven development. Oster's experience as an agile player-coach and his work with test-first practices are detailed in this terrific interview.
Nate Oster explains how acceptance test-driven development (ATDD), even if met with initial hesitation, alleviates stress and workloads for testers and developers alike. With hands-on gaming exercises, Oster helps teams, as well as attendees at his sessions, see the benefits of true agile testing.
Noel: As an agile coach, what types of strategies do you use to help people "unlearn many of the patterns that traditional development taught us"?
Nate: I'm passionate about what we call “player-coaching,” where a coach joins an agile team as a participant and models new behaviors and practices over a few iterations. That way, team members are learning new skills just in time, using their own work, and without disrupting their flow. I've found that player-coaching dramatically increases the team's odds of making long-term improvements.
By contrast, traditional classroom training might be good for creating buzz or planting new ideas, but unless people immediately apply the lessons to their own work, it’s not very likely that the new technique will take root. After a few months, retention of new concepts is usually very low. I first noticed this “half-life” of information in college when I'd cram for tests, and then promptly forget everything a few weeks later!
It turns out that the secret of retaining new information is forming an emotional connection with it. Lectures aren't very emotional. One strategy we use at CodeSquads is immersive simulations instead of classroom-style lectures and exercises. Our simulations strip down agile development practices to a few essential concepts and create hands-on “synthetic experiences” that give participants an emotional connection with those ideas, mostly through the stress of friendly competition. We're only going to remember a few key concepts in six months anyway, so we should laser-focus on the concepts we want to retain at a gut level.
A good example is the new “Kanban Racing Challenge” we're hosting for private clients, where players learn the basic practices of a kanban team by building a racetrack for RC cars. It's a physical case study that gets us out of software development, and just focuses on how we work together to maximize the flow of new features with high quality. It's interesting to watch: Team members instinctively begin testing and developing in parallel because the tracks are error-prone and the best design is unclear.
They adopt the agile testing principle of "building quality in" with continuous testing, because they can see at a glance that if they test later, they won't maintain a high-quality product and they'll fall behind the other teams. What always surprises me is how fast participants transfer these lessons from a physical game and apply them to software development—focusing on a few essential concepts with competition really does allow people to "get it" and then retain those lessons for the long term.
So our pattern for installing durable improvements is hands-on simulations followed by hands-on player-coaching.
Noel: As an advocate for acceptance test-driven development, do you think that ATDD benefits testers or developers more, or are the benefits truly equal for both parties?
Nate: I think that when we master agile testing with ATDD, there's no sharp line between specifications and tests. They're just a specification of what a feature should do before we'll consider it “done.” That means everyone on the team needs to be specifying collaboratively, to the point that there doesn't have to be a dividing line between the tester and developer roles. It's really just about who has the right skills to complete the work.
Noel: You've mentioned that testers on agile teams may struggle to keep up with the pace of development. Could you go a little more into how that happens, and what testers can do to attempt to introduce ATDD as an alternative?
Nate: Testers on agile teams can struggle to keep up if they have a waterfall mentality that tests are verifications that happen after development is done. This sets up a dangerous and unnecessary cycle, because testing tasks get compressed at the end of the iteration. Eventually, the testing starts creating “back pressure” on how much the team can complete in an iteration, which is what causes painful reactions like working overtime to fix the issues we find near the end or even testing in the next iteration. I call this “mini-waterfall testing.”
The problem of late feedback from tests was always present in waterfall, but it didn't really hurt until late in a project. With agile, you get all that pain every two weeks! Rather than Band-Aid the issue, we have to let that pain guide us to a systematic solution. If late feedback from testing is the problem, why don't we test earlier? If earlier is better, what's stopping us from testing first?
We have an opportunity to stop treating tests as verifications, because at best all that does is test the bugs out. We can shift to treating tests as specifications. To do that, we need to work together as a whole team to specify the behavior of new features in the language of the business by using concrete examples. Now we have a shared definition of done for a new feature before we develop it. With ATDD tools like FitNesse and Cucumber, we can even make these plain-language examples into executable specifications without changing them into a format that only a programmer can understand.
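As a minimal sketch of that idea (not FitNesse or Cucumber themselves), concrete business examples can be captured as a plain table and run directly as executable specifications. The feature, the `discount_for` function, and the example values here are all hypothetical:

```python
# Hypothetical feature: order discounts, specified by concrete examples.
# Each row is a business-readable example: (order total, expected discount).
EXAMPLES = [
    (50.00, 0.00),    # small orders get no discount
    (100.00, 5.00),   # orders of $100 or more get 5%
    (500.00, 50.00),  # orders of $500 or more get 10%
]

def discount_for(total):
    """Implementation written after the examples, to make them pass."""
    if total >= 500:
        return round(total * 0.10, 2)
    if total >= 100:
        return round(total * 0.05, 2)
    return 0.00

# Run the examples as executable specifications.
for total, expected in EXAMPLES:
    actual = discount_for(total)
    assert actual == expected, f"{total}: expected {expected}, got {actual}"
print("all examples pass")
```

Tools like FitNesse and Cucumber play the role of the loop at the bottom: they read the plain-language table and wire each row to the code, so the business-facing examples never have to be rewritten in a programmer-only format.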
Noel: Have you witnessed any initial resistance from testers or developers to the "test first, code later" mantra—and what's the reason behind it?
Nate: I think the most common cause of resistance to test-first is the myth that test-first means we write all the tests up front. I think that's a really bad idea. In fact, it's a prescription for rework and frustration. By contrast, ATDD is incremental. It's enough to start with just a few key examples of the new feature, pick one, and elaborate a handful of tests. Then we develop just enough new code to make those tests pass without breaking anything that's already passing.
These new passing tests give us confidence that we're really done with part of the work, like a “save point” in a video game. We go back to the key examples and pick the next behavior, elaborate with tests, and implement enough to make those tests pass. When we're all done we might have, say, twenty passing tests for this new feature, but we did not write them all up front. We actually collaborated throughout the iteration to specify these tests and make them pass incrementally, and along the way we avoided a lot of rework and guessing. I love the positive tension of specifying “just enough,” because it makes everyone collaborate daily instead of wasting time at a big long meeting.
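That rhythm can be illustrated in miniature. Assuming a hypothetical `parse_quantity` helper, each step elaborates one example and adds just enough code to pass it without breaking the earlier “save points”:

```python
def parse_quantity(text):
    """Hypothetical feature grown test-first, one behavior at a time."""
    # Step 1 behavior: plain integers ("3" -> 3).
    # Step 2 behavior, added later: ignore a units suffix ("3 kg" -> 3).
    return int(text.split()[0])

# "Save point" 1: the first elaborated example passes.
assert parse_quantity("3") == 3

# "Save point" 2: the next behavior passes without breaking the first.
assert parse_quantity("3 kg") == 3
print("both behaviors pass")
```

In a real iteration each assertion would start out failing, and the implementation would grow just enough to turn it green before the team picks the next example.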
Noel: For those who attend your session at STARCANADA, what do you hope they're able to take home to their own teams and projects?
Nate: At “Concurrent Testing Games: Devs and Testers Working Together,” we're going to play a tabletop simulation of the dysfunctions that occur with testing, even on agile teams. It turns out that when we treat testing as a verification process and depend on testers to test out the bugs, it creates all kinds of bad incentives that actually reduce our quality and productivity! People are so focused on the false efficiency of their job (developing or testing) that they lose sight of the real goal, which is producing a high-quality product. I'll walk through the agile practice of concurrent testing and share what I think are the persistent issues with testing on agile teams. Then we'll play the simulation again based on what we've learned.
In my experience, people walk out with a deeper appreciation of how bad incentives can sabotage an agile team's best intentions. They also experience how changing the way we collaborate with concurrent testing aligns our incentives to our real goal: frequently delivering high-quality software. By learning the five most common agile testing dysfunctions, attendees can explain the root causes of these behaviors on their own teams and design experiments for improvement.
Participants are also welcome to take the game and slides back to their own teams. Better yet, bring your team members to the session!
An agile player-coach and founder of CodeSquads, Nate Oster helps clients adopt lean and agile methods. Nate builds high-performance teams that adapt to change, embrace a pragmatic philosophy of continuous improvement, measure progress with new features, and deliver high-quality software that delights customers. As a coach, he inspires adopters with hands-on mentoring and simulations that provide a safe learning environment for new ideas. Contact Nate through his LinkedIn profile.