Ending Right

Summary:

Jeff Patton has been building software using the agile approach for a while now. His observations of how others implement agile development are far from exhaustive, but he has noticed that adoption often breaks down during the evaluation phase. In this column, Jeff walks through the agile development cycle and offers guidance on the right way to conduct an evaluation at the end of each cycle.

For many, "agile" means "We've stopped writing documents," which doesn't actually mean you're practicing agile. It means you're good at justifying bad behavior. One of the tenets of agile development is the idea of a healthy development cycle. In XP it's called an "iteration"; in Scrum, a "sprint." But the basic idea of a complete cycle is the same across these methodologies. And, sadly, many agile teams have broken cycles. This column is about diagnosing and fixing busted cycles, in particular at the end of the cycle, where I see most teams get a bit sloppy.

Figure 1: The three parts of a healthy development cycle.

A good cycle has three parts: planning, performing, and evaluating.

Planning
In the planning part we decide what we're going to do. Usually, that means the amount of work we're going to take on. To do this, we'll talk about the pieces of software to build and, ideally, write down acceptance criteria for each piece so we're sure we know what "done" means for each one. Many teams even define a working agreement, referred to as the "definition of done" (DoD). The DoD usually states that "done" means coded and tested, which is a big advance for testers, for whom "done" used to mean only "coded."
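
To make the working agreement concrete, here's a minimal sketch in Python of a definition of done treated as a checklist. The specific criteria are hypothetical; the point is only that "done" is the conjunction of all of them, not "coded" alone.

```python
# A hypothetical definition of done (DoD) written down as a checklist.
# The criteria below are illustrative; every team negotiates its own.
DEFINITION_OF_DONE = [
    "coded and peer reviewed",
    "unit tests written and passing",
    "acceptance criteria verified by a tester",
]

def is_done(satisfied: set) -> bool:
    """A piece of work is 'done' only when every criterion is satisfied."""
    return all(criterion in satisfied for criterion in DEFINITION_OF_DONE)

# Coded but not tested is no longer "done":
print(is_done({"coded and peer reviewed"}))  # False
```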

Performing
Building software, along with all the collaboration it takes to do so, is the performing part. An important part of performing well is transparency, which is another way of saying it's easy to tell how far along we are. Agile folks often show progress on task boards or with burn-down charts.
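
For illustration, here's a minimal sketch of the arithmetic behind a burn-down chart, using made-up numbers: remaining work is compared against an ideal straight-line trend from the sprint's total down to zero.

```python
# Burn-down arithmetic with hypothetical numbers: remaining task-hours,
# recorded at the end of each day, against an ideal straight-line trend.
total_work = 100                        # task-hours planned for the sprint
sprint_days = 10
remaining = [100, 92, 88, 80, 75, 70]   # actuals so far (days 0 through 5)

for day, left in enumerate(remaining):
    ideal = total_work * (1 - day / sprint_days)
    status = "behind" if left > ideal else "on track"
    print(f"day {day}: {left:>3} hrs left (ideal {ideal:5.1f}) -- {status}")
```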

Evaluating
Evaluating is the most important part of a healthy cycle. It is where we look at what we've done and make corrections to the product, the schedule, and the process we're following. This responding part is what makes agile development agile and the part where I see most teams start to fall down.

The 3 Ps of Evaluation
Evaluation is difficult. Honest evaluation might reveal we're building the wrong thing or not moving fast enough. It may result in the realization that we're not being rigorous enough with the process we're following or that we're mistaking process compliance for process effectiveness. The stuff we evaluate falls into three categories: pace, product, and process.

Pace
Pace is where we measure how much we've done. We may have planned on building five features in the last sprint but only completed four. So, what does that say about how fast we're moving? How does that affect the scheduled delivery date? Do we fool ourselves, say, "We'll get faster next time," and make excuses for poor performance, or do we face facts and adjust the schedule to reflect how fast we're really moving? It's OK to say, "Let's go one more sprint. If our velocity stays the same, we'll adjust the schedule." It's not OK for managers to say, "You guys committed to getting this stuff done. You'd better figure out a way to make up the lost time."
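
The schedule arithmetic is simple enough to sketch. Using the numbers above (five features planned, four finished) plus a hypothetical backlog of twenty remaining features, a forecast based on observed velocity looks like this:

```python
# Reforecast using observed velocity rather than planned velocity.
# The backlog size (20 features) is a hypothetical figure.
import math

planned_velocity = 5      # features per sprint, per the plan
observed_velocity = 4     # features per sprint, per the last sprint
backlog = 20              # features still to build

optimistic = math.ceil(backlog / planned_velocity)    # 4 sprints
realistic = math.ceil(backlog / observed_velocity)    # 5 sprints

print(f"plan says {optimistic} sprints; observed pace says {realistic}")
```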

Product
We'll need to inspect what we've built. Not the parts, but the whole thing. I see many teams do a product demonstration, which is a good idea. Knowing you'll have to show the product to your peers and stakeholders is strong motivation to finish and do a good job, but don't stop there. A common concern of folks adopting agile development is that in the rush to build more functionality faster, the software will become riddled with bugs or turn into an unmaintainable ball of mud. If you don't pay attention to quality, you will indeed end up with a bug-filled behemoth.

When evaluating your product, it's not sufficient just to look at it. We need to dig a bit deeper. This digging-deeper part is a bit of a blind spot for many agile teams. Before you wrap up your product review, do the following:

  • Ask testers to speak to functional quality
    Is the product full of bugs, or is the functional quality good and improving?
  • Ask developers to speak to code quality
    Is the code in good shape? Well designed? Easy to change and to add new functionality? Or is the rush to finish features compelling us to take shortcuts?
  • Ask UI designers to speak to user experience
    The product may look OK, but how is the user experience? Is it easy for users to accomplish what they need to in the system? Are the look and feel and the general quality of the experience in line with your company's standards? Or are we just cramming crappy features into the product?
  • Ask the product manager to speak to benefit and releasability
    Since we don't get value from software until we release it and people use it, it's important to speak to how close we are to being ready to release. As anyone seasoned in software development can tell you, finishing all the features we planned on doesn't necessarily mean we're arriving at a product people will use and value. The product manager should talk about what is necessary to build toward a releasable product. This may involve adding more features, changing some features already built, or removing features that aren't really important.
  • Use simple, subjective assessment
    Don't get too rigorous about your measurement of quality here. Adjectives and subjective evaluations are pretty good ways to describe where you believe you are.

To make things quick, I've been using a fist-to-five approach. For example, ask testers, "How do you feel about the functional quality of the product after the work done in the last cycle? Give your evaluation on a scale of 1 to 5, with 5 meaning fantastic, 3 meaning so-so, and 1 meaning not at all acceptable." Since most people have five fingers, this approach works pretty well. I ask the question and then ask everyone to give their rating by raising a hand with some number of fingers up. When the team sees a disparity between different members, we know it's time for a conversation.
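
As a small sketch with made-up votes: what matters is less the average than the spread, since a wide spread is the signal that a conversation is needed. The disparity threshold of 2 here is an assumption, not part of the technique.

```python
# Fist-to-five tally: a wide spread of votes signals a needed conversation.
# The votes and the spread threshold are illustrative assumptions.
votes = {"tester_a": 4, "tester_b": 2, "dev_a": 4, "ui_designer": 3}

spread = max(votes.values()) - min(votes.values())
if spread >= 2:
    print(f"spread of {spread} across the team: time for a conversation")
else:
    print("ratings are roughly aligned")
```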

Process
The last thing you might do to end your cycle's evaluation phase is a process retrospective or reflection session. This is where we look back at how fast we're moving, the quality of the product, and the general health of the team. Looking at all those things lets us answer the question "What will we try to do differently?" The answer to that question usually will be changes to the agile process—the specific ways we do things.

The retrospective is much more productive when done on the back of a healthy pace and product evaluation. If you're engaged in agile development, look closely at how you evaluate the results of your cycle. Do you look thoroughly at pace, product, and process? If not, you may not be getting the real benefit of short development cycles.
