Howard Deiner is an independent software consultant who specializes in agile process and practices at Deinersoft Inc. In this interview, Deiner explains how agile testing is different from traditional testing, how to get an organization to try new things, and how the Apollo space program utilized agile techniques.
Jonathan Vanian: Let's start with a little background about yourself: what you do and what you work on. You can take it from there.
Howard Deiner: Sure. I mean, how many years has this been? Since 1975. I don't even want to do the math on that. It just makes me sad. You name it, I've worked on it in software.
I started off as a programmer, doing scientific programming, all kinds of stuff. I eventually got into commercial software, and I still think of myself in that realm. After working on a few products, I found myself in a position where, instead of making things, which is what I really liked doing, I was making teams and then making people more effective.
JV: Ah, so you got into a management position?
HD: Well, yeah, management, then into coaching.
JV: Coaching, yes.
HD: Yeah, the stuff I do nowadays is coaching organizations, everywhere from the executives and the C-levels and stuff, all the way down the stack to what I really consider myself at home with, which is like, developers.
JV: That's where you started from, and that's what you feel a little more comfortable with.
HD: I still like building stuff, to tell you the truth (laughter). Nowadays I get called in for gigs everywhere, from “Help us manage our business better” to “Figure out how to align ourselves and get better results.” I'm doing some stuff right now with agile engineering practices, some of that. So, we do weddings, bar mitzvahs, whatever works.
JV: (Laughter) Yeah, so I think this would be a great segue into this conversation about agile testing. Explain a little bit about what agile testing is and how it differs from traditional testing approaches.
HD: You know, it is, in a sense, an extension of the whole continuum: we used to do a very predictive sort of development, of software engineering, and now we're moving more towards the craftsmanship stuff. And that hit a sweet spot with me, because it's not a matter of developing this stuff and then testing it at the end and magically everything will work. Software's too complex for that.
JV: What do you mean by craftsmanship?
HD: The whole aspect of writing good code. What does it mean to satisfy people? How do you keep them happy? And like I say, there are things on the product side too. The lean start-up stuff is a big deal for me.
JV: Sure.
HD: But on this end of it, it's kind of pivoting organizations away from the old, traditional style of analyze it, design it, develop it, then test it, and expecting that that's going to give you quality products. Nowadays, I'm trying to tell every place to push all that testing early. If we can get testing and requirements back together again and start iteratively developing our products, they turn out so much better in the end.
JV: What's the response been like when you tell organizations to make that pivot? Is there some resistance?
HD: Yeah. To be honest with you, it seems like an easy message; I'm thinking from my developer kind of stance. But many times management also needs to make that pivot into leadership, and it takes an act of faith, a bit of courage, to jump over that chasm and say, "You know what? Let's give it a whirl." I mean, honestly, organizations that do it never look back. Developers who have never been exposed to things like test-driven development and acceptance test-driven development, once they see it, they never look back. It's infectious.
JV: Does it matter what industry or what an organization is or do you just see it sort of across the board?
HD: Look, I've worked in like a number of different industry sectors, and I've had really great results in embedded systems, real-time stuff, things that are non-deterministic even, all the way through more traditional IT sort of development. Like I say, it's a message that most developers naturally want to hear. It's a natural thing to want to make a little bit of change and test that and make a little bit more change and test that. Making it a little bit more concrete and saying, "Well, actually there's a process. If we're going to be doing this little testing along the way, can't we make that repeatable? Can't we make it fast?" And all these things start to twine together.
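To make that loop concrete, here's a minimal sketch in Python using the standard library's unittest, along the lines Deiner describes: change a little, test a little, with the tests fast and repeatable. The function and test names are hypothetical, invented for this illustration; they are not from the interview.

```python
# A minimal "small change, fast test" loop: the tests run in milliseconds,
# so they can be re-run after every tiny change.
import unittest


def price_with_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (hypothetical example code)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class PriceWithDiscountTest(unittest.TestCase):
    # In the test-driven style, each test is written first (red),
    # then just enough code is added to make it pass (green).
    def test_ten_percent_off(self):
        self.assertEqual(price_with_discount(100.0, 10), 90.0)

    def test_rejects_negative_discount(self):
        with self.assertRaises(ValueError):
            price_with_discount(100.0, -5)


if __name__ == "__main__":
    unittest.main()  # fast and repeatable: run after every small change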
JV: Looking at your website, it's interesting to see some of the presentations, and you have one that outlines the traditional manufacturing role and how that doesn't really fly in today's testing environments. Can you explain that? Testing in small batches, as compared to the old-school methodologies.
HD: Oh, absolutely. You know, that comes out of the lean manufacturing, the production sort of work, and people forget about the fact that if you can get rid of wastes inside of the system, you're going to get better results out of it. I've been running a new exercise called "Death by whip."
JV: That sounds interesting.
HD: Yeah, it actually has to do with Legos, binary numbers, plastic bags, you know, the traditional sort of agile exercises. We simulate in one scenario what manufacturing would look like if we had stations, and then we kind of pull all the things back together, which is what waterfall does: we send them off to the silos, get our results at the end, and then try to put them all together. Usually people don't make very much with that. Then we have another scenario where we set up an assembly line and try to push work onto it, and that gets nice and messy. I love that during the exercise. And by the time we tell people to single stream, to get really small batches, they actually find out that they have lots more efficiency.
When you start to apply this and ask why small batches work so well with software development and testing in general, it comes down to less defect waste. There's a quasi-exponential curve where the cost of defect remediation is really, really high if you find stuff at the end, and it falls as you catch things earlier. If we test during integration instead of at the end, there's still some waste involved, because by then we don't remember what we were doing. But keep pushing closer and closer to when the error actually occurs, to the moment we actually code something, make a mistake, and say, "Oh, my god, we had a side effect." Imagine that. That's when defect waste really goes down.
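One way to see the shape of that curve is with a toy model. The sketch below is not from the interview: the growth factor and cost units are invented for illustration. It assumes one defect per change and that a defect gets more expensive to fix the more changes pile on top of it before it's caught, then compares testing after every change (batch size 1) with testing only at the end of larger batches.

```python
# A toy model of the quasi-exponential defect-remediation cost curve.
# All constants here are made up; only the shape of the result matters.

COST_PER_CHANGE_SINCE_DEFECT = 1.0  # hypothetical base cost unit
GROWTH = 1.5                        # quasi-exponential growth factor


def remediation_cost(changes_since_defect: int) -> float:
    """Cost to fix a defect found `changes_since_defect` changes later."""
    return COST_PER_CHANGE_SINCE_DEFECT * GROWTH ** changes_since_defect


def total_cost(num_changes: int, batch_size: int) -> float:
    """Assume one defect per change, caught when its batch is tested."""
    cost = 0.0
    for i in range(num_changes):
        # How many further changes land before this batch gets tested.
        distance = batch_size - 1 - (i % batch_size)
        cost += remediation_cost(distance)
    return cost


if __name__ == "__main__":
    for batch in (1, 4, 16):
        print(f"batch size {batch:2d}: total cost {total_cost(16, batch):10.1f}")
```

With these made-up numbers, testing after every change costs 16 units, while one big batch of 16 costs over 1,300; the exact figures are arbitrary, but the blow-up at the end is the point.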
JV: And what exactly could constitute a defect waste? What's an example of something that you could find?
HD: Let's take something that involves the entire software stack. Let's say we have a three-tier type of architecture: we've got some back-end database kind of stuff going on, we have a middle layer with logic, and then we have a presentation-type layer. And we make a change, a change to add a new column that we need to bring all the way back up the stack. We may find that making that change made something really obscure that used to work stop working. And the cost of doing a complete regression test is usually really high, so people tend not to do them, or they do them only at the end, when they're under time pressure.
And it ties in with the inventory waste. We have a large batch of stuff that's yet to be tested; we're going to forget what went on, and with all this stuff piled up in front of us, things are going to happen. Then there are all those hand-offs, in lean thinking. Each of those hand-offs, those documents that say, "Here's what I did," that big old list of stuff, the waste of going through there and missing things, it's a bad thing. And there's the waiting that occurs: the time between when the developer makes that mistake, that little side effect, and the time that it's caught. All that time, they forget what they did. Even if it's a couple of days, it's a hard thing. That's why continuous integration is such an important part of agile engineering and that whole set of things we want to do to get better.
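Deiner's three-tier scenario suggests what a cheap, automated regression test might look like. The sketch below is an assumption-laden illustration, not his code: a hypothetical customers table gains a new column, and a fast in-memory test confirms that the middle-layer logic that used to work still does, minutes after the schema change rather than months later.

```python
# A hedged sketch of catching the "new column broke something obscure"
# side effect early. The schema, table, and function names are invented.
import sqlite3


def load_customers(conn: sqlite3.Connection) -> list[dict]:
    # Selecting explicit columns (rather than "SELECT *" with positional
    # indexing) is what keeps this logic working when a column is added.
    # If it did break, this test would fail on the very next run.
    rows = conn.execute("SELECT id, name FROM customers").fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]


def test_load_customers_survives_new_column():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
    # Simulate the change Deiner describes: a new column in the schema.
    conn.execute("ALTER TABLE customers ADD COLUMN region TEXT")
    assert load_customers(conn) == [{"id": 1, "name": "Ada"}]


if __name__ == "__main__":
    test_load_customers_survives_new_column()
    print("regression test passed")
```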
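Continuous integration is what runs a suite like that on every commit, shrinking the waiting waste between mistake and detection. Here is a minimal sketch of the idea in Python, assuming a unittest-discoverable test layout; a real CI server adds scheduling, notifications, and history, but the core loop is just this.

```python
# A minimal sketch of the continuous-integration loop: after every change,
# run the whole test suite, so the gap between making a mistake and hearing
# about it is minutes, not days. The test command is an assumption;
# substitute whatever runner your project actually uses.
import subprocess
import sys


def run_test_suite() -> int:
    """Run the full suite; return 0 on success, non-zero on failure."""
    result = subprocess.run([sys.executable, "-m", "unittest", "discover"])
    return result.returncode


if __name__ == "__main__":
    # A CI server would invoke this on each commit and flag the exact
    # change that broke the build while its author still remembers it.
    sys.exit(run_test_suite())
```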
JV: Do you think there are some projects that lend themselves more to the use of agile techniques, versus some projects that, I don't know, maybe because of the culture of the organization, are inherently going to be made this old-school way? Are there some projects that are just more adaptable to agile practices?
HD: You know, I'll be honest with you. There actually will be some.
JV: There will be some.
HD: It's not going to be the stuff that people normally deal with. It's going to be where the risk is high.
JV: Okay.
HD: If you're developing an avionics system, and you want to make sure that the plane can land itself or whatever it's supposed to do, where human life is at risk, you're going to have to go through a much different kind of process to actually figure out that everything is good. In fact, when you think about it, the real end game there is verification and certification, which is different than just testing. I wouldn't want to be in a rocket where only agile development was done (laughter). I'd actually want to make sure at the end that we did a slow and careful evaluation of it. The reason it doesn't come up very much is that those sorts of systems are kind of few and far between.
JV: Right.
HD: Most of the stuff that we do nowadays, we take these short cycles, and that's the best way to go about it. Curiously, if you actually look back in history, even NASA, when they were developing the Apollo program, really had a very agile technique of going through and taking small steps and verifying as they were going that they were doing the right kind of product.
JV: That's interesting. Can you go into a little bit more of that?
HD: Well, it comes from Craig Larman; he did a lot of the original research on it. Just think about the project as a whole. You have a ten-year project, and eventually the goal was to put men on the moon. And along the way, there were lots of small steps. They would build a booster and have an unmanned launch of it and make sure the booster worked okay. Then they'd add complexity on top of that and say, "Okay, let's put a booster on with the right kind of weight and check that the centers of gravity all work." Then they would put men inside of it. There was the Mercury component, then Gemini. Again, it was incremental development toward that final product.
And if you compare that with things like the Hubble Space Telescope, which was the classic big waterfall kind of project, they had a fairly minor defect: they forgot about the fact that gravity on Earth will curve the mirror differently than in space, where there's no gravity. This basically blinded the telescope. And I forget how much it was, something like eight billion dollars, to go fix that. Classic sort of, "Well, we'll fix it at the end when we find there's a problem."
JV: If only we could have found that earlier in the process.
HD: Absolutely. I mean, again, it would have been incremental sort of stuff. They would have launched stuff and then realized, "Oh my god, look at that. The mirror that we wanted to get up there is going to change a little bit in curvature. Hmm. Let's figure that out."
JV: Talking to these people who have been part of these organizations for so long with this old mindset, what are some of the challenges of trying to get someone to pivot to using an agile technique?
HD: Well, you know, it works best if you don't just say, "Here's a bunch of practices. Follow these things."
JV: Sure.
HD: Because the complexity of software that we write is huge; there are a lot of moving parts. The thing to do is to say, "What problems are we trying to solve, and how can we measure those things along the way?" So if the problem we want to solve, for example, is faster delivery of product, then we can start to understand how to bring that testing forward: verify the stuff that we're doing iteratively; do it such that it costs very little to actually ship it and get feedback; complete that feedback loop with our users and our customers, confirming that we're actually making the right thing; and then shorten the feedback loop about whether we've built the thing right, which is going to be all of the unit testing, the acceptance testing, the automated regression testing.
Once you start presenting that to executives, they may get a feeling that, "Hmm, maybe there's something to all this. Give it a try. You know, what's the worst that could happen? Could we spin our wheels for three months and produce very little?" Possibly, but, you know, if you look at other industries, that's usually not the case. So, once organizations take that leap of faith and give it a whirl, I've seen really good results from just about every place that's tried it.
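The "built the thing right" half of that loop is easiest to see in an acceptance-style test written in the customer's vocabulary. Here is a hedged sketch in Python; the ShoppingCart class and the scenario are hypothetical, chosen only to show the given/when/then shape, not taken from the interview.

```python
# A sketch of an acceptance test: the scenario reads in the customer's
# terms, and automating it makes the feedback cheap on every iteration.


class ShoppingCart:
    """Hypothetical domain object, invented for illustration."""

    def __init__(self):
        self._items: list[tuple[str, float]] = []

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)


def test_customer_sees_running_total():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 1.25)
    # Then the total reflects both purchases
    assert cart.total() == 13.75


if __name__ == "__main__":
    test_customer_sees_running_total()
    print("acceptance test passed")
```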
JV: Fantastic. All right. I think this is a good stopping point for us, and thank you very much for taking time.
HD: It's been a pleasure. I'm so happy that you had the time to chat with me.
Howard Deiner is an agile coach who works with individuals, teams, and organizations to help them achieve their business objectives. He has a varied background spanning thirty-eight years in the industry, with extensive domain knowledge in commercial software, aerospace, and financial services. He has played many of the roles in the development arena, such as developer, analyst, team lead, architect, and project manager.