Matthew Attaway has worked as a tester, developer, researcher, designer, manager, DevOps engineer, and elephant trainer. He currently manages the open source development group at Perforce Software. In this interview, Matthew talks about automated testing and agile as well as dealing with excessive test documentation.
JV: We are on. All right, this is Matt Attaway. Matt, thank you for joining us today.
MA: It’s my pleasure.
JV: Why don’t we start off with just having you talk a little bit about yourself, your career, and your experience.
MA: Sure. I’m Matt Attaway. I’ve been here [at Perforce] for almost fourteen years now; it’s actually been my entire career.
JV: Oh, wow.
MA: And I started as an intern. In 2000 we didn’t have a QA department, so I was one of the first two QA engineers. We built the whole group, and I worked in it for six or seven years before I started getting into development. I moved into the R and D team and did research on how to do version control and how to do software development in general. Then, I guess about four years later, I moved back to the QA department to lead a team. That was right when we were doing our transition, and moving to automation was a big change in our development organization, so I got to come in just as that was happening and kind of figure out how to make it work. I had, at the time, just a small team of three people, but we had ten projects that cycled through at random times.
JV: Wow.
MA: Yeah, it was interesting. It was all great when everything came in on time, because we had slotted them out so that every two months or so there would be three products to work on, but every once in a while there would be a backlog and all of a sudden all eleven would just hit us at once.
JV: Oh, yeah.
MA: Wonderful. That was great.
JV: Yeah, let’s probe into that then. You’re getting right into the heart of the matter, so let’s talk about this testing. What went on in this project? You all of a sudden had agile thrown at you; explain that.
MA: Right. Well, I think it happens to many software companies: agile was the cool new buzzword at the time … you know, that was a while ago now, and agile was already hip. A lot of people were already doing it, and we decided it was something we needed to do to become more responsive to our customers, right? The promise of agile development. But we make software that, historically, our customers only want to consume … they only want releases from us a couple of times a year, because it’s a big thing for them to take on an upgrade and deploy it across their organizations.
JV: For people who aren’t aware, what does Perforce make?
MA: We make version control software, generally aimed at large enterprise companies, so tens of thousands of users out there using one system. So yeah, they’re not looking to take on change on a regular basis. We ended up in a situation where we were trying to move to an agile development process, because we wanted that rapid iteration and that responsiveness. On the flip side, we had a customer base who expected us to release in a waterfall fashion, so our release teams were all organized around waterfall releases. We were still like, “Okay, every six months let’s ship the big thing, but let’s do it agilely inside of that.” The software we deliver isn’t SaaS and it’s not hosted in any way, shape, or form; it’s all delivered to the customer.
A lot of the excitement around agile, and now continuous delivery, is targeted at people who can deploy software at a moment’s notice and upgrade everyone in one go, or do a slow rollout like a Twitter or a Facebook. It’s a very different thing when you’re effectively … we don’t actually have boxes, but we’re a box software company.
It was kind of an interesting transition. As engineers, we wanted to do this, because it seemed like the right way to make software and we’d heard a lot of great things about the agile process. Also, I think every time you do one of those process changes, it’s kind of like a fad diet. It’s not that the diet actually helps you; it just kicks you out of your bad habits.
JV: For a period of time. For six months you’re looking great and then all of a sudden all heck breaks out.
MA: Exactly. So yeah, it was good for us. It broke us out of some of our old habits, but waterfall was still there; it was kind of this layer on top of us in release management. So there was a lot to rectify, from a process standpoint, of how we deliver this software, how we do these cycles and iterations. When you release every two weeks you feel the pressure of “Oh yeah, I really need to build and develop and test and be done in two weeks.” But when you know you really have five months until this feature goes live, you don’t have that same pressure, and it’s interesting.
JV: That’s a very interesting thought, and I’ve talked with other people a lot about that idea of suddenly putting that amount of pressure on your team when they may not be used to that sort of immediate deadline. How did you go about reassuring your team or getting them used to that sort of rapid delivery?
MA: Honestly, I don’t know if I ever entirely succeeded at it. What I would say is that, as part of moving to an agile workflow, we started having burn-down charts, so you’re really tracking the development within a sprint, or at least within one of these release cycles. For me as a QA manager, what we tried to do was get ourselves integrated into that flow, so that we could see when five features had dropped and not let them sit there waiting for us to complete the testing, which would bog down the burn-down chart and keep it from finishing out.
If testing backed up and we didn’t keep on top of that cycle, we’d see these plateaus where work was out of development and in test, so we had a nice visual indicator that we really needed to step up. On the flip side, though, sometimes you look at that and say, well, yeah, that took you two weeks to develop, but it’s really only like a day for us to test, and my team is also working on this other product. Like I was saying, we had eleven products, so sometimes it was, well, that one’s on fire, so you’re going to say to them, “Wait a minute, we’re going to put that one out and then we’ll come back.”
JV: You’ve got to prioritize right.
MA: For me, a lot of it involved just a lot of communicating with the other development leads. Like, “It’s cool, we’ve got this,” and trying to build metrics that show trends, so that given this set of data you know we’re good; we’ve done this many times before and we’re going to make this work.
It was good to have that kind of visible accountability in the burn-down chart, but it did take a lot of communicating across teams to be able to explain why there were those small plateaus sometimes. It was, I would say, always stressful for the QA team, and I think it still is, even today, and we’ve been doing this for a while now. Back in the waterfall days you would get a big pile of code dropped on you; you were kind of like a pig, you just bathed in it. You’re rolling in this stuff; you get to just play with it.
JV: I like that image a lot, the pig bathing in code.
MA: Well, especially for exploratory testers, that’s kind of what … they like getting something that’s fairly complete, because then you know the work that you’re doing is meaningful. In the agile cycle, sometimes when we do exploratory testing we find that the developer is trying to get that burn-down chart to behave, so they’re pushing out code as fast as they can. Sure, the automated tests they wrote passed, but they didn’t necessarily do their due diligence to really examine the code and play with it. So they hand you, as a tester, something that’s either potentially not fully tested or just in progress, like, okay, we’re doing this piece now.
My favorite is always, we’re going to do the functionality this sprint and next sprint we’ll handle the error cases. It’s like, no, no, no, no. I’m a tester; my whole job is to find the error cases. You can’t give me functionality, ask me to sign off on it, and then come along two weeks later and work out what the error behavior is.
JV: Yeah, you’re not used to working like that. That’s not your mindset.
MA: Not your mindset, and it’s too easy to wander off the path. They’re like, “Okay, here are the five paths that work.” Well, if you only know the five paths that work, there’s not a whole lot I’m going to bring to it, because my job is to find the stuff that doesn’t work.
JV: Sure.
MA: So I feel like when we were in the waterfall world we tended to get more complete features. Development had been beating on this thing for three months and then “threw it over the wall,” as the saying goes. But as a tester you also tended to have some downtime. There’s always a little gap between finishing up the last round and the next hand-off, and so you had some downtime. You could kind of explore problems, or try to.
I find testing is kind of a destructive activity; you’re constantly breaking someone else’s stuff, so it’s nice to have time to build something and collect yourself before the next round comes, and with agile you don’t really have that. So it’s been a lot of adaptation on our part to this new constant flow of work, figuring out how to find those moments to pause even as the work keeps coming and coming in various states, and still do quality work.
JV: Right. And you were saying how you added automated testing, so how did you go about balancing automated testing with the exploratory testing that you obviously are experienced with?
MA: So I came to the conclusion that we had a really small testing team for the size and number of products we were testing and for the release cycles, so we could never afford to do something that wasn’t useful once it was done. For me, the thing that felt like it wasn’t useful once it was done was the test plan. You’d spend a fair amount of time on it: okay, there’s stuff coming in, let’s figure out what it is and write up the situations and test cases. While you’re writing it all up, it’s meaningful for you.
It’s a good kind of exploration of the problem space and it gets it all out, but generally no one looked at the document afterwards, and agile stories evolve a lot. One week you would get a story and start writing your test cases, and the next week they’d go, “Oh, we got rid of that design. Oh, that’s right, you weren’t at the meeting, because we didn’t invite you.”
JV: It’s agile, it’s fast paced.
MA: Exactly. And that’s important. I like the adaptive nature of agile workflows, and that people are always kind of rethinking and reworking things and showing them to customers; getting that feedback is great. But it means that, coming from a waterfall test organization where that’s how we did things, write the test plan, get the code, test the code, it just wasn’t working. We still needed traceability, though. You need to be able to come back to a feature and say, “Yeah, this is all the stuff we ran against it,” so that when we’re doing an analysis to figure out the root cause we can say, “Oh yeah, we just totally skipped that test stage. Oh, we skipped that platform.” So I ended up doing two things.
One of them was to change the way we wrote up our test plans. What we started doing was using exploratory testing to explore the product and find bugs the way we always had, and then keeping almost like a diary of the tests that we ran. You might spend eight hours with a product and do fifty or sixty different attacks, and at the end of it you would have this big list of attacks that you had run and platform variations. Then we could just paste that right into the issue. I believe the issue tracker is the one point of communication that developers and testers share.
Theoretically, you might have a wiki, Confluence or MediaWiki or something, where you store design docs and communicate about them. But there’s no guarantee either party is going to go to that or to pretty much any other tracking system. Your issue tracker, though, that’s where your testers are going to file the issues.
JV: Everyone is going to look at that issue tracker.
MA: Exactly. So I started putting as much information in there as possible. Most issue trackers are really easy to run reports against and all that jazz, so I could build metrics afterwards to show how many test cases we have for an epic, or how many test cases for a story, or how many of those are automated. It was really easy to just go to the issue tracker and say, “We don’t need a special field for this; just put it in as text.”
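The kind of report Matt describes takes very little code. Here is a minimal sketch in Java, under the assumption that the tracker’s test records have been exported to a hypothetical CSV file (test-records.csv) with columns for story, description, and whether the test is automated; a real tracker would more likely be queried through its own reporting API, but the counting is the same either way.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: count test cases per story and how many of them are automated,
// from a hypothetical CSV export of the issue tracker.
// Assumed columns: story,description,automated
public class TestCoverageReport {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("test-records.csv"));
        Map<String, int[]> byStory = new HashMap<>(); // story -> {total, automated}

        for (String line : lines.subList(1, lines.size())) { // skip header row
            String[] cols = line.split(",");                 // naive split; no quoted commas
            String story = cols[0];
            boolean automated = "yes".equalsIgnoreCase(cols[2]);
            int[] counts = byStory.computeIfAbsent(story, k -> new int[2]);
            counts[0]++;
            if (automated) counts[1]++;
        }

        byStory.forEach((story, c) ->
            System.out.printf("%s: %d test cases, %d automated (%.0f%%)%n",
                story, c[0], c[1], 100.0 * c[1] / c[0]));
    }
}
```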
JV: Was this any particular issue tracker tool that you were using?
MA: We actually have one built in that we used, and we still use it now; it’s hooked into the version control system. You can work from either side and they’re integrated, which is pretty cool.
JV: Okay, cool.
MA: But yeah, that ended up being a big thing for us, and it made for a more fluid process. We weren’t pre-generating a ton of work that we would throw away, and it put the information where everyone was looking. I remember not too long after we started doing this, I had one director come by and say, “This job is amazing; I’ve never seen this much information about how things get tested and what was broken.” I’m like, “That’s what we do all the time. Look at the test plans you asked us to generate.” But just bringing that information right in front of their faces where they saw it on a daily basis, that was a huge win for communication throughout the team.
JV: Yeah, and that doesn’t seem like an excessive amount of test documentation. Instead of putting out a big plan, it’s something you can easily read and notice right there. It’s not written in document jargon.
MA: Well, sometimes people still want kind of an old-fashioned test plan, and for that you can just suck the information out and put it in a CSV file. So, yeah, it was powerful. The other thing we decided was that our regression tests were going to be automated, and that was just going to be the end of the story. I was pretty much done with manual regression. In waterfall we didn’t have a lot of automation here; we [did] a regression pass of two to three weeks as we went through and made sure everything was solid.
If you take back that two or three weeks from every product release, you can say, “Let’s take all that energy and put it into test automation.” You get all your regression testing happening automatically instead of spending all that manual time. To do it, I had to make every person on my team a developer.
JV: Wow.
MA: Yeah. I had ten people on my team, and at the end of it all of them could code; all of them could build tests for their various automated systems.
JV: And some people didn’t have any experience coding before?
MA: Nope.
JV: No. All right. Well, what better way to learn than on the job.
MA: Yeah. We did a whole lot of work to make sure that they had access to training programs; Code School is great. But yeah, that was a big thing to do. We had to free up time for them to be able to learn, because it was really important to me that we didn’t say, “Okay, here’s your book. Go home at night and read all this, and by tomorrow…” People need to have their free time.
JV: You also said you were rebuilding the automated testing infrastructure.
MA: Right.
JV: What does that entail, besides work?
MA: It kind of goes into the efficiency bucket again. When I first started with my team, within the first couple of weeks I had them step back from their exploratory testing and start focusing on test automation. The team was very tiny; it was only three of us at that point, and one of them wasn’t much of a coder while the other was a pretty decent coder, and it’s like, okay, we need to figure out how we’re even going to automate these products. So I gave them two weeks each to just go and explore the problem, like what test frameworks we could use, all that jazz.
JV: Right.
MA: Not being coders, in some cases they started building test frameworks on top of existing tools. And when they did code, a lot of it wasn’t modular code; a lot of it wasn’t reusable code. It worked and it did the job, but to get things up and running quickly they took shortcuts. For example, our system has kind of a backend database that we need to talk to, and they would have a single database instance up on the internal network and all the tests would run against that. Then they would work very hard to make sure that each test only looked at the data it created.
We quickly got into a situation where our tests depended on previous runs: a test twenty minutes into the run was looking at something that got set up in the first test, so when that one test broke we couldn’t run it on its own; you had to do a full twenty-minute run. And talking to people, a lot of teams run like that. It’s so hard to do data management when you’re doing automated testing, and to be able to do it quickly. And again, they’re also often in a situation where they don’t have a huge development team either, because they hired testers.
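The order dependence Matt describes is the classic argument for giving every test its own state. A minimal JUnit 5 sketch of the idea follows; FakeBackend is an invented stand-in for whatever shared server or database the tests used to point at.

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.HashSet;
import java.util.Set;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Each test builds exactly the state it needs, instead of reading state
// left behind by earlier tests on one long-lived shared instance.
class IsolatedStateTest {

    /** Tiny in-memory stand-in for the backend the suite talks to. */
    static class FakeBackend {
        private final Set<String> users = new HashSet<>();
        void addUser(String name) { users.add(name); }
        boolean hasUser(String name) { return users.contains(name); }
    }

    private FakeBackend backend;

    @BeforeEach
    void freshState() {
        backend = new FakeBackend();   // brand-new, empty state for every test
        backend.addUser("alice");      // only the data these tests need
    }

    @Test
    void findsSeededUser() {
        assertTrue(backend.hasUser("alice"));
    }

    @Test
    void unknownUserIsAbsent() {
        // Passes or fails on its own; it never depends on what a previous
        // test created twenty minutes earlier in the run.
        assertFalse(backend.hasUser("bob"));
    }
}
```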
JV: I mean, that just seems like a huge issue nowadays, data management in testing.
MA: Yeah. Well, if you use something like Rails, Rails has a great system built in to handle it where you can actually set a flag in your test. You’re like, “Hey, stick this all in memory for the next ten commands, and then you can pop it and let it all go.” So you can get the machine into a state. I saw that and I really liked it, so I had one of my folks build a data management system for us: a central web service that we could send a request to and say, “Hey, build me a database in this state, at this IP, with this version and this data set,” and it would automatically do it, set it up in a nice clean bucket for us, and hand us back a URL.
And for all our test automation we started going through and ripping out all the bits where we were relying on previously existing state, and just having nice clean test cases that run against test instances that spin up on the fly. We were able to use it for exploratory testing too, because we have a web interface for the whole thing. So someone can go in and say, “Oh man, an automated test failed. That’s weird. I want that exact same state, that same version, make me one,” and then they can connect and do it by hand.
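To make that concrete, here is a rough sketch of what asking such a service for a fresh instance might look like from the test side. The endpoint, query parameters, and response format are all invented for illustration; the interview doesn’t describe the real service’s API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a client for a data-management service like the one described:
// ask a central web service for a backend in a known state, get back a URL
// to run automated tests (or an exploratory session) against.
public class TestDataClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // "Build me a database at this version with this data set."
        // Hypothetical endpoint and parameters.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://testdata.internal/instances"
                        + "?version=2014.1&dataset=regression-baseline"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The service hands back the address of a fresh, isolated instance.
        String instanceUrl = response.body();
        System.out.println("Run tests against: " + instanceUrl);
    }
}
```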
JV: And you’re still using the system now?
MA: Yes.
JV: And is it always like … I mean, it’s agile, so I’m assuming it’s always a work in progress, you’re always adding more things to it, or is it pretty much like, “Let’s stick with this. This is our new waterfall.”
MA: It’s always changing. I mean, that system has actually been running in the background for a little while now. I’ve since moved out of QA, I’m doing work in a development organization now, but just two weeks ago I was helping some folks who were looking at extending it to support new web applications and new test cases. So yeah, it’s something that’s still growing here at the organization. Another thing we did at the same time was move to one test framework, because for a while we were doing a lot of stuff in Java and we probably had a different test framework for every single application.
JV: Oh wow.
MA: Yeah.
JV: Yeah, that can be kind of a mountain to manage, I would imagine.
MA: It is, and Java is great, but it’s not really meant for behavioral testing, which is what we were doing. You can do it, but you have to be a very diligent developer about it, so I moved the team to Cucumber.
JV: Okay, and that’s better for behavioral testing.
MA: It’s better for behavioral testing because you describe the behaviors; you don’t write function names. It really enforces code cleanliness. One of the big problems we had that I was talking about [involved] copying and pasting. With Cucumber you really can’t do that. Each step has to be completely separate and stand on its own, so that steps can be chained together in unique combinations. It forced my developers to think about how to organize their code more cleanly. Again, it’s one of those forcing functions, just like having that agile release schedule every two weeks makes you think about things.
Having that different framework, where they had to make sure every step would run independently, made them work on that. And with the two together we got to the point where, to do our test plans, we would get three or four people in a room, put a notepad up on the board, and just start writing the Cucumber tests as a group. Once we were done it’s like, okay, those are all the problems we can think of to test; we can all go back to our desks and just start filling out the steps, and we’d have fully functioning automated tests in a week, sometimes before the application even existed, so it was cool.
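As an illustration of that style, here is a small, invented example of what whiteboard-first Cucumber tests can look like in the Java (cucumber-jvm) flavor. The feature text, steps, and FakeRepo stand-in are made up for this sketch; the point is that each step stands alone and can be reused in new scenarios without copy-and-paste.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step definitions for a hypothetical feature. The Gherkin lives in a
// .feature file and might read:
//
//   Scenario: A submitted change shows up in the history
//     Given a repository with one tracked file
//     When I submit a change to that file
//     Then the file history lists 2 revisions
//
// FakeRepo is a stand-in for the real system under test.
public class ChangeHistorySteps {

    static class FakeRepo {
        private int revisions;
        void addFile()       { revisions = 1; }
        void submitChange()  { revisions++; }
        int revisionCount()  { return revisions; }
    }

    // Cucumber creates a fresh instance of this class per scenario,
    // so each scenario starts from clean state.
    private final FakeRepo repo = new FakeRepo();

    @Given("a repository with one tracked file")
    public void aRepositoryWithOneTrackedFile() {
        repo.addFile();
    }

    @When("I submit a change to that file")
    public void iSubmitAChangeToThatFile() {
        repo.submitChange();
    }

    @Then("the file history lists {int} revisions")
    public void theFileHistoryListsRevisions(int expected) {
        assertEquals(expected, repo.revisionCount());
    }
}
```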
JV: Very cool. We’re coming close to ending here. Any final words of advice you have to the people listening and reading?
MA: I think my only piece of advice is to always be looking at how to improve. It’s hard to do; it’s hard to take the time to step away and ask how can I do this better, and I know we had problems here when we didn’t take that time. It’s hard to carve out the month you need to improve your life, but once you do you get three months back. Sometimes you have to suffer. Like, there was a month where I worked late nights so that my developer could build that test data management system; I took over his testing work and we all really kicked our [rears], but we’ve had this system ever since and the quality-of-life improvement is huge, so you just have to bite the bullet sometimes.
JV: You’ve got to put out that extra effort; always step up a little bit, step up your game just a little bit more.
MA: Well, it’s stepping up so that you can relax, and that’s the thing. A lot of times it’s easy to get into the mindset of “I’m going to have to do this thing and I’m going to be stuck doing it forever,” but you’re trying to look for improvements that get you more of your free time back.
JV: That’s a wonderful way of looking at the world. When you get work dumped on you, don’t think about the work; think about the relaxation that comes once you [get familiar]. All right, thank you very much, Matt.
MA: Thank you so much. I really appreciated talking to you.
A jack of all trades and master of some, Matt Attaway has worked as a tester, developer, researcher, designer, manager, DevOps engineer, and elephant trainer. He currently manages the open source development group at Perforce Software, but prior to that he led a test team working on the highest profile projects at Perforce. When not tinkering with the latest software development pipeline tools, Matt enjoys experimenting with cocktails and playing games of all sorts—preferably simultaneously.