The Future of Test Automation: A Conversation with Max Saperstone

[interview]
Summary:

Max Saperstone, director of test and automation at Coveros, chats with TechWell community manager Owen Gotimer about codeless automation, using ROI to drive test automation decisions, and why manual testing is here to stay. Continue the conversation with Max (@max) and Owen (@owen) on the TechWell Hub (hub.techwell.com)!

Owen Gotimer 0:01

Hello, and welcome to 404 Podcast Found. I'm your host, Owen Gotimer. This episode is brought to you by Coveros Training, offering expert training in Agile, DevOps, testing, and more. We'll have more on that later. For now, let's jump into the episode. Max Saperstone is the director of test and automation at Coveros, where he works with organizations to determine how best to test their applications to ensure high-quality, low-risk releases. I sat down with Max and talked about codeless automation, using ROI to drive test automation decisions, and why manual testing is here to stay.

Max Saperstone 0:44

Codeless automation is really the idea of, well, exactly what it sounds like: doing automation without any sort of code. One of the big things I preach is that automated tests are code, so they have to be treated as such, which means following good coding standards: keeping them in source control, making sure you go through code reviews, hopefully writing unit tests around the parts where that makes sense, integration tests, etc. And that is a lot of work, and it's a change from what most QAs are used to. They're used to doing a lot of manual testing. So there's this big push for, well, how can we get more QAs involved in this automated process? And the thought is, what if they didn't actually have to write code? So there's this idea that you can do drag and drop, or you can use other UI tools, to actually build your automated tests. It's taking record and playback a big step further, in the sense that not only can I record and play back, I can reuse the components and build my own tests from those. The idea, again, is just trying to make it simple and make automation a lot more accessible to a lot more people.

Owen Gotimer 1:57

Right. So you make it more accessible by removing that coding element from it. But there's still code at some level.

Max Saperstone 2:04

Oh, sure. It's just that the code is hidden away from anyone who has to see it. I mean, you sat through it when we did a little bit with Selenium IDE, and you saw that you're able to record all of your actions and produce a test without actually writing any code. But if you wanted, you could go in and mess around with all those steps and everything else. It's like that, but in a much more complex system. There are a bunch of different tools out there that let you essentially build your own automated tests without having to write any code or have any coding knowledge. And some of them even say, if you want, you can go under the hood and start tweaking and changing the code, writing whatever it is you actually want.
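To make that "under the hood" point concrete: when a recorder like Selenium IDE exports a test to code, the result looks roughly like the sketch below, written here in Python with Selenium's WebDriver bindings. The URL, element locators, and assertion are hypothetical placeholders, not details from the conversation.

# A rough sketch of what a recorded login test looks like once exported
# to real code. The page URL, element IDs, and assertion are made up;
# a recorder would capture the equivalents for your application.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                        # recorded "open"
    driver.find_element(By.ID, "username").send_keys("demo_user")  # recorded "type"
    driver.find_element(By.ID, "password").send_keys("demo_pass")  # recorded "type"
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()  # recorded "click"
    assert "Dashboard" in driver.title                             # recorded assertion
finally:
    driver.quit()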

Owen Gotimer 2:37

Yeah, that's awesome. Do you see any new challenges coming about with codeless automation? For people who are now going to be doing automation through maybe a drag and drop or click and record, is not knowing code going to be an obstacle for them, even with codeless automation?

Max Saperstone 3:00

So I don't know about any new obstacles, but I do think they're facing the exact same things the prior tools introduced. A few things I think have been fixed: I've definitely seen tests become a little more maintainable, and reliability has gone up a bit. But overall, a lot of the same problems exist, which is that you still need some sort of smart way of writing your tests. The problem, again, with automation is that a lot of people just take their manual tests and simply convert those into automated tests. That just doesn't work. Because as a manual tester, I'm looking at a thousand different things even though I'm just running my one test case, and your automated test only knows to check exactly what you tell it. There are a lot of ways you can get around that and do other things, but they don't come directly from the test case; you need to know a little bit more when you're actually writing it. So again, I think it solves some of the problems, and I don't think it necessarily opens anything up for not knowing code. But at the same time, you have the issue that you're also tied to that tool. If at any point you decide you don't like it, all of your tests are kind of stuck in there. And that's not a new thing. I've written a whole bunch of Selenium IDE scripts, and then I decided I didn't want to use Selenium IDE anymore; I was kind of stuck there. Same thing with UFT or any other tools out there. So I don't think that's necessarily a new challenge, but it is definitely something else you want to be aware of.

Owen Gotimer 4:26

Yeah, absolutely. And this is something we talked about, too. You mentioned directly converting manual tests to automated tests, and we've talked about how that obviously isn't the practice we want to be following. For people starting down the automation track, or people who may already be on it, what should we be automating?

Max Saperstone 4:43

So that's actually a great question, and I don't have a silver bullet answer. But really, the question I always like to ask is about your ROI. When you start doing automation, why do you want to do it? Because we need to speed up the testing process. Okay, great, what's slow about the testing process? Typically it's, oh well, we have to go through X, or Y, or Z. And a lot of times what I hear is, we go through this one thing 50 times. So take, for example, filling out a form that I want to make sure works with every single state. That is a very tedious, miserable thing to do, because I have to fill it out 50 times, once for each state, and it gets even worse when you start adding territories and other things. I think on the last project I was on, they had something like 57 different workflows through that. So one of the first things we did was automate filling out those forms. We didn't actually check every single page along the way; we just wanted to make sure that the workflow was the correct workflow and that we got to the end. Then someone still went through it once manually, on a single state, to make sure that everything actually looked right and the usability was there. But they only had to go through it once. There was a slight risk, but our risk assessment was: if it works manually on one state, and the workflow works on all the states, we're not worried about something being off on one of those random states, because we recognize it's going through the same pages; it's literally just sending different data, and we just want to make sure the data can be sent. That can speed things up a lot. Really, writing one test and saying, loop through 50 times with these different inputs, can save a lot of time once you identify those very repetitive, mind-numbing things to go after. The other big things you want to look at are things that are highly data driven, because those are very prone to human error. I don't know if I've told you this story, but how I actually got into automated testing: I think it was my second job, I was doing software testing, and the output of this program was a whole bunch of binary data. Literally just pages and pages of ones and zeros that we were supposed to look through and make sure it was the correct data. And there were probably plenty of errors that got past us, because that was a miserable, impossible task.
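As a concrete illustration of that "write one test, loop through 50 inputs" idea, here is a minimal sketch using Python and pytest. The submit_form helper, the state list (abbreviated here), and the "confirmation" page name are hypothetical stand-ins, not details from the project Max describes.

# One test body, parameterized over every state. submit_form() is a
# hypothetical helper standing in for whatever drives the real form
# (Selenium, an API call, etc.).
import pytest

# Abbreviated here; a real suite would list all 50 states plus territories.
STATES = ["AL", "AK", "AZ", "CA", "NY", "TX", "WY"]

def submit_form(state: str) -> str:
    """Hypothetical: fill out and submit the form for one state and
    return the name of the page the workflow ends on."""
    return "confirmation"  # stubbed so the sketch runs as-is

@pytest.mark.parametrize("state", STATES)
def test_workflow_reaches_the_end_for_every_state(state):
    # Only assert that the workflow completes, as in the anecdote; the
    # visual and usability checks stay manual and are done once, on one state.
    assert submit_form(state) == "confirmation"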

Owen Gotimer 7:05

You're doing it all by hand?

Max Saperstone 7:07

By hand, absolutely. We knew what the correct output was supposed to be. Some people got creative: they'd print things out on transparent sheets and lay them over what they needed to check. Other people would do something like a highlight and find-and-replace in Word to make sure everything matched up. Either way, it was a painful process. So that was actually my first entry into, we need to come up with some better way. We happened to have Perl available on the system, so I started writing some scripts that literally just ran the program, took all the output, and compared it to the expected output, because doing it by hand was a slow, painful process, and really, why not automate something like that? So anything you can think of that's very tedious or very error prone, especially with a lot of data, is usually ripe for automation, and it's usually fairly easy, because you're not looking at a UI or anything else; you're just making sure the output is correct. If you care about all the steps along the way, you want to be a lot more careful about your automation, and most likely you want to start following some of the other strategies we talked about: breaking down your one test into 20 or 30 or 50 automated tests, looking at different things, etc.
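Max's scripts were in Perl; the sketch below shows the same output-comparison idea in Python. It runs the program under test, captures its output, and diffs it against a known-good file, reporting the first mismatch instead of leaving pages of ones and zeros to the eye. The command and file name are hypothetical.

# A sketch of the golden-file comparison Max describes, rendered in
# Python. COMMAND and EXPECTED_FILE are hypothetical placeholders.
import subprocess
import sys
from pathlib import Path

EXPECTED_FILE = Path("expected_output.txt")  # the known-correct output
COMMAND = ["./program_under_test"]           # whatever produces the output

def main() -> int:
    # Run the program and capture everything it prints.
    actual = subprocess.run(
        COMMAND, capture_output=True, text=True, check=True
    ).stdout
    expected = EXPECTED_FILE.read_text()
    if actual == expected:
        print("PASS: output matches the expected file")
        return 0
    # Point at the first differing line rather than eyeballing pages of bits.
    for i, (got, want) in enumerate(
        zip(actual.splitlines(), expected.splitlines()), start=1
    ):
        if got != want:
            print(f"FAIL at line {i}: expected {want!r}, got {got!r}")
            return 1
    print("FAIL: outputs match line-for-line but differ in length")
    return 1

if __name__ == "__main__":
    sys.exit(main())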

Owen Gotimer 8:16

Yeah, once you start to get that automation in place, your goal isn't necessarily to automate everything, right? You're still going to do manual testing?

Max Saperstone 8:27

Exactly. And I hear this, unfortunately, from a lot of execs: "Oh, well, we're going to replace all manual testing." Sure, that's a great thought, but it's not something that happens. I mean, even places like Amazon and Etsy claim they have no manual testers. I'll take an excerpt from Jeff Payne, my boss, your boss as well, who likes to say that they do, in fact, have testers. Their testers, however, are the end users, and the end users find all the issues and send them back. There is manual testing being done, whether you do it internally or not. So that really isn't, and I don't think it should be, the goal. You may end up needing less manual testing, and that's what you want. But in several organizations I've been in, we found the manual testers do just as much; they're simply focused on other areas. They're able to get to areas they weren't able to test before, because automation is taking some of the workload off of them. Because honestly, when you're ready to release, I've never once heard a tester say, "Great, I'm satisfied. I've checked absolutely everything in the app, and I know we're bug free. Let's ship." It's, well, here are the areas I tested, here are the risk areas, here are the ones I haven't even touched, so good luck. If you can let testers use their time more effectively, that's great, and that really should be what you're driving at with automation. Then, sure, build it up so you need less and less manual testing over time. But I'm not a proponent of "100% automation, fire all the testers." I don't believe that's a reasonable thing, quite honestly.

Owen Gotimer 9:52

Back to the Amazon example, with their testers being the end users: Amazon has done their risk analysis and decided, we're going to risk putting the manual testing in the hands of our customers, knowing that if something goes wrong, we don't think they're going to stop using us; they're going to let us know, and we'll be able to make the changes.

Max Saperstone 10:11

Rather than Amazon, take Netflix, for example. I think they have something like 17 minutes from code check-in to release in their cycle, which means that if their end users find an issue, great, it only takes 17 minutes, once they fix the issue, to actually push it out. So that's very low risk. But you're right. And their software is, I guess, essential enough that users are going to continue to use it anyway, right? Their risk assessment is, "Who cares? Manual testing would take a lot longer than that; why not just let it get fixed elsewhere?" That really brings in this whole idea of how DevOps needs to fit into your whole testing cycle. Having a great process for releasing software doesn't necessarily mitigate the risk of introducing bugs, but it does mitigate how difficult they are to fix, because you don't need to worry about hot fixes; you can just push everything through your actual pipeline. And there's no way to do that without a lot of good automation in place.

Owen Gotimer 11:10

So let's talk about that a little bit more. Obviously, DevOps talks about continuous integration and continuous delivery, however you want to look at it. And continuous testing is a big part of that.

Max Saperstone 11:20

Continuous testing is just that. The term popped up, I think, about two years ago or so, and for a long time I was talking about how the term is BS. Part of me still thinks it is, but really, it's a marketing thing; the point is that testing is, in fact, important, so let's make sure we remember to include it. For most testers, I don't think it's really a new concept. It's this idea of, let's test as soon as we can, and let's test as long as we can. But really, it's what makes the whole DevOps process successful, because otherwise all you're doing is taking software in whatever state it's in and shoving it out to your clients as fast as possible. What you really want is to take the software and shove it out to your clients as fast as possible with as much quality data as you can gather, and hopefully the quality is high. But at least you know if it's not, so you can say, "Yes, I'm going to release" or "No, I'm not." The point isn't necessarily to make sure the software is bug free, but at least to get a much better risk analysis of it, so that someone at the end can say, "Hey, I know I'm releasing with these issues, but I don't care," and give a yay or nay.

Owen Gotimer 12:29

It's about putting that in the hands of the people who can make those decisions. As testers, you don't necessarily get to make those decisions. As part of the team, maybe you have some say in it, but ultimately you're supposed to help illuminate the issues you find and the risks you see, and maybe provide some analysis around them, like, "Hey, we think these are high risk or low risk based on what we think customers might get out of it."

Max Saperstone 12:50

Well, exactly. That goes to this concept of whole-team quality. Traditionally, it's always been thought that QA owns quality, which is a ridiculous sentiment, because QA is not the one introducing the bugs. And honestly, I used to like to say, "Oh, well, I like QA because I like to break software." Even that's not true. I'm not breaking the software; the software was already broken. I'm simply pointing out where it's actually broken. This idea of whole-team quality is really that, yes, I am in there identifying the issues that exist, but the developers should be doing the same thing. Ultimately, what we're doing is gathering up all this analysis, handing it over to someone else, and saying, "Do you want to release the software, or do you want people to go back and fix it?" Because ultimately, that's what you really want for a successful process, and that's what this whole DevOps pipeline is really supposed to be highlighting. It's continuous feedback: at any point, I know the exact state of the system, and I can do whatever I want with it. It's no longer just a release manager going, "Hey, I hope everything's gotten tested." It's, great, from the dev perspective, from a QA perspective, from the security perspective, etc., I know the entire risk assessment, and then I can do whatever I want with it.

Owen Gotimer 13:56

I think that even when you do that, there are going to be things you can't necessarily test. You're never going to be able to test everything unless your application is the simplest application in the world, and that's probably never going to happen. But like you said, if you're able to provide that risk assessment, and your whole team is behind the acceptance of that risk assessment, saying, yeah, these are the things we've done, and these are the things we agree are problems or could potentially be problem areas, I think that's super important. One thing you mentioned is whole-team quality and the developers playing a role in that, the people who are actually putting the code into the system. As you're moving through your DevOps journey, and as you're getting into continuous testing and more and more automation, how important is it that the developers really have a mindset in the quality space?

Max Saperstone 14:52

So phrasing it that way is interesting. I do think it's really important for developers to have a mindset in the quality space. That said, I think it's something you don't see that often. There are a lot of developers who really care about doing unit testing and really care about their code coverage. But I honestly believe that your QAs think a little bit differently, and that's one of the reasons you have that separation of responsibilities. The developers' mentality, as far as they're concerned, is "let's make sure it works the right way," whereas the QA mentality is "let's make sure it doesn't work the wrong way." If devs were really thinking in that sense, it would be great, but I think it might also hamper some of their ability to really produce the right software the right way.

Owen Gotimer 15:36

To that point, that potentially inhibits their creativity and their ability to take some risks in how they shape the code, whether that's anticipating ways they think the user might use it or doing things that will help them maintain the code long term. So I think that's a very interesting point about the different dynamic between someone with a traditional QA background and a traditional coder.

Max Saperstone 16:02

But again, back to your initial question: I do think it's really important that there's some sort of quality mindset with pretty much everybody, this idea that everyone actually needs to be involved. And I don't think it's only important for the quality of the product; I think it's also really important for the whole team's happiness and for the company culture. One of the things I've seen in some environments I've worked in is a lot of blame going on: "QA, we found this bug, why didn't you catch it?" or "Why did the developers introduce it?" But if there's this idea of whole-team quality, it's no longer finger-pointing blame. It's, as a team, we missed this; let's figure out how to address it, whether that means changing your development practices or your QA practices, or figuring out how this actually got by. It's no longer this painful place to be; it's much more generative, which again I think is a really important thing. And it can only really come about when you have this view of whole-team quality, as opposed to saying, "Hey, you guys are responsible for quality, so if there's a problem, it's your fault," rather than anyone else's in the organization.

Owen Gotimer 17:16

You mentioned generative culture. Westrum has the pathological, the bureaucratic, and the generative. Pathological is that kind of fear culture where people are placing blame. Bureaucratic is more the red tape getting in the way of being able to get stuff done. And generative is where the whole team is working together with the idea that they want to create good software. So when you have someone working in a pathological situation, and you mentioned you've been in an environment with this blame game, what are some steps you can take to get from pathological to, ultimately, all the way to generative?

Max Saperstone 18:00

So that's a great question. Again, it depends on the company culture, but it's not something where you can just go in, snap your fingers, and say, "Hey guys, we're a generative culture now; we're no longer going to blame everybody." It's really something that has to come both from management and from the employees. Because, again, I've been at places that have genuinely great employees who love interacting with each other, and that's absolutely important. But if you don't have that support coming down from management, if management is always playing the blame game, that's not going to help. At the same time, if management supports everybody but you have teams that just work in their own silos, you're going to have that same issue. So you really need to be looking at both. From an employee perspective, one of the biggest things you can do is try to coordinate and communicate as much as you can with pretty much everyone around you. From a QA perspective, to me that means if I encounter an issue, the first thing I do isn't to open a ticket. The first thing I might do is go sit over by dev and say, "Hey, I found this. Is this actually an issue or not?" Because what happens is, it puts a face to it. It's not just me throwing something back at the dev saying, "Hey, this doesn't work. This is a pile of garbage." It's, "Let's talk about it a little bit," and there's a lot of back and forth, which really helps. From a management perspective, you probably want to be doing the same thing. A big thing you see in a lot of these pathological cultures is that people don't like saying when there are problems, so problems get buried. Be as vocal as you can, to as many people as you can. If there's a problem, tell your manager immediately, especially if you don't see it being floated up. And if you're going to get in trouble for it, fine; have that happen right then, because it's better than sweeping it under the rug and having it come out later, when most likely you're just going to be thrown under the bus. And if it's not getting floated up, if you hear management say something that doesn't match what the team knows, then personally, I know it's a very difficult and uncomfortable position to be in, but say something. Don't say, "I told so-and-so and he clearly didn't float it up." Say, "Oh, actually, the whole team was discussing this; that's going to be a problem." Just try to make sure there are none of these surprises. Hopefully what that does is foster a lot more trust, which gets you out of that rut of finger pointing and blame and these almost fear-based decisions.

Owen Gotimer 20:26

Trust is such an important part of getting to that generative culture. You can't be generative unless the team trusts the team, the team trusts the leadership, and the leadership trusts the team.

Max Saperstone 20:35

Yeah, absolutely. And what I just described is not simple at all. It's a painful thing that may take years, quite honestly, to get there, and it may never even pan out, because you may just be in a culture where no one else wants to do that, and management doesn't like it or see the value, which is unfortunate. But again, I really do think that's the best path forward, and I've seen it work relatively well, at least in parts. I've definitely seen a lot of dev and QA groups get a lot closer, and I've seen their quality improve just from having those discussions, because it stops a lot of this bouncing back and forth of "Nope, it's working fine," "No, it's not," and the feeling that you're picking on one person or the other.

Owen Gotimer 21:20

And it helps create a better understanding; there's less miscommunication. It comes back to the Agile principle about face-to-face communication. You mentioned going over and actually sitting with the dev and having a conversation. Being able to explain things that way is, for one, maybe a much nicer approach than opening a ticket, where the ticket itself almost says they got it wrong.

Max Saperstone 21:40

And I'm not saying don't open the ticket; the idea is to have the conversation first. Because otherwise, and I've seen this happen, you open a ticket and the devs say "could not reproduce." You then send back a video, and they say, "Oh, well, I don't see that on my machine," or "I don't see this over here," or "That's never a use case we'd run into," and what you have is a really painful back and forth through some sort of tool. And then the next time you see that person, all you think is, man, that guy's kind of a jerk, and you have this animosity toward him just because that was your whole exchange. Whereas if you go over, sit down, and actually have a conversation, that really shouldn't be the case.

Owen Gotimer 22:16

Right. And you might also be able to nip the problem in the bud much more quickly, because maybe it's, "Oh, you know, I know exactly what the problem is," and maybe it's a quick four-second code change.

Max Saperstone 22:27

Exactly, they can put it in there. Or even better, I've seen developers who then call me back over and say, "Hey, remember what we were looking at before? Do you think this is going to fix it? Is this what we're going for?" So instead of it being kicked out to QA so I can assess it, I can see it right there on the dev's machine. It's all about, again, that communication and building the relationships.

Owen Gotimer 22:44

And it builds relationships and helps break down those silos. Despite what Agile and DevOps have been attempting to do, there are still teams working in a way where they're throwing stuff over the wall.

Max Saperstone 22:55

And again, you can do this in Agile or not. Really, anytime you're doing any sort of development, the more relationships you can build with different teams, the better.
