Squash Bugs by Breaking Your Testing Biases: An Interview with Gerie Owen and Peter Varhol

[interview]
Summary:

How many bugs have you missed that were obvious to others? We all approach testing hampered by our own biases. Gerie Owen and Peter Varhol share an understanding of how testers’ mindsets and cognitive biases influence their testing.

 

Gerie Owen and Peter Varhol will be presenting a session titled How Did I Miss That Bug? Managing Cognitive Bias in Testing at STARCANADA 2014, which will take place April 5-9, 2014.

 

About How Did I Miss That Bug? Managing Cognitive Bias in Testing:

How many bugs have you missed that were obvious to others? We all approach testing hampered by our own biases. Understanding our biases—preconceived notions and the ability to focus our attention—is key to effective test design, test execution, and defect detection. Gerie Owen and Peter Varhol share an understanding of how testers’ mindsets and cognitive biases influence their testing. Using principles from the social sciences, Gerie and Peter demonstrate that you aren’t as smart as you think you are. They show how to use knowledge of biases—inattentional blindness, representative bias, the curse of knowledge, and others—not only to understand the impact of cognitive bias on testing, but also to improve your individual and test team results. Finally, Gerie and Peter provide tips for managing your biases and focusing your attention in the right places throughout the test process so you won’t miss that obvious bug.

Cameron Philipp-Edmonds: Today we are joined by Gerie Owen and Peter Varhol, who will be speaking at STARCANADA 2014, April 5 through April 9. Their session is titled How Did I Miss That Bug? Managing Cognitive Bias in Testing. Thank you so much for being here, guys.

Peter Varhol: I thought I'd take two minutes and explain how this idea came about before we get into your questions, if that's OK.

CP: Sure. Absolutely.

PV: I read the book by Michael Lewis called Moneyball back at the end of 2011. It's about the Oakland Athletics baseball team and how it could not afford to compete. Their general manager, Billy Beane, started looking at the characteristics of players that made up baseball teams. He discovered that the baseball evaluators, the scouts and the general managers and all that, were evaluating talent all wrong; they were not looking at the characteristics that actually made up a good baseball team.

They were biased in their evaluation of baseball players. That really struck a chord with me. Then I read an article by Michael Lewis profiling Daniel Kahneman and his book Thinking, Fast and Slow. It's really a revelation about how we're biased in many of our decisions in life. So I developed a presentation that I gave at STARWEST. Gerie, when was that? 2012?

Gerie Owen: Yeah.

PV: I think it was 2012. It was called "Moneyball: The Science of Building Great Testing Teams." I was more interested in team building at that point, because that was the idea behind Moneyball. Then Gerie took this idea and said, "You know, this can also apply to the way we set up our testing practices, the way we make decisions on testing strategies, and the way that we collect data and evaluate our results." What came out of this was a presentation, primarily written by Gerie but also using some of my material, called "How Did I Miss That Bug? Managing Cognitive Bias in Testing."

I'd also like to credit a couple of other people here, because we're certainly not the first ones to look at bias and emotions in testing. In particular, Michael Bolton has been looking at it for a number of years now. I saw his presentation just last year, called Emotions in Testing, and it's wonderful. Both Gerie and I, at that same STARWEST in 2012, saw Jonathan Kohl give a presentation on mobile testing in which he said, "Your emotions and your attitudes in using mobile devices are going to affect the results." I'd like to credit at least those two, among others, for guiding our thoughts on this presentation.

CP: OK. Great. Of course, the session you're referring to is "How Did I Miss that Bug? Managing Cognitive Bias in Testing," which will be at STARCANADA 2014, April 5 through April 9. I'm going to go ahead and give a little spiel and brag on you guys a little bit. Let me know if I miss anything.

PV: Sure.

CP: Peter, you are a Test Studio evangelist at Telerik. You're ...

PV: I've actually left Telerik. I'm starting, actually at the beginning of March, at FrogLogic, which is another testing automation practice.

CP: OK. You're also a writer and speaker on software development and quality topics. You've authored dozens of articles on software tools and techniques for building applications, and you've given conference presentations, including ours, and webcasts on a variety of topics, including user-centered design, integrating testing into agile development, and building the right software in an era of changing requirements.

PV: That's all correct.

CP: OK. Anything to add?

PV: No, I don't think so.

CP: OK. All right. Gerie, you are with Northeast Utilities Inc., correct?

GO: Yup.

CP: OK. You are a QA consultant. You specialize in developing and managing offshore test teams. You have implemented the offshore model, developing, training, and mentoring new teams from their inception. You also manage large, complex projects involving multiple applications, coordinate test teams across multiple time zones, and deliver high-quality products.

GO: Yes, and I'm also a speaker and writer. One of my new roles is consulting on a lot of the large projects and working on test methodology and test architecture.

CP: All right. Fantastic. Let's go ahead and dive into your session here. It talks about cognitive biases. How are testers' biases affecting their ability to test effectively?

GO: I think biases affect testing because testing, and specifically finding bugs, which is what "How Did I Miss That Bug?" is about, comes down to judgment. If you look at testing as judgment about software quality, then you look at bugs as judgments about how the software performs.

If we miss a bug, it's really a miss in judgment. If you look at it that way, you also look at biases, because cognitive biases, basically, impact judgment. That's how testing and cognitive bias relate: your biases affect your judgment, and your judgment affects your testing.

CP: OK. Are there a lot of common biases that testers tend to have?

PV: Inattentional blindness, representative bias, and curse of knowledge—I think those are three very common ones. I'll let Gerie explain them just a little bit more.

CP: All right.

GO: Well, inattentional blindness refers to Chabris and Simons's gorilla experiment. It's basically missing something that's in plain sight. In the experiment, Chabris and Simons had their subjects watch a short clip of a basketball game and count the number of passes that one of the teams made. At the end of it, the subjects came up with various numbers.

Then they were asked if they saw anything else. In the middle of the clip, a person in a gorilla suit had walked across the basketball court and done a little dance. Only 50 percent of the subjects noticed that. The way inattentional blindness applies to testing is that you're so concentrated on validating your application against the spec that you can miss some potentially big things, like the name of the company being misspelled on the website.

PV: We heard that one at a conference recently. One of the attendees actually stood up and said, "I understand completely what you're talking about. What none of us noticed was that, when this application ran, my company's name was spelled wrong."

GO: Yeah.

CP: That's usually a pretty important aspect.

PV: One would think so. The whole inattentional blindness thing is about being so focused on one thing, or one particular class of things, that you can miss some very significant other things that are going on.

GO: Representative bias is similar to that. It's the way you judge things based on their resemblance to other things. If something tastes like chicken, well, it may not be chicken. This can happen in testing: "Well, it's always worked that way in the past, so it's still got to work this way." You can sometimes miss regression defects because you say, "Well, I've already tested with a new order, and I've tested with an order in process. Why bother to test with a completed order?"
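As a minimal sketch of one way to guard against that in a regression suite, a parameterized test can force every order state into the run; the order states and the process_order function below are hypothetical illustrations, not something from the session:

```python
import pytest

# Enumerate every order state explicitly, so the state we assume
# "still works" is never quietly skipped in the regression run.
ORDER_STATES = ["new", "in_process", "completed"]

def process_order(state: str) -> bool:
    # Hypothetical stand-in for the system under test.
    return state in ORDER_STATES

@pytest.mark.parametrize("state", ORDER_STATES)
def test_process_order_handles_every_state(state):
    # Parameterizing over all states counters representative bias:
    # the completed order gets tested even when we "know" it works.
    assert process_order(state)
```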

Then there's the curse of knowledge. That's when we become so knowledgeable about something that we fail to look at it from the perspective of somebody new to the topic, which in testing, of course, would be the application. If you test the same application over and over again, you're going to get to know it so well that you're not necessarily going to approach it as a new user. A lot of times, usability defects are missed because of the curse of knowledge.

PV: That also happens with domain experts, where they might think, "Well, we've always done it this way, so that's all you should care about."

CP: OK. It seems like these come down to not having the right set of eyes on it, not having a new set of eyes on it, having tunnel vision, seeing what you want to see. But are there any scenarios where those biases, or any other biases, actually help with testing?

PV: I can't think of any, to tell you the truth.

CP: OK.

PV: The important thing is to be able to catch yourself, because biases mean that we're looking at one specific thing to the exclusion of others; we have one point of view to the exclusion of others. Gerie, you're welcome to disagree. I just don't see any scenario where being biased toward a particular approach or a particular set of features can help us.

GO: I tend to agree. The only way it could help, possibly, is if you're biased to believe that requirements aren't everything; then maybe you'll test outside of them. But, yeah, I think in general, biases are something that are going to hurt rather than help.

PV: For example, you may think that your development team is the best team on the planet. You may think that they're hot stuff. So you may—I'll say unintentionally, unconsciously—back off or evaluate more leniently than you might otherwise. On the other hand, if you think that you have a poor to average development team, you may find things that actually aren't there. It's possible to be biased both ways, and neither of them is particularly helpful.

CP: OK. You talk about the development team. A lot of times, or occasionally, development teams are made up of different types of developers or developers at different stages in their careers. Does the extent to which you are biased change depending on how many years . . . ?

GO: I don't think so. Having been in the testing industry now for quite a few years, I think, if anything, you might tend to become more biased. When I know they've got a new developer on the project: "Oh, cripe." Then, yeah, sure, I may test his code more, but in any test effort, you have a limited amount of time. If I over-test the new guy's code, I'm going to be under-testing somebody who's been there for a long time. Potentially, yeah, that could hurt.

PV: Cameron, I think Gerie brings up a very good point here, that we will never rid ourselves of biases. But we can overcome them by reminding ourselves that they do, in fact, exist. Gerie just reminded herself that, "Gee, if I spend too much time with the new guy's code, if I'm biased against the new guy—or the new person, I should say—then I may not be doing my job."

CP: OK. Another thing, too, is you mentioned those three different types of biases, especially with inattentional blindness, where you just need to know what you're looking for. As you go through your career and you come up with the same problems over and over again, or you see the same type of defects happening over and over again, could that possibly cause you to become more blind, or more susceptible to inattentional blindness, as the years go by?

GO: I suppose it could, but I like to think that . . .

CP: Doesn't happen.

GO: That's how I'd like to think, anyway. But I think you're right. It probably does. I think the best way to counteract that is to have multiple testers on a project. Move testers around. Especially when you have a testing center of excellence, you've got verticals where you've got the same testers all the time.

They become domain experts. That's good, but that's going to lead to more of the bias. It's going to lead to the curse of knowledge. When they're seeing the same application, it's probably going to lead to the inattentional blindness. So it's probably a good thing to rotate some of your testers around. Give them new experiences. Put a fresh set of eyes on the same applications.

CP: OK. All right. You talked about rotating testers around, and that leads into my question for you, Peter. You've had various roles in your career, including being a technology journalist. Do you think the same tools for managing biases in testing can be used by people in different roles, regardless of their relation to the software industry or testing in general?

PV: The answer to that question is absolutely. We're all biased in how we do our jobs, how we set up strategies, how we collect data, how we evaluate that data, and how we make decisions. Whether it's a technology journalist, a product owner, a tester, or a developer, the important thing is to recognize that whatever stage we are at in our careers and whatever we happen to be doing, our thoughts and decisions are influenced by those biases. The best thing we can do is not try to eliminate them; we know we can't. The best we can do in overcoming them, and also compensating for them, is to recognize that they exist.

CP: Now, Gerie, I have a question for you. You specialize in developing and managing offshore test teams. Do you find that there's a different set of biases with offshore testers?

GO: Well, with cognitive biases, we're all human beings, and we all have the same minds and the same brain structure, so in that respect, I think it's really the same. There are cultural differences, of course, in how our offshore teams approach testing and approach asking questions. I think sometimes the biases perpetuate more because offshore teams tend to really stick to the requirements and don't get into as much exploratory testing. That probably also has to do with the structure of some of our offshore contracts.

CP: OK. When there's a combination of offshore and onsite testing, are the biases deterred, or are they magnified?

GO: The more eyes you get on a project, the more that would help to deter them, I would think. If you've got a new offshore team and a team that's been working on the application onshore for a long time, the onshore folks have that curse of knowledge, whereas your offshore team is the fresh pair of eyes and may see things the onshore team misses. It could work the other way, too.

PV: That's one of our prescriptions for dealing with biases: let a fresh pair of eyes, somebody who's not been intimately involved in the project, look at your data, examine your conclusions, and criticize them.

CP: I don't want to give away too much of your presentation here, but could you give us another way of managing and overcoming those biases?

GO: That's interesting, because Peter and I have a philosophical difference here.

CP: Oh, OK. A little bit of controversy. All right, I like it.

GO: You know, the origin of bias—and this is all in the presentation; it's how this actually came out of the Moneyball presentation—is the conflict between System 1 and System 2 thinking. System 1 thinking is intuitive, quick thinking. System 2 is analytical.

When these conflict, you have biases. Now, the way I see it, I think testers need to be more intuitive. They need to do more exploratory testing and get into the emotional aspects. If something doesn't feel right about the application, do more exploring in that area. Peter is of the opposite opinion.

PV: First of all, I believe in, I'll say, alternating between the two so that neither one particularly wears you out. Use intuitive thinking, but also use analytical thinking, and try to alternate between the two. I'd describe that as the difference between exploratory testing and test automation. I think automation is important from a repeatability standpoint, in that you would like to have repeatable results, whereas exploratory testing allows you to just say, "Well, what happens if I do this?" without necessarily including that in the script.

CP: Yeah. No, that makes plenty of sense. Are there certain biases that you guys have found that plague your professional career?

GO: I think those are the big three. The other one that I think also tends to impact testers particularly, and probably everybody, is the planning fallacy: you tend to underestimate the amount of time a task is going to take. It's particularly critical for the testing folks because our time's always getting crunched anyway. If we underestimate it up front, we make it more difficult for ourselves.

PV: For me, I think it's inattentional blindness, because I've been a developer. Developers can get so focused on the code in front of them that they can't see the implications beyond that code. I've literally sat in front of the computer, code in front of me, for eight hours a day, five days a week. When you do that, it's hard to think, "Well, what does my user think of these decisions?" I'm literally making hundreds of small decisions about how that application is supposed to operate as I'm writing my code, decisions that, I'll say, are beneath the level of the user interface requirements. What we often can't do, because we're so focused on what's right in front of us, is ask ourselves, "Does this make any sense to the user?"

CP: Gerie, coming back to the planning fallacy, not really being able to gauge how much time something is going to take and either underestimating or overestimating it: is a good way to overcome that bias to use flowcharts, to really try and map it out, and sit down and . . . ?

GO: Yeah. Usually, you plan by the number of test cases. The rule of thumb is forty-five minutes to write a test case and a half hour to execute it. You figure from that. But I always add on a buffer. If you think the test effort's going to take ten weeks, tell them fourteen.
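As a rough sketch of that arithmetic, here is Gerie's rule of thumb in code; the 40 percent buffer is an illustrative assumption that matches her ten-to-fourteen-week example, not a figure from the interview:

```python
# Test-effort estimate from the rule of thumb quoted above:
# roughly 45 minutes to write each test case and 30 minutes to execute it.
WRITE_MINUTES = 45
EXECUTE_MINUTES = 30
BUFFER = 1.4  # illustrative ~40% pad for the planning fallacy (10 weeks -> 14)

def estimate_weeks(num_test_cases: int, hours_per_week: float = 40.0) -> float:
    """Return a buffered test-effort estimate in person-weeks."""
    raw_hours = num_test_cases * (WRITE_MINUTES + EXECUTE_MINUTES) / 60
    return raw_hours * BUFFER / hours_per_week

# Example: 320 test cases -> 400 raw hours (10 weeks), buffered to 14 weeks.
print(f"{estimate_weeks(320):.1f} person-weeks")
```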

CP: Gerie Owen and Peter Varhol, they will be speaking at STARCANADA 2014, April 5 through April 9. Their session is titled How Did I Miss That Bug? Managing Cognitive Bias in Testing.

GO: The thing that our presentation will give to testers and test managers is the answer to the more important question of how.

CP: Thank you so much, guys.

GO: Thank you.

PV: Thank you.

 

Gerie Owen

QA consultant Gerie Owen specializes in developing and managing offshore test teams. Gerie has implemented the offshore model—developing, training, and mentoring new teams from their inception. She manages large, complex projects involving multiple applications; coordinates test teams across multiple time zones; and delivers high-quality products. Gerie’s most successful project team wrote, automated, and executed more than eighty thousand test cases for two suites of web applications, allowing only one defect to escape into production. In her everyday life, Gerie enjoys mentoring new test leads and brings a cohesive team approach to testing.

 

 

 

Peter Varhol

A tester at FrogLogic, Peter Varhol is also a writer and speaker on software development and quality topics. Peter has authored dozens of articles on software tools and techniques for building applications and has given conference presentations and webcasts on a variety of topics—user-centered design, integrating testing into agile development, and building the right software in an era of changing requirements. He has held key roles on engineering teams that have produced award-winning, quality tools such as BoundsChecker and SoftICE. Peter’s past roles include technology journalist, software product manager, software developer, and university professor.

 
