The Value of Risk-Based Testing: An Interview with Huw Price

Summary:

In this interview, Huw Price of CA Technologies discusses the appeal of risk-based testing and how to actually do it within your testing team. Huw explains why issues arise when risk metrics are assigned arbitrarily, based on assumptions made about how a poorly understood system should work.

Jennifer Bonine: All right, we are back. Thanks for tuning in again for more of our virtual interviews. I'm here with Huw. Huw, thanks for joining me ...

Huw Price: Nice to be here.

Jennifer Bonine: ... today. For those out there that maybe don't know you, let's give them a little bit of your background so they know the context of what we'll be talking about today and where you come from.

Huw Price: I was the managing director and founder of Grid Tools, which specialized in test data. I think we pretty much established the market in test data management. We've been at the forefront of the new thinking about test data. As part of that, we got into designing the right type of test data to test the systems.

Now, that's actually pretty hardcore scientific stuff, and it led us into the world of modeling: if I need to test a particular transform of data, I need to model what that transform is, and I need to create the data. So we moved into a thing called cause-and-effect modeling. Cause-and-effect modeling has been around for a long time. It's been used for electronics testing, for military testing, and things like that.

Now, the problem is that it's actually quite difficult for people to use cause-and-effect modeling. You need to be familiar with terms like XNOR and XOR and things like that.

Jennifer Bonine: It may put some people off ...

Huw Price: Indeed.

Jennifer Bonine: ... who aren't familiar with that.

Huw Price: So what I did is I went out to my favorite customers who loved all of our software and I said, "Use this, you've got to do modeling." And they all went, "Great, Huw, but it's too hard." So it was like, "Aw man, come on, man." The benefits are so ...

Jennifer Bonine: And you know what the benefit is. You want them to use it but there is a barrier.

Huw Price: They wanted to use it.

Jennifer Bonine: Yeah, but they have that barrier of, "I'm not sure how to make that happen."

Huw Price: Indeed. So what I decided to do then was I said, well, we're going to drag the mountain to Muhammad. We're literally going to have to create something new here. So we took something which has been around in the industry for a long time, which is the flow chart. People are very familiar with it.

Jennifer Bonine: Oh, very.

Huw Price: If you look at most sprint teams, you'll see a flow chart on the board. It's a good way to communicate ideas and information. So we thought, let's make it look like a flow chart, but we'll hide all the hard math behind it. That's where Agile Designer, or as it's now known, Agile Requirements Designer, came from. It allows you to build a model of what the system is supposed to do, and then out of that we can create the smallest set of test cases to test each node or each logic gate, as opposed to doing it by hand.

In other words, it’s a bit like playing the computer at chess. You can sit there for half an hour and make a move, or you can program in what a chess board is and the computer will do it for you in .0 whatever seconds. So we created this very powerful test case optimizer tool that will build you a very small set of test cases but guarantees to test all functionality. So that's kind of the journey, and then that leads on to API testing, which is obviously a hot topic at the moment.
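To make the idea concrete, here is a minimal Python sketch of how a flowchart model can be reduced to a small covering set of test paths. The login flow, node names, and greedy covering heuristic are all hypothetical; this is not the algorithm inside Agile Requirements Designer, just an illustration of letting the machine enumerate and minimize the cases instead of writing them by hand.

```python
# Illustrative only: a toy flowchart model and a greedy path optimizer.
# A real model-based optimizer is far more sophisticated than this.

def all_paths(graph, node="start", path=None, max_revisits=1):
    """Enumerate start-to-end paths, allowing one revisit per node so the
    retry loop (show_error -> enter_credentials) is exercised."""
    path = (path or []) + [node]
    if node == "end":
        yield path
        return
    for nxt in graph[node]:
        if path.count(nxt) <= max_revisits:
            yield from all_paths(graph, nxt, path, max_revisits)

def minimal_covering_set(paths):
    """Greedily pick paths until every edge (logic gate) in the model is hit."""
    edges = lambda p: set(zip(p, p[1:]))
    uncovered = set().union(*(edges(p) for p in paths))
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(edges(p) & uncovered))
        chosen.append(best)
        uncovered -= edges(best)
    return chosen

# Hypothetical login flow drawn as a flowchart: node -> reachable nodes.
flowchart = {
    "start": ["enter_credentials"],
    "enter_credentials": ["valid?"],
    "valid?": ["dashboard", "show_error"],
    "show_error": ["enter_credentials", "locked_out"],
    "dashboard": ["end"],
    "locked_out": ["end"],
    "end": [],
}

if __name__ == "__main__":
    paths = list(all_paths(flowchart))
    for i, p in enumerate(minimal_covering_set(paths), 1):
        print(f"Test case {i}: " + " -> ".join(p))
```

Running this prints a handful of paths that between them visit every edge in the model, which is the same idea, in miniature, as generating the smallest set of test cases that still covers all the functionality.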

Jennifer Bonine: Yes, absolutely. Now on the tool portion, kind of making tools accessible, as we've heard this week. I've interviewed several folks and they've said a lot of the tools that existed, the first generation, second generation, and some of the tools that people are using, are more designed for geeks or technical people.

Huw Price: Yeah absolutely.

Jennifer Bonine: Right? And now you have more users and business folks and other folks getting involved in technology with agile, wanting to be part of the team. Are you seeing that trend of tools doing exactly what you've said, where you're making them more accessible to more of the market space, and taking them away from that stigma that you have to be a math major or a scientist or a computer science person in order to understand all of these things?

Huw Price: It's where everyone's going. Jonathon Wright talks about design ops. That's basically sitting down with the user, or even the user doing it on their own, and actually saying, this is the system I want. Then IT goes, oh, okay, fine. And all you're really doing is assembling ideas, business terminology, business objects, business processes, and assembling that into something that is going to come to market quickly and is going to be of high quality. That's where the market is. Cost is not so important anymore. If you don't come to market, you're out of business. And that's what's driving this change. And that's why it's actually quite an exciting time at the moment.

Jennifer Bonine: It is. That's what I was thinking too this week, hearing some of the transformation occurring in the tools. It's exactly what you did, which is take something that can help them speak a language they understand, the flow chart. They get that. Make it accessible. Hide the hard stuff behind it. It still exists, but now it's accessible to more people.

Huw Price: That's kind of the landscape. Now after that, one of the key things that affects speed is people working in parallel. There's this sort of idea of a conveyor belt, which I hate. I think actually nowadays, if you look at the term agile, it's actually cooperation. You're in my team: you might build the model, or you might do some testing, or you might run some automated scripts, you might write the code. I might write the code. It doesn't matter. But if we're all working in parallel, then at the end of that time period, which could be a sprint or a waterfall phase, it doesn't matter, we've actually created some working code. And I think that is kind of what we're now able to do.

One of the problems with the sequential approach, and one of the reasons sequential work happens, is that you have to transfer information from one tool to the other, and you generally do that by hand. Every time you do that, you lose quality and you lose speed. So if you can create as many assets up front as possible, which then drive all the other things, you can have lots of parallelism. That's very important. Now, when you come to API testing, APIs are a double-edged sword. Everyone goes, "Ah yeah, great, Internet of Things, fantastic."

Jennifer Bonine: Right, I was just going to say, right? Because we're seeing open APIs become really important. Connecting lots of different things, not even knowing all the things you'll potentially need to connect to.

Huw Price: Yes. But a good one is if you go and look at, say, the Facebook API. It looks pretty straightforward; you can connect to it. The thing about the Facebook API is that because they are the dominant vendor, they can force the users to keep up. They only support three versions of that API. If you don't upgrade to within that three-version window, you're out. Okay? They are dominant, so what we've discovered there is that version compatibility is a massive problem. So before you even start with your APIs, you've got to think about how you build version management into the APIs. The problem is an API could be called by a REST call, it could be called by a SOAP call, it could be called by another API.

Now if you don't track those dependencies, you absolutely have no idea. And when you build your testing to do that, you need to treat version management as a nonfunctional requirement. I'm accidentally calling a version four API with a version two format. What happens? Does it fail gracefully? Do I detect the version incompatibility and say, "Oh, sorry," or do I just carry on and end up with carnage and chaos? So that's one big area. The other thing about APIs is that people think, "Oh, great, I just press this and it does something." The trouble is there's another API doing something else which is related to the first API, and there's another one.
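As an illustration of what that kind of version-compatibility check could look like in an automated suite, here is a minimal Python sketch. The service URL, the Accept-Version header, the payload shapes, and the expected status codes are all hypothetical; the point is simply that the mismatch case and the out-of-window case each get their own explicit test rather than being discovered in production.

```python
# Illustrative only: hypothetical endpoint, versions, and payloads.
import requests

BASE_URL = "https://api.example.com"   # hypothetical service
SUPPORTED = {"v2", "v3", "v4"}         # the vendor's three-version window

def call_orders(version, payload):
    """Call the orders endpoint, declaring the client's expected version."""
    return requests.post(
        f"{BASE_URL}/{version}/orders",
        json=payload,
        headers={"Accept-Version": version},
        timeout=5,
    )

def test_version_mismatch_fails_gracefully():
    # Send a v2-shaped request to the v4 endpoint: we want a clear,
    # documented error, not silent acceptance and downstream chaos.
    legacy_payload = {"orderId": 1, "items": ["book"]}  # v2 shape
    response = call_orders("v4", legacy_payload)
    assert response.status_code in (400, 409, 426), response.text
    assert "version" in response.json().get("error", "").lower()

def test_unsupported_version_is_rejected():
    # Anything outside the supported window should be refused outright.
    response = call_orders("v1", {"orderId": 1})
    assert response.status_code in (404, 410)
```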

Jennifer Bonine: Yes, the interconnectedness of all of it.

Huw Price: Absolutely. So you might end up with something like: I'm going to have an API to create my customers. Then I have an API to create my orders. Then I have an API to create my items. Fabulous. Customer comes in, creates. Order comes in, item fails. What do I do now? So now I have another testing problem: I have to test the failures. Then I have to test that the back-out works. In the old days of mainframes you had this wonderful thing called a rollback. And you just go, it failed.

Jennifer Bonine: Revert back.

Huw Price: Let's just revert it all back. Send them a message and say try again later.

Jennifer Bonine: Yeah. Sorry about that.

Huw Price: Yeah, exactly. And that's fine. And you've got a logical unit of work. Now the trouble is, once you're into APIs, the complexity of a logical unit of work becomes really difficult to deal with. So when you're designing, people say, "Oh, we'll just convert a couple of programs to Java and then we'll make them a microservice." I'm like, "Stop, no no no. No no no no." You need to fundamentally change the way ... you need to model exactly what this API is going to do. You need to model the rollbacks, and then once you have done that, you can build those test cases automatically. And you're also building into the model a logical unit of work with the interconnectedness of them. Coming back to modeling, and that's kind of where we started: if you can model the API's behavior, that's a very strong way of being able to build the test cases to support it. So testing APIs is hard, but if you build a decent model, it's fine. Just to finish up on the API thing, another thing to think about with APIs is whether you can, I hesitate to use the phrase exhaustively, build a comprehensive set of tests using automation that you are continually running against the API. So I'm running against version two, version three, version four, and I'm validating it pretty much continuously.

These are my twenty top APIs, and I know that they are very stable. And I might occasionally add in some extra tests once some new functionality comes in. That's fine. So that's one way of making sure those APIs do what they are supposed to do. The reverse of that is if someone wants to test against my API, we will create, because we have the model, a virtual version of the API. So we will say, okay, if you look at, say, a credit history, I'm going to connect to Dun & Bradstreet and get the credit history. That comes back in an XML structure and it will have your name, your address, who you were married to, who you divorced. You've had a change of name. You've had one credit default. You've moved seventeen times. You've been bankrupt. All of those things are actual characteristics of the response from the API. If I want to test that my app works against the API, I could just type in my name, get something back, and say, oh, that worked. One test. What you really need to think about is to give me ...

I'm going to give you a pack of four thousand different responses. Those responses are each different; think of each one like a snowflake. So now someone wants to work against my API. Then we say, "Before you can consume my API, here are my test cases and my virtual responses. You put this information in, this is what you should get back, and this is what you should be displaying." And you're giving the expected results with the virtualization as well. So that then actually allows them to almost certify against it.
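A minimal sketch of consuming such a pack of virtual responses, each paired with its expected result, is below. The file name, field names, and the render_credit_summary function are hypothetical stand-ins for whatever the consuming application actually does with a credit-history API; the idea being illustrated is that the expected display ships alongside each virtual response, so the consumer can check itself against the whole pack.

```python
# Illustrative only: replaying a pack of virtual API responses, each paired
# with the result the consuming app is expected to show.
import json

def render_credit_summary(api_response):
    """Hypothetical consumer-side logic under test."""
    if api_response["creditDefaults"] or api_response["bankrupt"]:
        return "DECLINE"
    return "APPROVE"

def test_against_response_pack():
    # e.g. thousands of entries, each one a different "snowflake"
    with open("credit_history_pack.json") as f:
        pack = json.load(f)

    for case in pack:
        shown = render_credit_summary(case["virtualResponse"])
        # The expected display ships with the virtual response, so the
        # consumer can effectively certify itself against the API.
        assert shown == case["expectedDisplay"], case["caseId"]
```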

Now if you're thinking about APIs, what you have to think about is plugs that work. Coming back to the original point about the user, the business user: they just want to assemble things. Okay? Now, if they are assembling things which have never been assembled in that way before, that should work. If you haven't built those plugs to a high standard, it will start falling apart. And I think that's where you have got to think very hard with your API strategies to build that testing and certification type of methodology in there, and the version management in there. And then you actually get some measure of stability.
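Going back to the customers, orders, and items example from earlier in the conversation, here is one hedged sketch of how the failure and back-out paths of a logical unit of work might be tested. The client methods (create_customer, create_order, create_item, delete_customer) are hypothetical, and a compensating delete stands in for the mainframe-style rollback Huw mentions.

```python
# Illustrative only: testing that a failed step triggers the back-out path.
from unittest.mock import Mock

import pytest

def place_order(client, customer, order, item):
    """One logical unit of work: either all three calls succeed, or the
    compensating action undoes what was already created."""
    customer_id = client.create_customer(customer)
    try:
        order_id = client.create_order(customer_id, order)
        client.create_item(order_id, item)
        return order_id
    except Exception:
        # The back-out ("rollback") path, which also needs its own tests.
        client.delete_customer(customer_id)
        raise

def test_item_failure_rolls_back_customer():
    client = Mock()
    client.create_customer.return_value = "cust-1"
    client.create_order.return_value = "ord-1"
    client.create_item.side_effect = RuntimeError("item service down")

    with pytest.raises(RuntimeError):
        place_order(client, {"name": "Ada"}, {"total": 10}, {"sku": "X1"})

    # The failure path must leave nothing half-created behind.
    client.delete_customer.assert_called_once_with("cust-1")
```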

Jennifer Bonine: Right. That they're up, and how can you ensure that, right? Because yours may be stable, but if you're connecting to other people's APIs that don't have version management and those other things, you may lose it there.

Huw Price: Just the final thing: there are also things like performance. There are so many stats showing that if you go into an app and it just doesn't respond, you move away very quickly. How do you test that if I connect to the Starbucks main server and it doesn't respond in 0.2 seconds, I then flip to the next server, and the next, or I might get to a cache? If I haven't tested that, which is actually a functional requirement but based on a non-functional feature, that is also something you should certify.
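One way such a requirement might be expressed as a test is sketched below, with hypothetical hostnames and a made-up fetch_menu client. The 0.2-second budget is the functional requirement; the flip to the next server (or to a cache) is what gets asserted.

```python
# Illustrative only: hypothetical failover behaviour under a response budget.
import requests

SERVERS = [
    "https://orders-primary.example.com",
    "https://orders-secondary.example.com",
]
CACHED_MENU = {"items": ["latte", "americano"], "stale": True}

def fetch_menu(servers=SERVERS, timeout=0.2):
    """Try each server within the response budget, then fall back to cache."""
    for base in servers:
        try:
            resp = requests.get(f"{base}/menu", timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            continue             # too slow or down: flip to the next server
    return CACHED_MENU           # last resort: serve stale data

def test_slow_primary_flips_to_secondary(monkeypatch):
    calls = []

    def fake_get(url, timeout):
        calls.append(url)
        if "primary" in url:
            raise requests.Timeout("no answer within 0.2s")

        class Ok:
            def raise_for_status(self): pass
            def json(self): return {"items": ["latte"]}
        return Ok()

    monkeypatch.setattr(requests, "get", fake_get)

    assert fetch_menu() == {"items": ["latte"]}
    assert len(calls) == 2   # primary timed out, secondary answered
```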

Jennifer Bonine: Well, and with APIs as well, I think it's important for you to understand where you sit in the food chain with that stuff, as you mentioned. In terms of Facebook, if they're dominant, you're not dominant in that food chain. Then when something goes wrong, it's going to be perceived as your failure, whether it was you or not. So take Netflix as the example: if Netflix goes down but it was really Amazon on the back end, no one cares. It doesn't get tied to Amazon; it gets tied to Netflix. So you as a consumer have to be conscious of where the perception will land around your organization, and of potential failures you don't directly control but indirectly need to be aware of.

Huw Price: And the other thing about that is speed. If you can detect the failures very early in your continuous integration development cycle, then you have a hope of fixing them. What you can't do is say, "Oh, we have a failure," and then you're trawling through logs and things like that. You should really have all the test cases ready to go to be able to identify that.

Jennifer Bonine: Absolutely. So we're out of time, it goes so fast. Folks out there, we've just scratched the surface on these topics, they're so broad and there's so much to dig into. What is the best way to get a hold of you or get information on this?

Huw Price: Well, LinkedIn, Huw Price, it's H-U-W, which is a Welsh name, because I'm Welsh. Huw Price on LinkedIn is probably the easiest. If you go to Professional Tester, I think the June issue, the last edition, and just do a search, I've got a pretty comprehensive article on API testing which covers a bit more than we've talked about today. But that's a good place to start.

Jennifer Bonine: Good place to start. For those of you out there struggling with this, these are great things to think about. I think there are some good nuggets to take back, to get you started if you haven't thought about it already, and to get some research going. Thanks, Huw, I appreciate you being here with us.

Huw Price: It was a pleasure, thanks so much.

About the author

Huw Price joined CA Technologies in 2015 as Vice President of Test Data Management, when specialist testing vendor Grid-Tools was acquired into the CA DevOps portfolio. During a 30-year career, Huw has gained a deep understanding of the challenges faced by modern organizations and, with an understanding of the science of testing, how to solve them. Prior to joining CA, Huw helped launch numerous innovative products which have recast the testing model. His first venture goes back to 1988, when he set up data archiving specialists BitbyBit. He was soon joined by long-term partner Paul Blundell, also now of CA Technologies. After BitbyBit was acquired, Huw and Paul co-founded data migration and application conversion firm Move2Open. In 2004, Grid-Tools Ltd was set up, and Huw quickly went about redefining how large organizations approach their testing strategy. He helped oversee the development of the company's flagship product, CA Test Data Manager (formerly Datamaker), pioneering a data-centric approach to testing, and later played a visionary role in the design, development, and release of CA Agile Requirements Designer (formerly Agile Designer). Huw has spoken at numerous conferences and exhibitions, from STAREAST and QSIT's STeP-IN in 2009 to the 2011 InfoSecurity show and the ISACA annual conference in 2012. He is currently on the industrial advisory board for the University of Wales, and has previously served as an advisor to King's College London.
