With more than thirty years of experience in the testing world, Dorothy Graham knows a thing or two about test automation. We asked Dorothy about how to get the most out of automation and how to avoid "intelligent mistakes."
Noel Wirst: You're giving a session at the 2012 STARCANADA conference that describes some of the "intelligent" mistakes in test automation. What makes these intelligent and not careless?
Dorothy Graham: A mistake is an action resulting from defective judgment, carelessness, or a misunderstanding. Some mistakes are due to carelessness, but not the ones I will be talking about.
"Intelligent" means exercising good judgment, but if good judgment is based on a faulty premise or misconception, then a mistake is still made. If the misconception had been true, then the action would be sensible, and there is an element of truth in all of the intelligent mistakes I will be talking about - this is why they are easy to make and seem like a good idea at the time!
NW: You've said that "making testers become test automators may be damaging to both your testing and your automation." How is this so?
DG: What I think is damaging is the assumption that it has to be the tester who also becomes a test automator, a person who works directly with the tool. Because the tools use scripting languages, which are programming languages, any tester who wants to use the tool directly must therefore become a developer.
There seems to be a bit of controversy developing around this topic, as some people are saying that all testers should learn to write code. I don't agree!
Of course there are some testers who will be very happy to become developers (script developers working with the tool's scripting language) - I have no objection to that, and it is very useful particularly in an agile team.
However, by implying that the only good tester is one who can code, we are denigrating our own discipline and definitely "throwing the baby out with the bath water." Not all testers want to become programmers, and not all testers would be good at it. If you are a tester who has come from a business background, is very happy dealing with tests, and is very effective at finding defects, why should you have to stop doing what you love in order to do something you don't like and won't enjoy?
Forcing all testers to become developers is damaging to those testers and therefore to testing, and it doesn't help the automation either to have demoralized people doing work they don't like - and doing it badly.
All testers should be able to write and run automated tests, whether or not they are developers - this is what a good automation framework will provide (and the framework needs developer skills for support).
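The kind of framework that lets non-programming testers write automated tests is often keyword-driven: testers express tests as rows of keywords and data (for example in a spreadsheet), while developers implement and maintain the small functions behind each keyword. A minimal sketch of the idea in Python - all names and the "account" domain here are hypothetical, invented purely for illustration:

```python
# Minimal keyword-driven framework sketch (hypothetical names and domain).
# Developers maintain the keyword implementations; testers only write rows.

state = {}

def do_open_account(name):
    # each keyword is a small function maintained by a developer
    state["account"] = {"owner": name, "balance": 0}

def do_deposit(amount):
    state["account"]["balance"] += int(amount)

def do_check_balance(expected):
    actual = state["account"]["balance"]
    assert actual == int(expected), f"expected {expected}, got {actual}"

KEYWORDS = {
    "open account": do_open_account,
    "deposit": do_deposit,
    "check balance": do_check_balance,
}

# A test a non-programming tester could write, e.g. as spreadsheet rows:
test_rows = [
    ("open account", "Alice"),
    ("deposit", "100"),
    ("deposit", "50"),
    ("check balance", "150"),
]

def run(rows):
    for keyword, arg in rows:
        KEYWORDS[keyword](arg)  # look up each keyword and execute it

run(test_rows)
print("all steps passed")
```

The point of the split is exactly the one above: the tester's table needs no programming knowledge, while the keyword implementations need developer skills for support.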
And testing tools are not just for testers - developers can gain a lot of productivity by using them for their own testing too!
NW: You've mentioned that "many organizations never achieve the significant benefits that are promised from automated test execution." Who promises these benefits, and why do so many testers lack the management support needed to achieve them?
DG: I'm afraid that it is tool vendors who often get carried away and promise things that cannot be achieved, or omit to mention the effort needed to achieve good benefits. I was at a conference recently where one of the vendors (who will obviously remain nameless!) was promising that using their testing tool could achieve zero defects in the application being tested!
I was so annoyed I actually went and had a frank discussion with the representative who was there, and I believe they have now modified their web site, as I don't see this claim there any more.
The people who make the decisions about which tool to get and how much effort and time test automation will need are often at very senior levels in the organization and are not aware of all the issues involved in achieving real and lasting success. If they believe the over-hyped promises, they will not see any need to invest effort and time in building good automation, as they think it will just come fully formed "out of the box." This makes it very difficult for testers, test managers, and test automators to achieve what would be possible if only they had realistic investment.
NW: When is automation not the answer, and what benefits are there for manual testing?
DG: Nice question - I like it! Automated testing never replaces all manual testing. Some things are better and/or easier to do manually, for example seeing whether the layout looks nice or the colors are pleasing. Some things would take a long time to automate; if those tests are not run very often, the effort to automate them is not worthwhile. The "wavy words" you are often asked to type in on web sites (CAPTCHA) are designed to detect whether it is a human being filling in the form. If this can be automated, then the CAPTCHA has failed! (Yes, there are ways to test it, but it's an interesting conundrum.) Usability assessment must involve human beings - you cannot automate a real person!
Manual testing has many benefits, probably the biggest being its flexibility and bug-finding ability. Exploratory testing is the most effective approach, the reason being that the human brain is engaged! For example, if something just a little strange happens during a test, the human tester might think "that's odd," follow a new line of investigation, and find a major bug. An automated test will only do as it is told and never thinks "that's odd"; it just compares against its comparison file. In fact, the tool doesn't think at all - it is the least intelligent tester you will ever have.
The best approach is to use people to do what people do best, and use the computer to do what it does best. Test automation gives the best benefits when it removes tedious and repetitive testing (e.g. repeated regression tests), freeing the testers to design more tests and do better manual testing.
NW: In many cases, money is obviously the deciding factor on business decisions. Why is relying on the ROI of test automation difficult or unadvisable?
DG: Sometimes it is necessary to have some kind of Return on Investment (ROI) calculation to convince senior managers to invest in automation. I have been developing a simple spreadsheet example to help calculate automation ROI for a few years now (a copy is available on request!)
The problem with calculating ROI for automation is that it is easy to do for some aspects but very difficult for others that are very important - and if those are not included in the calculation, they can be forgotten!
For example, if you are currently doing regression testing manually, test execution is probably taking a lot of time, and this is where you would like automation to make things more efficient - as it can. Once the tests are in their automated form, the effort needed from the testers to "kick off" the tests might be ten times less than running those tests manually. But if this is the only thing included in the ROI, it is easy to jump to the assumption that "OK, we only need a tenth of the testers (or, worse yet, none at all!)"
But some things will take longer especially at first, e.g. failure analysis time, and maintenance of the tests when the application changes. What about the effort to automate the tests and make sure they are structured well (so maintenance time will be minimized)? Ignore these and you are guaranteed to have problems, yet many ROI calculations do ignore them.
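To make the point concrete, here is a naive ROI sketch in Python that includes the costs the question raises - build effort, failure analysis, and maintenance - alongside the execution savings. Every number is invented for illustration; it is not taken from the spreadsheet mentioned above:

```python
# Hypothetical ROI sketch: all numbers below are invented for illustration.
manual_run_hours = 100        # effort to run the regression suite manually, once
automated_run_hours = 10      # "kick off" effort per automated run (ten times less)
analysis_hours_per_run = 5    # extra failure-analysis effort per automated run
runs_per_year = 12
build_hours = 400             # one-off effort to automate and structure the tests well
maintenance_hours_per_year = 120  # keeping tests in step with application changes

# Savings per year come from cheaper execution, minus the extra analysis effort...
saved = (manual_run_hours - automated_run_hours - analysis_hours_per_run) * runs_per_year
# ...while the costs are the build effort plus ongoing maintenance.
cost = build_hours + maintenance_hours_per_year
roi = (saved - cost) / cost
print(f"year-1 net hours saved: {saved - cost}, ROI: {roi:.0%}")
```

Drop the `analysis_hours_per_run` and `maintenance_hours_per_year` terms and the picture looks far rosier - which is exactly how the over-optimistic "a tenth of the testers" conclusion gets reached.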
In our recent book, "Experiences of Test Automation," there are some great examples of ROI calculations - but there are also many stories of people who were very successful without them!
Dorothy Graham has worked in testing for more than thirty years and is coauthor of four books: Software Inspection, Software Test Automation, Foundations of Software Testing, and Experiences of Test Automation: Case Studies of Software Test Automation. Dot was a founding member of the ISEB Software Testing Board, a member of the working party that developed the first ISTQB Foundation Syllabus, and has served on the boards of conferences and publications in software testing. Dot holds the European Excellence Award in Software Testing. Learn more about Dot at DorothyGraham.co.uk.