People have always been keen to use tools. In fact, tool use helped make us sapient beings as we evolved our implements from primitive objects into complex machines. All that history has led to our current exponential growth of computing capabilities and widespread use of programming.
Now people are able to create tools for intellectual work, and even to attempt to augment or replace it with automation. It was only natural to apply programming to the tasks of programming itself. Formalized, intellectually mechanical tasks such as syntax checking, compiling, and linking were successfully automated and are no longer considered part of programming. But testing resisted automation for decades.
Even now, software testing is still largely misunderstood. Many see it as a formalistic comparison of actual results against the expected results provided in the requirements; others see it as a series of repetitive, repeatable operations. These superficial understandings are why attempts to automate testing fail.
A decade ago, Michael Larsen and I discussed test automation problems from multiple angles, and we came up with some main factors to focus on, expressed in the acronym TERMS: Tools and Technology, Execution, Requirements and Risks, Maintenance, and Security.
Here are some examples from my professional experience illustrating how these factors define automation success or failure.
Tools and Technology
There are dozens of commercial, free, and open source tools. Which one to pick ultimately boils down to how well the tool supports your application’s technologies, and how well it will keep supporting them in the future.
Technical details are critical. Earlier versions of Selenium, for example, were helpless against browser pop-ups and custom controls. HP QTP, on the other hand, strove to support as many UI technologies as possible, which made the tool quite heavyweight and slow.
Purchasing a tool is an expense that managers must justify. Developing a test suite is a time investment that needs to pay off. Some organizations follow a sales pitch, and others prefer to do a proof of concept or a pilot project. Whatever the approach taken, tools and technology will have a lasting effect on your automation project.
Execution
A test automation salesperson is preparing to give our business and testing teams a demo on how their groundbreaking tool is going to “revolutionize manual testing.” Management is excited. Testers look a bit skeptical.
One of the testers walks him through a routine task: “First, request the daily transactions report. Then scan through the records until you see the client’s name. If there is more than one record, check the date, time, or amount to find the transaction you need. The report displays twenty records per page. Click Next to look further.”
At this point the salesperson looks very puzzled. The managers seemingly don’t understand his confusion. The execution steps are straightforward, and the requirements are simple. Sure, automation can handle this. Right? As it turns out, not quite.
First, operations like “scan through the records” cannot be recorded as a simple sequence of button presses and mouse clicks. An algorithm must be conceived, including programming steps such as “Identify table object,” “Parse data rows, comparing the first name from one column and the last name from another against data from some source,” and so on. And what should the script do with records containing identical names?
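To make that concrete, here is a minimal sketch, in Python with Selenium, of what “scan through the records” becomes once spelled out for a machine. The table ID, column positions, and Next link are hypothetical, and the duplicate-name question has to be answered in advance, here, crudely, by also matching the amount.

```python
from selenium.webdriver.common.by import By

def find_transaction(driver, first_name, last_name, amount):
    """Walk the paginated daily report until a matching record is found."""
    while True:
        # "Identify table object": locate the report rows on the current page.
        rows = driver.find_elements(By.CSS_SELECTOR, "#daily-report tbody tr")

        # "Parse data rows": compare name columns against our source data.
        for row in rows:
            cells = row.find_elements(By.TAG_NAME, "td")
            if (cells[0].text == first_name
                    and cells[1].text == last_name
                    and cells[3].text == amount):  # disambiguate duplicates by amount
                return row

        # Twenty records per page: click Next, or give up on the last page.
        next_links = driver.find_elements(By.LINK_TEXT, "Next")
        if not next_links or not next_links[0].is_enabled():
            return None  # there is no "investigate further" instruction
        next_links[0].click()
```

Every decision a human makes at a glance, a duplicate name, a missing Next link, an empty report, must be anticipated and coded before the script ever runs.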
It becomes clear that even a straightforward task can prove challenging to automate. This is a common problem. Any experienced programmer knows that automation is possible only when every step is deterministic and completely specified. Programming languages do not include instructions like “Investigate” or “Try this or that.”
Requirements and Risks
Confusing the means (operations) with the ends (requirements) is a common pitfall for beginner automation programmers. Business requirements never come in a fully deterministic, completely specific form; they never have to. Skilled humans explore, investigate, draw on examples and prior knowledge, and bring gaps and discrepancies back to the business stakeholders. Such vague requirements are still perfectly testable in the human sense, but much less so in the sense of automated verification.
Different teams approach this challenge differently. Some pair a tester with a programmer to define those atomic, granular steps and requirements. Some teams are lucky enough to have a hybrid employee: either a tester capable of good coding or a programmer who has mastered the testing mindset.
But regardless of the approach, the resulting automation is always a simplified, dumbed-down, mechanized parody of a skilled human performance. Skilled testers are capable of improvising while executing tests and figuring out requirements on the fly.
As for risk, we’ve already seen that, by its very nature, automation has fundamentally limited observation capabilities. Once created, a robust automation script can be run thousands of times, but each run remains the same particular case. This greatly reduces the scope and quality of testing compared to skilled human exploration, and the trade-off introduces new kinds of risk. But there’s more. Insufficiently detailed or frequently changing requirements are a common headache for testers, but with automation they also become a source of new risk: with unstable requirements, the scripts will demand endless maintenance.
Maintenance
“You don’t need to create any new automation scripts. We have three hundred of them. You just need to make them work.” Yes, I actually heard this once in a job interview.
Creating an automation script is not difficult. In fact, the record-playback feature available in many tools makes creating scripts rather trivial. These scripts might even be useful at first. But then requirements get updated, UI screens get altered, API parameters change, and data is no longer relevant. Here comes the maintenance nightmare.
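As an illustration, here is roughly what a recorded script looks like; the URL, locators, and data are hypothetical. Every control description and value is hardcoded exactly as captured, so the first renamed button or redesigned screen breaks it.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/transactions")  # hypothetical application URL

# Everything below is hardcoded exactly as the recorder captured it:
driver.find_element(By.ID, "txtClientName").send_keys("John Smith")
# An absolute XPath like this breaks the moment the page layout changes:
driver.find_element(
    By.XPATH, "/html/body/div[2]/form/table/tbody/tr[4]/td[2]/input").click()
driver.find_element(By.ID, "btnSubmit").click()
assert driver.find_element(By.ID, "lblStatus").text == "Posted"
```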
Maintenance is often overlooked, but it’s a crucial factor that differentiates between automation success and failure.
Maintenance involves debugging and testing the scripts themselves. Then there’s updating input data and expected results, and keeping the interaction points with the application under test, be that API or UI, up to date. In UI automation, for example, updating the descriptions of UI controls is a significant part of maintenance. All of that together often costs time and effort comparable to, or exceeding, the initial cost of creating the automation. But, unlike the creation phase, maintenance is not expected to be so costly, and there’s simply not enough time to debug the scripts when testing results are urgently needed.
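One common way to contain that cost, sketched here with the same hypothetical locators as above, is the page object pattern: all UI control descriptions live in one class, so a renamed control means one edit instead of a hunt through hundreds of scripts.

```python
from selenium.webdriver.common.by import By

class TransactionPage:
    """Page object: every UI control description lives in one place."""
    CLIENT_NAME = (By.ID, "txtClientName")  # hypothetical locators; when the
    SUBMIT = (By.ID, "btnSubmit")           # UI changes, update them here only
    STATUS = (By.ID, "lblStatus")

    def __init__(self, driver):
        self.driver = driver

    def post_transaction(self, client_name):
        self.driver.find_element(*self.CLIENT_NAME).send_keys(client_name)
        self.driver.find_element(*self.SUBMIT).click()
        return self.driver.find_element(*self.STATUS).text
```

This doesn’t eliminate maintenance; it only concentrates it where it can be done quickly.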
Different teams cope differently with the challenges of maintaining automation scripts. Some keep “babysitting” their scripts, correcting mistakes on the go. Some prefer to inspect the application first to find issues likely to make the automation scripts fail. And that brings us to the automation maintenance paradox: what was the point of fixing your scripts if you just tested the app by hand?
Security
My automation consulting gig started with somewhat unusual precautions: two pieces of ID, and a signed legal statement acknowledging that I would never disclose my login credentials or store them in any written form. That may sound a bit strong, but it made total sense. A large, international financial organization processing millions of investment accounts must take information security seriously.
My first objective was to evaluate the effectiveness and efficiency of an automation suite used for posting test transactions. With literally thousands of varying business rules, such verification was indeed a difficult task. What did I find? The automation suite was stored on the corporate network and used by both developers and testers. To log in to the applications, the scripts required users’ credentials to be saved in a plain-text file. Some of those users had accounts with access to the preproduction and production environments. Luckily, no one had yet taken advantage of this security flaw.
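For contrast, here is a minimal sketch of a safer pattern, with hypothetical variable names: credentials are read from the environment at run time, so nothing sensitive sits in a file on the shared network, and the suite refuses to run against protected environments.

```python
import os

# Read credentials from the environment at run time instead of from a
# plain-text file on the corporate network. Variable names are hypothetical.
username = os.environ["TEST_APP_USER"]
password = os.environ["TEST_APP_PASSWORD"]

# Fail fast if the suite is pointed at a protected environment.
if os.environ.get("TEST_ENV", "qa").lower() in ("preprod", "prod"):
    raise RuntimeError("automation credentials must not target production")
```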
This story is not as uncommon as you may think. Through a decade of experience with automation, I have observed dozens of cases where automation compromised security in some way, from tweaked firewall or network settings to “backdoor” passwords or API calls.
Defining the TERMS
Tools and Technology, Execution, Requirements and Risks, Maintenance, and Security offer a framework for thinking about automation projects. These factors can be applied in the planning phase or in assessing existing automation.
Automation is a service to testing: a tool that may prove useful or turn out wasteful. Aim to improve your software testing with automation without introducing unbearable costs or risks.