Sometimes we get so focused on solving the problem in front of us that it doesn't occur to us to ask whether we are solving the right problem. Linda Hayes finds that starting a new year makes her think less about what has been and more about what could be. In this column, she offers her thoughts on the validity of the way we approach the most variable of all factors: the user.
I was discussing test design with the test manager for a large enterprise application. We were looking at a particular screen whose layout was completely dynamic, depending on the actions of the user. She was bemoaning the fact that there were so many possibilities it took forever to test them all. I suggested that we identify the most common or likely scenarios and test only those. She countered that the software had to support any eventuality, so instead we should figure out how to use automation to test every possible outcome. Since she was the customer, her method prevailed, and a great deal was invested in trying to develop a sophisticated approach to the problem.
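To see why I was skeptical, consider some back-of-the-envelope math. The numbers in this sketch are invented for illustration, but the shape of the problem is typical: even a modest dynamic screen produces a combinatorial explosion of layouts, while the scenarios users actually follow number in the handfuls.

```python
from itertools import product

# Hypothetical dynamic screen: eight fields, each of which can be
# hidden, read-only, editable, or required, depending on user actions.
FIELDS = [f"field_{i}" for i in range(8)]
STATES = ["hidden", "read_only", "editable", "required"]

# Exhaustive testing: one test case per possible screen layout.
exhaustive = list(product(STATES, repeat=len(FIELDS)))
print(f"Exhaustive layouts to test: {len(exhaustive):,}")  # 4**8 = 65,536

# Scenario-based testing: only the layouts users actually produce.
common_scenarios = {
    "new_order":   ("editable",) * 8,
    "view_order":  ("read_only",) * 8,
    "quick_entry": ("required",) * 2 + ("hidden",) * 6,
}
print(f"Scenario layouts to test:  {len(common_scenarios)}")
```

Sixty-five thousand layouts versus three. And that's one screen.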
Aside from the fact that my experience told me this was a losing battle (after a great deal of time and effort, she eventually agreed), something about this whole discussion was unsettling, but I couldn't put my finger on it until a later, seemingly unrelated incident.
I was working with a different customer to help them document their business processes for testing, and it quickly became clear that there was no standard approach for performing many of their most critical processes. For example, one user navigated through five extra screens that her peers did not use. It turned out she was taught to do that by someone who used some of the same screens as part of a different process—like someone giving you directions starting from his own house. Because of these kinds of inconsistencies, the company had over a thousand test cases that, when analyzed, yielded frequent duplication resulting from multiple approaches to the same process.
Reflecting on this, I realized what these two incidents had in common: The users were uncontrolled variables. The essential randomness of user behavior, in turn, vastly complicated the test process. How can you test for interactions you can't predict?
That's when I realized we were solving the wrong problem. The answer is not to develop ever more test cases or more complicated test automation approaches. The real solution is to control the user.
Technically, it may be easier than you think. Organizationally, it's another matter.
Training Versus Automating
Even though every new system rollout is, hopefully, accompanied by rigorous training and thorough documentation, both are obsolete by the next release or the next new hire. Most business-critical software (as opposed to desktop productivity tools like word processors and spreadsheets) exists in a constant state of change as the business adapts its technology to fluid competitive and customer demands.
Unfortunately, training classes usually can't be justified for only one or two new features or new hires at a time, and pressure on delivery schedules doesn't always allow for updating documentation and training materials. So training becomes organic: Carla trains Darla, and by the time you get to Marla, variances have crept in. It's like the telephone game. The result, of course, is the very unpredictability that ends up distorting the test process.
But what if, instead of documenting the processes and training the users, we automated the processes that the users should follow? In other words, what if we trained the software, not the person?
The technology to do exactly that already exists. Many training systems and most test-automation tools can walk a user through a task and control her interaction within certain parameters. All you have to do is shift these tools from the training or testing environment to the production platform.
Think about it: User predictability and efficiency would improve, errors would be reduced, the learning curve for new employees would be shorter, and—best of all—we would know exactly what to test and how to test it. In fact, the automated processes could simply be pointed at the test environment as a comprehensive acceptance test.
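For concreteness, here is a minimal sketch of the idea; it is not any particular tool's API, and the process steps, URLs, and helper names are all invented. The point is that a process scripted once can drive a guided walkthrough in production and, pointed at the test environment, double as an acceptance test.

```python
# One scripted process, defined as data. In production it guides (and
# constrains) the user; against the test environment it becomes a test.
CREATE_ORDER = [
    {"screen": "customer_lookup", "field": "customer_id", "required": True},
    {"screen": "order_entry",     "field": "item_code",   "required": True},
    {"screen": "order_confirm",   "field": "approval",    "required": True},
]

def run_process(steps, base_url, get_input):
    """Walk the process step by step, rejecting anything off-script."""
    for step in steps:
        value = get_input(f"{base_url}/{step['screen']}", step["field"])
        if step["required"] and not value:
            raise ValueError(f"{step['field']} is required on {step['screen']}")
    return "complete"

# Production: run_process(CREATE_ORDER, "https://erp.example.com", prompt_user)

# Acceptance test: point the same script at the test environment.
canned = iter(["C-1001", "SKU-42", "approved"])
result = run_process(CREATE_ORDER, "https://erp-test.example.com",
                     lambda url, field: next(canned))
assert result == "complete"
print("acceptance test passed")
```

The process definition, not the individual user, becomes the source of truth for both training and testing.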
Sound radical? It might be, but it's not as crazy as trying to test for every possible outcome of any individual user's behavior in a given situation.
It's not impossible. I know of companies that have done this successfully. The trick is not implementing it; it is getting management buy-in to define and adopt the processes and to require the users to follow them. But maybe by educating the organization about the interplay between user behavior and testing requirements, we can do our part to effect a new level of automation.