The Future of System Maintenance

Summary:

The role of the tester has changed significantly over the years. Allow your mind to wander and think about how it might continue to change. Imagine a world with increased transparency for code changes and more visibility of details. What could the future of system maintenance look like?

In a recent project, I was reviewing the application’s code base for anything that might simplify my testing. I found tools and utilities providing the usual transparency of important variables, logs showing transaction steps, and test data generators.

As I got more familiar with the application, I began to realize that I needed a test harness, or at least a richer set of tools. What I had was workable, but I wanted deeper visibility into operating code at varying levels of detail and increased transparency in code changes. As I imagined my wish coming true, I saw a future tester in my mind’s eye and mused about how the tester’s role had grown in responsibility.

As business teams grew to value testability and transparency, they sought the advice and guidance of their testing teams in both system design and system maintenance. The combination of their technical knowledge and their ability to drive testable and maintainable domain solutions served to advance testers’ responsibility in many aspects of enterprise operations. Testing became an integral role throughout the project and product lifecycles.

It Starts with a Typical Usability Problem

While the user was checking out, she could not locate the Submit button. She happened to scroll down the page and found it in an odd place near the left side of the screen. She completed her transaction and received her confirmation email.

With present-day monitoring tools, this typical issue may go unrecognized for months. But in this future, events like Page Not Found errors and application exceptions are just the starting point for a smart application profiler.

The application profiling system, or profiler, collects user actions, reaction times, and browser and network properties. Based on customized trend thresholds for each transaction in the application, it alerts the system change administrator (formerly known as a tester). Before that alert, many operations and evaluations occur.
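If I were to sketch that alerting step in Python, it might look something like this. Every name and number here is invented for illustration; no real profiler API is being described:

```python
from dataclasses import dataclass

@dataclass
class TransactionStats:
    """Rolling metrics the profiler might collect per transaction."""
    name: str
    avg_reaction_ms: float   # mean user reaction time in the current window
    baseline_ms: float       # historical baseline for this transaction
    alert_ratio: float       # customized trend threshold, e.g., 1.5

def needs_alert(stats: TransactionStats) -> bool:
    """Flag a transaction whose reaction time drifts past its threshold."""
    return stats.avg_reaction_ms > stats.baseline_ms * stats.alert_ratio

# Hypothetical numbers: checkout has drifted to 2.1s against a 1.2s baseline.
checkout = TransactionStats("cart-checkout", 2100.0, 1200.0, 1.5)
if needs_alert(checkout):
    print(f"ALERT: {checkout.name} exceeded its trend threshold")
```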

The profiler’s UI experience monitor detects a drop in the cart checkout workflow’s efficiency since the last change to the web page. The data suggest that in many cases, a scroll is required to locate the button that completes the workflow. The page dwell duration has also risen significantly above its average, and checkout abandonment has increased, together indicating a usability issue. The profiler reviews the page layout against usability criteria, similar pages in other transactions, and recent changes to the application.
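A few of the monitor’s heuristics could be pictured as simple checks against baseline metrics. The metric names and cutoffs below are my own assumptions, not part of any real monitor:

```python
def usability_findings(metrics: dict, baseline: dict) -> list:
    """Checks a UI experience monitor might run on the checkout page."""
    findings = []
    if metrics["scroll_to_submit_rate"] > 0.5:
        findings.append("most users scroll to locate the Submit button")
    if metrics["dwell_seconds"] > 1.5 * baseline["dwell_seconds"]:
        findings.append("page dwell duration significantly above average")
    if metrics["abandonment_rate"] > 1.25 * baseline["abandonment_rate"]:
        findings.append("checkout abandonment is rising")
    return findings

# Metrics shaped like the incident described above (values invented).
current = {"scroll_to_submit_rate": 0.72, "dwell_seconds": 41.0,
           "abandonment_rate": 0.18}
history = {"dwell_seconds": 22.0, "abandonment_rate": 0.09}
print(usability_findings(current, history))
```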

The profiler logs a change request for the button location and revisions to page content, and it changes the cart checkout workflow status from “compliant” to “monitoring.” The profiler will monitor the workflow until it returns to its original efficiency. The last ten thousand transactions through the cart checkout workflow are also logged into the application history journal.
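The status change and journaling step could be as small as a state flag plus an append-only log. Again, a sketch with invented names:

```python
from enum import Enum

class WorkflowStatus(Enum):
    COMPLIANT = "compliant"
    MONITORING = "monitoring"

def open_change_request(workflow: dict, recent_txns: list, journal: list,
                        keep: int = 10_000) -> dict:
    """Demote the workflow to 'monitoring' and journal its most recent
    transactions so candidate fixes can later be replayed against them."""
    workflow["status"] = WorkflowStatus.MONITORING
    journal.extend(recent_txns[-keep:])
    return {"workflow": workflow["name"], "reason": "usability regression"}

journal = []
cart = {"name": "cart-checkout", "status": WorkflowStatus.COMPLIANT}
request = open_change_request(cart, ["txn"] * 12_000, journal)
print(request, len(journal))  # keeps the last 10,000 transactions
```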

The change request triggers the requirements generation engine to create expectations for workflow efficiency improvements. It also triggers the implementation generator to create a list of code changes that satisfy the workflow efficiency goals. The suggestions are logged and linked to the change request.
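Conceptually, this is an event pipeline: the change request fans out to two generators, and their outputs are linked back to it. A stub sketch, since both engines exist only in this daydream:

```python
def on_change_request(request, requirements_engine, implementation_generator, link):
    """Fan a change request out to both generators and link the results back."""
    expectations = requirements_engine(request)           # efficiency goals
    suggestions = implementation_generator(expectations)  # candidate code changes
    link(request, expectations, suggestions)
    return suggestions

# Stub wiring, with the real engines replaced by placeholders.
suggestions = on_change_request(
    {"id": "CR-1", "workflow": "cart-checkout"},
    lambda req: ["restore checkout efficiency to baseline"],
    lambda goals: ["fix-a: move Submit button", "fix-b: revise page content"],
    lambda *records: None,   # logging/linking elided
)
print(suggestions)
```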

The scenario analysis and outcomes system works with the source management system to review the list of code changes from the implementation generator. Each code change is implemented, deployed to the production copy environment, and evaluated using the transactions logged in the application history journal.
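A replay loop captures the idea. Here, deploy_to_copy and measure_efficiency are stand-ins for whatever the real system would provide:

```python
def evaluate_candidates(candidates, journal, deploy_to_copy, measure_efficiency):
    """Replay journaled transactions against each candidate change in an
    isolated production copy and record an average efficiency score."""
    scores = {}
    for change in candidates:
        env = deploy_to_copy(change)   # fresh production-copy environment
        results = [measure_efficiency(env, txn) for txn in journal]
        scores[change] = sum(results) / len(results)
    return scores

# Stub run: pretend each fix yields a fixed efficiency per transaction.
journal = ["txn"] * 100
scores = evaluate_candidates(
    ["fix-a", "fix-b", "fix-c"], journal,
    deploy_to_copy=lambda change: change,
    measure_efficiency=lambda env, txn: {"fix-a": 0.91, "fix-b": 0.84,
                                         "fix-c": 0.77}[env],
)
print(scores)
```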

These simulations provide the basis for deciding which solutions best address the issue. In as little as ninety minutes, the scenario analysis and outcomes system reviews the many solutions provided by the implementation generator and assigns each a probability of improving the efficiency of the cart checkout workflow. The top three improvements, along with workflow trends, are included in the alert message to the system change administrator.
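Choosing the top three could then be a simple sort over those estimated probabilities (values invented, continuing the sketch above):

```python
def top_improvements(scores: dict, k: int = 3) -> list:
    """Rank candidates by their estimated probability of improving the
    workflow and keep the top k for the administrator's alert."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

print(top_improvements({"fix-a": 0.91, "fix-b": 0.84,
                        "fix-c": 0.77, "fix-d": 0.40}))
```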

The system change administrator reviews the request against enterprise architectural direction, infrastructure support, business risk, collateral impact, similar requests, recent changes, business need, and business priority. She selects the best solution and creates a story card. She links the change request, uses the built-in task generator to create a list of tasks, and adds a number of tests from the cart checkout workflow regression suite and recent test plans to evaluate risk. The story card is queued for execution, and the administrator catches up on a few testing blogs.

The story card repository reviews her card, implements the selected solution, and deploys it to the production copy. After a final round of simulations, the administrator’s test results combine to provide coverage of functionality, business intent, usability, and solution fitness. At the next deployment cycle, the solution is deployed to the production environment.

As new customers experience a simplified workflow, the monitoring records efficiency gains. In time, the workflow is compliant again. . . .

Back to Reality

Suddenly, the alert of an instant message broke my concentration. I was back in the present, with only a pleasant memory of my daydream about the way system maintenance could be in the future.
