If you aren’t measuring the coverage your regression tests provide, you may be spending too much time for little benefit. Consider the value of your regression tests as you create and manage them. You need to be smart about the regression tests you maintain in order to gain the maximum value from the work put into creating them, running them, and analyzing their results.
Do you know how many regression tests you have? Do you know what the coverage of your regression tests is? Do you know the value of your regression tests?
These may all be pretty simple questions, but your team needs to be asking them; knowing the answers is very important to your test plan. You can't manage what you don't measure!
As I conduct more testing and agile reviews with organizations, irrelevant regression tests seem to be an increasingly common problem. In many instances, I see a good portion of tests that are redundant, repetitive, or not exercising any different code (i.e., not expanding code coverage). This is becoming even worse with the increase in automation. Because automated tests are perceived to take almost no time to run, executing out of hours or in parallel while the operator does other tasks, people think it is less necessary to manage the volume of regression tests, and the associated management tasks disappear.
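One way to ground those answers in data is to measure what the regression pack actually exercises. Here is a minimal sketch, assuming a Python codebase tested with pytest and measured with coverage.py; the package name and the regression marker are placeholders for your own project:

```python
# A minimal sketch: measure what the regression pack actually exercises.
# Assumes pytest and coverage.py; "myapp" and the "regression" marker
# are placeholders for your own package and marker names.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])
cov.start()

pytest.main(["-m", "regression"])  # run only the tests marked as regression

cov.stop()
cov.save()

# Lines no regression test ever touches are either gaps to fill or a
# hint that some of the tests you do have are redundant.
cov.report(show_missing=True)
```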
You have to be smart when designing your regression tests. That’s why I use the SMART acronym to remember what’s important to consider when creating and managing them (a sketch of how these attributes might be recorded follows the list):
- Specific: Tests have to be clear about what requirement is being tested.
- Measurable: Tests must define how much of the requirement they cover so that their actual value can be measured.
- Achievable: Every regression test has to have a standard value defined for it. This could be measured in the time required to create, run, or maintain it, or in when it last found a defect.
- Relevant: The regression test must give insight into the continuing function of the software, proving its fitness for purpose as well as its basic correctness.
- Time: The value of a regression test must be expressed in time. A test is only meaningful when you know its time dimension: both the time it actually takes to run and how long it will continue to add value.
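To make these attributes actionable, it helps to record them against every test. The sketch below is one possible structure for doing so, not a prescribed format; every field name is illustrative:

```python
# One possible record for the SMART attributes of a regression test.
# Field names are illustrative, not a prescribed format.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RegressionTest:
    name: str
    requirement: str                   # Specific: which requirement it checks
    coverage_notes: str                # Measurable: what part of the requirement it covers
    minutes_to_run: int                # Time: cost of every run, analysis included
    minutes_to_maintain: int           # Achievable: upkeep cost per release
    last_defect_found: Optional[date]  # Relevant: when it last earned its keep
```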
Let’s dig deeper into what you should consider when creating, managing, and evaluating regression tests.
Know the Value of Your Regression Tests
The first thing we need to consider for each individual regression test is its return on investment. If the effort to create, run, and maintain a test is neither giving us confidence in the system under test nor finding regression defects, then we have to question why we are spending that time.
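As a back-of-the-envelope illustration (the figures and the formula are made up for this example, and the confidence a passing test provides is deliberately left out), the calculation might look like this:

```python
# A back-of-the-envelope ROI check for a single regression test.
# The figures and the formula are illustrative, not a standard.
def cost_per_defect(create_hrs, runs, run_hrs_each, maintain_hrs, defects_found):
    total_cost = create_hrs + runs * run_hrs_each + maintain_hrs
    if defects_found == 0:
        return float("inf")  # all cost, no defects: question this test
    return total_cost / defects_found

# Example: 4h to create, run 50 times at 0.1h each (including analyzing
# the results), 2h of maintenance, 1 defect found -> 11.0 hours per defect.
print(cost_per_defect(4, 50, 0.1, 2, 1))
```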
I want to highlight in particular the “run” part of the equation. I have heard people say many times that there is no cost to run regression tests after they’re automated. While a good automation framework may allow regression runs to be kicked off by the click of a button and run unaided, there is still a requirement to analyze the results, and that does incur a cost, even if those results are summarized in a dashboard.
This also assumes the scripts have been created within the framework to deal with unexpected events, such as pop-up messages, that could otherwise stop a regression run. If the tests require a human to step in or to analyze the results, it is worth questioning whether they are truly automated.
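As an example of that kind of resilience, a UI framework can absorb a stray alert instead of letting it stall the run. A minimal sketch, assuming Selenium WebDriver for Python and an existing driver session:

```python
# A minimal sketch of absorbing an unexpected pop-up mid-run, assuming
# Selenium WebDriver; "driver" is an already-running WebDriver session.
from selenium.common.exceptions import NoAlertPresentException

def dismiss_unexpected_alert(driver):
    """Dismiss a stray alert so it cannot stall the regression run."""
    try:
        driver.switch_to.alert.dismiss()
        return True   # an alert was present and has been handled
    except NoAlertPresentException:
        return False  # nothing to handle; carry on with the run
```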
Selecting Tests for Regression
When it comes to creating tests, I have used the term selection deliberately. I do not believe you should be writing a regression test as a separate test; it should be a reuse of a test you have already written for a feature set. When writing progression tests, we should simply be marking those that we think will be of use as regression tests in the future.
Let’s take this thought to the next step. When we do the analysis and design of our tests, why not mark tests that should be used for regression before the test case or session sheet is even written? The regression test simply gets scripted as an automated test immediately. This saves both the preparation time to write the test and the manual execution that may otherwise happen until it gets automated.
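In pytest, for instance, that marking can be a one-line marker applied the moment the test is written, so the regression pack becomes a filtered view of the suite rather than a separate artifact (the test and function names here are invented):

```python
import pytest

def calculate_total(net, tax_rate):
    """Stand-in for the real function under test."""
    return net * (1 + tax_rate)

# Mark the test for the regression pack the moment it is written.
# ("regression" should be registered under [pytest] markers in
# pytest.ini to avoid unknown-marker warnings.)
@pytest.mark.regression
def test_invoice_total_includes_tax():
    assert calculate_total(net=100, tax_rate=0.25) == 125
```

The regression pack is then just the filtered run pytest -m regression; there is no separate regression suite to write or keep in sync.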
This is where agile teams need to focus; they cannot afford to test the same thing once manually and then again as automated. This would not be a very efficient way to test. Let automation take care of what it is good at—checking—and let humans focus on what they do best.
In addition to being selective about what we create as regression tests, we also need to be careful about which regression tests we execute. Here, I am particularly focused on agile teams. With the increase in communication and collaboration between business analysts, developers, and testers, I see no reason a whole regression pack needs to be run for each new release or change of configuration. The Three Amigos strategy is a great practice for having conversations about regression tests. With the combined expertise of the BA from the business value perspective and the developer from the technical aspect working with the tester, we should be able to reduce the number of regression tests to a handful per user story.
We need to become more focused on our regression testing, targeting it to what we have just changed and the potential impacts on tests that we have already executed and passed for that feature. We need to have confidence that we have identified the correct risks and put in place the tests necessary to mitigate them, instead of running multiple regression tests “just in case.” We need to be smart about how we spend our time.
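One illustrative way to do that targeting is a simple map from changed areas to marker expressions; the area and marker names below are hypothetical:

```python
# An illustrative map from changed areas to pytest marker expressions.
# Area names and markers are hypothetical placeholders for your system.
AREA_TO_MARKER = {
    "billing": "billing",
    "auth": "auth",
}

def marker_expression(changed_areas):
    """Build a pytest -m expression covering only the areas just changed."""
    markers = [AREA_TO_MARKER[a] for a in changed_areas if a in AREA_TO_MARKER]
    if not markers:
        return "regression"  # unknown impact: fall back to the full pack
    return "regression and (" + " or ".join(markers) + ")"

# A release touching only billing runs a handful of tests, not the pack:
print(marker_expression(["billing"]))  # -> regression and (billing)
```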
Managing Regression Tests
There should be tasks on your task board detailing continuous review of your regression tests to ensure that they are still providing the benefit to your project that you expect. As situations change over time, here are some aspects you should consider (a sketch of such a review follows the list):
- Test level: This relates to the period within the software development lifecycle itself. When progressing through unit, integration, system, and user acceptance tests, the regression tests are going to differ. As the team adds more stories and features to the working system, consider which regression tests you want to retire at which level.
- Time since first run: The system, application, or feature being promoted to the live production environment is another trigger point for active management. As the maturity of the system grows, certain tests may deliver less value, particularly if no development has been done in that area for a period of time.
- Changed functionality: Changes in functionality or features also should trigger a review of the related regression tests. If changes are actioned as they occur, regression packs will remain current.
- Archived or retired applications: As applications or even systems are archived or retired, the related regression tests also need to be marked as such.
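Continuing the RegressionTest sketch from earlier, a periodic review script could flag candidates for that retirement conversation automatically; the one-year threshold is an example, not a recommendation:

```python
# A periodic review sketch that flags retirement candidates from the
# RegressionTest records above; the one-year threshold is an example.
from datetime import date, timedelta

def retirement_candidates(tests, stale_after_days=365, today=None):
    """Yield names of tests that have not justified their cost recently.

    `tests` is any iterable of records exposing .name and
    .last_defect_found (such as the RegressionTest sketch earlier).
    """
    today = today or date.today()
    stale = timedelta(days=stale_after_days)
    for t in tests:
        if t.last_defect_found is None or today - t.last_defect_found > stale:
            yield t.name  # a candidate to review with the team, not to delete blindly
```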
All regression tests will need to be retired at some point, even if it is at the stage that the system, application, or feature becomes decommissioned. But it is likely that most regression tests should be archived long before this. Proper maintenance of regression tests can have a significant impact on your delivery. Keeping regression tests that are no longer required or of value just wastes time and effort.
If you aren’t measuring the coverage your regression tests provide, you may be spending too much time for little benefit. Consider the value of your regression tests as you create and manage them. We need to be smart about the regression tests we maintain in order to gain the maximum value from the work we put into creating them, running them, and analyzing their results. If they do not provide confidence in the system or find defects, then why are they in your regression pack?
User Comments
Thank you for the great article, Leanne. We are currently working on a project to go through our regression tests and find where the gaps are and what we can eliminate.
Hi Robby, let me know how you go. I would be particularly interested in how many tests you retire or archive when you do the review.
Good advice, but I already stumble at the "S": "Tests have to be clear about what requirement is being tested." While I agree that this is a necessity, the destructive rampage of Agile on QA has made it so that QA no longer gets firm, or often even loose, requirements. I see more and more a total disconnect between what the business needs (but does not tell Dev/QA), what developers implement (which is often what developers want because it is cool... but dysfunctional), and what QA tests (without guidance of any kind, we are forced to make up stuff to test for).
In a more waterfallish approach, there was more time spent upfront to determine and document requirements. While I in no way advocate a return to a strict waterfall approach, agile teams need to move away from the anarchy and toward putting pressure on the business to detail exactly what the needs are. Otherwise we end up in endless debates and get stuck in refactor and redesign cycles that waste a lot of resources. Quoting Mike Holmes: "If you do it, do it right the first time!"
Hi Tim, this is something I see all too often with agile teams. If the user stories and acceptance criteria are not well written, it makes everything difficult for the team. Try putting something like "All user stories must pass INVEST checks" into your DoR.
Tim, $.02.
I was recently part of a project team that ran a Proof of Concept on Agile Testing (migrating from traditional Waterfall). While not without its bumps and bruises, the entire process hinges on communication: clearly defined, well-estimated user stories with enough understanding to execute (dev/test) and a clear definition of done. If you have these aspects, there should be far fewer questions about what testers should test. Everything should mature in the sprint ("final" design documents, solutions, test scripts, etc.), but it should never be chaos, and it all starts in Backlog Grooming with the information gathering and the estimation (Three Amigos involved). Sprint 0 and Backlog Grooming, IMO, are so critical to the ultimate success of Agile (and carving out time to continue it DURING SPRINTS).
Agile is not an excuse to embrace anarchy. It is a clearly defined set of principles that govern a software development methodology. When we start a sprint, our testers execute to the definition of done and, because of the daily stand-up meetings and other related ad hoc meetings, understand the intended functionality and flows that should be implemented (because the BSA is not a programmer and things can morph in implementation). I would leverage your Retrospective as an opportunity to raise your concerns and then take a hard look at your Backlog Grooming process. If testing isn't involved early and communication isn't flowing, then you're never going to get away from the anarchy.
Great article. I am about to embark on a project that will automate our Oracle regression test scripts. This system will be decommissioned in about 2-3 years, so I don't want to waste too much time creating a bunch of regression scripts that will be obsolete in a year. We are basically automating them in order to go into maintenance mode with only business-critical changes, if necessary. I like using the SMART option for planning out the direction we should take, but are there any other suggestions you would make in this situation?
Diane, I think the key for me would be to make sure that you calculate the ROI for each test that you automate, taking into consideration the short timeframe. Obviously the value of the functions and the product risks, as I mention in the article, are key factors.
Happy to discuss in more detail offline. You can contact me via LinkedIn.
Some great suggestions, Mike, all of which I would agree with. I would have testers in the Grooming sessions as well. I use the Three Amigos a lot, and it is a sign of a good team if you see people huddled together throughout the day.
Nice post. Considering each regression test in terms of its return on investment is an awesome idea.