A few simple strategies to reduce software test redundancy
While comprehensive test coverage is a must, software teams must make a conscious effort to keep test suites from becoming bloated with redundant test cases that hamper productivity.
Software test redundancy occurs when multiple test cases share partially or completely overlapping objectives, meaning the same code segments, functionalities and features are tested more than once unnecessarily. Without a way to prune redundant tests from a suite while keeping the necessary ones intact, testing teams risk creating inefficiencies that ultimately disrupt development velocity.
In this tip, we examine how testing teams can make smart decisions about what to keep, update, archive or delete to maintain a manageable inventory of test scripts while still ensuring effective coverage, as well as some tools that can help.
Making test optimization decisions
The key to mitigating test redundancy is continual test optimization, which begins with designing tests to be atomic (one objective per test case) and autonomous (no test relies on another). The goal of test optimization is to get the most value from the fewest test cases, which involves assessing both individual test cases and the overall test suite.
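As a quick illustration, here is a minimal pytest-style sketch that contrasts a test bundling two objectives with atomic, autonomous alternatives. The Cart class is a toy stand-in defined inline purely so the example runs on its own; it is not drawn from any real application.

```python
# Toy class defined inline only to make the sketch self-contained.
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def remove(self, sku):
        self.items.pop(sku, None)


# Not atomic: two objectives (adding and removing) share one test,
# so a failure doesn't say which behavior broke.
def test_add_and_remove_item():
    cart = Cart()
    cart.add("sku-1")
    assert cart.items == {"sku-1": 1}
    cart.remove("sku-1")
    assert cart.items == {}


# Atomic and autonomous: each test covers exactly one objective and builds
# its own state, so neither relies on the other running first.
def test_add_item_records_quantity():
    cart = Cart()
    cart.add("sku-1")
    assert cart.items == {"sku-1": 1}


def test_remove_item_clears_entry():
    cart = Cart()
    cart.add("sku-1")
    cart.remove("sku-1")
    assert "sku-1" not in cart.items
```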
Optimizing a test suite involves a close examination of each test case's relevance to current operations and value to the organization. Eventually, hard decisions need to be made regarding whether to delete, archive, rewrite, update or deprioritize one or more individual tests.
While specific situations may affect the decision, here are a few general guidelines software teams can use to determine the best course of action:
- Rewrite/update. Test cases that cover more than one test objective, rely on other tests to run correctly or run too long are often candidates for remediation efforts.
- Delete. Test cases that consistently return flaky, incorrect or uninformative results should be removed completely. However, take care to ensure those test cases aren't valid candidates for a rewrite or update.
- Archive. Test cases can become obsolete when certain application components or features are removed from the codebase. Rather than deleting these test cases, teams should consider archiving them; if those components or features return, the associated tests can be reinstated with minimal updates. This often applies to test cases for legacy functionality or uncommon edge cases as well. One way to archive a test in code is shown in the sketch after this list.
- Deprioritize. Depending on how critical the associated functionality is, teams can opt to simply deprioritize certain test cases within the overall suite. For instance, test cases that deal with high levels of code complexity or cover features tied to crucial business processes should take higher priority than those associated with straightforward, lower-level functionality. The sketch after this list also shows one way to tag lower-priority tests.
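One way to put the archive and deprioritize guidelines into practice is with pytest markers. The marker name and the tests below are illustrative assumptions, not prescribed by the guidelines or any particular tool:

```python
import pytest


# Archive: keep the test in the repository but exclude it from normal runs,
# recording why it was shelved and when it could be reinstated.
@pytest.mark.skip(reason="Loyalty-points feature removed in v4.0; reinstate if it returns")
def test_loyalty_points_accrual():
    ...


# Deprioritize: tag lower-value tests with a custom marker so routine runs
# can exclude them (pytest -m "not low_priority") while nightly or
# pre-release runs still execute everything.
@pytest.mark.low_priority
def test_footer_copyright_year():
    ...
```

Custom markers such as low_priority should be registered in the project's pytest configuration so pytest does not flag them as unknown.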
Best practices for reducing test redundancy
Test optimization is a dynamic process: test suites change as the software changes, so teams must continually audit them to keep redundancy to a minimum. For example, regression test suites should be optimized after every release. Unfortunately, performing a post-release review of an entire test suite -- especially a regression suite -- can take more time than teams have.
However, teams can keep these review times to a minimum by actively dealing with flaky tests and deprecated segments of code as they encounter them. Tools such as Hexawise and Ranorex DesignWise can help with this -- especially if applications require complex testing routines -- by enabling teams to input test scenarios and run algorithms that identify redundant test cases. As an added benefit, these types of tools typically provide test coverage reporting that comes in handy when auditing the overall suite.
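To make the idea concrete, the sketch below flags tests whose covered lines are a subset of another test's coverage. It is a rough illustration of the general concept, not a description of how Hexawise or Ranorex DesignWise actually work, and it assumes per-test coverage data (for example, collected with coverage.py's dynamic contexts) is already available:

```python
def find_redundancy_candidates(coverage_by_test):
    """Return (possibly_redundant, covering) pairs where the first test's
    covered lines are a subset of the second test's covered lines."""
    candidates = []
    tests = list(coverage_by_test.items())
    for name_a, lines_a in tests:
        for name_b, lines_b in tests:
            if name_a != name_b and lines_a <= lines_b:
                candidates.append((name_a, name_b))
    return candidates


if __name__ == "__main__":
    # Hypothetical per-test coverage data: test name -> set of covered lines.
    coverage = {
        "test_add_item": {"cart.py:10", "cart.py:11"},
        "test_add_and_remove_item": {"cart.py:10", "cart.py:11", "cart.py:20"},
        "test_checkout_total": {"checkout.py:5", "checkout.py:6"},
    }
    for redundant, covering in find_redundancy_candidates(coverage):
        print(f"{redundant} may be redundant with {covering}")
```

A flagged test isn't automatically safe to delete -- tests with overlapping coverage can still assert different behaviors -- so output like this is best treated as a review queue rather than a deletion list.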