The need to demystify software testing for developers
With demands for functionality changing as quickly as they do in today's dynamic application environments, the case for developers to learn core testing skills may be stronger than ever.
Given today's increasingly dynamic software lifecycle demands, many organizations are starting to realize that testing can no longer sit outside the average developer's core responsibilities. Modern software developers need the skills not only to build applications that are readily testable, but also to create many of the test suites that follow those applications through the pipeline.
Unfortunately, the art of consistently writing effective test suites may not come so easily to developers who lack exposure to or formal training in proper testing strategies. Fostering the right testing habits is crucial; without them, developers run the risk of endlessly chasing down missed bugs and picking up the pieces left from cascading application failures.
Maurício Aniche, author of Effective Software Testing: A Developer's Guide, has some advice for developers stuck in this testing conundrum. Aniche, who heads the Tech Academy at Adyen and is assistant professor of software engineering at Delft University of Technology, believes this problem is fixable -- but doing so requires that developers maintain a sound, strategic state of mind.
The systematic tester
When conducting research on how developers approach application testing, Aniche noticed that programmers who possess little to no formalized training in testing often lack a systematic approach for creating their own tests. Instead, they tend to write ad hoc test cases as issues arise or ideas for test cases "come to them."
While this approach may work in the short term, Aniche said, such an ad hoc approach to testing can create problems down the line, such as when developers forget a test case on a "bad day." Instead, developers should commit to a consistently systematic approach to testing, he said, which increases test efficiency and alleviates the risk of overlooking critical considerations when writing test cases.
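As a concrete illustration (a sketch, not an example from the book), consider how a systematic tester might cover a simple leap-year rule: rather than writing whichever cases come to mind, each test case below is derived from a partition of the specification. The LeapYear class, its isLeap method and the test names are hypothetical, chosen only for this sketch.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical implementation under test, included so the sketch is self-contained.
class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}

class LeapYearTest {

    // One test case per partition of the specification, so coverage does not depend on
    // which examples happened to come to mind on a given day:
    //   divisible by 4 but not by 100 -> leap
    //   divisible by 100 but not 400  -> not leap
    //   divisible by 400              -> leap
    //   not divisible by 4            -> not leap
    @ParameterizedTest
    @CsvSource({
        "2016, true",
        "1900, false",
        "2000, true",
        "2017, false"
    })
    void coversEachPartitionOfTheSpec(int year, boolean expectedLeap) {
        assertEquals(expectedLeap, LeapYear.isLeap(year));
    }
}
```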
Of course, for some, the concept of adopting a systematic approach may carry the connotation that testing must become more "rigid." Aniche stressed, however, that this does not have to be the case. Systematic testing should be iterative, he explained, and the test cases within it should follow whatever structure or format makes the most sense in context. And because different applications call for different testing strategies, Aniche believes systematic discipline and intelligent creativity aren't nearly as at odds as they may seem.
"You're going to use your creativity, you're going to use your knowledge as a developer and you're going to use your knowledge of the system that you're building," he explained.
The effective test suite
Development teams sometimes struggle to decide between more unit or more integration tests, Aniche said. Those teams may opt to devote more time to unit testing the individual parts of the application because unit tests are easy, quick and cheap. Also, they can save the team from having to refactor or fix bugs later. On the other hand, performing integration tests with wider areas of coverage could potentially save teams time that they would otherwise spend unit testing the individual components involved.
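To make that trade-off concrete, here is a minimal sketch in which all names, such as InvoiceService and InvoiceRepository, are hypothetical: the unit test isolates the calculation behind a hand-written stub and runs almost instantly, while an integration-level test would wire the same service to a real repository and database, covering the query and the wiring at the cost of setup effort and slower runs.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical collaborators, included so the sketch compiles on its own.
interface InvoiceRepository {
    List<Invoice> openInvoices();
}

record Invoice(double amount) {}

class InvoiceService {
    private final InvoiceRepository repository;

    InvoiceService(InvoiceRepository repository) {
        this.repository = repository;
    }

    double totalOpenAmount() {
        return repository.openInvoices().stream().mapToDouble(Invoice::amount).sum();
    }
}

class InvoiceServiceTest {

    // Unit level: the repository is replaced by an in-memory stub, so only the
    // calculation logic is exercised and the test stays fast and cheap.
    @Test
    void sumsOpenInvoicesWithoutTouchingADatabase() {
        InvoiceRepository stub = () -> List.of(new Invoice(100.0), new Invoice(50.0));
        InvoiceService service = new InvoiceService(stub);

        assertEquals(150.0, service.totalOpenAmount(), 0.001);
    }

    // Integration level (not shown): the same service wired to a real repository and a
    // real database would also cover the SQL and the wiring, at the cost of slower runs.
}
```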
Thinking past the debate of performing unit versus integration tests, Aniche proposes a third option: Make your tests fast. Instead of focusing on rigid statistics, such as how many times a certain test was performed, teams should target benchmark goals that concern overall test speed and success rates. For example, the following are three critical testing aspects he suggests developers keep track of:
- Average time it takes to run a single test, whether measured in hours, minutes or milliseconds (one way to capture this is sketched after this list).
- Number of test cases with complex infrastructure support requirements.
- Rate at which test results provide accurate, usable feedback.
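A simple way to start tracking the first of those metrics, sketched here with JUnit 5's extension API (the TimingExtension name is just a label for this example), is to record a timestamp before each test and report the elapsed time after it finishes:

```java
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Reports how long each test method takes; attach it to a test class with
// @ExtendWith(TimingExtension.class) and watch for tests that drift into seconds.
public class TimingExtension implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final ExtensionContext.Namespace NAMESPACE =
            ExtensionContext.Namespace.create(TimingExtension.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        // Stash the start time in the context's store, keyed per test.
        context.getStore(NAMESPACE).put("start", System.currentTimeMillis());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long start = context.getStore(NAMESPACE).remove("start", long.class);
        long elapsedMillis = System.currentTimeMillis() - start;
        System.out.printf("%s took %d ms%n", context.getDisplayName(), elapsedMillis);
    }
}
```

The printed timings make it easy to spot which tests dominate the suite's runtime, which leads directly into Aniche's next point about understanding why a test is slow.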
A top priority for developers performing testing routines, Aniche said, should be to understand what specific factors contribute to how fast or slow a test runs. To that end, clearly identifying key logistical factors, such as how many calls a single test needs to make or how many databases it interacts with, can help developers uncover ways to both optimize existing test cases and write better ones going forward.
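As a small illustration of making one such factor visible (again a sketch, with PaymentGateway as a hypothetical dependency), a hand-written test double can count every call the code under test makes to an expensive collaborator, exposing hidden round trips that slow a test down:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical dependency that would hit the network or a database in production.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Decorator that counts every call forwarded to the real (or fake) gateway, so a test
// can log or assert on how many round trips the code under test actually triggers.
class CountingPaymentGateway implements PaymentGateway {

    private final PaymentGateway delegate;
    private final AtomicInteger calls = new AtomicInteger();

    CountingPaymentGateway(PaymentGateway delegate) {
        this.delegate = delegate;
    }

    @Override
    public boolean charge(String account, double amount) {
        calls.incrementAndGet(); // each increment represents one round trip in production terms
        return delegate.charge(account, amount);
    }

    int callCount() {
        return calls.get();
    }
}
```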
At Delft University of Technology in the Netherlands, Aniche trains students to self-evaluate their testing processes using automated test suite assessment tools like Andy, so they are better prepared for inevitable bugs. By reinforcing proper testing techniques through repetition and practice, Aniche hopes the developers he teaches can internalize the skills needed to make testing a much easier and more productive activity.