Guiding principles for automated software testing
An expert in automated software testing offers guiding principles for succeeding at this most challenging of QA projects.
In recent conversations with software quality assurance pros, I keep hearing the same thing: Test automation projects are among the most demanding that any QA organization takes on. Following the right set of guiding principles can increase the odds of success with this challenging undertaking.
I asked Jay Philips, president and CEO of software development consultancy Project Realms, in East Bethel, Minn., for her advice for QA organizations embarking on automated software testing projects. She offered five guiding principles, which I share with you in this edition of Quality Time.
Guiding principle #1: Avoid developing test scripts too early.
Software testers are encouraged to plan an automated test early in the application lifecycle. But writing scripts that test an application function before that function is complete is counterproductive, Philips said. "If you automate too early and the app is still changing, you will have to rewrite your script." She doesn't recommend waiting until the entire application is ready. A better approach is to review the application requirements, identify which ones are complete and start writing test scripts for those that are.
Guiding principle #2: Develop realistic estimates of how long application testing will take.
Software testers are under pressure to do more, faster. But if they succumb completely to management demands to get the software out the door sooner, they may end up damaging the credibility of the test organization, Philips said. Test automation is all about speed, but it's crucial to budget time to resolve unanticipated problems, such as scripts that aren't working. These kinds of issues come up all the time, she said.
Guiding principle #3: Stay abreast of subtle design changes, which test scripts won't necessarily catch.
QA pros -- and the scripts they write -- focus on testing new functions implemented in the software, but they often overlook design changes that might accompany those new functions. Philips offered an example. The second release of an application gives users a more efficient way to change their passwords. In the first release, the background color of the change password screen was red; in the new release it's blue. A script that is focused on testing the functionality is not going to check to make sure the red screen is now blue, Philips said.
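To make the distinction concrete, here is a minimal sketch of what such a check might look like in a Selenium WebDriver test written in Java with JUnit 5. The URL, element IDs and expected color value are assumptions invented for illustration, not details from Philips' example; the point is simply that the design assertion has to be written deliberately, because the functional steps alone will pass whether the screen is red or blue.

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical sketch: the URL, element IDs and expected color are assumptions.
class ChangePasswordTest {
    private WebDriver driver;

    @BeforeEach
    void setUp() {
        driver = new ChromeDriver();
    }

    @AfterEach
    void tearDown() {
        driver.quit();
    }

    @Test
    void changesPasswordAndUsesNewScreenDesign() {
        driver.get("https://example.test/change-password");

        // Functional check: the more efficient password-change flow works.
        driver.findElement(By.id("currentPassword")).sendKeys("oldPass123");
        driver.findElement(By.id("newPassword")).sendKeys("newPass456");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.findElement(By.id("confirmation"))
                .getText().contains("Password changed"));

        // Design check a purely functional script omits: the screen's
        // background should now be blue, not the old red.
        String background = driver.findElement(By.id("changePasswordScreen"))
                .getCssValue("background-color");
        assertEquals("rgba(0, 0, 255, 1)", background);
    }
}

Keeping the design assertion separate from the functional steps also means a failure message makes clear whether the feature itself broke or only the look of the screen changed.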
Guiding principle #4: Turn to developers for help coding scripts.
One hurdle most software testers face is learning a programming language well enough to write the scripts that an automated software testing project demands. This is the perfect opportunity to work with developers on your team, Philips said. And, yes, like QA pros everywhere, she has heard countless stories of interactions between software testers and developers that don't go well. But she urges testers to look beyond the stereotypes and ask for help. It worked for her.
Once, while working with a home-grown testing framework that required some knowledge of Java, Philips turned to a developer for insight on how to code a particular script. She knows some Java. "But there were areas I could not figure out," Philips told me. She sat down with the developer who wrote the piece of code that was giving her trouble, and together they walked through the application. "He helped me figure out why my script was broken," she said. What's more, they built a relationship that continues today. "Now he uses my script to do unit testing," she said.
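Her framework isn't public, so the following is only a rough sketch, with hypothetical names and a made-up password rule, of the kind of reuse she describes: checking logic written for an automated test is pulled into a small shared helper that a developer's JUnit unit tests can call as well.

// PasswordRules.java
// Hypothetical shared helper: the class name and password rule are invented
// for illustration; they are not part of Philips' actual framework.
public final class PasswordRules {
    private PasswordRules() {}

    // One definition of a valid password, shared by the QA automation
    // scripts and the developers' unit tests.
    public static boolean isValid(String candidate) {
        return candidate != null
                && candidate.length() >= 8
                && candidate.chars().anyMatch(Character::isDigit)
                && candidate.chars().anyMatch(Character::isLetter);
    }
}

// PasswordRulesTest.java
// Developer-side unit test reusing the same helper the QA scripts call.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class PasswordRulesTest {
    @Test
    void acceptsLettersAndDigitsOfSufficientLength() {
        assertTrue(PasswordRules.isValid("newPass456"));
        assertFalse(PasswordRules.isValid("short1"));
    }
}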
Guiding principle #5: Keep on educating management about test automation.
Myths about automated software testing abound. The big one, of course, is that organizations that implement it can lay off the testers. Getting top management to understand why this is not the case is an ongoing challenge. "They want to get rid of manual testers, but you can't," Philips said. "Management people believe you can automate everything. But it's not true."
Instead of fearing for their jobs, QA pros should take it upon themselves to continually educate top managers, without expecting them to get the message right away. Explain why manual testing -- conducted in tandem with automated testing -- remains important. Offer evidence of where the QA organization is saving time and money. Most important, give management examples of software glitches that have been caught by manual testing and explain why automated testing can't catch them.
What are your guiding principles for automated software testing projects? Let us know what you think.