Exploring the three major types of software testing tools
Application testing tools make enterprises' app development more efficient. Learn more about automation, coverage and bug tracking tools.
Software testing tools exist to help staff members conduct the most effective tests possible and do more with less. These tools help eliminate repetitive operations -- replacing the human element -- and accomplish what might otherwise be impossible, such as cataloging, searching and combining information in ways that are common for test and software development organizations. Application testing helps organizations find issues in their product before customers do. The number of combinations to test for -- even in the most trivial of programs -- can be staggering: a pair of nested for loops, for example, can yield unique test cases that number in the millions.
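To make that arithmetic concrete, here is a minimal sketch, assuming a hypothetical function whose two inputs each take 1,000 possible values:

```python
# Back-of-the-envelope arithmetic for the claim above: two nested loops over
# inputs with 1,000 possible values each already produce one million cases.
inputs_a = range(1000)   # possible values for the outer loop variable
inputs_b = range(1000)   # possible values for the inner loop variable
print(len(inputs_a) * len(inputs_b))  # 1000000 unique input combinations
```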
Software testing tools themselves do not perform actual testing. Humans test, with attentive minds and the ability to discern differences and interesting details in the information they receive. Testing tools can be programmed to run a series of operations and check for expected results. In a skilled person's hands, these tools can extend the reach of the tester. This feature covers three major categories of test tools: automation, bug tracking and coverage.
The distinction between quality assurance and software testing
Before covering the major categories of application testing tools, it is important to make the distinction between quality assurance (QA) and testing, to give you a better idea of what these tools should and should not be doing. QA is about building it right; testing ensures you built the right thing. QA means ensuring that the steps of a manufacturing process are followed correctly and in the right order to prevent problems, resulting in the same product every time. Testing is the mass inspection of the parts after they have gone through the manufacturing process. The two functions are distinct, and so are the tools used to perform them.
QA ensures that no code is created without a requirement; that all code is reviewed -- and approved -- before final testing can begin; and that the tests that will run are planned upfront and are actually run. The company defines its work process model and someone in a QA role either checks off each step, or, perhaps, audits after the fact to make sure the team performed each step and checked the right boxes.
If software QA tools make sure the product was built right, software testing tools help ensure that the team built the right product. Because each software change request is different from the others, software QA alone tends to fall short -- it can make sure that a requirements document exists, but not that the requirements were done well.
Application testing tools can help the software team determine the actual status of the software as it is built.
Automation
The most well-known kind of software application testing tool is automation, which attempts to replace human activities -- clicking and checking -- with a computer. The most common kind of test automation is driving the user interface, where a human records a series of actions and expected results. Two common kinds of user-interface automation are record/playback -- where an automated software testing tool records the interactions and then automates them, expecting the same results -- and keyword-driven -- where the user interface elements, such as text boxes and submit buttons, are referred to by name. Keyword-driven tests are often created in a programming language, but they do not have to be; they can resemble a spreadsheet with element identifiers, commands, inputs and expected results.
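Here is a minimal sketch of the keyword-driven style, with hypothetical element names and a toy driver standing in for a real UI automation tool; each row reads like a spreadsheet line of element, command and input:

```python
# A keyword-driven test expressed as rows of (element, command, input), like a
# spreadsheet. The element names and the "ui" dictionary are hypothetical
# stand-ins for a real driver such as a browser automation library.

test_rows = [
    ("username_box",   "type",        "alice"),
    ("password_box",   "type",        "s3cret"),
    ("submit_button",  "click",       None),
    ("welcome_banner", "assert_text", "Welcome, alice"),
]

ui = {"welcome_banner": "Welcome, alice"}  # pretend application state

def run_row(element, command, argument):
    if command == "type":
        ui[element] = argument             # simulate typing into a field
    elif command == "click":
        pass                               # simulate pressing a button
    elif command == "assert_text":
        assert ui.get(element) == argument, f"{element}: expected {argument!r}"

for row in test_rows:
    run_row(*row)
print("all keyword rows passed")
```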
Nearly every program that runs in a browser now has a mobile counterpart. Because of this, mobile test tooling is quickly becoming as important, if not more so, than testing in a web browser. Sometimes this automation takes control of the mobile device by launching an app or mobile browser and performing some actions. Other times this testing happens just below the surface by working at the API level.
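A sketch of what such a below-the-surface check might look like, using the Python requests library; the endpoint URL and response fields here are assumptions for illustration:

```python
# A check "just below the surface": hit the back-end API directly instead of
# driving the mobile UI. The endpoint URL and response fields are assumptions.
import requests

def test_login_api():
    response = requests.post(
        "https://example.com/api/v1/login",            # hypothetical endpoint
        json={"user": "alice", "password": "s3cret"},
        timeout=5,
    )
    assert response.status_code == 200                 # expected result
    assert "token" in response.json()                  # auth token comes back
```

Run under a test runner such as pytest, a check like this exercises the same logic the mobile app calls, without driving any screens.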
Automation tools perform a series of preplanned scenarios with expected results, and either check exact screen regions -- in record/playback -- or only what they are told to specifically check for -- in keyword-driven. A computer will never say "that looks odd," never explore, and never be inspired by one test to come up with a new idea. Nor will a computer notice that a "failure" is actually a change in the requirements. Instead, the test automation will log a failure, and a human will have to look at the false failure, analyze it, recognize that it is not a bug and "fix" the test. This creates a maintenance burden. Automated testing tools automate only the test execution and evaluation.
Another term for this kind of automation is something Michael Bolton and James Bach call checking, a decision rule that can be interpreted by an algorithm as pass or fail. Computers can do this kind of work, and do it well. Having check automation run at the code level -- unit tests -- or user interface level can vastly improve quality and catch obvious errors quickly before a human even looks at the software.
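A minimal example of a code-level check: a unit test whose pass/fail rule a machine can evaluate without human judgment. The function under test is hypothetical:

```python
# A code-level "check": an unambiguous pass/fail rule an algorithm can
# evaluate. The function under test is a hypothetical example.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class DiscountCheck(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    unittest.main()
```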
Bug tracking
For very simple software, bug reports might be tracked with sticky notes or spreadsheets. But when the software is more complex, these become unwieldy, and companies need to turn to software designed for the task. Typically, professional bug trackers record severity, priority, when the defect was discovered, exact reproduction steps, who fixed it and what build it was fixed in, along with searching and tagging mechanisms that simplify finding a defect. These tools don't just assist programmers and project managers; customer service staff and existing users can use them to find out whether an issue is already known and scheduled for a fix, to escalate known issues and to enter new ones. Bug tracking tools can also help with workflow: bugs can be assigned to programmers, then to testers to recheck, then marked to be deployed and, after the release, marked as deployed.
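As a sketch, the fields a typical bug tracker records might be modeled like this; the field names mirror the list above, and the values are hypothetical:

```python
# A sketch of the fields a typical bug tracker records, as a dataclass.
# Field names mirror the article's list; the sample values are invented.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str
    severity: str                  # e.g., "critical", "major", "minor"
    priority: int                  # 1 = fix first
    discovered_in_build: str
    reproduction_steps: list[str]
    assigned_to: str | None = None
    fixed_in_build: str | None = None
    tags: list[str] = field(default_factory=list)  # supports search/tagging

bug = BugReport(
    title="Login fails with non-ASCII passwords",
    severity="major",
    priority=2,
    discovered_in_build="2024.3.1",
    reproduction_steps=["Open login page", "Enter 'pässwörd'", "Submit"],
    tags=["login", "unicode"],
)
```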
Coverage
When we discuss coverage in software testing, we are looking at two specific ideas.
The first area is code coverage, which measures the percentage of the software that is exercised by tests. The most common type of code coverage is statement coverage: the percentage of statements executed during the test process -- manual, automated or both.
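A toy illustration of the statement coverage calculation, using a hypothetical four-statement function; real tools such as coverage.py compute these percentages automatically across a whole test run:

```python
# A toy statement-coverage calculation. The single "test" below executes
# 3 of the 4 statements in this hypothetical function -- the early return
# is never reached -- so statement coverage is 3/4, or 75%.

def shipping_cost(weight: float) -> float:
    if weight <= 0:          # statement 1: executed
        return 0.0           # statement 2: never reached by the test below
    base = 5.0               # statement 3: executed
    return base + weight     # statement 4: executed

shipping_cost(2.5)                           # the lone test case
print(f"statement coverage: {3 / 4:.0%}")    # 75%
```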
The second area, application coverage, looks at the test process from other directions -- typically, the percentage of the requirements that are "covered." One common application coverage tool is a traceability matrix: a list of which tests cover which requirements. Typically, test case management software records all the planned tests and allows testers to mark that a test case ran for any given release, which lets management determine what percentage of tests were covered. This is a sort of "quality assurance" look at the test process -- it should ensure that each part of the application is covered, and it serves as a management control.
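A traceability matrix can be as simple as a mapping from requirements to the tests that cover them. This sketch uses hypothetical requirement IDs and test names:

```python
# A traceability matrix as a simple mapping: requirement -> covering tests.
# Requirement IDs and test names are hypothetical.
traceability = {
    "REQ-101 user can log in":         ["test_login_ok", "test_login_bad_pw"],
    "REQ-102 user can reset password": ["test_reset_email_sent"],
    "REQ-103 admin can export report": [],   # no test yet -- a coverage gap
}

covered = sum(1 for tests in traceability.values() if tests)
print(f"requirement coverage: {covered}/{len(traceability)} "
      f"({covered / len(traceability):.0%})")          # 2/3 (67%)
```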
Alone, each of these three categories of tools can help a software team manage issues and code changes. When they are combined, that team has a fairly robust suite of tools that can help with finding bugs, debugging the code and freeing up the team to think about areas that need to be tested.
Infrastructure and support
There is a set of testing tools that should be addressed but is too varied to fit under one category. Test automation assumes the latest version of the application is installed on the computer or web server. That code still needs to be compiled and installed, the automation needs to be started, and someone needs to be told to check the results. All of these secondary tasks fall under support -- and they can all be automated. Continuous integration tools are support tools that notice a check-in of new code, perform a build, create a new virtual web server (or update a staging server), push the new code to the target machine, run the automation to exercise the program, examine the results and email relevant team members about any failures.
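The skeleton below sketches what such a pipeline automates; the make commands and the notify() function are hypothetical placeholders, since real continuous integration tools such as Jenkins or GitHub Actions supply this plumbing:

```python
# Skeleton of what a CI tool automates on each check-in: build, deploy to a
# staging target, run the tests, notify on failure. The "make" commands and
# the notify() function are hypothetical placeholders.
import subprocess

def run_step(name: str, cmd: list[str]) -> bool:
    print(f"--- {name} ---")
    try:
        return subprocess.run(cmd).returncode == 0
    except OSError as err:                 # command not available here
        print(err)
        return False

def notify(message: str) -> None:
    print(f"EMAIL to team: {message}")     # stand-in for a real mail hook

def on_check_in() -> None:
    steps = [
        ("build",  ["make", "build"]),
        ("deploy", ["make", "deploy-staging"]),
        ("test",   ["python", "-m", "pytest"]),
    ]
    for name, cmd in steps:
        if not run_step(name, cmd):
            notify(f"CI failed at step: {name}")
            return
    print("all steps passed")

if __name__ == "__main__":
    on_check_in()
```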
Support includes the tools that testers use to move faster or extend their reach. Software to generate random names, or test data in general, falls into this category, as does software to create screen captures and videos. This category also includes software that records all of the interactions a tester has with various fields, simulators for mobile devices, and developer environments that blend into the background and pop up on command to record notes.
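A minimal sketch of a test-data generator of the kind described here, using only the standard library; the name lists are invented:

```python
# A sketch of a random test-data generator, standard library only. The name
# lists are made up; real teams often use larger corpora or libraries.
import random

FIRST_NAMES = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
LAST_NAMES = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

def random_name() -> str:
    """Return a random but plausible full name for an input field."""
    return f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}"

print([random_name() for _ in range(3)])   # e.g., ['Grace Turing', ...]
```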
Monitoring also plays a large role in supporting software testers. These tools provide real-time information about what is happening in production environments, notification when problems occur and guidance on how to improve testing and development in the areas where customers are discovering problems.