Does a tester actually need test cases?
Discover whether test cases are necessary in this expert answer from consultant Robin Goldsmith.
The short answer is yes, testers need test cases. But the test cases they need might not be what you're thinking of.
From time to time, I encounter the seemingly implausible argument that testers don't need test cases. My informal analysis suggests two related and, I believe, mistaken premises underlie that belief.
First, many testers believe a test case must be in a particular format. The premise is that only test cases written in the right format are "true test cases" (shades of the "no true Scotsman" fallacy). By that reasoning, if testers don't have anything in the correct format, they must not have test cases.
The format that seems most widely accepted as "necessary" for a test case is a step-by-step written script that includes extensive, often keystroke-level, procedural detail. The script also may be accompanied by additional written descriptive information, such as a test case identification number, short title, longer description of purpose, context, owner, various categorizations, related test cases, priority, change history and more.
Each step in the script describes a typical user input action or condition, followed by an expected intermediate result. The script consists of a series of steps to be carried out in the prescribed sequence, which ultimately produces an expected end result. Both end and intermediate results could be displayed values, reports, transmissions, signals, changes of state, additions or modifications of values in a database, stopping or starting some other program or action, and so forth, along with combinations thereof.
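For illustration, the sketch below shows one way such a heavily scripted test case might be captured as structured data. It is a hypothetical example, not a format the article prescribes; every field name, identifier and step detail is invented.

```python
# Hypothetical sketch of a fully scripted test case with extensive
# descriptive detail and step-by-step expected intermediate results.
# Field names and values are illustrative only, not a prescribed format.
scripted_test_case = {
    "id": "TC-0042",
    "title": "Add a record when the database is nearly full",
    "purpose": "Verify the application warns the user before the database fills",
    "owner": "QA team",
    "priority": "High",
    "related_cases": ["TC-0041"],
    "steps": [
        {"action": "Log in as a standard user",
         "expected_result": "Main menu is displayed"},
        {"action": "Open the customer database screen",
         "expected_result": "Record count shows 9,999 of 10,000"},
        {"action": "Enter a new customer record and click Save",
         "expected_result": "Record saves and a 'database nearly full' warning appears"},
    ],
}
```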
The advantage of such test scripts is that they can be repeated precisely, even when they're used by someone with little or no knowledge of the system being tested. Such test scripts also have several downsides, starting with the sheer amount of time it takes to create and maintain them. The more time testers spend writing test script documents, the less time they have to execute the tests. Moreover, precisely following a script could interfere with testers' ability to detect defects the script didn't specifically provoke.
To overcome these weaknesses, advocates of exploratory testing recommend writing nothing down, which frees all of the testers' time to execute many more, and presumably more thorough, tests that they devise from the context as they execute them. Here's where the second false premise comes in. Testers who believe a test case must be written falsely conclude that exploratory testing has no test cases.
Let me suggest instead that a test case should consist of inputs or conditions and expected results, period. Inputs are the commands explicitly entered by a user. Conditions are not explicitly entered, but must often be created by a tester to carry out a given test. For instance, a condition might be that the database is full, and an input might be to add a record to the database.
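To make that concrete, here is a minimal sketch of such a test case expressed as an automated check. It assumes pytest and an invented FakeDatabase class purely for illustration; the article does not prescribe any particular tool or code.

```python
# Minimal sketch: a test case reduced to a condition (the database is
# full), an input (add a record) and an expected result (the add is
# rejected). FakeDatabase and its behavior are hypothetical stand-ins.
import pytest


class FakeDatabase:
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []

    def add(self, record):
        if len(self.records) >= self.capacity:
            raise RuntimeError("database is full")
        self.records.append(record)


def test_adding_record_to_full_database_is_rejected():
    db = FakeDatabase(capacity=1)
    db.add({"name": "existing customer"})   # condition: database is full
    with pytest.raises(RuntimeError):        # expected result: the add fails
        db.add({"name": "new customer"})     # input: add another record
```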
That's all you need to carry out a test. Tests have inputs or conditions and expected results, regardless of whether they are in some written form or made up at the moment of execution. Writing tests down offers some advantages, including helping the tester avoid forgetting things and facilitating repetition and refinement. However, a written test is not required, and a written test case does not have to be in a particular format, especially not that of a script with extensive procedural detail. Inputs or conditions and expected results suffice.