Prioritizing software testing when time is short
Too many test cases and too little time? Expert Scott Barber explains how to prioritize testing.
Suppose I am testing a Web application, the time period is very short, and we have a heap of test cases. Which test cases should we run first to make our Web site more secure and reliable?
This question pops up in various forms all the time. It boils down to "We don't have enough time to test everything, so what do we test?" Not having enough time, of course, is not only the status quo for testing software; it is a universal truth for any software that will ever go into production.
Given that, here's my advice.
- Start by forgetting that you have any test cases at all.
- Make a list (quickly -- remember, we don't have enough time to test, so let's not waste what little time we have making lists) of usage scenarios for each of the following categories. I usually limit myself to five scenarios per category on the first pass, but no matter what, move on to the next category as soon as you find yourself laboring over the one you are on. If you have to stop and think, whatever you come up with isn't important enough.
- a. What things will users do most often with this application?
- b. What areas of this application are most likely to contain show-stopping defects?
- c. What parts of this application are critical to the business?
- d. Are any parts of this application governed by legal or regulatory agencies?
- e. What parts of the application would be most embarrassing to the company if broken?
- f. What parts of the application has my boss said must be tested?
- Prioritize the list. If you've made the list in a word processor or using note cards, this will take under 60 seconds (if you have to write a new list by hand and you write as slowly as I do, it will probably take a little longer). Here are the rules for prioritizing:
- Count the number of times each scenario appears across your categories. The more times it appears, the higher its priority.
- In case of a tie, the scenario from the earlier category wins: 'a' comes before 'b' comes before 'c,' and so on. (A short code sketch after this list shows one way to mechanize the counting.)
- Now scan your existing test cases. Note which ones are covered by your prioritized scenarios and which ones aren't. For each one that isn't covered, ask yourself, "Can I live with not testing this?" If the answer is no, add it to the bottom of the list.
- Start testing.
- If you complete these tests before time is up, do the same exercise again without repeating any usage scenarios. If not, at least you have a defensible list of what you did and did not test, and you lost all of about 15 minutes of testing time creating it.
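If it helps to see those prioritization rules in executable form, here is a minimal Python sketch. The category labels, scenario names, and test case IDs are all invented for illustration, and mapping each test case to a single scenario is an assumption about how your test cases are organized:

```python
# Categories in priority order ('a' before 'b' before 'c', per the
# tie-breaking rule above). All names here are made-up placeholders.
categories = [
    ("a. used most often",      ["login", "search", "checkout"]),
    ("b. likely show-stoppers", ["checkout", "payment"]),
    ("c. business critical",    ["checkout", "payment", "login"]),
    ("d. legal/regulatory",     ["payment"]),
    ("e. most embarrassing",    ["login", "search"]),
    ("f. boss-mandated",        ["search"]),
]

# Count how many categories mention each scenario, and remember the
# first (highest-priority) category that mentions it, for tie-breaking.
counts, first_seen = {}, {}
for rank, (_label, scenarios) in enumerate(categories):
    for scenario in scenarios:
        counts[scenario] = counts.get(scenario, 0) + 1
        first_seen.setdefault(scenario, rank)

# Sort: more appearances first; ties broken by the earlier category.
prioritized = sorted(counts, key=lambda s: (-counts[s], first_seen[s]))

for scenario in prioritized:
    print(f"{counts[scenario]}x  {scenario}")

# Scan the existing test cases: anything not tied to a prioritized
# scenario is a candidate for "Can I live with not testing this?"
test_cases = {"TC-01": "login", "TC-17": "reporting"}  # case -> scenario
uncovered = [tc for tc, s in test_cases.items() if s not in counts]
print("Not covered by the prioritized list:", uncovered)
```

The whole trick is in the sort key: appearance count first, category order second, which is exactly the two prioritizing rules above.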
In case you're wondering, this approach is derived from my FIBLOTS heuristic for deciding what usage scenarios to include when developing performance tests. FIBLOTS is an acronym representing the words that complete the sentence "Ensure your performance tests include usage scenarios that are:
- Frequent
- Intensive
- Business critical
- Legally enforceable
- Obvious
- Technically risky
- Stakeholder mandated."
I guess for functional testing, it would be "Ensure you test usage scenarios that are:
- Frequent
- Risky
- Business critical
- Legally enforceable
- Obvious
- Stakeholder mandated."
Too bad the acronym FRBLOS isn't as easy to remember as FIBLOTS.