Formulating test status reports based on daily status criteria
A user requests help deriving the daily status criteria for the test execution of a software project; the criteria include test execution, test case failure rate, and bug rate.
I have a task to derive the daily status criteria for the test execution of my project. The three status criteria would be test execution, test case failure rate, and bug rate. I need to derive the criteria based on some assumptions -- how can I approach this and derive them?
Thanks,
Naveen
Naveen,
Your management or the customer is asking for some very standard test status reports, so the good news is that this is relatively straightforward.
Test execution: The most important metrics here are total test cases, test cases executed, test cases remaining, test execution rate, and "glidepath execution rate." Total test cases is, obviously, the total number of cases you must execute during this project. Test cases executed is the sum of cases that have been executed (pass or fail, but NOT including "blocked" cases). Test cases remaining should be the sum of cases not executed plus cases blocked, because this metric represents all the work remaining. Test execution rate is the average number of cases your team is executing per day. Finally, the glidepath is the rate of execution your team needs to maintain in order to complete the project on time.
To calculate the execution rate, take the total number of cases executed (passed or failed, but not "blocked") and divide by the number of working days so far. Note that you also have to communicate the number of cases that failed and the time it will take to retest them. Some teams want you to report this as a separate metric; some want "failed" cases not to count as executed at all, but to be counted among the remaining test cases.
To calculate the glidepath rate, divide the total number of remaining cases by the number of working days remaining in the project. If you have 100 test cases remaining and 10 days left, you need to execute 10 test cases per day.
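To make the arithmetic concrete, here is a minimal Python sketch of those two calculations. The function names and sample numbers are illustrative, not from any real project:

```python
# Minimal sketch of the execution-rate and glidepath calculations described
# above. Blocked cases are excluded from "executed," per the definitions above.

def execution_rate(passed: int, failed: int, working_days: int) -> float:
    """Average cases executed per working day (blocked cases excluded)."""
    return (passed + failed) / working_days

def glidepath_rate(remaining: int, days_left: int) -> float:
    """Cases per day the team must execute to finish on time."""
    return remaining / days_left

# Example: 50 cases executed over 5 days, 100 cases remaining, 10 days left.
print(execution_rate(passed=45, failed=5, working_days=5))  # 10.0 cases/day so far
print(glidepath_rate(remaining=100, days_left=10))          # 10.0 cases/day needed
```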
The critical thing to keep in mind here is what your manager or customer cares about. They want to know two things, above all else: 1) Are you on track to be done on time, and 2) Is the product in good shape? The way they answer the on-track question is simple -- is your average test case execution rate (the average number of cases executed per working day) higher than your glidepath rate? If yes, your project is probably green. If not, your project is yellow or red.
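As a sketch, here's how that check might look in code. Note that the 90% cutoff separating yellow from red is purely an illustrative assumption -- the threshold is left to your judgment:

```python
def schedule_status(avg_execution_rate: float, glidepath: float,
                    yellow_cutoff: float = 0.9) -> str:
    """Green if the team is executing at or above the glidepath rate;
    otherwise yellow or red. The 90% yellow/red cutoff is an assumption."""
    if avg_execution_rate >= glidepath:
        return "green"
    if avg_execution_rate >= yellow_cutoff * glidepath:
        return "yellow"
    return "red"

print(schedule_status(12.0, 10.0))  # green
print(schedule_status(9.5, 10.0))   # yellow
print(schedule_status(7.0, 10.0))   # red
```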
The next metric here is the test case failure rate. This is a strong indicator of product quality, and it also affects how you calculate your test execution rate. If you have a 50% failure rate, that means 1 out of every 2 test cases is failing, which indicates terrible requirements definition or engineering quality. If your failure rate is significantly lower (10%, 5%), it probably means the requirements were well defined and the code quality is high. It could also mean that your test team is overlooking defects -- as project lead, you need to keep your finger on the pulse of your team's performance, and be able (and willing) to tell management if you think your team is overlooking defects in the project. Failure rate is also helpful in calculating execution rates and glidepaths. As I said, some teams want all tests executed counted in the execution rate, and some only want passes (personally, as a manager I want to see the PASS rate as well as the executed rate). Calculating the failure rate is also simple: divide the number of failed cases by the number of executed cases -- in other words, how many test cases out of 100 are failing?
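A quick sketch of that failure rate calculation, with illustrative numbers:

```python
def failure_rate(failed: int, executed: int) -> float:
    """Percentage of executed cases that failed."""
    return 100.0 * failed / executed

print(failure_rate(failed=5, executed=50))    # 10.0 -> likely healthy requirements/code
print(failure_rate(failed=50, executed=100))  # 50.0 -> 1 in 2 failing: investigate
```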
Finally, bug rates. The most important bug metric right now is probably how many defects are generated per test case. So if you have executed 100 test cases, how many defects have resulted from them? Coupled with the count of remaining tests, it's a good way to get a decent idea of how many defects are still 'lurking' in the product.
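Here's a rough sketch of that projection. It assumes the remaining cases will find defects at the same rate as the cases already run, which is a simplifying assumption, not a guarantee:

```python
def defects_per_case(defects: int, executed: int) -> float:
    """Defects found per executed test case."""
    return defects / executed

def estimated_lurking_defects(defects: int, executed: int, remaining: int) -> float:
    """Rough projection: assume remaining cases find defects at the same rate."""
    return defects_per_case(defects, executed) * remaining

print(defects_per_case(defects=20, executed=100))                         # 0.2
print(estimated_lurking_defects(defects=20, executed=100, remaining=50))  # ~10 more
```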
Metrics are a funny thing. They can be used for good or warped for bad. It's very important to report the metrics themselves, and to keep the discussion of what the metrics might mean separate. Above all, if the metrics indicate a schedule or product risk, don't try to paint an artificially pretty picture. Be forthcoming with the information and help management work through the repercussions.
Hope that helps!