
Tuesday, September 8, 2009

Subject: Software testing metrics for a medium-sized project

Author: Robin F. Goldsmith
Summary: This article details the metrics one should collect for a typical medium-sized software testing project and explains how long those metrics should be collected during the project schedule.
Theme:
IMHO, project size doesn't change your need to know what you're doing, which is what metrics are for. And I can't think of a point in a project when it's no longer necessary to know what's going on. Failing to know key measures, including the consequences after the project supposedly is done, is a major way in which small projects turn into big projects.
Basically, you always need measures of two things: (1) results you are getting, and (2) the causes of those results.
Results
Typically, the primary measure of results is whether the project is on time and within budget, which usually says more about the effectiveness of setting budgets and schedules than about the project itself. Poorly set budgets and schedules are the biggest causes of overruns. Other results measures include the size and quality of what has been produced.
Size may be measured in terms of KLOC (K for thousand, LOC for lines of code), function points, modules, objects, methods, or similar units that reliably describe the physical size of the software produced. Some people measure project size in number of requirements or pages of design. Other types of sizing measures include capacity, such as the number of users or sites served, and database and transaction volumes. Project results involving hardware are also often sized with respect to the numbers and capacities or capabilities of hardware components. A highway project ordinarily would be sized with respect to the length of the road involved. Although it is somewhat circular, many projects are sized by their budget and/or schedule.
Quality of results is typically measured in terms of defects, ordinarily as defect density, which is the number of defects relative to the physical size of the product, system or software. However, the way many folks measure defects can create as many issues as it addresses.
For instance, it's especially common for defect measures to include only coding errors, which reflect poorly on the developer and thereby create incentives for developers to pay more attention to avoiding accountability than to actually doing a good job. Arguing about whether something is a defect is a fairly unproductive use of everyone's time. Distracting arguments over "coded as designed" and "user error" can be prevented by making sure that defects can also be categorized as requirements, design, instruction, and operational defects.
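As a minimal illustrative sketch (the category names and counts below are invented, not taken from the article), defect density per KLOC could be tallied across those broader categories:

# Hypothetical sketch: defect density per KLOC, with defects
# categorized beyond coding errors to head off blame-shifting arguments.
defects = {
    "requirements": 8,
    "design": 5,
    "coding": 12,
    "instructions": 2,
    "operational": 3,
}
lines_of_code = 15_000  # invented figure

total_defects = sum(defects.values())
density = total_defects / (lines_of_code / 1000)  # defects per KLOC
print(f"{total_defects} defects overall, {density:.1f} per KLOC")  # 30 defects, 2.0 per KLOC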
Results value
In addition to these physical size and quality measures of results, it's essential to quantify results in terms of value, which is what stumps many people. Probably the simplest method used is the percentage of defined requirements that have been implemented.
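For example (a minimal sketch; the counts are made up), this measure is simple arithmetic:

# Hypothetical sketch: percentage of defined requirements implemented.
defined_requirements = 120  # invented count of defined requirements
implemented = 96            # invented count implemented so far
percent = 100 * implemented / defined_requirements
print(f"{percent:.0f}% of defined requirements implemented")  # prints 80%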
Percentages alone don't tell the full story, because not all requirements are created equal with respect to size or value, and there can be wide variations in how well a requirement has been satisfied and how adequately the requirements have been defined. That's why it's essential to use effective methods to discover the REAL business requirements -- deliverable "whats" that provide value when delivered (or met or satisfied).
Ultimately, value should be measured in money. Monetary benefits come from four sources. Cost savings mean eliminating or reducing existing expenditures (unfortunately, the most common method is eliminating jobs). Cost avoidance means not having to incur an otherwise additional future expense. Revenue enhancement occurs when an organization sells more, charges more for what it sells, and/or collects more of what it charges. Revenue protection involves retaining existing sales, which includes compliance with laws and regulations necessary to stay operational.
Actually, value is a net figure, which also must take into account the investment cost of achieving the benefit return. Thus, value most often is measured as return on investment (ROI). Conventional ROI determinations are frequently unreliable because they tend to fall prey to 10 common but seldom recognized pitfalls. (See www.proveit.net for information about determining right, reliable and responsible "REAL ROI.")
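As a simple sketch (the figures are invented, and a real ROI determination must contend with the pitfalls just mentioned), net value and ROI combine the four benefit sources against the investment cost:

# Hypothetical sketch: net value and ROI from the four benefit sources.
cost_savings        = 50_000   # reduced existing expenditures
cost_avoidance      = 20_000   # future expense not incurred
revenue_enhancement = 80_000   # selling/charging/collecting more
revenue_protection  = 30_000   # retained sales, incl. compliance
investment          = 100_000  # cost of achieving the benefits

total_benefit = (cost_savings + cost_avoidance
                 + revenue_enhancement + revenue_protection)
net_value = total_benefit - investment
roi = net_value / investment  # a ratio; multiply by 100 for percent
print(f"Net value: {net_value}, ROI: {roi:.0%}")  # Net value: 80000, ROI: 80%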
Causes
In order to sustain and improve results, it's necessary to identify and measure the causes of those results. Basic causal measures are resource costs/effort and time duration of the project work. Size and complexity of the project, of course, are the biggest determinants of effort and duration; they also are major sources of risk, which is another causal factor to consider.
Usually it's helpful to measure causes and results with respect to life cycle stages, such as requirements, design, development, unit testing, integration testing, system testing, acceptance testing and production. Distinguishing new code from modified code can be helpful for understanding causes of results.
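One hypothetical way to organize such per-stage counts (the data structure and stage names here are illustrative assumptions, not a prescribed format):

# Hypothetical sketch: tallying defects found per life cycle stage,
# split by new vs. modified code, to help localize causes of results.
from collections import defaultdict

defects_by_stage = defaultdict(lambda: {"new": 0, "modified": 0})

def record_defect(stage: str, code_type: str) -> None:
    defects_by_stage[stage][code_type] += 1

record_defect("unit testing", "new")
record_defect("system testing", "modified")
record_defect("system testing", "new")

for stage, counts in defects_by_stage.items():
    print(stage, counts)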
Similarly, causes of results can be identified with respect to factors such as development methodology, use of particular types of tools and techniques, platform and language, and staff skills and experience.
By measuring results associated with these various types of causal factors, it's usually possible to tell what's going well and what needs improvement. Moreover, these more granular measures give a quicker indication of how well improvements are working.
