Tuesday, September 8, 2009

Software Testing Metrics - Metrics Used by Software Testers

Summary:
This article provides details of the various types of metrics generally used by software testers.
Theme:
A software metric is a measure of some property of a piece of software or its specifications.

Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have worked hard to bring similar approaches to software development. Tom DeMarco stated, “You can't control what you can't measure.”

Product quality measures can be captured in various ways. Here are some examples:
1. Customer satisfaction index

This index is surveyed before and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:

- Number of system enhancement requests per year
- Number of maintenance fix requests per year
- User friendliness: call volume to customer service hotline
- User friendliness: training time per new user
- Number of product recalls or fix releases (software vendors)
- Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities

These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or on an ongoing basis (per year of operation), broken down by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

- Turnaround time for defect fixes, by level of severity
- Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility

- Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios

- Defects found after product delivery per function point.
- Defects found after product delivery per LOC
- Ratio of pre-delivery defects to annual post-delivery defects
- Defects per function point of the system modifications
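
As a rough illustration, the ratios above reduce to simple arithmetic. The following Python sketch uses made-up defect counts, LOC and function-point figures; none of the numbers come from a real project.

# Minimal sketch of the defect-ratio calculations (hypothetical figures).
def defects_per_kloc(defects, loc):
    # Defects found after delivery per thousand lines of code.
    return defects / (loc / 1000.0)

def defects_per_function_point(defects, function_points):
    # Defects found after delivery per function point.
    return defects / function_points

post_delivery_defects = 42
pre_delivery_defects = 310
delivered_loc = 120_000
delivered_fp = 850

print(defects_per_kloc(post_delivery_defects, delivered_loc))           # 0.35 defects per KLOC
print(defects_per_function_point(post_delivery_defects, delivered_fp))  # ~0.05 defects per FP
print(pre_delivery_defects / post_delivery_defects)                     # ~7.4 : 1 pre- to post-delivery ratio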

6. Defect removal efficiency

- Number of post-release defects (found by clients in field operation), categorized by level of severity
- Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
- "All defects" includes defects found internally plus those found externally (by customers) in the first year after product delivery
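
A minimal sketch of the defect removal efficiency calculation, assuming hypothetical counts (475 defects found internally before release, 25 found by customers in the first year):

# Defect Removal Efficiency: internal defects as a percentage of all defects.
def defect_removal_efficiency(internal_defects, field_defects):
    total = internal_defects + field_defects
    return 100.0 * internal_defects / total if total else 0.0

found_internally = 475     # via inspections and testing, prior to release
found_by_customers = 25    # in the first year after delivery
print(defect_removal_efficiency(found_internally, found_by_customers))  # 95.0 %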

7. Complexity of delivered product

- McCabe's cyclomatic complexity counts across the system
- Halstead’s measure
- Card's design complexity measures
- Predicted defects and maintenance costs, based on complexity measures
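
For McCabe's measure specifically, the standard formula is V(G) = E - N + 2P (edges, nodes and connected components of the control-flow graph). A small sketch with a made-up graph:

# McCabe's cyclomatic complexity: V(G) = E - N + 2P.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Hypothetical control-flow graph of one routine: 9 edges, 8 nodes.
print(cyclomatic_complexity(edges=9, nodes=8))  # 3 -> 3 linearly independent paths to test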

8. Test coverage

- Breadth of functional coverage
- Percentage of paths, branches or conditions that were actually tested
- Percentage by criticality level: perceived level of risk of paths
- The ratio of the number of detected faults to the number of predicted faults.
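
A quick sketch of how the path/branch/condition percentages and the detected-vs-predicted fault ratio are computed, using hypothetical counts:

# Coverage and fault-detection ratios (hypothetical counts).
def coverage_percent(covered, total):
    return 100.0 * covered / total

print(coverage_percent(covered=412, total=530))  # ~77.7 % of branches exercised
detected_faults, predicted_faults = 48, 60       # prediction e.g. from a complexity model
print(detected_faults / predicted_faults)        # 0.8 detected-to-predicted ratio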

9. Cost of defects

- Business losses per defect that occurs during operation
- Business interruption costs; costs of work-arounds
- Lost sales and lost goodwill
- Litigation costs resulting from defects
- Annual maintenance cost (per function point)
- Annual operating cost (per function point)
- Measurable damage to your boss's career

10. Costs of quality activities

- Costs of reviews, inspections and preventive measures
- Costs of test planning and preparation
- Costs of test execution, defect tracking, version and change control
- Costs of diagnostics, debugging and fixing
- Costs of tools and tool support
- Costs of test case library maintenance
- Costs of testing & QA education associated with the product
- Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work

- Re-work effort (hours, as a percentage of the original coding hours)
- Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
- Re-worked software components (as a percentage of the total delivered components)

12. Reliability

- Availability (percentage of time a system is available, versus the time the system is needed to be available)
- Mean time between failure (MTBF).
- Mean time to repair (MTTR)
- Reliability ratio (MTBF / MTTR)
- Number of product recalls or fix releases
- Number of production re-runs as a ratio of production runs
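
The reliability figures are again straightforward ratios. A sketch with hypothetical uptime and repair data for one month (720 hours) of required operation:

# Availability, MTBF, MTTR and the reliability ratio (hypothetical figures).
required_hours = 720.0   # hours the system was needed
repair_hours = 6.0       # total downtime spent on repairs
failures = 3             # number of failures in the period

availability = 100.0 * (required_hours - repair_hours) / required_hours
mtbf = required_hours / failures   # mean time between failures
mttr = repair_hours / failures     # mean time to repair

print(availability)   # ~99.2 %
print(mtbf)           # 240.0 hours
print(mttr)           # 2.0 hours
print(mtbf / mttr)    # reliability ratio = 120.0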

Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousands of lines of code; FP = function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing
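
The three cost formulas above can be illustrated with hypothetical project figures (60,000 spent on testing out of a 400,000 total cost, against a 55,000 testing budget, locating 150 defects):

# Test cost %, cost to locate a defect, and budget attainment (hypothetical figures).
testing_cost = 60_000.0
total_project_cost = 400_000.0
budgeted_testing_cost = 55_000.0
defects_located = 150

print(100.0 * testing_cost / total_project_cost)  # test cost = 15 %
print(testing_cost / defects_located)             # cost to locate a defect = 400
print(testing_cost / budgeted_testing_cost)       # ~1.09 -> testing ran about 9 % over budget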

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100
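
For example, assuming hypothetical counts of 180 defects found in testing and 20 acceptance defects found after delivery:

# Quality of Testing as defined above (hypothetical counts).
testing_defects = 180
acceptance_defects_after_delivery = 20
quality = 100.0 * testing_defects / (testing_defects + acceptance_defects_after_delivery)
print(quality)  # 90.0 %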

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
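
A final sketch showing both productivity figures, assuming made-up effort numbers in person-hours:

# Test planning and test execution productivity (hypothetical figures).
designed_test_cases = 240
design_effort_hours = 80.0       # effort for design and documentation
executed_test_cycles = 6
execution_effort_hours = 120.0   # actual effort for testing

print(designed_test_cases / design_effort_hours)      # 3.0 test cases designed per hour
print(executed_test_cycles / execution_effort_hours)  # 0.05 test cycles executed per hour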
