Read Part 1: Guide to Effective Test Reporting.

Guide to Metrics Collection in Software Testing

Metrics are defined as ‘standards of measurement’ used to gauge the effectiveness and efficiency of a particular activity within a project.
Base metrics:
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead, Project Manager, Senior Management and the Client. Every project should track the following test metrics (a minimal sketch of how these could be recorded follows the list):


• Number of Test Cases Passed
• Number of Test Cases Failed
• Number of Test Cases Not Implemented during the particular test cycle
• Number of Test Cases Added / Deleted
• Number of Test Cases Re-executed
• Time Taken for Execution of the Test Cases
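
To make the collection concrete, the raw counts for one cycle can be kept in a simple record. The following is a minimal Python sketch; the field names and values are illustrative assumptions, not a standard schema:

    from dataclasses import dataclass

    @dataclass
    class BaseMetrics:
        # Raw counts gathered by the Test Analyst for one test cycle.
        cycle: str
        passed: int             # test cases passed
        failed: int             # test cases failed
        not_implemented: int    # test cases not implemented this cycle
        added: int              # test cases added
        deleted: int            # test cases deleted
        re_executed: int        # test cases re-executed
        execution_hours: float  # time taken to execute the test cases

    cycle1 = BaseMetrics("Cycle 1", passed=180, failed=15, not_implemented=5,
                         added=10, deleted=2, re_executed=25, execution_hours=120.0)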

Calculated metrics:
Calculated metrics convert the existing base metric data into more useful information. These metrics are generally the responsibility of the Test Lead and can be tracked at many different levels, e.g. by the Module Lead, Technical Lead, Project Manager and Testers. The following calculated metrics are recommended for implementation in all test efforts (a short sketch of a few of the calculations follows the list):
• % Complete Test Cases
• % Defects Corrected in particular test cycle
• % Test Coverage against Use Cases
• % Re-Opened Defects
• % Test Cases Passed
• % Test Cases Failed
• % Test Effectiveness
• % Test Cases Blocked
• % Test Efficiency
• % Failures
• Defect Discovery Rate
• Defect Removal Rate and Cost
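
As a minimal sketch of how a few of these values fall out of the base metrics (the counts below are hypothetical):

    # Deriving a few calculated metrics from hypothetical base metric counts.
    passed, failed, not_implemented = 180, 15, 5

    def pct(part, whole):
        # Percentage helper; guards against division by zero.
        return 100.0 * part / whole if whole else 0.0

    executed = passed + failed
    print(pct(passed, executed))                      # % Test Cases Passed
    print(pct(failed, executed))                      # % Test Cases Failed
    print(pct(executed, executed + not_implemented))  # % Complete Test Cases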

Measurements for Software Testing
Following the V-model of software development, every development phase needs a corresponding software testing phase. Every step of this testing process needs to be measured so as to guarantee a quality product to the customer. At the same time, the measurements need to be easy to understand and implement.

1. Size of Software: Software size means the amount of functionality in an application. The complete project estimation for testing depends on determining the size of the software. Many methods are available to compute the size of the software; the following are a couple of them:
• Function Point Analysis
• Use Cases Estimation Methodology

2. Requirements review: First, an SRS (Software Requirements Specification) should be obtained from the client. This SRS should be:
a) Complete: A complete requirements specification must precisely define all the real world situations that will be encountered and the capability’s responses to them. It must not include situations that will not be encountered or unnecessary capability features.
b) Consistent: A consistent specification is one in which there is no conflict between individual requirement statements that define the behaviour of essential capabilities, and in which specified behavioural properties and constraints do not have an adverse impact on that behaviour.
c) Correct: For a requirements specification to be correct it must accurately and precisely identify the individual conditions and limitations of all situations that the desired capability will encounter and it must also define the capability’s proper response to those situations.
d) Structured: Related requirements should be grouped together and the document should be logically structured.
e) Ranked: Requirements within the document should be ranked by importance.
f) Testable: In order for a requirements specification to be testable, it must be stated in such a manner that pass/fail or quantitative assessment criteria can be derived from the specification itself and/or referenced information.
g) Traceable: Each requirement stated within the SRS document must be uniquely identified to achieve traceability. Uniqueness is facilitated by the use of a consistent and logical scheme for assigning identification to each specification statement within the requirements document.
h) Unambiguous: A statement of a requirement is unambiguous if it can only be interpreted one way. This is perhaps the most difficult attribute to achieve using natural language. The use of weak phrases or poor sentence structure will open the specification statement to misunderstandings.
i) Validatable: To validate a requirements specification, all the project participants (managers, engineers and customer representatives) must be able to understand, analyze and accept or approve it.
j) Verifiable: A requirements specification requires review and analysis by technical and operational experts in the domain addressed by the requirements. In order to be verifiable, requirement specifications at one level of abstraction must be consistent with those at another level of abstraction.

Review efficiency can then be computed. Review efficiency is a metric that provides insight into the quality of the reviews and testing conducted.

Review efficiency = 100 * Total number of defects found by reviews / Total number of project defects
High values indicate that an effective review process is in place and that defects are detected soon after they are introduced.
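
As a quick sketch of the calculation (the defect counts below are hypothetical):

    def review_efficiency(defects_found_by_reviews, total_project_defects):
        # Share of all project defects that were caught by reviews.
        return 100.0 * defects_found_by_reviews / total_project_defects

    # e.g. reviews caught 40 of the 90 defects logged on the project
    print(review_efficiency(40, 90))  # ~44.4: under half caught before testing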

3. Effectiveness of testing requirements: Measuring the effectiveness of testing requirements involves:
a) Specification of requirements and maintenance of a Requirements Traceability Matrix
Specification of requirements must include:
• SRS Objective
• SRS Purpose
• Interfaces
• Functional Capabilities
• Performance Levels
• Data Structures/Elements
• Safety
• Reliability
• Security/Privacy
• Quality
• Constraints & Limitations

Once the requirements have been specified and reviewed, the next step is to update the Requirements Traceability Matrix (RTM). The RTM is an extremely important document. In its simplest form, it provides a way to determine whether all requirements are tested. However, the RTM can do much more. For example:

Requirement      Estimated Tests Required   Type of Tests                                      Automated/Manual          Re-usable Test Cases
User Interface   20                         5 Functional, 3 Positive, 3 Negative, 4 Boundary   8 Automated, 12 Manual    TC101
Command line     18                         10 Functional, 3 Positive, 5 Negative              10 Automated, 8 Manual    TC201
Total            110                                                                           20 Automated, 40 Manual   50

In this example, the RTM is used as a test planning tool to help determine how many tests are required, what types of tests are required, whether tests can be automated or manual, and if any existing tests can be re-used. Using the RTM in this way helps ensure that the resulting tests are most effective.
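
For instance, if each RTM row is kept as structured data, the planning totals fall out automatically. A minimal sketch mirroring the example rows above (the field names are illustrative assumptions):

    # RTM rows as structured data (mirroring the example table above).
    rtm = [
        {"requirement": "User Interface", "estimated_tests": 20,
         "automated": 8, "manual": 12, "reusable": ["TC101"]},
        {"requirement": "Command line", "estimated_tests": 18,
         "automated": 10, "manual": 8, "reusable": ["TC201"]},
    ]

    # Planning totals for the rows listed so far.
    print(sum(r["estimated_tests"] for r in rtm))  # 38 estimated tests
    print(sum(r["automated"] for r in rtm))        # 18 of them automated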

b) Measuring the mapping of test cases to requirements:
While mapping test cases to their corresponding requirements, we must first determine the following for each test case:
a) Number of requirements it tests
b) Priority of requirement it tests
c) Effort for its execution versus coverage of requirements
Here, the number of requirements a test case tests and the priority of those requirements can be obtained from the Software Requirements Specification (SRS). We should ensure that every requirement has a set of test cases mapped to it.

Measuring the efficiency of the testing process:
In the IT industry, for any number of reasons, there is quite often a difference between what we plan and what actually happens. There is therefore a need to measure this difference between the planned and the actual, as it helps in computing the variance between them. Let's have an overview of the most common attributes that contribute to measuring efficiency in software testing:
1. Cost: The key factor from the organizational perspective can be measured by Cost Variance (CV). The Cost Variance metric relates the project's costs to the planned and budgeted expenditure. This is a very important metric as it helps in deriving the ROI of the project execution. The formula is:

CV = 100*(AC-BC)/BC

where AC is the Actual Cost and BC is the Budgeted Cost.

A positive result is a red flag for project management that the project costs are overshooting the planned expenditure, and vice versa.

2. Effort: The next most important metric is Effort Variance (EV). Effort Variance helps in identifying effort overruns in the project. Every project has an estimated effort; quite often this is overrun, and once the overrun is identified it can be used as a basis for billing the client. Effort variance is directly related to cost variance, since it is an indicator of the actual project costs in comparison with the planned costs, and it also relates to the allocation of resources. EV can be measured by:

EV = 100*(AE-BE)/BE

where AE is the Actual Effort and BE is the Budgeted Effort.

A negative result is a good sign for project management: the effort is being well managed within the project.

3. Schedule: The third most important metric is Schedule Variance (SV). This metric evaluates the project management's planning, scheduling and tracking techniques. It also indicates the project's performance in completing planned activities according to the project schedule. SV can be measured by:
SV = 100*(AD-ED)/ED
where AD is the Actual Duration and ED is the Estimated Duration.

A positive result is a clear indication that the project is behind schedule, and a negative result means it is ahead of schedule. In either case, a large variance in either direction means that the project scheduling or planning was not accurate.
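
Since CV, EV and SV all share the same shape, one helper covers all three; a minimal sketch with hypothetical figures:

    def variance(actual, planned):
        # Generic variance: 100 * (actual - planned) / planned.
        return 100.0 * (actual - planned) / planned

    print(variance(120_000, 100_000))  # CV = +20.0: costs overshooting the budget
    print(variance(950, 1_000))        # EV = -5.0: effort within the estimate
    print(variance(130, 120))          # SV = +8.33: project behind schedule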

4. Cost of Quality: This metric sums the effort spent on the prevention, verification and validation activities in the project, and so helps in arriving at the total effort spent on quality-related activities. Prevention activity (PA) includes time spent on planning, training and defect prevention. V&V (VV) effort includes time spent on activities such as walkthroughs, reviews, inspections and, last but not least, testing. It is also important to consider the post-testing (PT) effort, which includes rework (bug fixing, retesting, etc.):
COQ = 100 * (PA+VV+PT)/Total Project Effort
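
A short sketch with hypothetical effort figures (say, in person-hours):

    def cost_of_quality(pa, vv, pt, total_effort):
        # Share of total project effort spent on quality activities.
        return 100.0 * (pa + vv + pt) / total_effort

    print(cost_of_quality(pa=80, vv=300, pt=120, total_effort=2_000))  # 25.0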

5. Product Size Variance: PSV can be defined as the degree of variance between the estimated size and the actual size. This provides an indication of the scope of the work to be performed:
PSV = 100*(AS - IES)/IES
where AS is the Actual Size and IES is the Initial Estimated Size. The size of the project can also be derived from the number of Function Points in the application.

6. Defect Occurrence: Defect occurrence can be defined as the total number of defects in the software relative to its size. The total is the sum of the defects found during the different cycles of testing plus the review comments:
Defect Occurrence = Total number of defects / Function Points

7. MTBF (Mean Time Between Failures): MTBF is defined as the average time between two critical system failures or breakdowns. This gives the amount of time that a user may reasonably expect the software system to work before a breakdown occurs.
Mean Time Between Failures = Total time of software system operation / Number of critical software system failures
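
Both ratios are straightforward to compute; a sketch with hypothetical figures:

    def defect_occurrence(total_defects, function_points):
        # Defects per unit of size, here measured in function points.
        return total_defects / function_points

    def mtbf(total_operation_hours, critical_failures):
        # Average operating time between critical failures.
        return total_operation_hours / critical_failures

    print(defect_occurrence(90, 450))  # 0.2 defects per function point
    print(mtbf(1_000, 4))              # 250 hours between critical failures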

Defects: Defects can be measured in a number of ways; the following are a couple of them.
Identifying Defects: This metric indicates the distribution of total project defects across phases or activities such as Requirements, Design, Coding and Test Planning. It helps in identifying the processes that need improvement.
Defect Distribution = 100 * Total number of defects attributed to the specific phase / Total number of defects.
Defect removal effectiveness: Dividing the number of defects removed during a phase by that number plus the number of defects found later (but that already existed during that phase) approximates the defect removal effectiveness of the phase.
For example, the following table shows the defects detected during each phase, broken down by the phase in which they were introduced (the columns).

Phase Detected    Requirements   Design   Coding/Unit Test
Requirements      10             -        -
Design            5              20       -
Coding            0              4        26
Test              2              5        8
Field             1              2        7

Therefore, the Defect Removal Effectiveness (DRE) for each phase is as follows:
DRE (Requirements) = 10 / (10+5+0+2+1) * 100 = 55.55%
DRE (Design) = (5+20) / (5+0+2+1+20+4+5+2) * 100 = 64.10%
DRE (Coding) = (0+4+26) / (0+2+1+4+5+2+26+8+7) * 100 = 54.54%
DRE (Test) = (2+5+8) / (2+1+5+2+8+7) * 100 = 60%
Therefore, for testing across the entire development cycle, the overall defect removal effectiveness is:
DRE = (Pre-release Defects) / (Total Defects) * 100
i.e. DRE = (10+5+0+2+20+4+5+26+8) / (10+5+0+2+1+20+4+5+2+26+8+7) * 100 = 88.89%
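
The per-phase arithmetic above is easy to mechanize; the following sketch reproduces the table's figures (up to rounding):

    # Defect counts keyed by (phase detected, phase introduced), from the table above.
    found = {
        ("Requirements", "Requirements"): 10,
        ("Design", "Requirements"): 5,  ("Design", "Design"): 20,
        ("Coding", "Requirements"): 0,  ("Coding", "Design"): 4,  ("Coding", "Coding"): 26,
        ("Test", "Requirements"): 2,    ("Test", "Design"): 5,    ("Test", "Coding"): 8,
        ("Field", "Requirements"): 1,   ("Field", "Design"): 2,   ("Field", "Coding"): 7,
    }
    phases = ["Requirements", "Design", "Coding", "Test", "Field"]

    def dre(phase):
        # Defects removed in this phase, over those plus the ones that escaped it.
        removed = sum(n for (det, _), n in found.items() if det == phase)
        later = phases[phases.index(phase) + 1:]
        escaped = sum(n for (det, intro), n in found.items()
                      if det in later and phases.index(intro) <= phases.index(phase))
        return 100.0 * removed / (removed + escaped)

    for p in phases[:-1]:
        print(p, f"{dre(p):.2f}")  # 55.56, 64.10, 54.55, 60.00

    # Overall DRE: pre-release defects over all defects.
    pre_release = sum(n for (det, _), n in found.items() if det != "Field")
    print(f"{100.0 * pre_release / sum(found.values()):.2f}")  # 88.89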

The longer a defect exists in a product before it is detected, the more expensive it is to fix. Knowing the DRE for each phase can help an organization target its process improvement efforts to improve defect detection methods where they can be most effective.

Benefits of implementing metrics in software testing:

The following are some of the benefits of implementing metrics while testing:
1. Computing metrics improves project planning.
2. Helps us to understand whether we have achieved the desired quality.
3. Helps in improving the process followed.
4. Helps in analyzing the risks associated with the project.
5. Analyzing metrics in every phase of testing improves defect removal efficiency.

