Better Requirements | Better Testability


“Good Manual Testers are hard to find…”

To do better testing, testers should understand quality attributes. Testability is one important quality attribute.

Testability is the degree of difficulty of testing a system. It is determined by aspects of both the system under test and its development approach.

  • Higher testability: better tests at the same cost.
  • Lower testability: weaker tests at the same cost.

When testability is low, bugs are more likely to slip through testing. The main causes of low testability are:


1. Functional/Non-functional requirements – Poor or ambiguous requirements written by the author, and insufficient requirement reviews by testers.

2. Technical requirements/Architecture/Design of the system – The development team is concerned with delivering functionality for the customer, not with testability. The testing team does not get management support and is not involved in design meetings.

Higher testability can be achieved in the following ways.
Note – testability means not only that the software can be tested, but that it is easy to test. The goal of increasing the testability of software is not just to detect defects but, more importantly, to detect defects as soon as they are introduced.



1. Define requirements explicitly. Many improvements to the testability of a system stem from its requirements. The business analyst should keep the requirements language simple so that it is easily understood by both the testing and development teams.
While reviewing the requirements/specifications, testers/test case designers should inspect the requirements for testability. But how can testers inspect requirements for testability?
– James Bach has defined the Heuristics of Software Testability. Go through this checklist; it is very useful while reviewing specification documents.

– Go through the Guidelines and Checklist for Requirement Document Review.

2. Testers should be involved in software design/architecture meetings so that they understand how the system is being developed. This is essential because it helps testers understand the design, the complexity of the system, and the relations between one function/module and another, all of which leads to higher testability. Through this, the testing team knows which programs are touched in which scenarios.
Sometimes, to cover the client’s requirements, the development team implements them in a complex way that makes testing difficult. For example, developers might include nightly database jobs that correct the data. Such choices increase the complexity of the system and lower its testability, and they should be avoided where possible. This is one more reason why testers should be involved in software design/architecture meetings.


Test Effectiveness and Test Efficiency – Revisiting


Effectiveness – How well the user achieves the goals they set out to achieve using the system (process).
Efficiency – The resources consumed in order to achieve those goals. In the software field, these terms are primarily used to indicate the effort put into developing the software and how satisfied the customer is with the product.
Effectiveness signifies how well the customer requirements have been met, i.e. whether the final product effectively solves the customer’s problem.

Efficiency is internal to the organization that produced the software product. It is basically how efficiently the available resources (time, hardware, personnel, expertise, etc.) were utilized to create the software product.


Thus, effectiveness of software is a measure of the customer’s response to how well the product meets its requirements, and efficiency is a measure of how well resources were utilized to create the software product.

Effective – producing a powerful effect
Efficient – producing results with little wasted effort

Example:
James efficiently made the calls he had wanted to make. Robert didn’t make the calls he had planned to make, but effectively met his sales quota by profiting from a chance encounter with a business acquaintance.

Key Point: When you’re effective, you are able to accomplish the worthwhile goal you’ve chosen. When you’re efficient, you quickly carry out actions. You won’t be effective, however, unless those actions result in your achieving a meaningful goal.

Test Effectiveness = ((Defects removed in a phase) / (Defect injected + Defect escaped)) * 100
Test Efficiency = (Test Defects / (Test Defects + Acceptance Defects)) * 100

Test Defects = Unit + Integration + System defects
Acceptance Defects = Bugs found by the customer
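
As a rough illustration, the two formulas above can be expressed in a few lines of Python; the function names and sample counts below are hypothetical, not taken from the article.

def test_effectiveness(defects_removed_in_phase, defects_injected, defects_escaped):
    """Test Effectiveness = defects removed / (defects injected + defects escaped) * 100."""
    return 100.0 * defects_removed_in_phase / (defects_injected + defects_escaped)

def test_efficiency(test_defects, acceptance_defects):
    """Test Efficiency = test defects / (test defects + acceptance defects) * 100,
    where test defects = unit + integration + system defects and
    acceptance defects = bugs found by the customer."""
    return 100.0 * test_defects / (test_defects + acceptance_defects)

# Example: 45 defects removed in a phase, 50 injected, 5 escaped;
# 90 defects found by the test team, 10 found by the customer.
print(test_effectiveness(45, 50, 5))   # 81.8
print(test_efficiency(90, 10))         # 90.0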


Common Software Testing Pitfalls


by – Wing La; Source: http://ebiz.u-aizu.ac.jp/~paikic/lecture/2007-2/adv-internet/papers/TestE-Commerce.pdf

Poor estimation. Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements. Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage. An insufficient number of test cases cannot test the full functionality of the system.

Inadequate test data. The test data fails to cover the range of all possible data values—that is, it omits boundary values.

False assumptions. Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late. Testing too late in the development process leaves little time to manoeuvre when tests find major defects.

“Stress-easy” testing. When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.

Environmental mismatch. Testing the system in an environment that is not the same as the environment on which it will be installed doesn’t tell you how it will work in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions. Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.

Configuration mismanagement. In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can’t ensure that the component will work as intended.

Testing overkill. Over-testing relatively risk-free areas of the system diverts precious resources from the system’s more high-risk (and sometimes difficult to test) areas.

No contingency planning. There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing. When the development team carries out testing, it can lack the objectivity of an independent testing team.


Guide to Effective Test Status Reporting and Metrics Collection – Part 1


Test Status Report
Test Status Reporting is formalized reporting on the status of the project from a testing point of view. It is part of the project communication plan. The report shows quantitative information about the project.
Test Status Reporting is the component that informs key project stakeholders of the critical aspects of the project’s status. Good status reporting prevents surprises to project sponsors and stakeholders. Formal status reporting needs to be provided as part of the project steering committee meetings.
Purpose
The purpose of a Test Status Report is to provide an ongoing history of the project, which becomes very useful in terms of tracking progress, evaluation and review. Test Status Reports form a part of the project review process both during and after completion of the project. A Test Status Report should identify the key areas of importance that will assist the stakeholders of the project in determining the “state” of the software development and test efforts. It helps in answering one of the most asked questions of system testers … “Will the software be ready for release on the agreed-upon date?”
The Test Lead should try to maintain a careful balance between timeliness, accuracy, and consistency when preparing these reports as the status of the defects keeps changing.

Owner(s)
The person responsible for managing all test activities (Test Manager or Test lead) should be the owner of the test status report.

Who should use it?
The Test Status Report is a document used by the Project Manager to understand the ongoing history of the project, which becomes very useful in terms of tracking progress, evaluation and review. Test Status Reports form a part of the project review process both during and after completion of the project.
The target audience for a Test Status Report can vary; anyone interested in project health can be the audience. This can be the project management team, the project steering committee, the customer or other key stakeholders of the project.
(Figure-1: Weekly Status Report)

A brief description of the different attributes of the test status report (figure-1) is presented below.
1. Project Name – The project name that you are reporting metrics on.
2. Duration – The reporting period
3. Report by – Owner / Author of the Report.
4. Report to – Target Audience.
5. Previous week’s Accomplishments – Activities performed after the last reporting date.
6. Missed Accomplishments – Planned activities from the previous report that could not be worked on.
7. Plans for this week – Future planning for the tasks to be performed during the current reporting period.
8. Issues

The role of a Test Lead is to give approximate, if not exact, reports on the status of the project. At the start of test execution, the Test Lead presumes that all the risks to be addressed by this phase of testing still exist. As we progress through the test plan, risks are cleared one by one as all the tests that address each risk pass. Halfway through the test plan, the tester can say, “we have run some tests, these risks have been addressed, here are the outstanding risks of release.”

Suppose testing continues, but the testers run out of time before the test plan is completed. The go live date approaches, and management wants to judge whether the system is acceptable. Although the testing has not finished, the tester can say, “we have run some tests, these risks have been addressed, here are the outstanding risks of release.” The tester can present exactly the same message throughout the test phase, except the proportion of risks addressed to those outstanding increases over time. How does this help?

Throughout the test execution phase, management always has enough information to make the release decision. Either management will decide to release with known risks, or it will choose not to release until the known risks (the outstanding risks that are unacceptable) are addressed. Most of the time, coding gets 90% of the effort and testing is squeezed yet again. To avoid this, code and test activities should be defined as separate tasks in the project plan.

Most managers like the risk-based approach to testing because it gives them more visibility of the test process. Risk-based testing gives management a big incentive to let you take out a few days early in the project to conduct the risk assessment. All testers have been arguing that they should be involved in projects earlier. If managers want to see risk-based test reporting they must let testers get involved earlier. This must be a good thing for testers and our projects.

9. Cumulative Issues – Issues from the previous report that have not yet been addressed should be listed here.
10. Test execution Details for previous week – Test details for the reporting period

      1. Total effort spent on test execution
      2. Total number of functionalities tested during the reporting period.

11. Test Summary

      1. Total number of test cycles, number of defects found, and defect tracking details should be listed in this section.
      2. Test Coverage details in terms of functionalities tested and test cases executed should be detailed in this section.

12. Project Milestones – Important project schedule milestones from a testing point of view should be listed here.
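
If the report is assembled programmatically, the attributes above map naturally onto a simple record. The following is a minimal Python sketch; the field names are a paraphrase of figure-1, not a prescribed format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WeeklyTestStatusReport:
    # Attributes 1-12 described above; all field names are illustrative.
    project_name: str
    duration: str                  # reporting period, e.g. "01-Mar to 07-Mar"
    report_by: str                 # owner / author of the report
    report_to: str                 # target audience
    previous_week_accomplishments: List[str] = field(default_factory=list)
    missed_accomplishments: List[str] = field(default_factory=list)
    plans_for_this_week: List[str] = field(default_factory=list)
    issues: List[str] = field(default_factory=list)
    cumulative_issues: List[str] = field(default_factory=list)
    execution_effort_hours: float = 0.0    # total effort spent on test execution
    functionalities_tested: int = 0
    test_summary: str = ""                 # cycles, defects, coverage details
    project_milestones: List[str] = field(default_factory=list)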

The Test Status report should also represent the status through charts and graphs for easy and better understanding.
The following are a few charts that can be included in the test status report.


Functionality Coverage vs. Duration
Functionalities tested during different releases or different reporting periods can be shown in a graph to give an indication of the progress of test coverage.
(Figure-2 : Functionality Coverage chart)


Defects reported vs. closed
This chart shows the find and fix counts for bugs reported against the reporting period, which gives management an idea of the defect trends. A flattening cumulative open curve indicates stability, or at least the inability of the test team to find many more bugs with the current test system. A cumulative closed curve that converges with the open curve indicates quality, i.e. resolution of the problems found by testing. Overall, this chart gives a snapshot of product quality as seen by testing, as well as telling management about the bug find and fix processes.
(Figure-3: Defect status)
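
One way to produce such a chart is to plot the cumulative reported and closed counts per reporting period. Below is a small matplotlib sketch; the weekly numbers are made up purely for illustration.

import matplotlib.pyplot as plt
from itertools import accumulate

# Hypothetical weekly counts of defects reported and closed.
weeks    = ["W1", "W2", "W3", "W4", "W5", "W6"]
reported = [12, 18, 15, 9, 5, 2]
closed   = [4, 10, 14, 12, 8, 6]

plt.plot(weeks, list(accumulate(reported)), marker="o", label="Cumulative reported")
plt.plot(weeks, list(accumulate(closed)), marker="s", label="Cumulative closed")
plt.xlabel("Reporting period")
plt.ylabel("Defect count")
plt.title("Defects reported vs. closed")
plt.legend()
plt.show()

A flattening reported curve with the closed curve converging toward it is the stability-plus-resolution picture described above.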


Defect Location
This graph displays the defect occurrence in different modules. This helps the management in assessing the most problematic area in the project. Based upon this a root-cause analysis can be conducted and appropriate corrective measures can be taken to reduce the risk.
(Figure-4: Defect Location)


Defect Classification
These details help in conducting a root cause analysis and taking corrective measures to improve product quality.
(Figure-5: Defect Classification)


Test Case Progression Chart
This gives a clear indication of test progress against the test plan over time.
(Figure-6: Test Progression)


Severity wise Defect Distribution
This gives an indication of the severity-wise defect distribution.
(Figure-7: Severity-wise Defect Distribution)



What should be the ideal frequency?
The frequency of test status report can be decided based upon the project test plan.
For a small project where releases to the testing team happen every alternate day, the status report should be prepared daily to keep management informed about test progress.
For large projects where the application is released for testing once a month, the status report should be prepared once a week.

Periodic meetings should be held to discuss the project status, either verbally or based on the Test Status Report. The meetings should be frequent enough that progress can be reported against a number of milestones since the last meeting.

Read 2nd Part: Guide to Metrics Collection in Software Testing (includes Benefits of implementing metrics in software testing)

References: 
1. Measuring software product quality during testing by Rob Hendriks, Robert van Vonderen and Erik van Veenendaal
2. Software Metrics: A Rigorous Approach by: Norman E. Fenton
3. Software Test Metrics – A Practical Approach by Shaun Bradshaw
4. Testing Effectiveness Assessment (an article by Software Quality Consulting)
5. P.R.Vidyalakshmi
6. Measurement of the Extent of Testing (http://www.kaner.com/pnsqc.html)
7. http://www.stickyminds.com/
8. Effective Test Status Reporting by Rex Black
9. http://www.projectmanagement.tas.gov.au/guidelines/pm5_10.htm
10. http://whatis.com
11. Risk Based Test Reporting by Paul Gerrard


Guide to Metrics Collection in Software Testing


Read Part 1: Guide to Effective Test Reporting.

Metrics are defined as ‘standards of measurement’ used to indicate a method of gauging the effectiveness and efficiency of a particular activity within a project.
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead, Project Manager, Senior Management and the Client. Every project should track the following test metrics:


• Number of Test Cases Passed
• Number of Test Cases Failed
• Number of Test Cases Not Implemented during the particular test cycle
• Number of Test Cases Added / Deleted
• Number of Test Cases Re-executed
• Time Taken for Execution of the Test Cases

Calculated metrics:
Calculated metrics convert the existing base metrics data into more useful information. These metrics are generally the responsibility of the Test Lead and can be tracked at many different levels, for example by module lead, Technical Lead, Project Manager and testers. The following calculated metrics are recommended for implementation in all test efforts (a small calculation sketch follows this list):
• % Complete Test Cases
• % Defects Corrected in particular test cycle
• % Test Coverage against Use Cases
• % Re-Opened Defects
• % Test Cases Passed
• % Test Cases Failed
• % Test Effectiveness
• % Test Cases Blocked
• % Test Efficiency
• % Failures
• Defect Discovery Rate
• Defect Removal Rate and Cost
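
As a minimal sketch of how some of these percentages can be derived from the base metrics listed earlier, the counts and the exact set of calculations below are illustrative assumptions, not a prescribed toolset.

# Illustrative base metrics gathered for one test cycle (made-up numbers).
base = {
    "executed_passed": 180,
    "executed_failed": 15,
    "not_implemented": 5,
    "blocked": 10,
    "total_planned": 210,
    "defects_found": 40,
    "defects_corrected": 32,
    "defects_reopened": 3,
}

executed = base["executed_passed"] + base["executed_failed"]

calculated = {
    "% Complete":           100.0 * executed / base["total_planned"],
    "% Test Cases Passed":  100.0 * base["executed_passed"] / executed,
    "% Test Cases Failed":  100.0 * base["executed_failed"] / executed,
    "% Test Cases Blocked": 100.0 * base["blocked"] / base["total_planned"],
    "% Defects Corrected":  100.0 * base["defects_corrected"] / base["defects_found"],
    "% Re-Opened Defects":  100.0 * base["defects_reopened"] / base["defects_found"],
}

for name, value in calculated.items():
    print(f"{name}: {value:.1f}%")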

Measurements for Software testing
Following the V model of software development, for every development activity there needs to be a corresponding software testing activity. Every step of this testing process needs to be measured so as to guarantee a quality product to the customer. At the same time, the measurements need to be easy to understand and implement.

1. Size of Software: Software size means the amount of functionality of an application. The complete project estimation for testing depends on determining the size of the software. Many methods are available to compute software size; the following are a couple of them:
• Function Point Analysis
• Use Cases Estimation Methodology

2. Requirements review: First, an SRS (Software Requirements Specification) should be obtained from the client. This SRS should be:
a) Complete: A complete requirements specification must precisely define all the real world situations that will be encountered and the capability’s responses to them. It must not include situations that will not be encountered or unnecessary capability features.
b) Consistent: A consistent specification is one where there is no conflict between individual requirement statements that define the behaviour of essential capabilities and specified behavioural properties and constraints do not have an adverse impact on that behaviour.
c) Correct: For a requirements specification to be correct it must accurately and precisely identify the individual conditions and limitations of all situations that the desired capability will encounter and it must also define the capability’s proper response to those situations.
d) Structured: Related requirements should be grouped together and the document should be logically structured.
e) Ranked: Requirement document should be ranked by the importance of the requirement
f) Testable: In order for a requirements specification to be testable, it must be stated in such a manner that pass/fail or quantitative assessment criteria can be derived from the specification itself and/or referenced information.
g) Traceable: Each requirement stated within the SRS document must be uniquely identified to achieve traceability. Uniqueness is facilitated by the use of a consistent and logical scheme for assigning identification to each specification statement within the requirements document.
h) Unambiguous: A statement of a requirement is unambiguous if it can only be interpreted one way. This perhaps, is the most difficult attribute to achieve using natural language. The use of weak phrases or poor sentence structure will open the specification statement to misunderstandings.
i) Validatable: To validate a requirements specification, all the project participants, managers, engineers and customer representatives must be able to understand, analyze and accept or approve it.
j) Verifiable: A requirements specification requires review and analysis by technical and operational experts in the domain addressed by the requirements. In order to be verifiable, requirement specifications at one level of abstraction must be consistent with those at another level of abstraction.

Review efficiency can then be computed. Review efficiency is a metric that provides insight into the quality of the reviews and testing conducted.

Review efficiency = 100 * Total number of defects found by reviews / Total number of project defects
A high value indicates that an effective review process is implemented and that defects are detected soon after they are introduced.
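
In Python, the same calculation might look like the sketch below; the function name and example counts are illustrative.

def review_efficiency(defects_found_by_reviews, total_project_defects):
    """Review efficiency = 100 * review defects / total project defects."""
    return 100.0 * defects_found_by_reviews / total_project_defects

# Example: reviews caught 30 of the 120 defects logged on the project.
print(review_efficiency(30, 120))  # 25.0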

3. Effectiveness of testing requirements: Measuring the effectiveness of testing requirements involves:
a) Specification of requirements and maintenance of a Requirements Traceability Matrix (RTM)
Specification of requirements must include:
a) SRS Objective
b) SRS Purpose
c) Interfaces
d) Functional Capabilities
e) Performance Levels
f) Data Structures/Elements
g) Safety
h) Reliability
i) Security/Privacy
j) Quality
k) Constraints & limitations

Once the requirements have been specified and reviewed, the next step is to update the Requirements Traceability Matrix. The RTM is an extremely important document. In its simplest form, it provides a way to determine whether all requirements are tested. However, the RTM can do much more. For example,

Requirement     | Estimated Tests Required | Type of Tests                                    | Automated/Manual        | Re-usable Test Cases
User Interface  | 20                       | 5 Functional, 3 Positive, 3 Negative, 4 Boundary | 8 Automated, 12 Manual  | TC101
Command Line    | 18                       | 10 Functional, 3 Positive, 5 Negative            | 10 Automated, 8 Manual  | TC201
Total           | 110                      |                                                  | 20 Automated, 40 Manual | 50
(Estimated Tests: 50 Manual)

In this example, the RTM is used as a test planning tool to help determine how many tests are required, what types of tests are required, whether tests can be automated or manual, and if any existing tests can be re-used. Using the RTM in this way helps ensure that the resulting tests are most effective.
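
Used this way, the RTM is essentially a mapping from each requirement to its planned tests. The sketch below mirrors the example table; the data structure itself and the helper names are assumptions, not a mandated format.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RTMEntry:
    estimated_tests: int
    test_types: Dict[str, int]    # e.g. {"Functional": 5, "Positive": 3, ...}
    automated: int
    manual: int
    reusable_cases: List[str]     # IDs of existing test cases that can be reused

rtm: Dict[str, RTMEntry] = {
    "User Interface": RTMEntry(20, {"Functional": 5, "Positive": 3, "Negative": 3, "Boundary": 4},
                               automated=8, manual=12, reusable_cases=["TC101"]),
    "Command Line": RTMEntry(18, {"Functional": 10, "Positive": 3, "Negative": 5},
                             automated=10, manual=8, reusable_cases=["TC201"]),
}

# Planning questions the RTM answers: how many tests are needed, how many can be
# automated, and whether every requirement has at least one test mapped to it.
print(sum(entry.estimated_tests for entry in rtm.values()))
print([req for req, entry in rtm.items() if entry.estimated_tests == 0])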

b) Measuring the mapping of test cases to requirements:
While mapping test cases to their corresponding requirements, we must first determine, for each test case:
a) The number of requirements it tests
b) The priority of the requirements it tests
c) The effort for its execution versus its coverage of requirements
Here, the number of requirements a test case covers and the priority of those requirements can be obtained from the Software Requirements Specification (SRS). We should see that every requirement has a set of test cases mapped to it.

Measuring the efficiency of the testing process:
In the IT industry, for any number of reasons, there is almost always a difference between what we plan and what actually happens. We therefore need to measure this difference between the planned and the actual, as it helps in computing the variance between them. Let us have an overview of the most common attributes that contribute to measuring efficiency in software testing.
1. Cost: The key factor from the organizational perspective can be measured by “Cost Variance (CV)”. The Cost Variance metric relates the project’s costs to the planned and budgeted expenditure. This is a very important metric, as it helps in deriving the ROI of the project execution. The formula for assessing it is:

CV = 100*(AC-BC)/BC

where AC is the Actual Cost and BC is the Budgeted Cost.

A positive result is a red flag for project management that project costs are overshooting the planned expenditure, and vice versa.

2. Effort: The next most important variance is “Effort Variance (EV)”. Effort Variance helps in identifying effort overruns in the project. Every project has an estimated effort; quite often this is overrun, and when the overrun is identified it can be used as a basis for billing the client. Effort variance has a direct relation to cost variance, as it is an indicator of the actual effort spent compared with the planned effort, and it also relates to the allocation of resources. EV can be measured by:

EV = 100*(AE-BE)/BE

where AE is the Actual Effort and BE is the Budgeted Effort.

A negative result is a good sign for project management that effort is being well managed within the project.

3. Schedule: The third most important metric is “Schedule Variance (SV)”. This metric evaluates the project management’s planning, scheduling and tracking techniques. It also indicates the project’s performance in completing planned activities according to the project schedule. SV can be measured by:
SV = 100*(AD-ED)/ED
where AD is the Actual Duration and ED is the Estimated Duration.

A positive result is a clear indication that the project is behind schedule, and a negative result means it is ahead of schedule. In either case, a non-zero variance means that the project scheduling or planning was not accurate.
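
The three variance formulas above share the same shape, 100 * (actual - planned) / planned. Below is a short Python sketch with made-up figures; the helper name and sample values are assumptions.

def variance_pct(actual, planned):
    """Generic variance: 100 * (actual - planned) / planned.
    Positive = over budget, over effort, or behind schedule."""
    return 100.0 * (actual - planned) / planned

# Illustrative project figures.
cv = variance_pct(actual=115_000, planned=100_000)  # Cost Variance (currency units)
ev = variance_pct(actual=540, planned=600)          # Effort Variance (person-days)
sv = variance_pct(actual=95, planned=90)            # Schedule Variance (calendar days)

print(f"CV = {cv:+.1f}%  EV = {ev:+.1f}%  SV = {sv:+.1f}%")
# CV = +15.0%  EV = -10.0%  SV = +5.6%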

4. Cost of Quality: This metric is the sum of the effort spent on prevention, verification and validation activities in the project, and so gives the total effort spent on quality-related activities. Prevention activity (PA) includes time spent on planning, training and defect prevention. V&V (VV) effort includes time spent on activities such as walkthroughs, reviews, inspections and, last but not least, testing. It is important to also consider the post-testing (PT) effort, which includes rework (bug fixing, retesting, etc.).
COQ = 100 * (PA+VV+PT)/Total Project Effort

5. Product Size Variance: PSV can be defined as the degree of variance between the estimated size and the actual size. This provides an indication of how much the scope of the work to be performed has changed.
PSV = 100*(AS – IES)/IES
where AS is the Actual Size and IES is the Initial Estimated Size. The size of the project can also be expressed as the number of function points in the application.

6. Defect Occurrence: Defect occurrence can be defined as the total number of defects in the software with reference to its size. This is the sum of the defects found during the different cycles of testing and review comments.
Defect Occurrence = Total number of defects / Function Points

7. MTBF (Mean Time Between Failures): MTBF is defined as the average time between two critical system failures or breakdowns. This gives the amount of time that a user may reasonably expect the software system to work before a breakdown occurs.
Mean Time between Failure = Total time of software system operation / Number of critical software system failures.
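
For completeness, here is a sketch of the remaining formulas in this list (Cost of Quality, Product Size Variance, Defect Occurrence and MTBF); all numbers are illustrative assumptions.

# Cost of Quality: prevention + verification & validation + post-testing effort,
# as a percentage of total project effort (all figures in person-hours, made up).
prevention, vv, post_testing, total_effort = 80, 260, 120, 2000
coq = 100.0 * (prevention + vv + post_testing) / total_effort      # 23.0%

# Product Size Variance: actual size vs. initially estimated size (function points).
actual_size, initial_estimate = 460, 400
psv = 100.0 * (actual_size - initial_estimate) / initial_estimate  # +15.0%

# Defect Occurrence: total defects per function point.
total_defects = 92
defect_occurrence = total_defects / actual_size                    # 0.2 defects/FP

# Mean Time Between Failures: operating time per critical failure.
operation_hours, critical_failures = 720, 3
mtbf = operation_hours / critical_failures                         # 240 hours

print(coq, psv, defect_occurrence, mtbf)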

Defects: Defects can be measured in a number of ways; the following are a couple of them.
Identifying defects: a metric that indicates the distribution of total project defects across phases or activities such as Requirements, Design, Coding and Test planning. This helps in identifying the processes to improve.
Defect Distribution = 100 * Total number of defects attributed to the specific phase / Total number of defects.
Defect removal effectiveness: the number of defects removed during a phase, divided by that number plus the number of defects found later (but that existed during that phase), approximates defect removal effectiveness.
For example: The following table depicts the defects that have been detected during a particular phase.

Phase Detected | Requirements | Design | Coding/Unit Test
Requirements   | 10           |        |
Design         | 5            | 20     |
Coding         | 0            | 4      | 26
Test           | 2            | 5      | 8
Field          | 1            | 2      | 7
(Columns indicate the phase in which a defect was introduced; rows indicate the phase in which it was detected.)

Therefore Defect Removal Effectiveness (DRE) for each phase is as follows:
DRE (Requirement) = 10/ (10+5+2+1) * 100 = 55.55%
DRE (Design) = (5+20) / (5+0+2+1+20+4+5+2) * 100 = 64.10%
DRE (Coding) = (0+4+26)/(0+2+1+4+5+2+26+8+7) *100 = 54.54%
DRE (Test) = (2+5+8)/(2+1+5+2+8+7) * 100 = 60%
Therefore, for the entire development cycle, the defect removal effectiveness will be
DRE = (Pre-release Defects) / (Total Defects) * 100
i.e. DRE = (10+5+2+20+4+5+26+8)/(10+5+2+1+20+4+5+2+26+8+7) * 100 = 88.89%
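
The phase-wise DRE figures above can also be reproduced mechanically. The sketch below uses the data from the example table; the code structure itself is an assumption, not part of the original method.

# Defects by (phase detected, phase introduced), from the example table above.
detected = {
    "Requirements": {"Requirements": 10},
    "Design":       {"Requirements": 5, "Design": 20},
    "Coding":       {"Requirements": 0, "Design": 4, "Coding": 26},
    "Test":         {"Requirements": 2, "Design": 5, "Coding": 8},
    "Field":        {"Requirements": 1, "Design": 2, "Coding": 7},
}
phases = ["Requirements", "Design", "Coding", "Test", "Field"]

def dre(phase):
    """Defects removed in `phase` divided by those defects plus the ones that
    existed in that phase but were only found later."""
    idx = phases.index(phase)
    removed = sum(detected[phase].values())
    escaped = sum(count for later in phases[idx + 1:]
                  for intro, count in detected[later].items()
                  if phases.index(intro) <= idx)
    return 100.0 * removed / (removed + escaped)

for p in ["Requirements", "Design", "Coding", "Test"]:
    print(p, round(dre(p), 2))   # 55.56, 64.1, 54.55, 60.0

pre_release = sum(sum(row.values()) for phase, row in detected.items() if phase != "Field")
total = sum(sum(row.values()) for row in detected.values())
print("Overall DRE:", round(100.0 * pre_release / total, 2))   # 88.89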

The longer a defect exists in a product before it is detected, the more expensive it is to fix. Knowing the DRE for each phase can help an organization target its process improvement efforts to improve defect detection methods where they can be most effective.

Benefits of implementing metrics in software testing:

The following are some of the benefits of implementing metrics while testing:
1. Computing metrics improves project planning.
2. Helps us to understand if we have achieved the desired quality
3. Helps in improving the process followed.
4. Helps in analyzing the risk associated.
5. Analyzing metrics in every phase of testing improves defect removal efficiency.

Read Part 1: Guide to Effective Test Reporting.

References:
1. Measuring software product quality during testing by Rob Hendriks, Robert van Vonderen and Erik van Veenendaal
2. Software Metrics: A Rigorous Approach by: Norman E. Fenton
3. Software Test Metrics – A Practical Approach by Shaun Bradshaw
4. Testing Effectiveness Assessment (an article by Software Quality Consulting)
5. P.R.Vidyalakshmi
6. Measurement of the Extent of Testing (http://www.kaner.com/pnsqc.html)
7. http://www.stickyminds.com/
8. Effective Test Status Reporting by Rex Black
9. http://www.projectmanagement.tas.gov.au/guidelines/pm5_10.htm
10. http://whatis.com
11. Risk Based Test Reporting by Paul Gerrard