Defect Density
What are the various ways of calculating defect density?
The formula itself is simple:

Defect Density = Total Number of Defects Found / Size

where Size can be measured in lines of code (LOC/KLOC), function points, feature points, use cases, etc.

If we look at defect density at a granular level, say the code size and file count of a particular functionality X in an application Y, we can draw some useful observations. Taking an example: suppose an application ABC has three functionalities/modules, A, B and C.

Code files for A = 10, size = 5 KLOC
Code files for B = 5, size = 1 KLOC
Code files for C = 1, size = 25 KLOC
Defects found: A = 40, B = 50, C = 5

The densities work out to 8, 50 and 0.2 defects per KLOC for A, B and C respectively. Despite being the smallest module, B has by far the highest density, which flags it for closer review.
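The per-module calculation can be sketched in a few lines; the figures below are the illustrative numbers from the ABC example, not real measurements:

```python
# Defect density per module for the hypothetical application ABC.
# All figures are the example's illustrative numbers, not real data.
modules = {
    "A": {"files": 10, "kloc": 5, "defects": 40},
    "B": {"files": 5, "kloc": 1, "defects": 50},
    "C": {"files": 1, "kloc": 25, "defects": 5},
}

for name, m in modules.items():
    density = m["defects"] / m["kloc"]  # defects per KLOC
    print(f"Module {name}: {density:.1f} defects/KLOC")
```

Running this makes the outlier obvious: module B's density dwarfs the others even though it is the smallest module by size.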


Defect density can be used to:
1) Predict the remaining defects by comparing the measured density against the expected defect density.
2) Determine whether the amount of testing performed is sufficient.
3) Establish a database of standard defect densities.
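The first use, predicting remaining defects, can be sketched against a baseline density taken from a historical database. All figures below are made up for illustration:

```python
# Hypothetical figures: the baseline density comes from past comparable projects.
expected_density = 6.0    # defects per KLOC, from the historical database
size_kloc = 31            # total size of the current project
defects_found = 95        # defects found so far

expected_total = expected_density * size_kloc          # expected defect count
remaining = max(0.0, expected_total - defects_found)   # rough estimate of what is left
print(f"Expected total: {expected_total:.0f}, estimated remaining: {remaining:.0f}")
```

This is only a back-of-the-envelope estimator: it assumes the current project behaves like the historical baseline, which is exactly the assumption a standard defect density database exists to support.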
What are you going to do with the defect density information you collect? Depending on what you want or expect to discover, you could pilot different measurements on different parts of the code base and see which versions of the metric are most measurable.
Must Read: Use of Relative Code Churn Measures to Predict System Defect Density.

Mercury Quality Center Interview Questions

1. What is meant by Test Lab in Quality Centre?
Test Lab is the part of Quality Centre where we can execute our tests across different cycles, creating a test tree for each of them. We need to add tests to these test trees from the tests placed under Test Plan in the project. Internally, Quality Centre refers to those tests while running them in the Test Lab.
2. Can you map defects directly to requirements (not through the test cases) in Quality Centre?
The following method is the one most likely to be used in this case:

  • Create your requirements structure.
  • Create the test case structure and the test cases.
  • Map the test cases to the appropriate requirements.
  • Run the tests and report bugs from your test cases in the Test Lab module.

The database structure in Quality Centre maps test cases to defects only if you have created the bug from a test case run. Maybe the mapping can be updated with some code in the bug script module (from the Customize Project function), but as far as I know it is not possible to map defects directly to requirements.
3. How do you run reports from Quality Centre?
This is how you do it:
1. Open the Quality Centre project.
2. Display the Requirements module.
3. Choose the report:
Analysis -> Reports -> Standard Requirements Report
4. Can we upload test cases from an Excel sheet into Quality Centre?
Yes. Go to the Add-ins menu of Quality Centre, find the Excel add-in, and install it on your machine.
Now open Excel; you will find the new menu option Export to Quality Centre. The rest of the procedure is self-explanatory.
5. Can we export files from Quality Centre to an Excel sheet? If yes, then how?
Requirements tab: Right-click on the main requirement, click Export, and save as a Word, Excel or other template. This saves all the child requirements as well.
Test Plan tab: Only individual tests can be exported; no parent-child export is possible. Select a test script, click the Design Steps tab, right-click anywhere on the open window, then click Export and Save As.
Test Lab tab: Select a child group. Click the Execution Grid if it is not selected. Right-click anywhere. The default save option is Excel, but the data can also be saved in document and other formats. Choose the All or Selected option.
Defects tab: Right-click anywhere on the window, export all or selected defects, and save as an Excel sheet or document.
6. How many tabs are there in Quality Centre? Explain.
There are four tabs available:
1. Requirements: To track the customer requirements.
2. Test Plan: To design the test cases and store the test scripts.
3. Test Lab: To execute the test cases and track the results.
4. Defects: To log defects and track the logged defects.
7. How do you map requirements to test cases in Quality Centre?
1. In the Requirements tab, select Coverage View.
2. Select a requirement by clicking on the parent, child or grandchild.
3. On the right-hand side (in the Coverage View window) another window will appear. It has two tabs:
a) Tests Coverage
b) Details
The Tests Coverage tab is selected by default; otherwise, click on it.
4. Click the Select Tests button; a new window appears on the right-hand side showing a list of all tests. You can select any test case you want to map to your requirement.
8. How do you use Quality Centre in a real-time project?
Once the preparation of test cases is complete:
1. Export the test cases into Quality Centre (the export involves a total of 8 steps).
2. The test cases are loaded into the Test Plan module.
3. Once execution starts, move the test cases from the Test Plan tab to the Test Lab module.
4. In the Test Lab, execute the test cases and mark each as passed, failed or incomplete. Generate graphs in the Test Lab for the daily report and send it onsite (or wherever it needs to be delivered).
5. If you find any defects, raise them in the Defects module. When raising a defect, attach a screenshot with it.
9. What is the difference between WebInspect and QAInspect?
QAInspect finds and prioritizes security vulnerabilities in an entire web application, or in specific usage scenarios during testing, and presents detailed information and remediation advice about each vulnerability.
WebInspect ensures the security of your most critical information by identifying known and unknown vulnerabilities within the web application. With WebInspect, auditors, compliance officers and security experts can perform security assessments on a web-enabled application. WebInspect enables users to perform security assessments for any web application or web service, including the industry-leading application platforms.
10. How can we add requirements to test cases in Quality Centre?
You can simply use the Add Requirements option.
Two kinds of requirements are available in TestDirector (TD):
1. Parent requirements
2. Child requirements

A parent requirement is essentially the title of a requirement; it covers the high-level functions of the requirement.
A child requirement is essentially a subtitle of a requirement; it covers the low-level functions of the requirement.

HP0-M15 Quality Center Certification Questions

HP0-M15 Quality Center Certification sample preparation questions:
1) Quality Center is
– an ERP
– a standalone software
– a web based (client/server) tool
– a script
2) Traceability Alert is depicted with “@” Mark against a Test
– Yes
– No
3) For Test Plan Module , select the correct folder hierarchy
i. Subject – Tests – Test Step
ii. Subject – Test Steps – Step
iii. Requirements – Tests – Test Steps
iv. Requirements – Subject – Tests – Test Steps

4) Which Tab allows to configure conditional execution of Tests
Hint : Conditions like Test # 2 can be executed only if Test # 1 is completed
i. Execution Flow
ii. Design
iii. Attachments
iv. None of The Above

5) Test Cases created in Word / Excel can be uploaded in QC ?
i. Yes
ii. No

6) Types of Graphs available in QC
i. Reports
ii. Graphs
iii. Live Analysis Graphs
iv. All Of the Above
7) Follow Up Alert can be created on
i. A Test in Test Plan Tree
ii. A Test Instance in Execution Grid
iii. A Defect in Defects Grid
iv. All Of the Above

8) Favorites are almost always High Priority Defects
Hint : User can mark a defect as favorite for easy tracking. A favorite can be PUBLIC or PRIVATE
i. Yes
ii. No

9) In QC, Requirements & Test Plan can be linked ?
i. Only Child Requirements
ii. Only Parent Requirements
iii. Both Child & Parent Requirements
iv. Neither Child nor Parent Requirements

10) In Test Lab Module , is it possible to store separate results for the SAME test case run at different dates and times ?
i. Yes
ii. No
11) Which of the Below is NOT a module in Quality Center
i. Requirements
ii. Business Component
iii. Defects
iv. Automation

12) Defect can be linked to
i. Tests & Test Steps
ii. Run & Run Steps
iii. Requirements
iv. All Of the Above

13) Test Cases can be added in Test Lab Module from
i. Requirements Module
ii. Defects Module
iii. Test Plan Module
iv. All Of the Above

14) Data is DYNAMICALLY updated in
i. Reports
ii. Graphs
iii. Live Analysis Graphs
iv. None Of the Above

15) Is it possible to link a single defect to multiple test cases ?
i. Yes
ii. No
iii. MAYBE. Depends on the TYPE of Defect

16) Which Tests can be scheduled to be run at a specific Date and Time ?
i. Manual Test Cases
ii. Automated Test Cases
iii. Both Manual & Automated Test Cases
iv. Neither Manual nor Automated Test Cases

17) In Test Lab Module
i. Tests are created
ii. Tests are executed
iii. Tests are created and executed

18) Results of an AUTOMATED test case are stored in “——-” module of Quality Center
i. Automation
ii. Test Plan
iii. Test Lab
iv. None of The Above

19) An Instance is
i. A copy of the Requirement
ii. A copy of Test Case
iii. A Copy of Defect
iv. A copy of Email

20) Quality Center is a
i. Defect Tracking Tool
ii. Requirements Management Tool
iii. Test Planning Tool
iv. Comprehensive Test Management Tool

Answers:
1) Quality Center is
Correct answer is: a web based (client/server) tool
2) Traceability Alert is depicted with “@” Mark against a Test
Correct answer is: NO
Traceability Alert is depicted with “!” Mark
3) For Test Plan Module , select the correct folder hierarchy
Correct answer is: Subject – Tests – Test Step
4) Which Tab allows to configure conditional execution of Tests
Correct answer is: Execution Flow
5) Test Cases created in Word / Excel can be uploaded in QC ?
Correct answer is: YES
6) Types of Graphs available in QC
Correct answer is: All Of the Above
7) Follow Up Alert can be created on
Correct answer is: All Of the Above
8) Favorites are almost always High Priority Defects
Correct answer is: NO
Any priority defect can be marked a favorite.
9) In QC, Requirements & Test Plan can be linked ?
Correct answer is: Both Child & Parent Requirements
10) In Test Lab Module , is it possible to store separate results for the SAME test case run at different dates and times ?
Correct answer is: YES
11) Which of the Below is NOT a module in Quality Center
Correct answer is: Automation
12) Defect can be linked to
Correct answer is: All Of the Above
13) Test Cases can be added in Test Lab Module from
Correct answer is: Test Plan Module
14) Data is DYNAMICALLY updated in
Correct answer is: Live Analysis Graphs
15) Is it possible to link a single defect to multiple test cases ?
Correct answer is: YES
Any type of defect can be linked to multiple test cases.
16) Which Tests can be scheduled to be run at a specific Date and Time ?
Correct answer is: Both Manual & Automated Test Cases
17) In Test Lab Module
Correct answer is: Tests are executed
18) Results of an AUTOMATED test case are stored in “——-” module of Quality Center
Correct answer is: Test Lab
19) An Instance is
Correct answer is: A copy of Test Case
A Test Case copied from Test Plan Module to the Test Lab Module is called an instance of that Test Case
20) Quality Center is a
Correct answer is: Comprehensive Test Management Tool

Effective Handbook for Implementing Test Strategies

A strategy is, in essence, a long-term plan of action for achieving a goal. It might involve giving up a destructive habit or addiction, or it might be a matter of recognizing a self-defeating pattern of behavior and somehow seeing how to change it.
For example, the bulk of a tester's hours goes into testing the product again and again, across different releases, in the same pattern. At a certain point, doing things repeatedly leads to frustration. A frustrated tester may say something like, "I know it's possible to see (or experience) this differently". Often that awareness, or even the hope that there is another way of experiencing the dilemma or problem, opens the door for it to occur. This is where strategies come in.
This 9 pages handbook contains the following topics:

1. Test Strategy
2. Why do we need a Test Strategy?
3. Avoiding the Risk
4. Establish Test Techniques
5. Software risk issues:

  • Features to Test
  • Features not Tested
  • Test Approach
  • Training Needs
  • Responsibilities
  • Planning Risk Contingencies

6. Conclusion


Download Link: Click Here 


I am sure this small handbook will help you in understanding and implementing the Testing Strategy. This document is prepared by Shiva Kumar from Think Network Solutions.

Defining a Test Strategy

A solid testing strategy provides the framework necessary to implement your testing methodology. A separate strategy should be developed for each system being built, taking into account the development methodology being used and the specific application architecture.

The heart of any testing strategy is the master testing strategy document. It aggregates all the information from the requirements, system design and acceptance criteria into a detailed plan for testing. A detailed master strategy should cover the following:
Project Scope
Restate the business objective of the application and define the scope of the testing. The statement should be a list of activities that will be in scope or out of scope. A sample list would include:
* List of software to be tested
* Software configurations to be tested
* Documentation to be validated
* Hardware to be tested

Test Objectives
The system under test should be measured by its compliance with the requirements and the user acceptance criteria. Each requirement and acceptance criterion must be mapped to specific test plans that validate and measure the expected results for each test being performed. The objectives should be listed in order of importance and weighted by risk.
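The requirement-to-test mapping described above can be checked mechanically. A minimal sketch, with hypothetical requirement and test case IDs, verifies that no requirement is left without test coverage:

```python
# Hypothetical traceability check: every requirement must be covered
# by at least one test. IDs below are illustrative, not from a real project.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_coverage = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
}

# Union of all requirements touched by any test case.
covered = set().union(*test_coverage.values())
uncovered = sorted(requirements - covered)
print("Uncovered requirements:", uncovered)
```

In practice a test management tool maintains this matrix, but the underlying check is exactly this set difference: requirements minus everything the tests claim to cover.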

Features and Functions to be Tested
Every feature and function must be listed for test inclusion or exclusion, along with a description of the exceptions. Some features may not be testable due to a lack of hardware or lack of control etc. The list should be grouped by functional area to add clarity. The following is a basic list of functional areas:
* Backup and recovery
* Workflow
* Interface design
* Installation
* Procedures (users, operational, installation)
* Requirements and design
* Messaging
* Notifications
* Error handling
* System exceptions and third-party application faults

Testing Approach
The approach provides the detail necessary to describe the levels and types of testing. The basic V-Model shows what types of testing are needed to validate the system.
More specific test types include functionality, performance, backup and recovery, security, environmental, conversion, usability, installation and regression testing. The specific testing methodology should be described, and the entry/exit criteria for each phase noted in a matrix by phase. A project plan that lists the resources and schedule for each testing cycle should also be created, mapping the specific testing tasks to the overall development project plan.

Testing Process and Procedures
The order of test execution and the steps necessary to perform each type of test should be described in sufficient detail to provide clear input into the creation of test plans and test cases. Procedures should include how test data is created, managed and loaded. Test cycles should be planned and scheduled based on system availability and deliverable dates from development. All application and environmental dependencies should be identified along with the procedures necessary to gain access to all the dependent systems.

Test Compliance
Every level of testing must have a defined set of entry/exit criteria which is used to validate that all prerequisites for a valid test have been met. All mainstream software testing methodologies provide an extensive list of entry/exit criteria and checklists. In addition to the standard list, items should be added based on specific testing needs. Some common additions are environmental availability, data availability, and validated code that is ready to be tested.

Each level of testing should define specific pass/fail acceptance criteria, to ensure that all quality gates have been validated and that the test plan focuses on developing tests that validate the specific criteria defined by the user acceptance plan.
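As a rough illustration, an entry gate of this kind reduces to a checklist evaluation. The criteria names below are hypothetical examples of the common additions mentioned above:

```python
# Hypothetical entry criteria for a test phase; names are illustrative only.
entry_criteria = {
    "environment available": True,
    "test data loaded": True,
    "validated build delivered": False,
}

# The phase may start only when every criterion is met.
blockers = [name for name, met in entry_criteria.items() if not met]
if blockers:
    print("Entry gate blocked by:", ", ".join(blockers))
else:
    print("Entry gate passed")
```

Exit criteria work the same way in reverse: the phase closes only when every pass/fail condition on the exit checklist is satisfied.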

Testing Tools
All testing tools should be identified and their use, ownership and dependencies defined. The tools category includes manual tools, such as templates in spreadsheets and documents as well as automated tools for test management, defect tracking, regression testing and performance/load testing. Any specific skill sets should be identified and compared against the existing skills identified for the project to highlight any training needs.

Defect Resolution
A plan to address the resolution of failed tests needs to be created, listing the escalation procedures used to seek correction and retest of the failed tests, along with a risk mitigation plan for high-risk tests. Defect tracking should include basic metrics for compliance based on the number and type of defects found.
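Those basic metrics can be as simple as tallies over the defect log. A minimal sketch with made-up log entries:

```python
from collections import Counter

# Hypothetical defect log entries as (type, severity) pairs.
defect_log = [
    ("functional", "high"),
    ("functional", "low"),
    ("interface", "low"),
    ("performance", "high"),
    ("functional", "medium"),
]

# Tally defects by type and by severity for the compliance report.
by_type = Counter(d_type for d_type, _ in defect_log)
by_severity = Counter(severity for _, severity in defect_log)
print("Defects by type:", dict(by_type))
print("Defects by severity:", dict(by_severity))
```

Counts like these feed directly into the escalation decision: a spike in high-severity functional defects, for example, would trigger the high-risk mitigation plan.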

Roles and Responsibilities
A matrix listing the roles and responsibilities of everyone involved in the testing activities, along with the anticipated amount of their time allocated to the project, must be prepared.

Process Improvement
The entire testing process should be focused on process improvement. The strategy should list ways to monitor progress and provide constant feedback. This feedback can serve to enhance the process, deliverables and metrics used in the testing. Root cause analysis should be performed on all reported defects to help isolate the true nature of the problem and prevent unnecessary repeat offenses.

Deliverables
All deliverables should be defined and their location specified. Common deliverables are test plans, test cases, test scripts, test matrix and a defect log.

Schedule
All testing activities should be combined into one master testing schedule. The schedule should include an estimate of time for each task and the dependencies for each. Testing resources should be assigned to each task, and quality gates should be listed to ensure oversight of the entire process.

Environmental Needs
All the requirements of the testing environment need to be listed. Common ones include a description of the environment’s use, management, hardware and software, specific tools needed, data loading and security requirements.

Resource Management
The skills of all personnel involved in the testing effort need to be assessed and the gaps noted so that a comprehensive training program can be designed. Specialty skills that will not be filled with in-house staff will require job descriptions and budgeting.

Risk and Contingencies
Planning for risk in advance and ways to mitigate it are essential for a robust strategy. A risk assessment that is prioritized by severity of risk and covers technology, resource, schedule and environmental issues should feed a detailed plan to mitigate each red flag.

Approvals and Workflow
All items on the critical path must go through an approval cycle. The procedures for approval and escalation must be well defined and assigned to resources prior to the start of the testing.
The above covers the main sections of a well-drafted and documented testing strategy. The more detail that you include in the strategy document, the less ambiguity and chance for deviation there will be throughout the project.

The completion of the strategy signals the beginning of the test planning phase. For each type of testing identified in the master test strategy there should be a test plan identifying the components to be tested, the location of the test data, the test environment needs, the test procedures, resources required, and the tests schedule. For each plan a series of test conditions should be identified so that test cases with expected results can be generated for later execution.

Creating a Test Strategy

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

  • A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
  • A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
  • Testing methodology. This is based on known standards.
  • Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
  • Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

  • An approved and signed-off test strategy document and test plan, including test cases.
  • Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
