Requirements Reliability Metrics


Go to Part 1 – Metric Based Approach for Requirements Gathering and Testing 


Requirements Reliability Metrics:
The business requirements are the foundation upon which the entire system is built; they specify the functionality that must be present in the final delivered software. Requirement verification and validation are needed to assure that this functionality has indeed been delivered. Often, however, the requirements are not satisfied, leading to a process of fixing what you can and accepting that certain functionality will not be there. The better approach is to get the requirements right the first time: complete, concise, and clear. This gives the developer a clear picture of the system to build, without misunderstandings between the business analyst, developer, quality assurance team, and client.
The requirement reliability metric is based on the requirement specification and keeps track of the requirements in scope of the project.


Requirement Specification:
The importance of correctly documenting requirements has led the software industry to produce a significant number of aids for creating and managing requirements specification documents and individual specification statements.

Requirement Management:
Considering the size and complexity of development efforts, the use of requirements management tools has become essential. Tools also provide capabilities far beyond those obtained from text-based maintenance and processing of requirements. Requirements management tools are sophisticated and complex, since the material they are responsible for is finely detailed, time-sensitive, highly interdependent, and continuously changing.
There are many requirements management tools to choose from. These range from simple word processors, to spreadsheets, to relational databases, to tools designed specifically for the management of requirements, such as DOORS (Quality Systems & Software – Mt. Arlington, NJ) or RTM Requirements Traceability Management (Integrated Chipware, Inc. – Reston, VA). The key to selecting the appropriate tool is the functionality provided (see Table 1 for a comparison of tool capabilities) and the capability to develop metrics from the data contained in the tool.

Requirement Repository Capabilities
The metric capability of the tool is important. Most of the metrics presented in this paper to demonstrate how to do requirements the right way were developed from data contained in a requirements management tool. Table 2 shows a comparison of the metric capabilities of the different tools. Clearly, the relational database and the requirements management tool provide the capabilities needed to effectively support the management of requirements.

Requirement Repository Capabilities - 2

Testing Reliability Matrix


  • Software testing provides visibility into product and process quality: results serve both as a current performance measurement and as historical data for future estimation and quality assurance.


Sample Testing Reliability Matrix artifacts (Sl. No – Name – Purpose):

1. QA Ledger – Script IDs & Release Plan: tracks the test script IDs allocated to use cases/modules and the release-wise plan for future use.
2. Condition IDs – Master Inventory: tracks the test condition IDs allocated to each use case/module, release-wise, for future use.
3. Script IDs – Master Inventory: tracks the test script IDs allocated to each use case/module, release-wise, for future use.
4. Test Script: contains several test cases for a single use case/module (sometimes multiple), with references to test condition IDs.
5. Test Criteria: contains the test script ID and test condition ID, along with the prerequisite data needed for testing and data validation.
6. Master Conditions Inventory: comprises all test condition descriptions, with test condition ID and script ID, for every use case/module.
7. Repository Matrix: comprises the count of test script and test condition IDs for all use cases/modules, with a focus on system testing and acceptance testing for the particular release.
8. Test Execution Dashboard: gives details about the testing status and the results.
9. IR (Incident Reports) Dashboard: gives details about the defects raised during the various phases of the testing life cycle.

2.4.2.1 Testing Characterization:
Once the business requirements are written, methods/processes for ensuring that the system contains the functionality specified must be developed.

Steps to evaluate testing during and after the requirements gathering phase:
1. To validate the requirements, test plans are written that contain multiple test cases; each test case is based on one system state and tests some functions that are based on a related set of requirements.
2. In the total set of test cases, each requirement must be tested at least once, and some requirements will be tested several times because they are involved in multiple system states in varying scenarios and in different ways.
3. It is important to ensure that each requirement is adequately, but not excessively, tested.
4. In some cases, the requirements can be grouped together using criticality to mission success as their common thread; these must be extensively tested.
5. In other cases, requirements can be identified as low criticality; if a problem occurs, their functionality does not affect mission success, so they can be tested less extensively and still achieve successful testing.
2.4.2.2 Test Coverage
The main objective is to verify that each business requirement will be tested; the implication is that if the software passes the test, the business requirement’s functionality is successfully included in the system. This is done by determining that each requirement is linked to at least one test case.
For example, a query such as the one shown below would produce data that could be displayed in a graph, as shown in Figure 1, for different builds:

Query 1: How many requirements in Level X Build 1 are linked to a test case?
Figure 1: Requirement linkage to tests
2.4.2.3 Test Span
This activity characterizes the test plan and identifies potentially insufficient or excess testing. Requirements are usually tested by more than one test case, and one test case usually covers more than one requirement. Since each test costs money and takes time, the obvious questions are how many requirements are covered by one test, and how many tests cover only one requirement. Excess testing wastes resources; on the other hand, if requirements are insufficiently tested, functionality may not be verified.

The metrics for this analysis are in two parts because of the bi-directional linkage between the requirements and tests. Each direction yields different information. Counting the number of unique tests used for a requirement indicates that requirements at both ends of the graph may have too much or too little testing. Counting the number of unique requirements tested indicates the exclusivity of the testing. Figure 2 shows an expected profile of unique requirements per test case.
Figure 2: Tests per requirement
This graph shows the expectation that a large number of requirements will be tested by only one test case, and that some requirements will be tested by multiple test cases. The upper bound on multiple test cases is expected to be in the tens. This makes sense, as more complicated requirements may require different test cases to thoroughly verify all aspects of the requirement. However, there is a limit on the number of test cases: as the number of test cases increases, so does the difficulty of verifying the requirement, due to the complication of data analysis, understanding the results of the multiple test cases, and understanding the impact of multiple test case results on the verification of the requirement. The tests-per-requirement metric counts the number of unique tests associated with each requirement. A program query such as the one below might be used for different tests conducted:
Query 1: How many requirements are tested by Test A.1? (Acceptance test, Test1)

2.4.2.4 Test Complexity
The objective of an effective verification program is to ensure that every requirement is tested, the implication being that if the system passes the test, the requirement’s functionality is included in the delivered system. An assessment of the traceability of the requirements to test cases is needed. It is expected that a requirement will be linked to a test case, and it may well be linked to more than one test case, as shown in Figure 3.
Figure 3: Requirement Verification – Trace to Test Linkage
The important aspect of this analysis is to determine which requirements have not been linked to any test cases at all.


Checklist for ETL / Data Warehouse Testing



ETL, or Extract-Transform-Load, defines the mechanism of data flow from a source system to the data warehouse. Here is the checklist for ETL / Data Warehouse Testing:
1. Validate the schemas of both the sources and the targets of the data warehouse.
2. Ensure the key constraints on the targets are in sync with the specifications given to you.
3. Query the sources and targets and try to break the transformation logic written in Informatica. This is key, since data lands in the target based on the transformation logic; also check for mapplets, if any.
4. Try to break/check the update rules attached to the transformations.
5. Check the surrogate keys.
6. Write SQL queries for the sources and check whether they are meaningful for the targets, and vice versa.
7. Check the scheduling of jobs.
8. Create SQL queries with different logic and check data consistency under different conditions.

“Given a scenario where a single file is parsed into more than, say, 1000 columns across more than, say, 10 different tables: the person testing this would need to be able to code some scripts quickly to run a comparison of values between the source file and the relational staging table columns. Can it be done, say, by using macros in Excel?”

Here are a couple of possible solutions:
1) Depending on your query tool, you could write a looping construct to query both the spreadsheet using one connection, and the staging database using another connection. Compare each value upon retrieval.

2) You could replicate the above model using MS Access and some VBA.

3) You could use VBScript or another language to accomplish the above.

Steps
We need to validate that data from the data warehouse is properly loaded into the datamart. We would like to create baseline files using ‘db_execute_query’ and then ‘db_write_records’, and then compare the datamart result set with the baseline using file_compare (or a separate diff utility).



White box testing Simplified


In this topic we will cover the various white box testing techniques:
– Statement Coverage
– Branch or Decision Coverage
– Multiple Condition Coverage
– Loop Coverage
– Call Coverage
– Path Coverage
These are very simplified so that any black-box tester can understand them easily.

Why Test Designing?

The quality of testing is only as good as its test design.
Using formal and customized test specification techniques to derive test cases from the input documents (the test basis) helps achieve the following:
– proper coverage/depth in testing each of the functions (and the system)
– test specifications prepared by various members of the test team will be uniform
– test cases will be more manageable

Understanding test design techniques:

Classic distinction – black-box and white box techniques

– Black-box techniques (also called specification-based techniques) are a way to derive and select test conditions or test cases based on an analysis of the test basis documentation, whether functional or non-functional, for a component or system, without reference to its internal structure.
– White-box techniques (also called structural or structure-based techniques) are based on an analysis of the internal structure of the component or system.

Structure based or white box techniques

– Structure-based testing/white-box testing is based on an identified structure of the software or system, as seen in the following examples:
i. Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
ii. Integration level: the structure may be a call tree (a diagram in which modules call other modules).
iii. System level: the structure may be a menu structure, business process or web page structure.

Structure based or white box techniques:
– Statement Coverage
– Branch or Decision Coverage
– Multiple Condition Coverage
– Loop Coverage
– Call Coverage
– Path Coverage

Statement Coverage:

– Testing to satisfy the criterion that each statement in a program is executed at least once during program testing. Coverage is 100 percent when a set of test cases causes every program statement to be executed at least once.

– The chief disadvantage of statement coverage is that it is insensitive to some control structures.

Example:
1  int select ( int a[], int n, int x )
2  {
3      int i = 0;
4      while ( i < n && a[i] < x )
5      {
6          if ( a[i] < 0 )
7              a[i] = -a[i];
8          i++;
9      }
10     return 1;
11 }
One test case, n=1, a[0]=-7, x=9, covers every statement:
Flow: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10 -> 11

Branch or Decision Coverage:

– A test coverage criterion which requires that each possible branch at each decision point be executed at least once.
Example:
1  int select ( int a[], int n, int x )
2  {
3      int i = 0;
4      while ( i < n && a[i] < x )
5      {
6          if ( a[i] < 0 )
7              a[i] = -a[i];
8          i++;
9      }
10     return 1;
11 }
Test Data:

Branch coverage for while ( i < n && a[i] < x ):

i   n   x   a[i]   Branch Outcome
0   1   9   -7     TRUE
0   1   7    9     FALSE

Flow A: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10 -> 11
Flow B: 1 -> 2 -> 3 -> 4 -> 10 -> 11

Multiple Condition Coverage:

– A test coverage criterion which requires enough test cases that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.
– A large number of test cases may be required for full multiple condition coverage.

Example:
1  int select ( int a[], int n, int x )
2  {
3      int i = 0;
4      while ( i < n && a[i] < x )
5      {
6          if ( a[i] < 0 )
7              a[i] = -a[i];
8          i++;
9      }
10     return 1;
11 }
Test Data :

Multiple condition coverage for while ( i < n && a[i] < x ):

i   n    a[i]   x    Outcome
0   4    -10    10   True
0   4     10    10   False
0   -4   -10    10   False
0   -4    10    10   False

For if ( a[i] < 0 ):

a[i]   Outcome
-10    True
10     False

Flow A: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10 -> 11
Flow B: 1 -> 2 -> 3 -> 4 -> 10 -> 11
Flow C: 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 8 -> 9 -> 10 -> 11

Loop Coverage:

– A test coverage criterion which checks whether the loop body is executed zero times, exactly once, or more than once.
Example:
main ( )
{
    int i, n, a[10], x;
    printf ("Enter the values of i, n and x: ");
    scanf ("%d %d %d", &i, &n, &x);
    /* a[i] must be read after i is known; reading them in one scanf
       would use i before it has a value */
    printf ("Enter a[i]: ");
    scanf ("%d", &a[i]);
    while ( i < n && a[i] < x )
    {
        if (a[i] < 0)
            a[i] = -a[i];
        i++;
    }
    printf ("%d", a[i]);
}
Test Data :

Loop coverage for while ( i < n && a[i] < x ):

i   n   x   a[i]   Loop executed
0   4   5   10     0 times
3   4   5   -10    1 time
1   4   5   -10    3 times (assuming a[2] and a[3] are also below x)

 

Call Coverage

– A test coverage criterion which checks whether a function is called zero times, exactly once, or more than once.
– Since the probability of failure is higher in function calls, each function call is exercised.
Example:
main ( )
{
    int a, b, i ;
    printf ("Enter the values of a, b, i: ");
    scanf ("%d %d %d", &a, &b, &i);
    while ( i < 10 )    /* a loop, so the call count can be 0, 1 or many */
    {
        sample ( a, b );
        i = i + 1;
    }
}
sample ( int x , int y )
{
    if ( x > 10 )
        x = x + y ;
    if ( y > 10 )
        y = y + x ;
}
Test Data:

Call coverage for sample ( int x, int y ):

a   b   i    Function called
2   4   10   0 times
2   4   9    1 time
2   4   7    3 times

 

Path Coverage

– Testing to satisfy the coverage criterion that each logical path through the program is tested. Often paths through the program are grouped into a finite set of classes, and one path from each class is then tested.
– It is helpful to find the cyclomatic complexity; the minimum number of test cases depends on the cyclomatic complexity (V(G) = E - N + 2 for a control-flow graph with E edges and N nodes).
– Full path coverage requires executing all paths; the number of paths may be infinite if there are loops.
– It also helps determine whether the control flow is a valid logic circuit according to predefined rules.

Is this a valid logic flow circuit? No.
Why not? It has two entrances.
Rule:
You can have only one entry point and one exit point in a structured system.

The system is not a valid logic circuit because it is not a structured system. Five linearly independent paths are required to cover it, so the cyclomatic complexity value calculated by the standard formula is erroneous.
Path Coverage - A white box testing technique

Example:
Linearly independent paths:
Path 1 -> p1 - d1 - d2 - p4
Path 2 -> p1 - d1 - p2 - p4
Path 3 -> p1 - d1 - d2 - p3 - p4

Path Coverage - A white box testing technique

Sample Program:
sample ( int x , int y )
{
    if ( x > 10 )
        x = x + y ;
    if ( y > 10 )
        y = y + x ;
    printf ("%d %d", x , y);
}



Unit test framework – An Introduction


Unit test frameworks are software tools to support writing and running unit tests, including:

  • a foundation on which to build tests
  • the functionality to execute the tests
  • the functionality to report their results.



Unit tests usually are developed concurrently with production code, but are not built into the final software product. The relationship of unit tests to production code is shown below:

[Figure: relationship of unit tests to production code]

An application is built from software objects linked together; the unit tests use the application’s objects but exist inside the unit test framework. Advantages of this arrangement are:

– The production code is not cluttered up with built-in unit tests.
– The size of the compiled application tends to be kept smaller for the same reason.
– The tests can be run separately from the application, so the objects can be tested in isolation.

A single unit test should test a particular behavior within the production code. Its success or failure validates a single unit of code. Well-written tests set up an environment or scenario that is independent of any other conditions, then perform a distinct action and check a definite result. These tests should avoid dependencies on the results of other tests (called test coupling), and they should be short and simple.

By starting with tests of the most basic functionality, then gradually building to tests of compound objects and behaviors, a unit test framework can be used to verify very complex architectures. Having such a framework to build upon is not only much easier than developing standalone tests, but also produces more thorough, effective tests. A comprehensive suite of unit tests enables rapid application development, since the effects of every change can be immediately and thoroughly verified.

Unit tests are white box (structural) tests, since the test framework is able to access the internal structure of the code being tested.

Most object-oriented languages provide access protection, preventing outside classes from accessing protected or private code elements. Because of this, unit tests often are written to test only the public interfaces of the objects tested. This encourages the design of objects with discrete, testable interfaces and a minimum of complex hidden behavior. Thus, writing testable objects promotes good object-oriented development practices.