Testing with Visual Studio Team System 2010 (VS 2010) – Part 2

VSTS (Visual Studio Team System) supports five key types of tests:

  1. Unit testing, in which you call a class and verify that it behaves as expected;
  2. Manual testing;
  3. Generic testing, which uses an existing test application that runs as part of a larger test;
  4. Web testing, to ensure that web (HTML) applications function correctly;
  5. Load testing, to ensure the application is scalable.


VSTS (Visual Studio Team System) improves the workflow by allowing the developers and testers to store their tests and results in one place. This allows a project manager to run a report to see how many tests have passed or failed and determine where the problems are, because the information is all in one data warehouse.

With VSTS (Visual Studio Team System), testing capabilities have been integrated right into the VS environment, simplifying the process of writing tests without flipping back and forth between applications. VSTS supports testing as a part of an automated build system.

VSTS also allows a developer to aggregate tests that have already been written so that they can be executed by automated test agents to simulate up to a thousand users for load testing. Several agents can be run concurrently to increase the load in multiples of about a thousand. This whole process allows a team to reuse the work already done for the various kinds of tests.

This also makes it easier for the developers themselves to execute a load test based on the unit tests of the individual code modules, so that they can identify problems at an earlier stage, save time, and learn how to write better code.

VSTS automated test code generation capabilities can save a developer two to three minutes of work per method.

Another key advantage is the Test List feature for managing tests in hierarchical categories that can be shared across projects. There are two grouping mechanisms for tests: projects and folders, and Test Lists. Test Lists make it easier to manage tests in batches. If you have twenty tests in one project and twenty in another, you can include them all by selecting a single group of check boxes and reorganize your groups very easily.

VSTS and TDD
In TDD the test always comes first, while with “unit testing” in the broadest sense this is not always the case. The basic idea of TDD is: test a little, code a little, refactor a little – in that sequence! The rhythm for the entire cycle is seconds or minutes. Generating tests after the fact is not considered TDD.

On the other hand, unit testing after the code is developed is not necessarily bad; it just does not serve the goals of TDD. Some testing is always better than no testing. However, software implemented using a test-driven approach, if done properly, results in simpler systems that are easier to understand, easier to maintain, and of near-zero-defect quality.
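
To make the rhythm concrete, here is a minimal sketch using MSTest (the unit testing framework built into VSTS) of a test written before the production code it exercises. The Calculator class, method, and values are illustrative, borrowed from the calculator example used later in this article.

    // Step 1: write the test first. It fails (or does not even compile)
    // because Calculator.Add does not exist yet.
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace CalcLogic
    {
        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void Add_TwoNumbers_ReturnsSum()
            {
                Calculator target = new Calculator();

                int actual = target.Add(2, 3);

                Assert.AreEqual(5, actual, "Add did not return the expected sum.");
            }
        }

        // Step 2: write just enough production code to make the test pass,
        // then refactor while keeping the test green.
        public class Calculator
        {
            public int Add(int a, int b)
            {
                return a + b;
            }
        }
    }

Each pass through test, code, and refactor should take only seconds or minutes before the next small test is added.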

VSTS plug-ins
Compuware’s DevPartner and TestPartner provide interactive feedback, coding advice, and enforcement of corporate coding standards, and enable developers to find and repair software problems. This software now plugs into Visual Studio Team System (VSTS) to make detailed diagnostic data available to the developer.
Other third parties supporting VSTS include AutomatedQA with TestComplete and Borland with CaliberRM.

Testing with Visual Studio Team System – Static Code Analyzer | Code Profiler | Unit Testing

VSTS supports four types of tests, namely unit, manual, web, and load testing, which are organized using the Test Manager.

The team can code and store the tests in projects within the solution, just like the normal source code projects. These test projects are designed to provide a container to hold all tests. This arrangement gives tests the same facilities in VSTS that source projects receive. For example, the tests are placed under source code control, can be linked to work items, and the test results are saved on the Team Foundation Server for other team members to review.

STATIC CODE ANALYZER
The quality of the system’s source code depends on various aspects. One of these aspects is whether the code is written in line with general coding rules. Team members have to write their code in the same way, following the same rules, to create a common way of implementing the system. Manually reviewing all the source code to ensure it respects the coding agreements of the team or the development organization is hard to do.

VSTS includes functionality that enables team members to validate their source code automatically against general coding rules or company-specific coding rules. This static code analyzer finds problems in the code without running it. It covers a range of checks, from style to code correctness to security issues, to validate the consistency of the implemented source code, as presented in the figure below.

STATIC CODE ANALYZER

The static code analyzer enables the team to import or create its own validation rules. The analyzer is also configurable: the team can choose which checks or rules to include in the analysis and whether a violation is reported as an error or a warning to set its severity. Altogether this increases the quality of the code, as it is written in a common and consistent way.
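
As a hedged illustration of what such a check looks like in practice, the snippet below shows code that a typical design rule such as CA1062 (“Validate arguments of public methods”) would flag, a revised version that satisfies the rule, and an explicit suppression for a violation that has been reviewed and accepted. The class and method names are hypothetical.

    using System;
    using System.Diagnostics.CodeAnalysis;

    public class OrderPrinter
    {
        // The analyzer warns here: a public parameter is dereferenced
        // without being validated first.
        public void PrintLength(string orderId)
        {
            Console.WriteLine(orderId.Length);
        }

        // Revised version that satisfies the rule.
        public void PrintLengthChecked(string orderId)
        {
            if (orderId == null)
            {
                throw new ArgumentNullException("orderId");
            }
            Console.WriteLine(orderId.Length);
        }

        // A reviewed violation can be suppressed explicitly, with a
        // justification, instead of silencing the rule project-wide.
        [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
            Justification = "orderId is validated by the caller.")]
        public void PrintLengthSuppressed(string orderId)
        {
            Console.WriteLine(orderId.Length);
        }
    }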

CODE PROFILER
Code profiling is the process of finding the bottlenecks and roadblocks of an application in its executable form. This VSTS functionality enables developers to measure the performance of a running application and trace issues back to the source code.

The developer can analyze the application’s time and memory consumption, for example identifying which methods or classes use most of the execution time. After the code profiler reports its findings, as shown in the figure below, the developer can prioritize which parts of the source code need to be optimized first and which are not worth spending time on.

CODE PROFILER
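
As an illustration (not tied to the profiler’s report format), the sketch below shows the kind of hotspot a profiling run tends to surface: repeated string concatenation in a loop allocates a new string on every iteration, so a method like BuildSlow would dominate the execution time and allocation figures, while the StringBuilder version would not. The method names are hypothetical.

    using System.Text;

    public static class ReportBuilder
    {
        // Likely to appear near the top of the profiler's list of most
        // expensive methods for large inputs.
        public static string BuildSlow(string[] lines)
        {
            string result = string.Empty;
            foreach (string line in lines)
            {
                result += line + "\n";   // allocates a new string each iteration
            }
            return result;
        }

        // The optimized version the developer might write after reading
        // the profiler report.
        public static string BuildFast(string[] lines)
        {
            StringBuilder result = new StringBuilder();
            foreach (string line in lines)
            {
                result.AppendLine(line);
            }
            return result.ToString();
        }
    }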

UNIT TESTING
After implementing the first method in the source code of the system, the developer and/or tester can verify its correctness by creating a unit test. A unit test exercises a class method by calling it with appropriate parameters and comparing the result with the expected outcome to confirm the correctness of the implemented code.

Unit testing functionality is built into VSTS, enabling the team to write and run unit tests. This way of testing the system creates an overlap between the tasks and knowledge of the developer and tester roles, because the tests are written in the same language as the actual system source code. Normally, these unit tests are integrated into the system solution as a project and, just like the source code of the system, the unit testing code is stored on the Team Foundation Server.

VSTS can automatically generate unit test methods during the implementation of the system’s class methods. Note that VSTS creates classes with test methods and declares variables, but the project team needs to refine these methods to make them useful tests. The figure below shows the generated unit test for validating the “Add” method of the calculator example.

UNIT TESTING

The generated code has to be revised to become a functional unit test for the “Add” method.

VSTS reminds the user that the variables a, b, and expected have to be initialized to appropriate values; for example, a is set to 20, b to 44, and expected to 64. If the result of the implemented method does not match the expected value, the test method alerts the user with the error “CalcLogic.Calculator.Add did not return the expected value”. Note that VSTS adds a final code line to inform the user that this unit test has not been revised yet and may not be correct.
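
The sketch below shows what the revised test method might look like with the values from the text filled in (a = 20, b = 44, expected = 64), assuming the Add method lives in the CalcLogic namespace as the error message suggests. The exact scaffolding VSTS generates may differ slightly; the commented-out Assert.Inconclusive call is the final line the generator appends until the test has been revised.

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using CalcLogic;

    [TestClass]
    public class CalculatorTest
    {
        [TestMethod]
        public void AddTest()
        {
            Calculator target = new Calculator();
            int a = 20;
            int b = 44;
            int expected = 64;

            int actual = target.Add(a, b);

            Assert.AreEqual(expected, actual,
                "CalcLogic.Calculator.Add did not return the expected value.");
            // Removed once the test has been revised:
            // Assert.Inconclusive("Verify the correctness of this test method.");
        }
    }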

Testing with Visual Studio Team System – Manual Testing and Web Testing

MANUAL TESTING
The oldest way to verify the correctness of implemented code is the manual test. A manual test is a list of test activities, with descriptions, that a tester performs step by step.
A project team uses manual tests, as portrayed in the figure below, when the test steps are too complicated or unsuitable to be automated. An example of a situation where a team may decide to use a manual test is when they want to determine a component’s behavior when network connectivity is lost.

Testing with VSTS - Manual Testing

In Microsoft’s development platform, a manual test is written as a script for the manual testing task in either MS Word or plain-text format. VSTS treats manual tests the same as automated tests: they are checked in and out of the source control repository on the Visual Studio 2005 Team Foundation Server, they are managed and run in the Test Manager, and the test results are shown in the test results viewer. When the manual test is started in the Test Manager, the manual test description is shown as in the previous figure. The test keeps the status ‘pending’ as the tester steps through the prescribed activities, until the tester selects the result of the test. If the test fails, the tester can create a bug work item to report the bug and associate the test with the work item.

WEB TESTING
Web testing is integrated into Visual Studio 2005 Team Edition for Software Testers to verify the quality of web applications. A web test allows the tester to automate the testing of a web-based application or interface by coding or recording the user and web browser interactions into editable scripts.

VSTS supports two different types of web testing, the recorded and coded web test.

The first type of web test records the HTTP-traffic of the web application into editable scripts, which enables the tester to play back the interactions. The tester can check web page access, content or response by adding validation rules to the recorded web test.

The figure below shows the results of an executed recorded web test of the calculator solution, where the user loaded the web page, entered the numbers 10 and 5 in the textboxes, and clicked the buttons.
Testing with VSTS - WebTesting

Each HTTP request is recorded as a test line, which the tester can modify and enhance by adding validation rules. In this example, four validation rules were added to check the outcome of the four functions of the calculator. As shown, the third “minus” test line fails because the expected value of the result textbox in the validation rule was 5, not 3. Also, VSTS lets the user generate coded web tests out of recorded web tests so they can be modified into more complex web tests. Just like the other tests, the web tests are stored in test projects and are checked in and out of source code control.
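
For illustration, the sketch below shows roughly what a coded web test generated from such a recording looks like: a class deriving from WebTest that yields one request with a ValidationRuleFindText rule attached. The URL, form field names (txtA, txtB, btnMinus), and expected text are hypothetical stand-ins for the calculator page.

    using System;
    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

    public class CalculatorWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Post the two operands to the (hypothetical) calculator page.
            WebTestRequest request = new WebTestRequest("http://localhost/Calculator/Default.aspx");
            request.Method = "POST";

            FormPostHttpBody body = new FormPostHttpBody();
            body.FormPostParameters.Add("txtA", "10");
            body.FormPostParameters.Add("txtB", "5");
            body.FormPostParameters.Add("btnMinus", "Minus");
            request.Body = body;

            // Fail this test line if the response does not contain the
            // expected result text.
            ValidationRuleFindText rule = new ValidationRuleFindText();
            rule.FindText = "5";
            rule.IgnoreCase = false;
            rule.UseRegularExpression = false;
            rule.PassIfTextFound = true;
            request.ValidateResponse += new EventHandler<ValidationEventArgs>(rule.Validate);

            yield return request;
        }
    }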

Load Testing with Visual Studio Team System 2010

LOAD TESTING
The idea behind a load test is to simulate multiple users executing the logic of the solution simultaneously. The primary goal of the load test is to discover scalability and performance issues in the solution. Load testing also provides benchmark statistics on performance as the solution is modified or extended.

The tester creates a load test in the VSTS environment using a wizard that steps through the process of defining a load test scenario. A scenario represents a particular usage situation of the application and is defined by a load pattern, a test mix, a browser mix, and a network mix.

The load pattern defines how the load is applied to the application and comes in two flavors: a constant load definition and a step load definition. The constant load definition applies a continuous load of users to the application. For example, when the tester sets the maximum user count to 50, the load test continuously simulates 50 users stressing the application. This option is useful for peak-usage and stability testing, to see how the system performs under constant peak stress. The step load definition starts the load test with a small number of users and adds a predefined number at each step until the maximum is reached, the system performs badly, or the solution falls over. This option is suitable for testing the performance limits of the system, such as the maximum number of users before the system fails.
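
The difference between the two patterns can be summarized in a small sketch (purely illustrative, not part of the VSTS API): a constant pattern holds the simulated user count steady, while a step pattern starts small and adds a fixed increment at regular intervals until the maximum is reached. All parameter values are hypothetical.

    // Illustrative only: how the simulated user count evolves over time for
    // the two load patterns described above.
    public static class LoadPatternSketch
    {
        // Constant load: e.g. 50 users for the whole run.
        public static int ConstantUsers(int maxUserCount)
        {
            return maxUserCount;
        }

        // Step load: e.g. start with 10 users and add 10 more every 30 seconds
        // until 200 users are reached.
        public static int StepUsers(int initialUserCount, int stepUserCount,
                                    int stepDurationSeconds, int maxUserCount,
                                    int elapsedSeconds)
        {
            int stepsCompleted = elapsedSeconds / stepDurationSeconds;
            int users = initialUserCount + stepsCompleted * stepUserCount;
            return users > maxUserCount ? maxUserCount : users;
        }
    }

For example, StepUsers(10, 10, 30, 200, 95) returns 40, because three 30-second steps have completed after 95 seconds.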

The test mix defines which tests are included in the load test and how they are distributed. The tester can select all the automated test types that are stored in the test projects of the solution. When selecting more than one test, the tester needs to define what percentage of the total load capacity should be executing each test, as shown in the figure below.

Testing with VSTS - Load Testing

The browser mix defines the kinds of browsers to use for the web tests. The browser types include various versions of Microsoft Internet Explorer, Netscape, Mozilla, Pocket Internet Explorer, and smartphone profiles.

The network mix defines which network types are simulated for the load test, ranging from LAN and dial-up through T3 connections.

After finishing the New Load Test Wizard, VSTS gives the tester an overview of all properties of the load test as depicted in the figure below.

Testing with VSTS - Load Testing-2

After defining how to apply the load to the system, the tester needs to identify what information is important to collect during the test. Setting up these so-called counter sets is a difficult task for the tester. Fortunately, VSTS provides default counter sets to simplify the creation of the load test.

Finally, the tester has to set the load test’s run settings, which determine how much time the test needs to warm up, how long the test runs, and what the sample rate is. The warm-up time is the time between the start of the test and the first sample taken of the counters. For example, the system may start slowly during the first minute because caches are still empty, and the tester does not want this warm-up period to influence the test results.

The sample rate is the time interval at which the counters are sampled; for instance, the tester can instruct VSTS to take samples of the counters every ten seconds.

During the execution of the load test, the tester can monitor the counters in real time and add or remove counters. The test results, as shown in the figure below, are presented in tables and graphs, which can also be modified during the test run by adding or removing counters.

Testing with VSTS - Load Testing-3

The test results are stored in a database or XML file on the Visual Studio Team Foundation Server so that they are available to all team members and other stakeholders. From a failed test, a tester can create a bug work item linked to these test results and assign it to a developer. The VSTS project portal website provides managers and other team members with extensive test and bug-tracking reports to track bug work items.

Test Load Agent
For large-scale applications, one computer might not be enough to simulate the desired load. The Visual Studio Team System 2008 Test Load Agent can distribute the work across multiple machines. It can simulate approximately 1,000 users per processor. This product requires a separate installation, with one controller and at least one agent. To configure the environment, select Administer Test Controller from the Test menu. There you can select a controller and add agents. Then, from the Test Run Configuration window, in the Controller and Agent node, you can choose to run the tests remotely and select the configured controller.

Metric Based Approach for Requirements Gathering and Testing

References:
www.cetin.net.cn/storage/cetin2/QRMS/ywxzzl12.htm
www.stsc.hill.af.mil/crosstalk/1998/12/hammer.asp

1. Introduction
Requirements development and management have always been critical in the implementation of software systems. It is a known fact that engineers cannot build what analysts cannot define. It thus becomes imperative to have an efficient requirements gathering and management process to deliver the best possible software systems.


Many automated tools are used to support requirements gathering and management. These tools not only support the definition and capture of requirements, but also open the door to effective use of metrics for tracing, characterizing, and assessing the requirements from a testing perspective. In addition to tool-based capture of metrics, it is necessary to use other complementary methods.

A detailed analysis done by NASA identified that problems not found until testing are at least 14 times more costly to fix than problems found in the requirements phase.

The objective of this topic is to discuss various metrics, derived from best practices, that can be used across a complete SDLC project, from requirements gathering through the testing and analysis phases.

2. Metrics:
In software development, a metric is the measurement of a particular characteristic of a program’s performance or efficiency.

2.1 Why Metrics?
– Metrics help the project management and team to effectively manage the various activities across the SDLC, achieve a single view of the progress of the deliverables, and quickly analyze and identify the impact of any change across the deliverables.
– Metrics assist in the early detection and correction of errors or changes in the gathered requirements.
– Multiple metrics are needed for a comprehensive evaluation of requirements, testing, and their traceability, supporting gap analysis, change impact analysis, compliance verification of code, regression test selection, and requirements verification and validation, so that the project team can achieve the best possible deliverables.
– Metric collection that combines a tool-based approach with other methods is cheaper, faster, and more reliable.

2.2 Approach:
1. Once the business requirements are written, finalize the methods and metrics for ensuring that the system contains the specified functionality.
2. The next group of testing metrics activities relates to the test plan and the links between test cases and the requirements.
3. Finally, examine the linkage of the requirements, i.e. the relationship between unique requirements and unique tests (a sketch of such a coverage check follows this list).
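
The sketch below (illustrative only, not tied to any particular requirements tool) shows the kind of linkage check these activities imply: given a mapping from requirement IDs to the test case IDs that exercise them, it reports the coverage percentage and the requirements that have no linked test. All identifiers are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class TraceabilityMetrics
    {
        // Report which requirements have no linked test and what fraction
        // of the requirements is covered by at least one test.
        public static void Report(IList<string> requirements,
                                  IDictionary<string, List<string>> requirementToTests)
        {
            List<string> uncovered = requirements
                .Where(r => !requirementToTests.ContainsKey(r) || requirementToTests[r].Count == 0)
                .ToList();

            double coveragePercent =
                100.0 * (requirements.Count - uncovered.Count) / requirements.Count;

            Console.WriteLine("Requirements coverage: {0:F1}%", coveragePercent);
            foreach (string requirement in uncovered)
            {
                Console.WriteLine("No test linked to requirement {0}", requirement);
            }
        }
    }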

2.3 Metrics Lifecycle:

Metrics Lifecycle

Steps involved in the Metrics Lifecycle:

  1. Identification of the various metrics required for the project.
  2. Classifying and prioritizing the metrics according to their usage and the project activities they will be used for.
  3. Identifying the data required for each metric; if the data is not available, identifying or setting up a process to capture it.
  4. Communicating the metrics to stakeholders, the project team, and the quality audit team.
  5. Capturing and verifying the data for the metrics as per the project requirements.
  6. Analyzing and processing the data in the metrics as per the project requirements.
  7. Reporting the metrics to the customer and the project team for project tracking.
  8. Revising the metrics at regular intervals to improve quality.
  9. Adding any new metrics required or suggested by the customer or vendor as part of best practices at various stages, followed again by steps 1 to 8.

2.4 Types we will cover:

    1. Requirement Reliability Metrics
    2. Testing Reliability Matrix
    3. Traceability Matrix

3. Conclusions:
Requirements gathering, management, and testing form the foundation for developing any software application. The metrics discussed above can give the project team good control and grip on the project with less effort and fewer errors, leading to quality deliverables.
The benefits that can be derived through a metric-based approach are:

Requirements Reliability Metrics:
  1. Quick identification of potential project risks arising out of scope creep or slippage.
  2. Logical categorization of requirements based on criteria such as type and priority for project releases.
  3. Requirements reliability metrics are available in the requirements phase to assess test plans.
  4. Logical grouping of functional requirements planned for different project releases.
  5. Separate logging and tracking of new requirements related to change controls or change requests as part of the project releases.
  6. Monitoring of the requirements delivered and not delivered as per the release plans.

Testing Reliability Matrix:
  1. Facilitates the quantitative assessment of the quality of the developed software application.
  2. These metrics assist in identifying test characterization, test span, test coverage, and test complexity.
  3. Test metrics give detailed information about the test repository and its maintenance.

Traceability Matrix:
  1. The traceability metrics give the details of test-to-requirement links for gap analysis and change impact analysis.
  2. This metric can cover phases for requirements, functional design, technical design, coding, database modeling, functional code review, quality procedures (unit testing, integration testing, and system testing), the data dictionary, and the defect consolidation log for the project.

Requirements Reliability Metrics



Requirements Reliability Metrics:
The business requirements are the foundation upon which the entire system is built; they specify the functionality that must be developed in the final delivered software. Requirements verification and validation are needed to assure that the functionality representing the requirements has indeed been delivered. However, the requirements are often not satisfied, leading to a process of fixing what you can and accepting that certain functionality will not be there. The better approach is to get the requirements right the first time: complete, concise, and clear, giving the developer a clear picture for building the system without any misunderstanding between the business analyst, the developer, the quality assurance team, and the client.
The requirements reliability metric is based on the requirements specification and keeps track of the requirements in scope for the project.


Requirement Specification:
The importance of correctly documenting requirements has caused the software industry to produce a significant number of aids for the creation and management of requirements specification documents and individual specification statements.

Requirement Management:
Considering the size and complexity of development efforts, the use of requirements management tools has become essential. Tools also provide capabilities far beyond those obtained from text-based maintenance and processing of requirements. Requirements management tools are sophisticated and complex – since the nature of the material for which they are responsible is finely detailed, time-sensitive, highly internally dependent, and can be continuously changing.
There are many requirements management tools to choose from. These range from simple word processors, to spreadsheets, to relational databases, to tools designed specifically for the management of requirements, such as DOORS (Quality Systems & Software – Mt. Arlington, NJ) or RTM Requirements Traceability Management (Integrated Chipware, Inc. – Reston, VA). The key to selecting the appropriate tool is the functionality it provides (see Table 1 for a comparison of tool capabilities) and the capability to develop metrics from the data contained in the tool.

Requirement Repository Capabilities
The metric capability of the tool is important. It should be noted that most of the metrics presented in this paper to demonstrate how to handle requirements the right way were developed from the data contained in a requirements management tool. Table 2 shows a comparison of the metric capabilities of the different tools. Clearly, the relational database and the requirements management tool provide the capabilities needed to effectively support the management of requirements.

Requirement Repository Capabilities - 2