Some Useful File System Scripts for QTP – Automation Testing

FSO stands for File System Object. During testing we often need to perform basic tasks such as adding, moving, copying, creating, and deleting folders and files. We can also handle drive-related tasks using FileSystemObject methods, so it is very important to learn the File System Object.

Below are some Useful QTP File System Scripts:


1) Create a Folder
Option Explicit
Dim objFSO, objFolder, strDirectory
strDirectory = "D:\BraidyHunter"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFolder = objFSO.CreateFolder(strDirectory)

2) Delete a Folder
Set oFSO = CreateObject("Scripting.FileSystemObject")
oFSO.DeleteFolder("E:\BraidyHunter")

3) Copying Folders
Set oFSO = CreateObject("Scripting.FileSystemObject")
oFSO.CopyFolder "E:\gcr", "C:\jvr", True

4) Checking whether a folder exists, and creating it if not
Option Explicit
Dim objFSO, objFolder, strDirectory
strDirectory = "D:\BraidyHunter"
Set objFSO = CreateObject("Scripting.FileSystemObject")
If objFSO.FolderExists(strDirectory) Then
Set objFolder = objFSO.GetFolder(strDirectory)
MsgBox strDirectory & " already exists"
Else
Set objFolder = objFSO.CreateFolder(strDirectory)
End If

5) Returning a collection of Disk Drives
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set colDrives = oFSO.Drives
For Each oDrive In colDrives
MsgBox "Drive letter: " & oDrive.DriveLetter
Next

6) Getting available space on a Disk Drive
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oDrive = oFSO.GetDrive("C:")
MsgBox "Available space: " & oDrive.AvailableSpace

Guide to Useful Software Test Metrics

I) Introduction
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science. – Lord Kelvin

Why do we need Metrics?

“We cannot improve what we cannot measure.”
“We cannot control what we cannot measure.”


AND TEST METRICS HELP IN

  • Taking decisions for the next phase of activities
  • Providing evidence for claims or predictions
  • Understanding the type of improvement required
  • Taking decisions on process or technology change

II) Types of metrics
Base Metrics (Direct Measure)

Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.
Ex: # of Test Cases, # of Test Cases Executed


Calculated Metrics (Indirect Measure)

Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).
Ex: % Complete, % Test Coverage


Base Metrics & Test Phases

  • # of Test Cases (Test Development Phase)
  • # of Test Cases Executed (Test Execution Phase)
  • # of Test Cases Passed (Test Execution Phase)
  • # of Test Cases Failed (Test Execution Phase)
  • # of Test Cases Under Investigation (Test Development Phase)
  • # of Test Cases Blocked (Test Development/Execution Phase)
  • # of Test Cases Re-executed (Regression Phase)
  • # of First Run Failures (Test Execution Phase)
  • Total Executions (Test Reporting Phase)
  • Total Passes (Test Reporting Phase)
  • Total Failures (Test Reporting Phase)
  • Test Case Execution Time (Test Reporting Phase)
  • Test Execution Time (Test Reporting Phase)

Calculated Metrics & Phases
The metrics below are created during the Test Reporting phase or the post-test analysis phase.

  • % Complete
  • % Defects Corrected
  • % Test Coverage
  • % Rework
  • % Test Cases Passed
  • % Test Effectiveness
  • % Test Cases Blocked
  • % Test Efficiency
  • 1st Run Fail Rate
  • Defect Discovery Rate
  • Overall Fail Rate

III) Crucial Web-Based Testing Metrics
Test Plan Coverage of Functionality
Total number of requirements vs. the number of requirements covered through test scripts.

  • (Number of requirements covered / Total number of requirements) * 100

Define requirements at the time of effort estimation.
Example: The total number of requirements estimated is 46; 39 requirements were tested and 7 were blocked. What is the coverage?
Note: Define requirements clearly at the project level.


Test Case Defect Density

Total number of errors found in test scripts vs. the number of scripts developed and executed.

  • (Defective Test Scripts / Total Test Scripts) * 100

Example: Total test scripts developed: 1360; executed: 1280; passed: 1065; failed: 215.
So, test case defect density is
(215 * 100) / 1280 = 16.8%
This 16.8% value can also be called the test case efficiency %, which depends on the number of test cases that uncovered defects.




Defect Slippage Ratio
Number of defects slipped (reported from production) vs. the number of defects reported during execution.

  • Number of Defects Slipped / (Number of Defects Raised – Number of Defects Withdrawn)

Example: Customer-filed defects: 21; total defects found during testing: 267; invalid defects: 17.
So, the defect slippage ratio is
[21 / (267 – 17)] * 100 = 8.4%
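As a quick check of the arithmetic, here is a Python sketch of the slippage formula (names are my own):

```python
def defect_slippage_ratio(slipped, raised, withdrawn):
    # Production defects as a percentage of valid defects raised in testing.
    return slipped / (raised - withdrawn) * 100

# Example figures from the text: 21 slipped, 267 raised, 17 withdrawn.
print(round(defect_slippage_ratio(21, 267, 17), 1))  # 8.4
```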

Requirement Volatility

Number of requirements agreed vs. the number of requirements changed.

  • (Number of Requirements Added + Deleted + Modified) * 100 / Number of Original Requirements
  • Ensure that the requirements are normalized or defined properly while estimating

Example: The VSS 1.3 release initially had 67 requirements; later, 7 new requirements were added, 3 of the initial requirements were removed, and 11 were modified.
So, requirement volatility is
(7 + 3 + 11) * 100 / 67 = 31.34%
This means almost one-third of the requirements changed after their initial identification.
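The volatility formula can be sketched the same way in Python (function and parameter names are my own), using the example's figures:

```python
def requirement_volatility(added, deleted, modified, original):
    # Requirement churn as a percentage of the original requirement count.
    return (added + deleted + modified) * 100 / original

# Example figures from the text: 7 added, 3 deleted, 11 modified, 67 original.
print(round(requirement_volatility(7, 3, 11, 67), 2))  # 31.34
```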

Review Efficiency
Review efficiency is a metric that offers insight into the quality of reviews and of testing.
Some organizations also call this "static testing" efficiency, and aim to find a minimum of 30% of defects in static testing.
Review Efficiency = 100 * (Total number of defects found by reviews / Total number of project defects)
Example: A project found 269 defects in various reviews, which were fixed, and the test team reported 476 valid defects.
So, review efficiency is [269 / (269 + 476)] * 100 = 36.1%
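A minimal Python sketch of the review-efficiency formula (names are my own), reproducing the example:

```python
def review_efficiency(review_defects, test_defects):
    # Percentage of all project defects caught by reviews (static testing).
    return 100 * review_defects / (review_defects + test_defects)

# Example figures from the text: 269 review defects, 476 valid test defects.
print(round(review_efficiency(269, 476), 1))  # 36.1
```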

Efficiency and Effectiveness of Processes

  • Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
  • Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered.


Metrics for Software Testing

• Defect Removal Effectiveness
DRE = (Defects removed during the development phase * 100%) / Defects latent in the product
Defects latent in the product = Defects removed during the development phase + defects found later by users

• Efficiency of the Testing Process (define size in KLOC, FP, or requirements)
Testing Efficiency = Size of Software Tested / Resources Used
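The DRE formula can also be sketched in Python. The function name and the sample figures below are my own illustration, not from the text:

```python
def defect_removal_effectiveness(removed_in_development, found_by_users):
    # Defects removed during development as a percentage of all latent
    # defects (those removed in development plus those found later by users).
    latent = removed_in_development + found_by_users
    return removed_in_development * 100 / latent

# Hypothetical figures: 180 defects removed before release, 20 found by users.
print(round(defect_removal_effectiveness(180, 20), 1))  # 90.0
```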


Are you a software testing expert?

Do you think you are a Software Testing Expert? Answer the questions below in the comments and judge yourself. For each answer, select only one option and explain why you chose it.

By – Anaya Johri

Question 1: You need to test a login screen. There is only one user ID field, which accepts 12 digits. As per the specs, it should accept only these values: 123456789012, 098765432123, or 543216789091. Which software testing principle would you use to design tests for this software?
Options:

  • Exhaustive testing is impossible
  • Testing shows presence of defects.
  • Confusing an absence of errors with product fit is a fallacy


  Question 2: Your company has developed a product similar to MS Office. You have tested the application and it has passed all testing cycles. Your marketing team is selling the product by telling clients, "Our product has passed extremely thorough testing, and it is 100% bug free!" Which principle of software testing is your marketing team not familiar with?
    Options:

    • Exhaustive testing is impossible
    • Testing shows presence of defects.
    • Confusing an absence of errors with product fit is a fallacy

    Question 3: Your company has designed an Asset Management System. This system enables users to enter asset metadata. Some basic rules and validations are implemented, such as for date fields, numeric fields, duplicate asset names, etc. You have designed tests for this system. Later, you come to know that the client will integrate their existing system with the new system for data import. That integration will not be available to the test team for testing. You and your team then test the application, and it passes all testing cycles successfully. Which software testing principle did your team overlook while designing tests?
    Options:

  • Exhaustive testing is impossible
  • Testing shows presence of defects.
  • Confusing an absence of errors with product fit is a fallacy



    Let's Make Software Testing Fun

    Article by – softwaretestingsucks.com

    I have been in software testing for quite some time, and at times it has sucked, because it is usually boring, and sometimes the testing activity is looked down on as a menial job done by less skilled people.
    Let's make an attempt to overcome the above-mentioned sucking factors. Apart from individual testers' efforts, a large and important part has to be played by the management.

    i. The tester's right attitude: First, realize how important the testing activity is. No matter how much development effort has been spent on making a highly desirable product with state-of-the-art technology, if it does not correctly do what it is supposed to do, then it is doomed to death. An improperly tested application reaching customers directly translates to loss of business.

    The right attitude of a tester is that he or she must find satisfaction, even joy, in hunting down bugs. My uncle Sam, who happens to be a hard-core tester, gave me this inspiring speech: "When a product comes for testing, you must pounce on it and use every method to break and destroy the system. You must inundate the developer with so many bugs that he curses himself for developing the product and for not finding those bugs earlier. Destroy the product utterly and completely. Then you can rightfully claim the title 'Barbaric Tester'. You must find that sadistic joy. Never ever assume that the product that comes for testing is bug free. In the vast land of the product code, the treasure of bugs is hidden somewhere and everywhere; now it is your responsibility as a tester to dig them up, discover them, and claim your treasure."
    Hmmmmmmm. Developers, pardon my uncle Sam. My fellow comrades (testers), don't take my uncle's advice literally; the point is to add a bit of imagination and big titles, see the testing activity from a new perspective, and have fun.

    ii. Usage of automation tools: Manual testing is sometimes boring, but running the same set of test cases again and again, every time a bug is fixed, in the name of 'regression testing'? Damn, life sucks.
    If a test case has to be run several times, automate it using tools like WinRunner or Rational Robot. There are significant advantages to automating: first, you can run the tests overnight and check the results in the morning, which saves a lot of time and effort. Second, since it sucks to execute the same test manually again and again, there is a higher probability that a tester in his half-awake state may miss a regression bug, whereas the dumb automated test case checks all the checkpoints every time you run it.
    Try to convince your management to purchase an automation tool's license. Maybe go for a pilot project initiative using a trial version of the software.

    iii. How to make bug hunting fun: The management should create healthy competition between testers. They must find a way to make their testers aggressive and make finding bugs a fun-filled activity. Here is one way that comes to my mind:

    - A 'big bug board' should be hung in a place visible to the entire team. The board should list each tester's name and the number of valid bugs he or she has logged, updated each week. Every week, ranks should be awarded to the testers based on their bug counts. The tester who logged the most bugs should be ranked number one. Optionally, he or she could be given a big title, say 'Tester of the Week'.

    iv. A little change once in a while: If a tester is confined to a single product, or a single module within a product, for a long time, then it will suck. Once in a while he should be allowed some change (upon the tester's desire), either by moving to a different product or by doing a different kind of work, like a pilot project for some new tool.

    v. Equal status: Any organization that understands the importance of customer satisfaction knows the importance of testing the product properly before shipping it to the customer. The management also has to realize that testing needs skill. The goal of any testing project is to test the product as thoroughly as possible while consuming as few resources (time and manpower) as possible. It takes skill and experience to design test cases that are small but exercise large chunks of a product's code, and that are easy to execute but catch a lot of bugs. It takes skill to identify weak areas in order to allocate more resources to them. It takes skill to use automation tools, test management tools, code coverage tools, and performance testing tools. It takes time for any new entrant to the testing field to become a 'trained bug catcher', and experience in this field is valuable.
    Management has to take explicit measures to provide equal status to testers. The following are some of them:

    • For all important meetings related to a product, the testing team should be invited. Their views must be considered when taking important decisions, such as when to release the product.
    • A separate career path has to be created for testers. They must be made equal to developers in terms of pay and position.
    • Management has to allocate adequate funds, as a priority, for the testing team to create adequate infrastructure.
    • Training programs have to be organized for new entrants to the testing field, to teach testing techniques and the usage of testing tools.

    Revolutionizing the Testing Process and Accuracy: The Decision Model [Testing Webinar]

    The Relational Model changed the way we manage data. Today, the Decision Model organizes business rules and logic in a similarly rigorous manner. This webinar, with Barbara von Halle and Larry Goldberg, discusses how the Decision Model enables innovation in the way we generate test cases, minimize the testing needed, test logic prior to coding, and assure the completeness of testing procedures.


    Learning Objectives:

    • Understand how and when program logic errors can be found prior to coding
    • Comprehend a high-level business-centric Decision Model from a testing perspective
    • Develop test cases from a single Rule Family in a Decision Model
    • Seek the minimum number of test cases for an entire Decision Model
    • Modify current testing scenarios to be Decision Model-driven and of higher quality
    • Provide test cases in parallel with completing requirements
    • Support test-driven development paradigms

      It's a free webinar. Webinar date: 9/2/2010
      Time: 11:00 AM to 12:30 PM
      Register Here: http://solutions.compaid.com/forms/WebinarA20100902?ProcessType=PreReg

      Testing without Requirements?

      [Author – Surbhi Shridhar] Are you testing without requirements? Are you testing as per instructions given by the development team? In this post we will discuss: 1. some common risks/issues that you might face while testing software without requirements, 2. ways to tackle those risks, and 3. alternative ways of testing.
      On-the-fly development, pressurized deadlines, changing user requirements, legacy systems, large systems, varied requirements, and more are known to influence many a project manager to discard, or not update, the functional specification document. While a project manager or requirements manager may be aware of the system requirements, this can cause havoc for an independent test team.

      More so, with ever-expanding teams and temporary resources, a large system may not be documented well enough, or sometimes not at all, for the tester to form complete scenarios from the functional specification document (if one is present).

      This can pose a great risk to the testing of the system. Requirements will never be clearly understood by the test team, and the developers may adopt an 'I am king' attitude, citing incorrectly working functionality as the way the code or design is supposed to work. This can confuse development towards the end, and you may have a product on your hands that the business doesn't relate to, the end users don't understand, and the testers can't test.

      Dealing with a system that needs to be tested without functional specifications requires good planning, smart understanding, and good documentation, the lack of which caused the problem in the first place.
      Before I begin describing some ways in which you can tackle the problem, here are a few questions you could be asking yourself:

      1. What type of software testing will be done on the system: unit, functional, performance, or regression?
      2. Is it a maintenance project or is it a new product?
      3. From where have the developers achieved their understanding of the system?
      4. Is there any kind of documentation other than functional specification?
      5. Is there a business analyst or domain knowledge expert available?
      6. Is an end user available?
      7. Is there a priority on any part of the functionality of the system?
      8. Are there known high-risk areas?
      9. What is the level of expertise of the development team and the testing team?
      10. Are there any domain experts within your test team?
      11. Are there any known skill sets in your team?
      12. Are you considering manual/automated testing?

      All of these questions should help you identify the key areas for testing, the ignorable areas, and the strong and weak areas of your team. Once you ascertain the level of skills and expertise in your team, you have choices available to choose from.
      Some common risks/issues that you might face while testing software without requirements are:

      1. Half-baked development (expected, since your requirements aren't sufficient).
      2. Inexperienced resources.
      3. Functional knowledge unavailable in your team.
      4. Diagrammatic representations that can be comprehended in various ways.
      5. Inadequate time to document functionality properly.
      6. Unavailability of business analysts, developers, and team architects.
      7. Constantly changing requirements.
      8. The level of detail required may be too high, and the time to produce that detail may be too little.
      9. Conflicting diagrams/views arising from ambiguous discussions with business analysts and developers (yes, this is possible!).

      And some ways to tackle the risks are:

      1. Know your team strength.
      2. Know the technical “modules” as developed by the developers.
      3. Collate the modules or decompose them further, depending on the type of testing you plan to do. For example, a system test will require you to combine modules, while a unit test might require you to break a module down into extensive detail.
      4. Assign modules as per person strength and knowledge.
      5. Maintain a clean communication line between the team, developers and the business analysts.
      6. If the above is done, changing your documentation to match the ever-changing requirements should be possible; but beware, it could create havoc with your estimates.
      7. When conflicts arise, make sure to discuss them with the parties involved. If that is not possible, refer to prototypes, if any. If there are no prototypes, validate with an end user and against the desired business approach/need.
      8. To avoid ambiguous interpretation of diagrams or documents, make sure to determine standards for your team to follow. In the case of diagrams, choose approaches that are clean and analysis-free: when a diagram is open to analysis, it is open to interpretation, which may lead to errors. Maintaining error-free documentation, and tracking it, can be painful on a large project.
      9. If the level of detail required is high and time is limited, document the basics, i.e. a very high-level approach, and update the test cases as you go ahead with testing.
      10. Unfortunately, there is little you can do about inexperienced resources or inadequate functional knowledge within your team. Be prepared to hire outside help or to build up the knowledge by assigning a team member to the task.

      Alternative Ways of Testing
      If there is absolutely no time for you to document your system, do consider these other approaches of testing:
      User/Acceptance Testing: User testing is used to identify issues with user acceptance of the system. Defects from the most trivial to the most complicated can be reported here. It is also possibly the best way to determine how well people interact with the system. The resources used are minimal, and potentially no prior functional or system knowledge is required. In fact, the less familiar a user is with the system, the more preferred he is for user testing. A regressive form of user testing can help uncover some defects in the high-risk areas.

      Random Testing: Random testing has no rules. The tester is free to do what he pleases and as he sees fit. The system will benefit, however, if the tester is functionally aware of the domain and has a basic understanding of the business requirements; this will prevent the reporting of unnecessary defects. A variation of random testing, called Monte Carlo simulation, in which values for uncertain variables are generated over and over to simulate a model, can also be used. This suits systems that have large variations in their numeric or other input data.

      Customer Stories: User/customer stories can be defined as simple, clear, brief descriptions of functionality that will be valuable to real users. Small stories are written about the desired functionality. The team writing the stories includes the customer, project manager, developer, and tester. Concise stories are handwritten on small cards so that the stories remain small and testable. Once a start is made available to the tester, existing functional and system knowledge can help speed up the process of testing and further test case development.

      Conclusion
      In the end, all one requires is careful thought, planning, and a mapping out of tasks before one actually begins testing. Testers and test analysts often react by pressing the panic button and beginning to test immediately, which ultimately leads to complete chaos and unmeasurable product quality. Clear thinking with valid goals will help you and your product achieve the desired quality.


      Author – Surbhi Shridhar, B.Tech(I.T.) is a Senior Software Engineer. She has 4 years of valuable experience in the software field.