Tips for Prioritizing Tests | A Way to Achieve Time-boxing in Testing

A common problem faced in testing phases (especially when UAT or a release is near): there are so many tests to execute, but so little time. In 2004, Johanna Rothman gave a recipe for this in her article “So Many Tests, So Little Time
| TimeBox the testing”. From Johanna’s article, I conclude that prioritizing the tests is one important step toward achieving time-boxing in testing.


For those who don’t know what time-boxing is:
Time-boxing is a well-established project management (time management) technique used to limit scope within a fixed duration by forcing trade-offs. Time-boxing is a valuable technique for testing as well. For example, you need to test the application before UAT and there are only 4 weeks left.
Here we will discuss how to prioritize tests (test scripts/test cases), complete the most important tests in the time given, and deliver a high-quality product. There can be many other good practices for time-boxing the testing, but prioritizing the tests is one important step.


Why Prioritize Tests:

  • We can’t test everything.
  • There is never enough time to do all the testing you would like.

(Read Testing Limitations)
Tips for Prioritizing Tests:

  • Possible ranking criteria (all risk-based):
  • Test where a failure would be most severe (take help from the development team)
  • Test where failures would be most visible (take help from the development team)
  • What is most critical to the customer’s business? Take the help of the customer/Business Analyst in understanding what is most important to them.
  • Areas changed most often (take help from the development team)
  • Areas with the most problems in the past (take help from the development team)
  • The most complex or technically critical areas (take help from the development team and Business Analyst)

Note: If you follow the above, then whenever you stop testing, you will have done the best testing possible in the time available.
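As a minimal sketch, the ranking criteria above can be combined into a single risk score per test, so that when the time box expires, the tests already executed were the highest-value ones. The weights, criteria columns, and test names below are invented for illustration; in practice you would set them with the development team and Business Analyst.

```python
# Hypothetical risk-based prioritization: each test is scored on the
# criteria listed above, then tests run in descending score order.
tests = [
    # name,            severity, visibility, business, change_freq, past_bugs
    ("login",                5,        5,        5,          2,         3),
    ("report_export",        3,        2,        4,          4,         4),
    ("profile_settings",     2,        3,        1,          1,         1),
]

WEIGHTS = (3, 2, 3, 1, 1)  # invented weights; tune with dev team / BA input

def risk_score(row):
    # weighted sum of the criteria values (everything after the name)
    return sum(w * v for w, v in zip(WEIGHTS, row[1:]))

prioritized = sorted(tests, key=risk_score, reverse=True)
for t in prioritized:
    print(t[0], risk_score(t))
```

Running the tests in `prioritized` order means that stopping at any point still leaves the riskiest areas covered first.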

Choosing right scripting technique/Framework for Automation Testing

While planning test automation for any project, a tough question for automation engineers is: “Which automation scripting technique or framework should be chosen?”
Choosing the right framework/scripting technique helps contain costs and maintain a good ROI. The costs associated with test scripting come from the development effort and the maintenance effort, and the scripting approach used during test automation directly affects both.


Various frameworks/scripting techniques are generally used:
1. Linear (simple record and playback)
2. Structured (uses control structures – typically ‘if-else’, ‘switch’, ‘for’, ‘while’ statements)
3. Hybrid (a combination of the other techniques)
4. Data driven (test data is separated from the scripts and stored in external Excel sheets)
5. Keyword driven


Note:

  • In simple techniques like record and playback, development cost is low but maintenance cost is high.
  • In advanced techniques like keyword-driven testing, development cost is high but maintenance cost is low.
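To make that trade-off concrete, here is a minimal, hypothetical sketch of the keyword-driven idea: the test steps live in plain data (cheap to maintain and editable by non-programmers), while the keyword implementations are written once up front (the higher development cost). The keyword names and the tiny `state` dictionary are invented for this sketch.

```python
# Hypothetical keyword-driven core: keywords map to functions; test cases
# are plain data rows that dispatch to those functions.
state = {}

def open_page(url):
    # stand-in for driving a real browser to a URL
    state["page"] = url

def type_text(field, value):
    # stand-in for typing into a form field
    state[field] = value

KEYWORDS = {"open": open_page, "type": type_text}

# A "test case" is just data -- in practice it might live in a spreadsheet.
test_case = [
    ("open", "https://example.test/login"),
    ("type", "username", "alice"),
]

for step in test_case:
    keyword, *args = step
    KEYWORDS[keyword](*args)   # dispatch each row to its implementation

print(state)
```

When the application changes, only the keyword implementations need updating; the test-case rows stay the same, which is where the maintenance saving comes from.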

So test managers/test architects should be wise in choosing the right automation framework. They need to consider the following for each scripting technique/framework:

  • Is the approach structured?
  • How much programming/development is required? As the scripting approach moves from linear to keyword-driven, the development cost increases.
  • What kind of programming skills are needed? In linear scripting little programming proficiency is required, while keyword-driven scripting demands much more.
  • How much planning and management effort does the automation project require? The planning required to manage the project increases as we move from a linear to a keyword-driven framework.
  • How much maintenance is required? The maintenance cost of the automation project decreases as we move from a linear to a keyword-driven framework.

References:
1. Software Test Automation: Effective use of test execution tools by Dorothy Graham and Mark Fewster

Selenium for Functional testing of web applications

Functional (black-box) testing is a methodology that tests an application’s behaviour from the point of view of its functions. It validates many aspects: the look of the front end, navigation within and between pages and forms, compliance of fields, buttons, and bars with the technical specifications, access permissions for queries and modifications, parameter management, the behaviour of the modules that make up the system, and the other conditions behind the various “features” the system is expected to provide so that the end user can operate it normally and correctly.
To meet this objective, the tester must choose a set of inputs under certain predefined conditions within a given context, and check whether the outputs are correct or incorrect against the expected results agreed in advance between customer and supplier.

This form of testing exercises the application “from the outside” – hence the name “black box testing” – because the tests are performed without knowledge of the internal paths the program’s procedures follow.

Although there are many tools available for this kind of testing (others will be covered in future articles), the one presented here is Selenium.

Selenium works directly in the web browser. Its installation is simple, and its handling is intuitive enough that you can quickly define and configure a test case: record a journey through a page, save the sequence of steps as a test script, and then replay it whenever you want.
Useful points:

  • Selenium is an open-source tool that supports not only system testing but also acceptance testing of web applications.
  • It integrates with Firefox and lets you write tests directly in Java, C#, Python, and Ruby.
  • The solution has three basic tools: one to record a sequence of steps within a website, one to run the tests in different browsers, and one for automated, distributed test execution.
  • Selenium IDE is a Firefox plug-in that lets you record and execute scripts directly from the browser.
  • Selenium RC is a library and server, written in Java, that lets you run scripts locally or remotely through commands.
  • Selenium Grid lets you coordinate multiple Selenium servers in order to run scripts on multiple platforms and devices at the same time.
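Selenium IDE records each step as a (command, target, value) triple (the “Selenese” format). As an illustration only – the toy interpreter below stands in for a real browser and is not part of Selenium – the “record, save, then play back” workflow can be sketched like this:

```python
# Toy playback of Selenium-IDE-style (command, target, value) rows.
# A real run would send each command to a browser via Selenium RC.
recorded_script = [
    ("open", "/login", ""),
    ("type", "id=user", "alice"),
    ("clickAndWait", "css=button.submit", ""),
]

log = []

def play(script):
    for command, target, value in script:
        # here a real tool would execute the command in the browser;
        # we just log what would be executed
        log.append(f"{command} {target} {value}".strip())

play(recorded_script)
for line in log:
    print(line)
```

Because the script is plain data, the same recording can be replayed as many times as you want, which is exactly the workflow Selenium IDE gives you inside Firefox.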

Download – Selenium IDE is a plug-in for Firefox.

Useful links

  • Firefox Install Link
  • Firefox Addons Link
  • OpenQA WebSite

Types of Software errors and bugs | Most Common Software bugs

by Padmini C (Author of Beginners guide to Software Testing)

Following are the most common software errors that aid you in software testing. These help you identify errors systematically and increase the efficiency and productivity of software testing. This topic surely helps in finding more bugs more effectively 🙂 . You can also use this as a checklist while preparing test cases and while performing testing.

Types of errors with examples:
User Interface Errors: Missing/wrong functions, doesn’t do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages. Performance issues – poor responsiveness, can’t redirect output, inappropriate use of keyboard.

Error Handling: Inadequate – protection against corrupted data, tests of user input, version control; Ignores – overflow, data comparison; Error recovery – aborting errors, recovery from hardware problems.

Boundary related errors: Boundaries in loop, space, time, memory; mishandling of cases outside the boundary.

Calculation errors: Bad logic, bad arithmetic, outdated constants, incorrect conversion from one data representation to another, wrong formula, incorrect approximation.

Initial and Later states: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.

Control flow errors: Wrong returning state assumed, exception-handling-based exits, stack underflow/overflow, failure to block or un-block interrupts, comparison sometimes yields wrong result, missing/wrong default, data type errors.

Errors in Handling or Interpreting Data: Un-terminated null strings, overwriting a file after an error exit or user abort.

Race Conditions: Assumption that one event or task has finished before another begins, resource races, a task starts before its prerequisites are met, messages cross or don’t arrive in the order sent.

Load Conditions: Required resources are not available, no large memory area available, low-priority tasks not put off, doesn’t erase old files from mass storage, doesn’t return unused memory.

Hardware: Wrong device, device unavailable, underutilizing device intelligence, misunderstood status or return code, wrong operation or instruction codes.

Source, Version and ID Control: No title or version ID, failure to update multiple copies of data or program files.

Testing Errors: Failure to notice/report a problem, failure to use the most promising test case, corrupted data files, misinterpreted specifications or documentation, failure to make it clear how to reproduce the problem, failure to check for unresolved problems just before release, failure to verify fixes, failure to provide a summary report.
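As a hypothetical illustration of one category from the checklist above (boundary-related errors), here is a classic off-by-one/boundary mistake in Python: slicing a list with a negative count works for every value except the boundary case zero. The function name and data are invented for this example.

```python
def last_n(items, n):
    """Return the last n elements of a list.

    A classic boundary mistake is writing items[-n:] directly:
    when n == 0, items[-0:] is the WHOLE list, not an empty one.
    """
    if n == 0:            # the boundary case a naive items[-n:] gets wrong
        return []
    return items[-n:]

print(last_n([1, 2, 3], 2))  # [2, 3]
print(last_n([1, 2, 3], 0))  # []  -- without the guard, this would be [1, 2, 3]
```

Test cases that sit exactly on such boundaries (0, 1, maximum size, one past the end) are the ones most likely to expose this class of defect.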

Measuring Software test effectiveness

The main objectives of testing are to establish confidence and to find defects. This article describes some measures of test effectiveness.
Defect Detection Percentage (DDP)
DDP = defects found by testing / total known defects
Whenever a piece of software is written, defects are inserted during development. The more effective the testing is at finding those defects, the fewer escape into operation. For example, if 100 defects have been built into the software and our testing finds only 50, its DDP is 50%. If we had found 80 defects, we would have a DDP of 80%; if we had found only 35, our DDP would only have been 35%. Thus it is the escapes, the defects that evaded detection, which determine the quality of the detection process.
Although we may never know the complete total of defects inserted, this measure is very useful, both for monitoring the testing process and for predicting future effort.

The basic definition of the Defect Detection Percentage is the number of defects found by testing, divided by the total known defects. Note that the total known defects consist of the number of defects found by (this) testing plus the total number of defects found afterwards. The scope of the testing represented in this definition may be a test phase such as system testing or beta testing, testing for a specific functional area, testing of a given project, or any other testing aspect which it is useful to monitor.

The total known defects found so far is a number that can only increase as time goes on, so the DDP computed for a given test phase can only go down (or stay the same) over time.
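The arithmetic in the example above is easy to check. A small sketch, using the article’s own illustrative defect counts:

```python
def ddp(found_by_testing, found_afterwards):
    """Defect Detection Percentage: defects found by this testing,
    divided by the total known defects (found now + found later)."""
    total_known = found_by_testing + found_afterwards
    return 100.0 * found_by_testing / total_known

print(ddp(50, 50))  # 50 of 100 known defects -> 50.0
print(ddp(80, 20))  # -> 80.0
print(ddp(35, 65))  # -> 35.0
```

Note that `found_afterwards` grows as escapes are discovered in later phases or in operation, which is why the computed DDP for a phase drops over time.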

DDP at different test stages or application areas

DDP can be measured at different stages of software development and for different types of testing:

• Unit testing: Because testing at this stage is usually fairly informal, the best option is for each individual developer to track his or her own DDP, as a way to improve personal professionalism, as recommended by Humphreys (1997).
• Link, integration, or system testing: The point at which software is turned over to a more formal process is normally the earliest practical point to measure DDP.
• Different application areas, such as functional areas, major subsystems, or commercial products: The DDP will not be the same across all groups, as they may have different test objectives, but over time this can help the test manager to plan the best-value testing within limited constraints.
• DDP of early defect detection activities, such as early test design and reviews or inspections: Early test design (the V-model) can find defects when they are much cheaper to fix, as can reviews and inspections. Knowing the DDP of these early activities as well as the test execution DDP can help the test manager to find the most cost-effective mix of defect detection activities.