Which test cases should be automated?


Automation can shorten the elapsed testing time, leading to a large saving in both time and money. Generally, the ROI begins to appear in the third iteration of automated testing.

High Scoring Test Cases
  • Tests that need to be run for every build of the application (sanity check, regression)
  • Tests that use multiple data values for the same actions (data driven tests)
  • Complex and time consuming tests
  • Tests requiring a great deal of precision
  • Tests involving many simple, repetitive steps
  • Testing needed on multiple combinations of OS, DBMS & Browsers
  • Creation of Data & Test Beds
  • Data grooming
Low Scoring Test Cases
  • Usability testing – “How easy is the application to use?”
  • One-time testing
  • “ASAP” testing – “We need to test NOW!”
  • Ad hoc/random testing – based on intuition and knowledge of the application
  • Device interface testing
  • Batch program testing
  • Back-end testing
– by the author of “Quick 101 on Automation”
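To make the second high-scoring case concrete, here is a minimal sketch of a data-driven test: one piece of test logic run against a table of input values. The function under test, `parse_price`, is a hypothetical example, not anything named in the text above.

```python
# Data-driven testing sketch: the same actions applied to multiple
# data values - a high-scoring candidate for automation.
# parse_price is a hypothetical function used purely for illustration.

def parse_price(text):
    """Convert a price string like '$1,234.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

# The data table: (raw input, expected result) pairs.
CASES = [
    ("$0.99", 0.99),
    ("$1,234.50", 1234.50),
    ("100", 100.0),
]

def run_data_driven_tests():
    """Run every case; return the list of failures (empty = all passed)."""
    failures = []
    for raw, expected in CASES:
        actual = parse_price(raw)
        if abs(actual - expected) > 1e-9:
            failures.append((raw, actual, expected))
    return failures

if __name__ == "__main__":
    print(run_data_driven_tests())  # [] means every data row passed
```

Adding a new scenario is then just a new row in the table, which is exactly why such tests repay automation quickly.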


Types of Testing Tools for Client/Server Testing

The following types of testing tools can be useful for client/server testing:

1) Load/stress testing tools: for example, Astra SiteTest by Mercury Interactive or SilkPerformer by Segue Software, used to evaluate web-based systems subjected to large volumes of data or transactions.

2) Performance testing tools: for example, LoadRunner by Mercury Interactive, used to measure the performance of the client/server system.

3) UI testing tools: for example, WinRunner by Mercury Interactive, used to automate UI testing.
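The core idea behind load/stress tools such as those above can be sketched in a few lines: simulate many concurrent "virtual users" against an operation and measure throughput. This is only an illustration of the concept, not any real tool's API; `handle_request` is an assumed stand-in for a client/server round trip.

```python
# Minimal load-test sketch: N concurrent virtual users, each issuing
# several requests, with total elapsed time measured at the end.
# handle_request is a hypothetical stand-in for a real server call.
import threading
import time

def handle_request(payload):
    # Placeholder for a network round trip to the system under test.
    return sum(payload)

def run_load_test(n_users, requests_per_user):
    results = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(requests_per_user):
            r = handle_request([1, 2, 3])
            with lock:            # protect the shared results list
                results.append(r)

    start = time.perf_counter()
    threads = [threading.Thread(target=virtual_user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(results), elapsed

if __name__ == "__main__":
    completed, elapsed = run_load_test(n_users=10, requests_per_user=5)
    print(completed)  # 50: 10 virtual users x 5 requests each
```

Real tools add ramp-up schedules, think times, and response-time percentiles on top of this basic pattern.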

 


The Need for Unit Testing


The developers in your company do not believe in testing their code or doing unit testing. How do you tackle this problem and explain to them the need for unit testing?


The need for unit testing:

a) Unit testing gives programmers measurable confidence in the source code they produce.

b) Unit testing uncovers defects in source code shortly after it is written, which saves valuable time and resources, sometimes by orders of magnitude.

c) Unit testing significantly reduces the amount of debugging necessary by avoiding defects in the first place and by catching those that do occur while they are relatively easy to detect and fix.

d) Unit testing avoids big-bang testing, the practice of testing everything at once, which is significantly more expensive in terms of cost, development-team morale, and customer satisfaction.

e) Unit testing helps you focus on exactly what is important for a module so that when all of your tests run successfully, you can be reasonably sure that your module has no major defects.
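Point (b) above is easiest to show with a tiny example: a test that catches a classic edge-case defect the moment the code is written. The `leap_year` function below is hypothetical, chosen only because its century rule is a defect a naive implementation commonly gets wrong.

```python
# A minimal unit test illustrating point (b): defects are uncovered
# shortly after the code is written. leap_year is a hypothetical
# function used purely for illustration.
import unittest

def leap_year(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        # The edge case a naive `year % 4 == 0` implementation gets
        # wrong - this test would catch that defect immediately.
        self.assertFalse(leap_year(1900))

    def test_quadricentennial_is_leap(self):
        self.assertTrue(leap_year(2000))

# Run the suite in-process and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

When all three tests pass, the programmer has the measurable confidence described in point (a) for this small unit.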


Test Effectiveness and Test Efficiency – Revisiting


Effectiveness – how well the user achieves the goals they set out to achieve using the system (process).
Efficiency – the resources consumed in order to achieve those goals.

In the software field, these terms are primarily used to indicate the effort put into developing the software and how satisfied the customer is with the product.
Effectiveness signifies how well the customer requirements have been met, i.e. does the final product provide an effective solution to the customer's problem.

Efficiency is something internal to the organization that produced the software product. It is basically how efficiently the available resources (time, hardware, personnel, expertise, etc.) were utilized to create the software product.


Thus, effectiveness of software is a measure of the customer's response to how well the product meets their requirements, and efficiency is a measure of the optimum utilization of resources to create the software product.

Effective – producing a powerful effect
Efficient – producing results with little wasted effort

Example:
James efficiently made the calls he had wanted to make. Robert didn’t make the calls he had planned to make, but effectively met his sales quota by profiting from a chance encounter with a business acquaintance.

Key point: When you’re effective, you are able to accomplish the worthwhile goal you’ve chosen. When you’re efficient, you quickly carry out actions. You won’t be effective, however, unless those actions result in your achieving a meaningful goal.

Test Effectiveness = (Defects removed in a phase / (Defects injected + Defects escaped)) * 100
Test Efficiency = (Test Defects / (Test Defects + Acceptance Defects)) * 100

where Test Defects = unit + integration + system test defects, and
Acceptance Defects = defects found by the customer.


Common Software Testing Pitfalls


by – Wing La; Source: http://ebiz.u-aizu.ac.jp/~paikic/lecture/2007-2/adv-internet/papers/TestE-Commerce.pdf

Poor estimation. Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements. Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage. An insufficient number of test cases cannot test the full functionality of the system.

Inadequate test data. The test data fails to cover the range of all possible data values—that is, it omits boundary values.
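The boundary-value idea behind this pitfall can be sketched briefly: for a bounded input range, test at and just beyond each edge rather than only in the middle. The `valid_percentage` validator below is a hypothetical example, not from the source text.

```python
# Boundary-value sketch for the "inadequate test data" pitfall.
# valid_percentage is a hypothetical validator for the range [0, 100].

def valid_percentage(n):
    """Accept integer percentages in the range 0..100 inclusive."""
    return 0 <= n <= 100

# Boundary values for [0, 100]: just below, at, and just above each edge.
boundary_cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert valid_percentage(value) == expected, value
print("all boundary cases pass")
```

Mid-range values like 50 would pass even in a buggy implementation; it is the -1/0 and 100/101 edges where off-by-one defects hide.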

False assumptions. Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late. Testing too late in the development process leaves little time to manoeuvre when tests find major defects.

“Stress-easy” testing. When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.

Environmental mismatch. Testing the system in an environment that differs from the one on which it will be installed tells you little about how it will work in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions. Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.

Configuration mismanagement. In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can’t ensure that the component will work as intended.

Testing overkill. Over-testing relatively risk-free areas of the system diverts precious resources from the system’s more high-risk (and sometimes difficult to test) areas.

No contingency planning. There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing. When the development team carries out testing, it can lack the objectivity of an independent testing team.