Better Requirements | Better Testability


“Good manual testers are hard to find.”

To test better, testers should understand quality attributes. Testability is one of the most important of them.

Testability is the degree of difficulty of testing a system. It is determined both by properties of the system under test and by the development approach.

  • Higher testability: better tests for the same cost.
  • Lower testability: weaker tests for the same cost.

When testability is low, bugs are more likely to slip through testing. The main causes of low testability are:


1. Functional/non-functional requirements – Poorly written or ambiguous requirements, and insufficient requirement reviews by testers.

2. Technical requirements/architecture/design of the system – The development team focuses on functionality for the customer, not on testability. The testing team lacks management support and is not involved in design meetings.

Higher testability can be achieved in the following ways.
Note – testability means not only that the software can be tested, but that it is easy to test. The goal of increasing the testability of software is not just to detect defects but, more importantly, to detect defects as soon as they are introduced.



1. Define the requirements explicitly. Many improvements to the testability of a system stem from its requirements. The business analyst should keep the requirements language simple so that it is easily understood by both the testing and development teams.
While reviewing the requirements/specifications, testers and test case designers should inspect the requirements for testability. But how can testers inspect requirements for testability?
– James Bach has defined the Heuristics of Software Testability. Go through this checklist; it is very useful when reviewing specification documents.

– Go through the Guidelines and Checklist for Requirement Document Review.

2. Testers should be involved in software design/architecture meetings so that they understand how the system is being built. This is important because it helps testers understand the design, the complexity of the system, and how one function or module relates to another, all of which leads to higher testability. It also tells the testing team which programs are touched in which scenarios.
Sometimes, to cover the client’s requirements, the development team implements them in a complex way that makes testing difficult. For example, developers might add nightly database jobs that silently correct data. Such mechanisms increase the complexity of the system and lower its testability, and they should be avoided where possible. This is one more reason why testers should be involved in software design/architecture meetings.
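As a hedged illustration (the data-correction rule, function names, and data below are hypothetical, not taken from any real project), pulling such a correction out of a hidden scheduled job and into a plain function makes it directly and deterministically testable; the scheduler then just calls the function:

    # Hypothetical sketch (Python, runnable with pytest): the nightly
    # "fix negative balances" job extracted into a plain function so it
    # can be tested directly, independent of the schedule or the database.

    def correct_balances(accounts):
        """Return a copy of the accounts with negative balances reset to zero."""
        return [{**acct, "balance": max(acct["balance"], 0)} for acct in accounts]

    def test_correct_balances_resets_negatives():
        accounts = [{"id": 1, "balance": -50}, {"id": 2, "balance": 120}]
        corrected = correct_balances(accounts)
        assert corrected[0]["balance"] == 0     # negative balance corrected
        assert corrected[1]["balance"] == 120   # valid balance left untouched

The nightly job (cron entry, database job, and so on) then simply calls correct_balances() on the day’s data, and testers can verify the correction logic long before the job ever runs in production.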


Which Test Cases Should Be Automated?


Automation can shorten the elapsed testing time, leading to large savings in both time and money. Generally, the ROI begins to appear by the third iteration of automated testing.

High Scoring Test Cases
  • Tests that need to be run for every build of the application (sanity check, regression)
  • Tests that use multiple data values for the same actions (data-driven tests; see the sketch after these lists)
  • Complex and time consuming tests
  • Tests requiring a great deal of precision
  • Tests involving many simple, repetitive steps
  • Testing needed on multiple combinations of OS, DBMS & Browsers
  • Creation of Data & Test Beds
  • Data grooming
Low Scoring Test Cases
  • Usability testing – “How easy is the application to use?”
  • One-time testing
  • “ASAP” testing – “We need to test NOW!”
  • Ad hoc/random testing – based on intuition and knowledge of application
  • Device interface testing
  • Batch program testing
  • Back-end testing
– by the author of “Quick 101 on Automation”
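As a minimal sketch of a data-driven test that scores high for automation (Python with pytest; the function under test and its data rows are hypothetical, chosen only to show the pattern), one parametrized test body exercises many input combinations that would be tedious to repeat manually for every build:

    # Data-driven test sketch: one test body, many data rows.
    import pytest

    def discounted_price(price, discount_percent):
        """Hypothetical function under test: apply a percentage discount."""
        return round(price * (1 - discount_percent / 100), 2)

    @pytest.mark.parametrize(
        "price, discount, expected",
        [
            (100.0, 0, 100.0),    # boundary: no discount
            (100.0, 10, 90.0),    # typical case
            (100.0, 100, 0.0),    # boundary: full discount
            (200.0, 25, 150.0),   # another representative row
        ],
    )
    def test_discounted_price(price, discount, expected):
        assert discounted_price(price, discount) == expected

Adding a new scenario then means adding a data row, which is exactly why data-driven and regression-style cases pay back the automation effort quickly.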


Common Software Testing Pitfalls


by – Wing La; Source: http://ebiz.u-aizu.ac.jp/~paikic/lecture/2007-2/adv-internet/papers/TestE-Commerce.pdf

Poor estimation. Developers underestimate the effort and resources required for testing. Consequently, they miss deadlines or deliver a partially tested system to the client.

Untestable requirements. Describing requirements ambiguously renders them impossible or difficult to test.

Insufficient test coverage. An insufficient number of test cases cannot test the full functionality of the system.

Inadequate test data. The test data fails to cover the range of all possible data values—that is, it omits boundary values.
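As a small hedged example (Python; the validation rule and its limits are hypothetical), adequate test data includes the values at and just beyond each boundary, not only comfortable mid-range values:

    # Boundary-value sketch: exercise the edges of the valid range, not just the middle.
    def is_valid_age(age):
        """Hypothetical rule: ages 18 through 65 inclusive are accepted."""
        return 18 <= age <= 65

    def test_age_boundaries():
        assert not is_valid_age(17)   # just below the lower boundary
        assert is_valid_age(18)       # lower boundary
        assert is_valid_age(65)       # upper boundary
        assert not is_valid_age(66)   # just above the upper boundary
        assert is_valid_age(40)       # one representative mid-range value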

False assumptions. Developers sometimes make claims about the system based on assumptions about the underlying hardware or software. Watch out for statements that begin “the system should” rather than “the system [actually] does.”

Testing too late. Testing too late during the development process leaves little time to manoeuvre when tests find major defects.

“Stress-easy” testing. When testing does not place the system under sufficiently high levels of stress, it fails to investigate system breakpoints, which therefore remain unknown.
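As a hedged sketch of a very small stress probe (Python standard library only; the operation, worker count, and request count are hypothetical placeholders), firing an operation from many concurrent workers is one cheap way to start looking for the breakpoints that “stress-easy” testing never reaches:

    # Minimal concurrency stress probe using only the standard library.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def place_order(order_id):
        """Hypothetical operation under test; replace with a real call into the system."""
        time.sleep(0.01)  # simulate work / I/O latency
        return f"order-{order_id}-accepted"

    def stress_place_order(workers=50, requests=500):
        """Fire many concurrent requests and report failures and elapsed time."""
        start = time.time()
        failures = 0
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for result in pool.map(place_order, range(requests)):
                if not result.endswith("accepted"):
                    failures += 1
        elapsed = time.time() - start
        print(f"{requests} requests, {failures} failures, {elapsed:.2f}s elapsed")

    if __name__ == "__main__":
        stress_place_order()

Raising the worker and request counts until response times or failure rates degrade gives at least a first estimate of where the system breaks.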

Environmental mismatch. Testing the system in an environment that is not the same as the environment on which it will be installed doesn’t tell you about how it will work in the real world. Such mismatched tests make little or no attempt to replicate the mix of peripherals, hardware, and applications present in the installation environment.

Ignoring exceptions. Developers sometimes erroneously slant testing toward normal or regular cases. Such testing often ignores system exceptions, leading to a system that works most of the time, but not all the time.
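As a brief hedged sketch (Python with pytest; the withdrawal rule is hypothetical), exception paths deserve explicit tests alongside the happy path:

    import pytest

    def withdraw(balance, amount):
        """Hypothetical function: reject invalid requests instead of failing silently."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    def test_withdraw_happy_path():
        assert withdraw(100, 30) == 70

    def test_withdraw_rejects_overdraft():
        # The exceptional case is tested explicitly, not ignored.
        with pytest.raises(ValueError):
            withdraw(100, 200)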

Configuration mismanagement. In some cases, a software release contains components that have not been tested at all or were not tested with the released versions of the other components. Developers can’t ensure that the component will work as intended.

Testing overkill. Over-testing relatively risk-free areas of the system diverts precious resources from the system’s more high-risk (and sometimes difficult to test) areas.

No contingency planning. There is no contingency in the test plan to deal with significant defects discovered during testing.

Non-independent testing. When the development team carries out testing, it can lack the objectivity of an independent testing team.


Software Tester’s Mental Life


Boris Beizer identified five phases of a tester’s mental life:

Phase 0: There’s no difference between testing and debugging. Other than in support of debugging, testing has no purpose.

Phase 0 Thinking: Testing = Debugging

Phase 1: The purpose of testing is to show that the software works.

Phase 1 Thinking: The S/W Works

Phase 2: The purpose of testing is to show that the software doesn’t work.

Phase 2 Thinking: The S/W Doesn’t Work

Phase 3: The purpose of testing is not to prove anything, but to reduce the perceived risk of the software not working to an acceptable value.

Phase 3 Thinking: Test for Risk Reduction

Phase 4: Testing is not an act. It is a mental discipline that results in low-risk software without much testing effort.

Phase 4 Thinking: A State of Mind

Now think: which phase of the tester’s mental life are you in?


Why start testing Early?


Introduction:
You have probably heard and read in blogs that “testing should start early in the development life cycle”. In this chapter, we will discuss why to start testing early, very practically.

Fact One
Let’s start with the regular software development life cycle:

[Figure: the project timeline as planned]

  • First we’ve got a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project.
  • After that, analysis will be done, followed by code build.
  • Now it’s your turn: you can start testing.

Do you think this is what is going to happen? Dream on.

This is what’s going to happen:

[Figure: the project timeline as it actually executes]

  • Planning, analysis and code build will take more time than planned.
  • That would not be a problem if the total project time were prolonged accordingly. Forget it: most likely you will have to deal with the fact that you have only a few days left to perform the tests.
  • The deadline is not going to move at all: promises have been made to customers, and project managers will lose their bonuses if they deliver past the deadline.

Fact Two
The earlier you find a bug, the cheaper it is to fix it.

Price of Buggy Code

If you find a bug during requirements definition, fixing it is roughly 50 times cheaper (!!) than finding the same bug during testing, and as much as 100 times cheaper (!!) than finding it after going live.

This is easy to understand: if you find the bug in the requirements definition, all you have to do is change the text of the requirement. If you find the same bug in final testing, analysis and code build have already taken place, and much more effort has been spent building something that nobody wanted.

Conclusion: start testing early!
This is what you should do:

Testing should be planned for each phase
  • Make testing part of each Phase in the software life cycle
  • Start test planning the moment the project starts
  • Start finding bugs the moment the requirements are defined
  • Keep doing that during the analysis and design phases
  • Make sure testing becomes part of the development process
  • And make sure all test preparation is done before you start final testing. If you have to start preparing only then, your testing is going to be crap!

Want to know how to do this?
Go to the Functional testing step by step page. (will be added later)