Error Guessing
Why can one Tester find more errors than another Tester in the same piece of software?

More often than not, this comes down to a technique called ‘Error Guessing’. To be successful at Error Guessing, a certain level of knowledge and experience is required. A Tester can then make an educated guess at where potential problems may arise, based on experience with a previous iteration of the software or simply a level of knowledge in that area of technology. This test case design technique can be very effective at pinpointing potential problem areas in software. It is often applied by creating a list of potential problem areas/scenarios and then producing a set of test cases from that list. This approach can often find errors that would otherwise be missed by a more structured testing approach.


An example of how to use the ‘Error Guessing’ method would be to imagine you had a software program that accepts a ten-digit customer code and is designed to accept only numerical data.

Here are some example test case ideas that could come out of Error Guessing (a sketch of how they might be automated follows the list):
1. Input of a blank entry
2. Input of greater than ten digits
3. Input of a mixture of numbers and letters
4. Input of identical customer codes
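
As a minimal sketch, here is how these four ideas might be captured as automated checks. It assumes a hypothetical validate_customer_code function that accepts only a ten-digit, all-numeric, previously unused code; the function name and behaviour are illustrative, not part of the original example:

    import unittest

    def validate_customer_code(code, existing_codes=frozenset()):
        # Hypothetical validator: exactly ten digits, no duplicates.
        return code.isdigit() and len(code) == 10 and code not in existing_codes

    class ErrorGuessingTests(unittest.TestCase):
        def test_blank_entry(self):                      # idea 1
            self.assertFalse(validate_customer_code(""))

        def test_more_than_ten_digits(self):             # idea 2
            self.assertFalse(validate_customer_code("12345678901"))

        def test_mixture_of_numbers_and_letters(self):   # idea 3
            self.assertFalse(validate_customer_code("12345ABCDE"))

        def test_identical_customer_codes(self):         # idea 4
            self.assertFalse(
                validate_customer_code("1234567890",
                                       existing_codes={"1234567890"}))

    if __name__ == "__main__":
        unittest.main()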

What we are effectively trying to do when designing Error Guessing test cases is to think about what could have been missed during the software design.

This testing approach should only be used to complement an existing formal test method, and should not be used on its own, as it cannot be considered a complete way of testing software.

Exploratory Testing
This type of testing is normally governed by time. It consists of tests based on a test charter that contains test objectives. It is most effective when there are few or no specifications available. It should only really be used to assist with, or complement, a more formal approach. Essentially, it can confirm that major functionality works as expected without fully testing it.
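
As a purely hypothetical illustration, a simple time-boxed charter for the customer-code program used earlier might look something like this:

    CHARTER:    Explore the customer code entry screen
    TIME BOX:   60 minutes
    OBJECTIVES:
      - Confirm a valid ten-digit numeric code is accepted
      - Probe boundary lengths (nine, ten, and eleven digits)
      - Note any unclear or misleading error messages
    LOGGING:    Record defects found, open questions, and areas not covered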

Ad-hoc Testing

This type of testing is considered to be the most informal and, by many, the least effective. Ad-hoc testing is simply making up the tests as you go along. It is often used when there is only a very small amount of time to test something. A common mistake with Ad-hoc testing is failing to document the tests performed and the test results. Even when that information is recorded, more often than not supporting details are not logged, such as software versions, dates, and test environment details.
Ad-hoc testing should only be used as a last resort, but with careful consideration it can prove beneficial. If you have a very small window in which to test something, consider the following points:

1. Take some time to think about what you want to achieve
2. Prioritize the functional areas to test when testing time is strictly limited
3. Allocate time to each functional area when you need to cover the whole item
4. Log as much detail as possible about the item under test and its environment
5. Log as much as possible about the tests and the results (a sketch of such a log follows this list)
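
As a minimal sketch of what points 4 and 5 might look like in practice, here is a hypothetical Python session log; the field names and values are illustrative assumptions, not prescribed by any standard:

    import datetime
    import json
    import platform

    # Point 4: details about the item under test and its environment.
    session_log = {
        "item_under_test": "CustomerCodeEntry",      # hypothetical name
        "software_version": "1.2.3",                 # hypothetical version
        "date": datetime.date.today().isoformat(),
        "environment": {
            "os": platform.platform(),
            "python": platform.python_version(),
        },
        "tests": [],                                 # point 5 goes here
    }

    # Point 5: record each test performed and its result.
    def log_test(description, test_input, expected, actual):
        session_log["tests"].append({
            "description": description,
            "input": test_input,
            "expected": expected,
            "actual": actual,
            "passed": expected == actual,
        })

    log_test("Blank entry is rejected", "", False, False)
    print(json.dumps(session_log, indent=2))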

Random Testing

A Tester normally selects test input data from what is termed an ‘input domain’ in a structured manner. Random Testing is simply when the Tester selects data from the input domain at random. For Random Testing to be effective, there are some important open questions to consider:
1. Is random data sufficient to prove the module meets its specification when tested?
2. Should random data only come from within the ‘input domain’?
3. How many values should be tested?

As you can tell, there is little structure involved in ‘Random Testing’. To avoid dealing with the above questions, a more structured Black-box test design could be implemented instead. However, a random approach can save valuable time and resources if used in the right circumstances. There has been much debate over the effectiveness of random testing techniques compared with the more structured techniques, and most experts agree that random test data on its own provides little chance of producing an effective test. There are many tools available today that can select random test data from a specified data value range, an approach that is especially useful for tests at the system level. In the real world, you often find ‘Random Testing’ used alongside other structured techniques to provide a compromise between targeted testing and testing everything.
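
As a minimal sketch of selecting random test data, the following reuses the hypothetical ten-digit customer code validator from the Error Guessing example. The alphabet, length range, and number of values are assumptions, and a fixed seed is used so a failing run can be reproduced:

    import random

    def validate_customer_code(code):
        # Hypothetical validator from the Error Guessing example.
        return code.isdigit() and len(code) == 10

    rng = random.Random(42)  # fixed seed so a failing run can be reproduced

    # Question 2: this alphabet deliberately strays outside the valid
    # input domain by mixing letters in with the digits.
    alphabet = "0123456789ABCDEF"

    # Question 3: the number of values to test (100) is an arbitrary choice.
    results = []
    for _ in range(100):
        length = rng.randint(8, 12)  # lengths around the ten-digit boundary
        code = "".join(rng.choice(alphabet) for _ in range(length))
        results.append((code, validate_customer_code(code)))

    # Question 1: most random codes are rejected, so very few runs exercise
    # the 'accept' path -- random data alone rarely makes an effective test.
    accepted = [code for code, ok in results if ok]
    print(f"{len(accepted)} of {len(results)} random codes were accepted")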