Difference between Test Plan and Test Strategy | Do we really need Test Plan documents?

In this post we will discuss:

  • Difference between Test Plan and Test Strategy (both theoretically, as per IEEE 829, and practically, i.e. what actually happens in testing projects) in a brief and simple manner.
  • Do we really need Test Plan documents?

Theory Says –
Test Plan – “A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort”
The purpose of the Master Test Plan, as stated in IEEE Std 829, is to provide an overall test planning and test management document for multiple levels of test (either within one project or across multiple projects).
You can see the IEEE standard test plan here.

Test Strategy – A Test Strategy (or Test Approach) is a set of guidelines that describes test design. The test strategy answers: How is testing going to be performed? What will the test architecture be?
A Test Strategy can exist at two levels – Company/Organization level and Project level. For example, if a company is product based, there can be a general test strategy for testing its software products. The same strategy can then be implemented (with or without modifications) at the project level whenever a new project comes in.
These definitions may be OK for beginners, but they are not enough.
Well, James Bach introduces the following definitions in his blog –

  • Test Plan: the set of ideas that guide a test project
  • Test Strategy: the set of ideas that guide test design
  • Test Logistics: the set of ideas that guide the application of resources to fulfil a test strategy

I find these ideas to be a useful jumping off point. Here are some implications:

  • The test plan is the sum of test strategy and test logistics.
  • The test plan document does not necessarily contain a test plan. This is because many test plan documents are created by people who are following templates without understanding them, or writing things to please their bosses, without knowing how to fulfil their promises, or simply because it once was a genuine test plan but now is obsolete.
  • Conversely, a genuine test plan is not necessarily documented. This is because new ideas may occur to you each day that change how you test. In my career, I have mostly operated without written test plans.

Conclusion:
Test Strategy and Test Plan Relation
Is a Test Plan document required for testing?
A Test Plan covers – what needs to be tested, how testing is going to be performed, the resources needed for testing, timelines, and the associated risks.

For efficient and effective test planning we don’t really need the IEEE-829 test plan template. Planning can also be done without a test plan document. Identify your test approach (strategy) and go ahead with your testing. Many test plans are created just for the sake of process.
Note – We are not against Test Plan documents. Use test plan documents if they really contain some useful information.

In the upcoming posts, we will discuss points to consider while deciding a test strategy and a guide to preparing a Test Plan.
Related Posts:

Tricky Software Testing Terms

What is Ramp Testing? – Continuously raising an input signal until the system breaks down.
What is Depth Testing? – A test that exercises a feature of a product in full detail.
What is Quality Policy? – The overall intentions and direction of an organization as regards quality as formally expressed by top management.
What is Race Condition? – A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
What is Emulator? – A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
What is Dependency Testing? – Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
What is Documentation testing? – The aim of this testing is to help in preparation of the cover documentation (User guide, Installation guide, etc.) in as simple, precise and true way as possible.
What is Code style testing? – This type of testing involves the code check-up for accordance with development standards: the rules of code comments use; variables, classes, functions naming; the maximum line length; separation symbols order; tabling terms on a new line, etc. There are special tools for code style testing automation.
What is scripted testing? – Scripted testing means that test cases are to be developed before tests execution and some results (and/or system reaction) are expected to be shown. These test cases can be designed by one (usually more experienced) specialist and performed by another tester.
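The Race Condition entry above can be demonstrated in a few lines of Python: two threads update a shared counter, once without and once with a lock to moderate simultaneous access. This is an illustrative sketch, not tied to any particular application under test:

```python
import threading

# Shared resource updated without synchronization: the read-modify-write
# in `counter += 1` is not atomic, so concurrent updates can be lost.
counter = 0

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1

# The fix: a lock moderates simultaneous access to the shared resource.
lock = threading.Lock()
safe_counter = 0

def safe_increment(n):
    global safe_counter
    for _ in range(n):
        with lock:
            safe_counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(2)]
threads += [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)       # may fall short of 200000 when updates interleave
print(safe_counter)  # always 200000
```

Because the unsynchronized result depends on thread scheduling, the failure may not appear on every run – which is exactly what makes race conditions hard to test for.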
Random Software Testing Terms and Definitions:
• Formal Testing: Performed by test engineers
• Informal Testing: Performed by the developers
• Manual Testing: That part of software testing that requires human input, analysis, or evaluation.
• Automated Testing: Software testing that utilizes a variety of tools to automate the testing process. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tools and the software being tested to set up the test cases.
• Black box Testing: Testing software without any knowledge of the back-end of the system, structure or language of the module being tested. Black box test cases are written from a definitive source document, such as a specification or requirements document.
• White box Testing: Testing in which the software tester has knowledge of the back-end, structure and language of the software, or at least its purpose.
• Unit Testing: Unit testing is the process of testing a particular compiled program, i.e., a window, a report, an interface, etc., independently as a stand-alone component/program. The types and degrees of unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the programmers, who are also responsible for the creation of the necessary unit test data.
• Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.
• System Testing: System testing is a form of black box testing. The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed.
• Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules/functions.
• System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration).
• Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
• Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
• Usability Testing: Usability testing is testing for ‘user-friendliness’. A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.
• End-to-end Testing: Similar to system testing – testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.
• Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
• Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
• Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems.
• Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.
• Installation Testing: Testing with the intent of determining if the product is compatible with a variety of platforms and how easily it installs.
• Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• Adhoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed – usually done by skilled testers. Sometimes ad hoc testing is referred to as exploratory testing.
• Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
• Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
• Penetration Testing: Penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
• Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.
• Smoke Testing: A quick, non-exhaustive test of a build’s major functions, conducted to verify that the build is stable enough for further testing.
• Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)
• Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
• Exploratory Testing: Any testing in which the tester dynamically changes what they’re doing for test execution, based on information they learn as they’re executing their tests.
• Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large.
• Gamma Testing: Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks.
• Mutation Testing: A method to determine test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
• Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing approach that examines the application’s program structure, and derives test cases from the application’s program logic.
• Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
• Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors’ products.
• Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to reaching customers. Sometimes a selected group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
• Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn’t fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.
• Closed Box Testing: Closed box testing is same as black box testing. A type of testing that considers only the functionality of the application.
• Bottom-up Testing: Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.
• Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
• Error: A mismatch between the program and its specification is an error in the program.
• Defect: A defect is a variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types – a variance from product specifications or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer or the operational system. A large share of defects can be traced back to process problems.
• Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
• Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.
• Quality Control: quality control or quality engineering is a set of measures taken to ensure that defective products or services are not produced, and that the design meets performance requirements.
• Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
• Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification is completed.
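Several of the terms above – unit testing, black box testing, boundary value testing – can be illustrated in one small example. The triangle classifier below is a classic testing exercise, not taken from the post; the test cases are derived from the specification rather than from the code:

```python
import unittest

def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths (a classic testing exercise)."""
    if a <= 0 or b <= 0 or c <= 0:
        raise ValueError("sides must be positive")
    if a + b <= c or a + c <= b or b + c <= a:
        raise ValueError("violates the triangle inequality")
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTests(unittest.TestCase):
    # Black box cases: written from the specification, not from the code.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 2), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_degenerate_boundary(self):
        # Boundary value: 1 + 2 == 3 gives a flat, degenerate "triangle".
        with self.assertRaises(ValueError):
            classify_triangle(1, 2, 3)

# Run the unit tests programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TriangleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

The degenerate case (1, 2, 3) is a boundary value: it sits exactly on the edge of the triangle inequality, where defects tend to cluster.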

Testing Levels and Types
There are basically three levels of testing: Unit Testing, Integration Testing and System Testing.
Various types of testing come under these levels.

Unit Testing: To verify a single program or a section of a single program.
Integration Testing: To verify interaction between system components
Prerequisite: unit testing completed on all components that compose a system
System Testing: To verify and validate behaviors of the entire system against the original system objectives
Software testing is a process that identifies the correctness, completeness, and quality of software.

V Model to W Model | W Model in SDLC Simplified

We have already discussed that the V-Model is the basis of structured testing. However, there are a few problems with the V-Model. The V-Model represents a one-to-one relationship between the documents on the left-hand side and the test activities on the right. This is not always correct: system testing depends not only on the functional requirements but also on the technical design and architecture. A couple of testing activities are not explained in the V-Model at all. This is a major shortcoming – the V-Model does not support the broader view of testing as a continuous activity throughout the software development life-cycle.

Paul Herzlich introduced the W-Model. The W-Model covers those testing activities that are skipped in the V-Model.

The ‘W’ model illustrates that testing starts from day one of project initiation.

If you look at the picture below, the 1st “V” shows all the phases of the SDLC and the 2nd “V” validates each phase. In the 1st “V”, every activity is shadowed by a test activity whose specific purpose is to determine whether the objectives of that activity have been met and the deliverable meets its requirements. The W-Model presents a standard development life-cycle with every development stage mirrored by a test activity. On the left-hand side, the deliverable of a development activity (for example, writing requirements) is typically accompanied by a test activity (testing the requirements), and so on.
Fig 1: W Model
Fig 2: Each phase is verified/validated. The dotted arrows show that every phase in brown is validated/tested through the corresponding phase in sky blue.
Now, in the above figure,

  • Point 1 refers to – Build Test Plan & Test Strategy.
  • Point 2 refers to – Scenario Identification.
  • Points 3 and 4 refer to – Test case preparation from the specification document and the design documents.
  • Point 5 refers to – Review of test cases and updating them as per the review comments.

As you can see, the above five points cover static testing.

  • Point 6 refers to – Various testing methodologies (i.e. unit/integration testing, path testing, equivalence partitioning, boundary value analysis, specification-based testing, security testing, usability testing, performance testing).
  • After this, there are regression test cycles and then User acceptance testing.

Conclusion – The V-Model only shows the dynamic test cycles, but the W-Model gives a broader view of testing. The connection between the various test stages and the basis for each test is clear in the W-Model (which is not the case in the V-Model).

You can find more comparison of W Model with other SDLC models Here.

Selenium Interview Questions

Posting the interview questions on Selenium – the automated software testing tool.

  1. What is the difference between an assert and a verify with Selenium commands?
  2. What Selenese commands can be used to help debug a regexp?
  3. What is one big difference between SilkTest and Selenium, excluding the price?
  4. Which browsers can Selenium IDE be run in?
  5. If a Selenium function requires a script argument, what would that argument look like in general terms?
  6. If a Selenium function requires a pattern argument, what five prefixes might that argument have?
  7. What is the regular expression sequence that loosely translates to “anything or nothing?”
  8. What is the globbing sequence that loosely translates to “anything or nothing?”
  9. What does a character class for all alphabetic characters and digits look like in regular expressions?
  10. What does a character class for all alphabetic characters and digits look like in globbing?
  11. What must one set within SIDE in order to run a test from the beginning to a certain point within the test?
  12. What does a right-pointing green triangle at the beginning of a command in SIDE indicate?
  13. How does one get rid of the right-pointing green triangle?
  14. How can one add vertical white space between sections of a single test?
  15. What Selenium functionality uses wildcards?
  16. Which wildcards does SIDE support?
  17. What are the four types of regular expression quantifiers which we’ve studied?
  18. What regular expression special character(s) means “any character?”
  19. What distinguishes between an absolute and relative URL in SIDE?
  20. How would one access a Selenium variable named “count” from within a JavaScript snippet?
  21. What Selenese command can be used to display the value of a variable in the log file, which can be very valuable for debugging?
  22. If one wanted to display the value of a variable named answer in the log file, what would the first argument to the previous command look like?
  23. Where did the name “Selenium” come from?
  24. Which Selenium command(s) simulates selecting a link?
  25. Which two commands can be used to check that an alert with a particular message popped up?
  26. What does a comment look like in Column view?
  27. What does a comment look like in Source view?
  28. What are Selenium tests normally named (as displayed at the top of each test when viewed from within a browser)?
  29. What command simulates selecting the browser’s Back button?
  30. If the Test Case frame contains several test cases, how can one execute just the selected one of those test cases?
  31. What globbing functionality is NOT supported by SIDE?
  32. What is wrong with this character class range? [A-z]
  33. What are four ways of specifying an uppercase or lowercase M in a Selenese pattern?
  34. What does this regular expression match? regexp:[1-9][0-9],[0-9]{3},[0-9]{3}
  35. What are two ways to match an asterisk within a Selenese regexp?
  36. What is the generic name for an argument (to a Selenese command) which starts with //?
  37. What Selenese command is used to choose an item from a list?
  38. How many matches exist for this pattern? regexp:[13579][02468]
  39. What is the oddity associated with testing an alert?
  40. How can one get SIDE to always record an absolute URL for the open command’s argument?
  41. What Selenese command and argument can be used to transfer the value of a JavaScript variable into a SIDE variable?
  42. How would one access the value of a SIDE variable named name from within a JavaScript snippet used as the argument to a Selenese command?
  43. What is the name of the type of JavaScript entity represented by the last answer?
  44. What string(s) does this regular expression match? regexp:August|April 5, 1908
  45. What Selenium regular expression pattern can be used instead of the glob below to produce the same results? verifyTextPresent | glob:9512?
  46. What Selenium globbing pattern can be used instead of the regexp below to produce the same results? verifyTextPresent | regexp:Hush.*Charlotte
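Several of the pattern questions above (e.g. 34, 38 and 44) can be self-checked with Python’s re module – Selenese regexp patterns use JavaScript regular expression syntax, which agrees with Python’s for these examples. A sketch, not an official answer key:

```python
import re

# Question 34: check the pattern against sample strings with fullmatch.
pat = re.compile(r"[1-9][0-9],[0-9]{3},[0-9]{3}")
print(bool(pat.fullmatch("10,000,000")))  # True
print(bool(pat.fullmatch("9,000,000")))   # False: first group needs two digits

# Question 38: [13579][02468] matches an odd digit followed by an even
# digit, so there are 5 * 5 = 25 matching two-character strings.
pairs = [f"{a}{b}" for a in "13579" for b in "02468"]
print(len(pairs))  # 25

# Question 44: alternation | has the lowest precedence, so this pattern
# matches either "August" or "April 5, 1908" (not "August 5, 1908").
alt = re.compile(r"August|April 5, 1908")
print(bool(alt.fullmatch("August")))          # True
print(bool(alt.fullmatch("August 5, 1908")))  # False
```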

Software Testing Checklist – Major Areas of Testing – What to Look at?

In this post we will discuss the points to take care while testing following applications:

  • STAND-ALONE DATABASE APPLICATION
  • CLIENT/SERVER APPS
  • WEB BASED APPLICATIONS
  • PRINTERS AND DRIVERS – Interface software
  • LOCALIZATION / INTERNATIONALIZATION

STAND-ALONE DATABASE APPLICATION

  • installation (copy files, settings to registry, icon, groups). “Vanilla Windows” installation.
  • forms: test each field for capacity (5 test cases), valid/invalid input (3+), and functionality
  • reports – calculations, data display (window sizes), colors, proportions, query attached
  • search – create database for testing that feature, each search-able field, combinations of 2 and 3, wild cards (* and ?) and their positioning
  • sort by multiple criteria, use empty fields to be substituted by data from other fields
  • import/export use Complete Record for testing (empty fields, shorter strings, data going to wrong field), volume testing
  • backup/restore – same as above
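The wildcard search cases above (* and ?) follow shell-style globbing; Python’s fnmatch module implements the same matching rules and can be used to sketch expected search results. The record names here are made up:

```python
from fnmatch import fnmatchcase

# Hypothetical values for a searchable field; fnmatchcase gives
# case-sensitive glob matching with the same * and ? semantics.
records = ["Anderson", "Andrews", "Andy", "Sanders"]

print([r for r in records if fnmatchcase(r, "And*")])   # * matches any run of characters
print([r for r in records if fnmatchcase(r, "And?")])   # ? matches exactly one character
print([r for r in records if fnmatchcase(r, "*and*")])  # wildcards at both ends
```

Positioning matters: "And*" anchors the pattern at the start of the field, while "*and*" finds the substring anywhere – both cases belong in the search test suite.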

CLIENT/SERVER APPS

  • administration
  • installation (client, server),
  • user rights/privileges
  • error messages
  • database security (user ID, password)

login: boundary testing (6-12 characters), letters & digits, case sensitivity
password: boundary testing (6-12 characters), letters & digits, case sensitivity, replacing characters for asterisks, no CUT/COPY, # of failures
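The boundary testing called for above (6-12 characters, letters & digits) can be sketched as a small table-driven test: values just below, at, and just above each boundary. `is_valid_password` is a hypothetical stand-in for the application’s validator, not a real API:

```python
import string

MIN_LEN, MAX_LEN = 6, 12

def is_valid_password(pw):
    # Hypothetical validator mirroring the checklist: length 6-12,
    # with at least one letter and one digit.
    return (MIN_LEN <= len(pw) <= MAX_LEN
            and any(c in string.ascii_letters for c in pw)
            and any(c in string.digits for c in pw))

boundary_cases = {
    "a1cde":         False,  # 5 chars: just below the minimum
    "a1cdef":        True,   # 6 chars: at the minimum
    "a1cdefghijkl":  True,   # 12 chars: at the maximum
    "a1cdefghijklm": False,  # 13 chars: just above the maximum
    "abcdefgh":      False,  # letters only: no digit
}

for pw, expected in boundary_cases.items():
    assert is_valid_password(pw) == expected, pw
print("all boundary cases pass")
```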

WEB BASED APPLICATIONS

  • Browser/OS Compatibility (overlapping frames, missing images, links not linking, fonts changing sizes)
  • performance
  • navigation (Dynamic Pages)

PRINTERS AND DRIVERS

  • Printer Drivers (especially new) are buggy
  • do not use new printers and drivers for testing applications
  • try multiple printers/drivers to decide if the problem is in the AUT or in driver
  • how do you install a new printer?
  • how do you know the version of a driver? (right click menu/printing preferences/right mouse click menu/About…)
  • Verification – overlapping actual result page with expected result page
  • Bug report – attach scanned image of the page with highlighted problems
  • Enumeration becomes a part of the bug description
  • test case is a file/document to be printed
  • Automation of creating printouts is easy
  • To automate verification we print to file and binary compare files automatically
  • Problems to look for: incomplete content (part of data not printed), font issues (size, style, etc.), overlapping, lost positioning/adjustments, typing outside of page (labels, envelops), double-sided (order changes on the fly)
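The “print to file and binary compare” verification step above can be sketched with the standard library: capture the expected and actual printouts as files, then compare them byte for byte. The .prn file names and contents are purely illustrative:

```python
import filecmp
import os
import tempfile

# Stand-in for captured printer output: expected vs. actual printout files.
tmp = tempfile.mkdtemp()
expected = os.path.join(tmp, "expected.prn")
actual = os.path.join(tmp, "actual.prn")

with open(expected, "wb") as f:
    f.write(b"PAGE 1\nInvoice #001\n")
with open(actual, "wb") as f:
    f.write(b"PAGE 1\nInvoice #001\n")

# shallow=False forces a byte-for-byte comparison instead of
# comparing only os.stat() metadata.
same = filecmp.cmp(expected, actual, shallow=False)
print("printout matches expected:", same)
```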

LOCALIZATION / INTERNATIONALIZATION

  • adjusting software to another language, currency, time/date format
  • double-byte character set problems
  • standard names for controls are provided by Microsoft (GUI standards) in many languages
  • LOOK AT controls (menu items, label, list boxes)
  • EXPECT labels/list items not fitting into frames, incomplete translation
  • Internationalization – making software independent of language, currency, date format

CSTE Sample Papers | CSTE Sample Questions

On readers’ request, I am posting CSTE certification practice questions.

CSTE Certification Essay based questions:
1. What fields would you include in creating a new defect tracking program (used by QA, developers, etc)? (25 points)
a. Draw a pictorial diagram of a report you would create for developers to determine project status.
b. Draw a pictorial diagram of a report you would create for users and management to show project status.
2. What 3 tools would you purchase for your company for use in testing and justify why you would want them? (this question is in both essay parts, only rephrased. I think 10 points each time)
3. Describe the difference between validation and verification. (5 points)
4. Put the following testing types in order and give a brief description of each. System testing, acceptance testing, unit testing, integration testing, benefits realization testing. (10 points)
5. Describe automated capture/playback tools and list the benefits of using them. (10 points)
6. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this new project. (30 points)
7. List what you think are the two primary goals of testing. (5 or 10 points)
8. If your company is going to conduct a review meeting, what position would you select in the review committee and why?
9. What are the three factors that will affect the decision to stop testing?
10. Write any three attributes that will impact the testing process.
11. This is a problem-solving question. Write a test transaction for the following: a company is going to deduct 6.2% on the first $62,000 of earnings.
12. What activity is done in acceptance testing, but not in system testing, to ensure the customer requirements are met?
13. Prepare a checklist for the developers on unit testing, to be completed before the application comes to the testing department.
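Question 11 above asks for test transactions for a 6.2% deduction applied to the first $62,000 of earnings. One way to approach it is boundary-value transactions below, at, and above the cap. The function and figures here are an illustrative sketch, not an official answer key:

```python
CAP = 62_000
RATE = 0.062

def deduction(earnings):
    # Hypothetical implementation: 6.2% of earnings, capped at $62,000.
    return round(min(earnings, CAP) * RATE, 2)

# Boundary-value test transactions: (earnings, expected deduction).
test_transactions = [
    (0,       0.00),     # nothing earned, nothing deducted
    (61_999,  3843.94),  # just below the cap
    (62_000,  3844.00),  # exactly at the cap
    (62_001,  3844.00),  # just above the cap: deduction stops growing
    (100_000, 3844.00),  # well above the cap
]

for earnings, expected in test_transactions:
    assert deduction(earnings) == expected, earnings
print("all transactions pass")
```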
CSTE Multiple Choice Questions:
1. What is the percent of the total cost of quality that comes from rework?
2. What is the percent of the total gross of sales that comes from product failure?
3. What is the cost of quality?
4. What is management by fact?
5. What are the three types of interfaces?
6. What three rules should be followed for all reviews?
7. What is boundary value testing?
8. What is decision/branch coverage strategy?
9. Which of the following is not one of the 6 Structural Test Approaches?
10. Which of the following is not one of the 8 Functional Test Approaches?
11. Which of the following is not a perspective of quality?

a. transcendent
b. product-based
c. translucent
d. user-based
e. value-based
f. manufacturing based

12. True or False. Effectiveness is doing things right and efficiency is doing the right things.
13. Which of the following is not one of Deming’s 14 points for management?

a. Adopt a new philosophy
b. Eliminate slogans, exhortations, and targets for the work force
c. Mobility of management
d. Create constancy of purpose
14. True or False. The largest cost of quality is from production failure.

15. Defects are least costly to correct at what stage of the development cycle?

a. Requirements
b. Analysis & Design
c. Construction
d. Implementation

16. A review is what category of cost of quality?

a. Preventive
b. Appraisal
c. Failure

17. True or False. A defect is related to the term fault.
18. What type of change do you need before you can obtain a behaviour change?

a. Lifestyle
b. Vocabulary
c. Internal
d. Management

19. Software testing accounts for what percent of software development costs?

a. 10-20
b. 40-50
c. 70-80
d. 5-10

20. The purpose of software testing is to:

a. Demonstrate that the application works properly
b. Detect the existence of defects
c. Validate the logical design

21. True or False. One of the key concepts of a task force is that the leader be an expert in leading groups as opposed to an expert in a topical area.
22. Match the following terms with their definitions:

a. Black box testing
b. White box testing
c. Conversion testing
d. Thread testing
e. Integration testing