Mastering the Art of Test Case Writing for Software Testers

Test case writing is a crucial aspect of software testing, as it ensures that a product is thoroughly tested and any bugs are detected before release. A well-written test case should be clear, concise, and easy to understand, making it simple for testers to follow and execute.

To write an effective test case, there are several important elements to consider. First, it’s essential to have a clear understanding of the requirements and objectives of the software being tested. This will ensure that the test case is written to cover all necessary scenarios and will help in detecting potential bugs.

Next, it’s important to determine the test case’s objective or the expected outcome. This could include testing specific functionality, validating inputs and outputs, or checking for compliance with industry standards.

When writing the test case, it’s also essential to include specific steps for the tester to follow. These steps should be detailed and easy to understand, and should include any inputs or expected outputs. Additionally, it’s important to include any prerequisites or dependencies that must be met before the test case can be executed.

Here are the key characteristics of effective test cases:

  1. Clarity and simplicity: Effective test cases should be clear, concise, and easy to understand, making it simple for testers to follow and execute.
  2. Comprehensive coverage: Test cases should cover all necessary scenarios and requirements, ensuring thorough testing and bug detection.
  3. Specific objectives: Test cases should have specific objectives, such as testing specific functionality, validating inputs and outputs, or checking for compliance with industry standards.
  4. Detailed steps: Test cases should include detailed steps for the tester to follow, with clear inputs and expected outputs.
  5. Relevance: Test cases should be relevant to the software being tested, rather than generic.
  6. Repeatability: Test cases should be written in such a way that they can be easily repeated, ensuring consistent results.
  7. Traceability: Test cases should be traceable back to the requirements or objectives of the software, allowing for easy tracking and reporting of test results.
  8. Maintainability: Effective test cases should be easily maintainable, allowing for updates and changes as needed.

Examples of test cases for a login feature in a software application may include:

Test Case: Verify that a user can successfully log in with a valid username and password

  • Objective: To verify that the login feature is working correctly
  • Steps:
    1. Open the login page
    2. Enter a valid username and password
    3. Click on the login button
    4. Verify that the user is directed to the home page
  • Expected Result: The user should be successfully logged in and directed to the home page

Test Case: Verify that an error message is displayed when an incorrect password is entered

  • Objective: To verify that the login feature is handling invalid credentials correctly
  • Steps:
    1. Open the login page
    2. Enter a valid username and an incorrect password
    3. Click on the login button
    4. Verify that an error message is displayed
  • Expected Result: An error message should be displayed, indicating that the entered password is incorrect
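Manual test cases like the two above translate naturally into automated checks. Below is a minimal sketch using Python's built-in unittest; the `login` function and user store are hypothetical stand-ins for the real application under test:

```python
import unittest

# Hypothetical user store and login function standing in for the real application.
VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    """Return the page the user lands on and any error message."""
    if VALID_USERS.get(username) == password:
        return {"page": "home", "error": None}
    return {"page": "login", "error": "Incorrect username or password"}

class LoginTests(unittest.TestCase):
    def test_valid_credentials_reach_home_page(self):
        # Steps: open login page, enter valid credentials, click login, verify
        result = login("alice", "s3cret")
        self.assertEqual(result["page"], "home")
        self.assertIsNone(result["error"])

    def test_incorrect_password_shows_error(self):
        result = login("alice", "wrong-password")
        self.assertEqual(result["page"], "login")
        self.assertIn("Incorrect", result["error"])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Run with `python -m unittest` or by executing the file directly; in a real project the stand-in `login` would be replaced by code that drives the actual login page.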

In conclusion, test case writing is an essential skill for software testers. By understanding the requirements and objectives of the software, determining the test case’s objective, and providing clear, easy-to-follow steps, testers can ensure that their test cases are thorough and effective in detecting bugs.

I strongly recommend enrolling in Blackbox Software Testing Foundation courses to bolster your testing abilities. These comprehensive, scientifically-based courses are a must-have for any serious software tester.

Language of Testing | Software Testing Vocabulary


While communicating with colleagues or clients, or within the testing team, we commonly use vocabulary like "unit testing", "functional testing", "regression testing", "system testing", "test policies", "bug triage", etc.
If we use the same terms with a person who is not a test professional, we need to explain each and every term in detail, and communication becomes difficult and painful. To speak the language of testing, you need to learn its vocabulary. Find below a collection of testing vocabulary:

Affinity Diagram: A group process that takes large amounts of language data, such as ideas developed by brainstorming, and divides it into categories.
Audit: This is an inspection/assessment activity that verifies compliance with plans, policies and procedures and ensures that resources are conserved.

Baseline: A quantitative measure of the current level of performance.
Benchmarking: Comparing your company's products, services, or processes against best practices or competitive practices, to help define superior performance of a product, service, or support process.
Black-box Testing: A test technique that focuses on testing the functionality of the program, component, or application against its specifications, without knowledge of how the system is constructed.
Boundary value analysis: A data selection technique in which test data is chosen from the “boundaries” of the input or output domain classes, data structures and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one and the minimum value plus or minus one.
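As a sketch, for a hypothetical input field that accepts integers from 1 to 100, boundary value analysis yields a small, high-value data set:

```python
def boundary_values(minimum, maximum):
    """Classic boundary-value data: each bound, plus and minus one."""
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

# For a field accepting 1..100, six values cover both boundaries:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```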
Branch Testing: A test method that requires that each possible branch of each decision be executed at least once.
Brainstorming: A group process for generating creative and diverse ideas.
Bug: A catchall term for all software defects or errors.

Debugging: The process of analysing and correcting syntactic, logic and other errors identified during testing.
Decision Coverage: A white-box testing technique that measures the number – or percentage – of decision directions executed by the test cases designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested.
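To illustrate, a function with a single decision needs one test per decision direction to reach 100% decision coverage. The discount rule below is a hypothetical example:

```python
def discount(total):
    """Hypothetical rule: orders of 100 or more get 10% off."""
    if total >= 100:          # the decision under test
        return total * 0.9    # true direction
    return total              # false direction

# One test per decision direction gives 100% decision coverage:
assert discount(150) == 135.0   # exercises the true branch
assert discount(50) == 50       # exercises the false branch
```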
Decision Table: A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.
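A sketch of deriving test cases from a decision table, using two hypothetical checkout conditions; `itertools.product` enumerates every combination, and each row becomes one validation test case:

```python
from itertools import product

# Hypothetical checkout rule with two conditions: is the user registered,
# and is the cart total over the free-shipping threshold?
def expected_result(registered, over_threshold):
    if registered and over_threshold:
        return "free shipping"
    if registered:
        return "member rate"
    return "standard rate"

# Enumerate every condition combination; each row is one test case.
table = [
    ({"registered": r, "over_threshold": o}, expected_result(r, o))
    for r, o in product([True, False], repeat=2)
]
for inputs, outcome in table:
    print(inputs, "->", outcome)
```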
Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.
Desk Check: A verification technique conducted by the author of the artifact to verify the completeness of their own work. This technique does not involve anyone else.
Dynamic Analysis: Analysis performed by executing the program code. Dynamic analysis executes or simulates a development-phase product and detects errors by analyzing the response of the product to sets of input data.

Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.
Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data.
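A sketch for an age field, assuming a hypothetical specification in which 18-65 is valid; one representative value per equivalence class replaces exhaustive testing:

```python
def classify_age(age):
    """Place an age into one of three equivalence classes (hypothetical spec: 18-65 valid)."""
    if age < 18:
        return "invalid: too young"
    if age > 65:
        return "invalid: too old"
    return "valid"

# One representative value per partition stands in for the whole class:
representatives = {"invalid: too young": 10, "valid": 30, "invalid: too old": 70}
for expected, value in representatives.items():
    assert classify_age(value) == expected
```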
Error or Defect: 1. A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. 2. A human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, or incorrect translation or omission of a requirement in the design specification).
Error Guessing: A test data selection technique for picking values that seem likely to cause defects. This technique is based on the theory that test cases and test data can be developed from the intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combination of values for program variables.
Exit criteria: Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.

Flowchart
Pictorial representations of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. Flowcharts for systems are normally developed manually, while flowcharts of programs can be generated automatically by tools.
Force Field Analysis
A group technique used to identify both driving and restraining forces that influence a current situation.
Formal Analysis
Technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.
Functional Testing
Testing that ensures all functional requirements are met without regard to the final program structure.

Histogram
A graphical description of individually measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

Inspection
A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing
This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the interaction or communication/flow of information between the individual components which will be integrated.

Life Cycle Testing
The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.

Pass/Fail Criteria
Decision rules used to determine whether a software item or feature passes or fails a test.
Path Testing
A test method satisfying the coverage criteria that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes and one path from each class is tested.
Performance Test
Validates that both the online response time and batch run times meet the defined performance requirements.
Policy
Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).
Population Analysis
Analyzes production data to identify, independent from the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specs can handle types and frequency of actual data and can be used to create validation tests.
Procedure
The step-by-step method followed to ensure that standards are met.
Process
1. The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.
2. A statement of purpose and an essential set of practices (activities) that address that purpose.
Proof of Correctness
The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.

Quality
A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer’s perspective, quality means “fit for use.”
Quality Assurance (QA)
Deals with the prevention of defects in the product being developed; it is associated with a process. The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.
Quality Control (QC)
Its focus is defect detection and removal. Testing is a quality control activity.
Quality Improvement
To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.


Career in Software Testing – Life as a Tester


(An Article for beginners and fresher testers)
The myths revolving around the field of Software testing that it is a field which does not require any specialization of any sort as far as skill or talent is concerned has proved to be wrong. It is has in fact become the backbone of every organization now and it is the tester who brings out the maximum portion of revenue to the business due to its capability to find bugs at a stage where if left uncaught could prove really expensive for the business.Typically, you can find two category of software testers- Black Box and White Box Testers. Being a Black Box tester keeps you away from all the hassles of programming language and you seem to enjoy your work and playing with the application just like the end user does. So it gives you more insight to think about the way the end user can perform his actions and accordingly find the bugs.

White Box testers, on the other hand, have a life that revolves around programming techniques; their ability to analyse the code in depth can be helpful to the development team. There is no difference in salary between the two categories: a tester earns the same as any developer, since both play an equally vital role in the growth of the organization.

Life as a software tester is not as easy as it appears, because testers act as the foundation of any product: they must find hidden bugs even when the product appears to be free of them. If a bug slips past the testers, it is the organization that has to bear the losses, in money as well as in reputation. So as a tester, you need to make sure you become familiar, in depth, with the application you intend to work on.

With the capability of finding bugs at an early stage, a tester is often appreciated for his work; without it, thousands of dollars would be spent unnecessarily fixing problems that the tester's expertise could have caught. Working 8-9 hours in an office is as essential for a tester as it is for a developer.

From the above, the importance of a software tester in the IT industry is clear, and his life is not as easy as it appears to be.

Guest article by Varun Arora.

Also Read – Why Software Testing is a challenging job?


Which Test Cases Should Be Automated?


Automating the right test cases shortens the testing elapsed time, leading to a huge saving in terms of time and money. Generally, the ROI begins to appear in the third iteration of automated testing.

High Scoring Test Cases
  • Tests that need to be run for every build of the application (sanity check, regression)
  • Tests that use multiple data values for the same actions (data driven tests)
  • Complex and time consuming tests
  • Tests requiring a great deal of precision
  • Tests involving many simple, repetitive steps
  • Testing needed on multiple combinations of OS, DBMS & Browsers
  • Creation of Data & Test Beds
  • Data grooming
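The "multiple data values for the same actions" item above is the classic data-driven pattern: one scripted action executed against many input rows. A minimal sketch, with a hypothetical tax calculation as the function under test:

```python
def apply_tax(amount, rate):
    """Hypothetical function under test: gross up an amount by a tax rate."""
    return round(amount * (1 + rate), 2)

# Data-driven test: the same action executed against many input rows.
test_data = [
    (100, 0.05, 105.0),
    (200, 0.10, 220.0),
    (0,   0.05, 0.0),
]
for amount, rate, expected in test_data:
    assert apply_tax(amount, rate) == expected, (amount, rate)
```

Adding a scenario means adding a data row, not another script, which is exactly why such cases score high for automation.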
Low Scoring Test Cases
  • Usability testing – “How easy is the application to use?”
  • One-time testing
  • “ASAP” testing – “We need to test NOW!”
  • Ad hoc/random testing – based on intuition and knowledge of application
  • Device interface testing
  • Batch program testing
  • Back-end testing
– by the author of “Quick 101 on Automation”


Types of Testing Tools for Client-Server Testing

The following types of testing tools can be useful for client-server testing:

1) Load/Stress Testing Tools: for example, Astra Site Test by Mercury Interactive or SilkPerformer by Segue Software, used to evaluate web-based systems subjected to large volumes of data or transactions.

2) Performance Testing Tools: for example, LoadRunner by Mercury Interactive, used to measure the performance of the client-server system.

3) UI Testing Tools: for example, WinRunner by Mercury Interactive, used to perform UI testing.