Defect Trends report in Software Testing


The Defect Trends report calculates a rolling average of the number of bugs that the team has opened, resolved, and closed, based on the filters that you specify. The rolling average is computed per QA build. Defect Trends reports are very important for development/test managers and senior management to understand the bug/defect resolve and close rates.

The following illustration displays an example of a healthy Defect Trends report.

Defect Trends report

The following is an example of an unhealthy defect trend.
Unhealthy Defect Trends report
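To make the metric concrete, here is a minimal sketch (not tied to any particular reporting tool) of how a per-build rolling average can be computed. The build counts are hypothetical sample data; the window size of 3 builds is an assumption.

```python
# Sketch: rolling average of bug counts per QA build.
# The counts and the window size are illustrative assumptions.
from collections import deque

def rolling_average(counts, window=3):
    """Return the rolling average of `counts` over up to the last `window` builds."""
    averages = []
    recent = deque(maxlen=window)  # automatically drops counts older than `window` builds
    for count in counts:
        recent.append(count)
        averages.append(sum(recent) / len(recent))
    return averages

# Bugs opened in five consecutive QA builds (sample data)
opened = [12, 9, 15, 7, 4]
print(rolling_average(opened))  # [12.0, 10.5, 12.0, 10.33..., 8.66...]
```

A downward-trending rolling average for opened bugs, with resolved/closed keeping pace, is what a healthy chart like the one above shows; the unhealthy chart below shows the opposite.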


Excuses for testers when bugs are caught in later testing cycles/UAT/Production


[Note – This post might not be helpful for testers who work in independent test companies.]
One of the biggest pain points for testers is that whenever a bug is caught in UAT or in later stages, the blame is unfairly put on the testing team. This further results in unfair yearly reviews and appraisals for testing teams. In many mid-size companies, less value is given to testing teams, and a testing culture is lacking. Management should understand that the testing team is “also responsible” [not “only responsible”] when a bug is caught in production.

Testers should be proactive and should be able to deal with such situations. They should have excuses ready for when they are asked “HOW DID YOU MISS THAT BUG?”. In this post, I am listing some excuses so that testers can put themselves on the safer side whenever bugs are found in UAT/Production. Each excuse depends upon a situation, so use the excuses below carefully.

Excuse 1: The bug was missed because this scenario is not covered in the test cases. These test cases were reviewed and approved by the Business Analyst/Product Manager/XXXX person.

Excuse 2: The testing team already reported a similar bug, which is not fixed yet. That’s why we got this new bug in UAT/Production. [Most common excuse.]

Excuse 3: The bug is occurring because of last-minute changes made to the application by the development team. The project management team should come up with a strategy so that we can avoid last-minute changes.

Excuse 4: The bug was missed because this scenario/rule is not mentioned in the requirements document. [Most common excuse.]

Excuse 5: Testing was done in a hurry because the testing team was not given enough time. The project management team should be proactive and make an effective plan.

Excuse 6: The bug was missed because we (the testers) did not test this functionality in later testing cycles. This functionality was not included in the testing scope.

Excuse 7: The bug was missed because we tested only the functionality that appeared in the list of impacted areas. Whenever a change is made to existing functionality, the development team/development lead/manager should give the testing team a detailed list of impacted areas so that all impacted areas can be tested and bugs can be avoided in UAT/Production.

Excuse 8: This is the same bug that we found in our testing environment, but at that time it was inconsistent. We reported it once, but afterwards neither the dev team nor the testing team was able to reproduce it.

Excuse 9: This bug might be occurring because developers were fixing bugs in the testing environment at the same time that testers were testing. The testing team cannot be blamed. The project management team should come up with a strategy so that we can avoid changes made directly in the QA/testing environment.

Excuse 10: Why is this a bug? This is working as designed. Please show us which section of the requirement/specification document states this rule. [Attn testers: use this excuse only when you are sure that there are discrepancies in the specification document.]

Excuse 11: This bug occurs only when the user selects a specific value in the dropdown/test data. It works fine with all other values. Exhaustive testing is impossible.

Well, these are not actually excuses. They can be the actual reasons why an application is shipped to the client with major bugs.

Do you have good excuses 🙂 ? Share them in the comments.

Long Live Testers | Happy Testing


Form and structure of test cases MATTER! – Webinar

Effective testing is often the result of good test cases. The “goodness” of test cases is typically associated with the test case content, and is seen as ensuring high coverage and therefore effective testing.
The objective of this webinar is to highlight that, in addition to the content, the form and structure of test cases matter significantly to good testing. In fact, a good form and structure aid in generating good content.

Key Benefits:

A good form and structure –
(1) enables test cases to be sharply goal-focused, i.e. clear about what types of defects they can uncover;
(2) allows a clear assessment of the effectiveness of test cases;
(3) allows one to select appropriate test cases to optimize execution;
(4) allows one to objectively assess the “system health”; and finally
(5) enables the development of shorter automated scripts, enabling easier maintenance.
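As a rough illustration of points (1) to (3) above, a test case record that carries its goal and target defect type explicitly makes selection and assessment mechanical. This is only a sketch with hypothetical field names, not the HBT notation taught in the webinar.

```python
# Hypothetical structured test-case record; field names are illustrative,
# not the HBT notation.
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    goal: str          # what the test is trying to prove
    defect_type: str   # the type of defect it can uncover (point 1)
    priority: int      # used to select cases for a short run (point 3)

cases = [
    TestCase("TC-01", "reject invalid login", "input validation", 1),
    TestCase("TC-02", "persist user profile", "data integrity", 2),
]

# Point (3): pick only high-priority cases targeting a given defect type.
smoke = [c.id for c in cases if c.priority == 1 and c.defect_type == "input validation"]
print(smoke)  # ['TC-01']
```

Because each case states its goal and defect type, effectiveness (point 2) can be judged per defect type rather than by raw case counts.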

Join the HBT (Hypothesis Based Testing) Series webinar to learn more. T Ashok, Founder & CEO of STAG Software and architect of HBT, will deliver the webinar.

Click Here to Register for the webinar.

User Acceptance Testing Concepts


.. Continuing the Beginners Guide to Software Testing series

User Acceptance Testing is

  • the formal testing done on the system to ensure that it satisfies the acceptance criteria before the system is put into production [most of the time it is done by users/clients];
  • the incremental process of approving or rejecting the system during development and maintenance.

Acceptance testing checks the system against the requirements of the user. It is done by real people using real data and real documents to ensure the ease of use and functionality of the system. Users who understand the business functions run the tests as given in the acceptance test plans, including installation and online help. Hardcopies of user documentation are also reviewed for usability and accuracy. The testers/users formally document the results of each test and provide error reports and correction requests to the developers.



User Acceptance Testing myth – Passing the UAT acknowledges that the system is fit for use, and also that the development process was adequate.


Reality: “Passing the acceptance tests does not necessarily mean the software is acceptable. That’s just the first hurdle to acceptance.” – via @the Three Amigos – Better Software Magazine [NOVEMBER/DECEMBER 2011]

Nowadays we use Agile and incremental software development models, so acceptance testing should be an ongoing activity. It needs to be built into the development process, and appropriate corrections need to be made whenever the system fails the acceptance criteria.

Ongoing Software Acceptance Testing enables:

  • Early detection of software problems.
  • Early consideration of user needs during software development.
  • Ensuring users are involved in defining the system and its acceptance criteria.
  • Decisions made based on the results.


QTP Books Giveaway Winners


Thanks all for participating in the QTP books giveaway.
So guys, the wait is over. The lucky winners of the QTP books giveaway are –

QTP Unplugged 2nd Edition book winners – 
  • Rani Samyuktha – sirisam.buddy@*****.com
  • Srikanth – srikanth_200662@****.com
  • Amit Verma – amit.v@*****.com
  • Falguni Panchal – falguni136@******.com
  • Vanita Hasija – vinita.hasija@*********.com
And I thought I knew QTP! book winners – [These 4 books are a giveaway from Tarun’s side. Thanks, Tarun!]
  • Revathi P – revathibe07@*****.com
  • Mahesh Makapur – maheshbm@********.in
  • Sandeep Singh – sandeep_cadian@**********.com
  • Mehul Jain – mehuljain86@**********.com
Congratulations to all the winners. Winners will receive an email regarding the books soon.
The lucky winners were declared by the author, Tarun Lalwani. Note that we originally planned a giveaway of 4 QTP Unplugged books, but Tarun is so generous that he is also giving away 4 copies of And I thought I knew QTP! himself. Thanks, Tarun!
This contest was only for Indian users. We will soon be back with a contest for international users as well. So stay tuned.
Happy Testing.