(A conversation between a development manager and a tester.)

Once there was a Software Development Manager named John who worked at a software company. John headed both the Development and Quality Engineering teams. John believed that test automation was a powerful tool, and he always encouraged his team to automate as many tests as possible. He thought that automating everything was the best way to achieve full test coverage, catch any bugs, and provide the best possible software for customers.

One day, John called in his Software Tester, Sarah. “Hey Sarah, we need to improve our test coverage for the upcoming release, and the best way to do that is to automate as much as possible,” John said enthusiastically. “I want you and your team to focus on automating every possible test case.”

Sarah replied, “John, we’ve tried automating everything before, but it’s not always practical. Automating every test case takes a lot of time and effort, and there are some tests that just can’t be automated.”

John was persistent: “We can’t afford to leave anything to chance. Automating everything will make sure that we catch any issues before they make it to production. I want to make sure that our customers receive high-quality software every time.”

Sarah and her team set out to automate every possible test case, but they soon realized that it wasn’t as simple as John had made it seem. They encountered many issues along the way, including long execution times, maintenance issues, and a high number of false positives. Despite their efforts, they found that they were still leaking bugs to production.

The bugs that leaked to production were not straightforward ones. They involved complex scenarios and edge cases that the team had not considered. For example, they had automated the basic checkout process but had missed specific edge cases that occurred only in rare circumstances, such as when a user entered an invalid coupon code or tried to use a special discount code that had already expired. These scenarios were not accounted for in the automated tests, and they ended up leaking to production.
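To make the missed scenarios concrete, here is a minimal sketch of the two coupon edge cases described above. The `validate_coupon` function and the coupon data are hypothetical, invented purely for illustration; they are not from any real checkout system:

```python
from datetime import date

# Hypothetical coupon table: code -> last valid date (illustrative only).
COUPONS = {
    "SAVE10": date(2030, 1, 1),   # still valid
    "WINTER5": date(2020, 1, 1),  # already expired
}

def validate_coupon(code, today):
    """Return True only for a known, unexpired coupon code."""
    expiry = COUPONS.get(code)
    if expiry is None:            # edge case 1: invalid / unknown code
        return False
    return today <= expiry        # edge case 2: expired discount code

# A "happy path only" automated suite would cover the first check
# but miss the two edge cases below:
assert validate_coupon("SAVE10", date(2024, 6, 1)) is True
assert validate_coupon("BOGUS", date(2024, 6, 1)) is False    # invalid code
assert validate_coupon("WINTER5", date(2024, 6, 1)) is False  # expired code
```

Tests like these are cheap to write once someone has thought of the scenario; the hard part, as the team discovered, is that automation only checks what a human remembered to encode.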

Sarah called for a meeting with John to discuss their findings. “John, we’ve automated everything, but we’re still leaking bugs to production. The cost of automating everything was too high, and it didn’t give us the results we were looking for. We need to find a balance between automation and manual testing to make sure that we catch all the bugs.”

John listened to Sarah’s concerns and finally realized that automation couldn’t solve all their problems. “Sarah, you’re right. I see now that automating everything is not the answer.”

The team went back to the drawing board and developed a plan that included a mix of manual and automated testing. They focused on automating the most critical test cases while leaving some of the less important ones for manual testing. They also introduced exploratory and ad-hoc testing for edge cases and complex scenarios, where human intelligence and creativity are required to identify and uncover potential issues.

This approach allowed them to catch more bugs while also reducing the overall cost of testing. They were able to detect edge cases and complex scenarios that they had missed before and found that the human testers were better at identifying these issues than automation.

In the end, the team delivered a successful release that met all the quality standards, and John learned a valuable lesson about the limits of test automation. The moral of the story is that while test automation is an important tool, it’s not a silver bullet that can solve every testing problem. A balanced approach that incorporates both automation and manual testing is the key to delivering high-quality software. Automation can handle regression cases and straightforward scenarios, while exploratory and ad-hoc testing is required for complex scenarios and edge cases.