Automating everything is not a silver bullet (a conversation between a development manager and a tester)

Once there was a Software Development Manager named John who worked at a software company. John headed both the Development and Quality Engineering teams. He believed test automation was a powerful tool, and he always encouraged his team to automate as many tests as possible. He thought that automating everything was the best way to achieve full test coverage, catch every bug, and deliver the best possible software to customers.

One day, John called in his software tester, Sarah. “Hey Sarah, we need to improve our test coverage for the upcoming release, and the best way to do that is to automate as much as possible,” John said enthusiastically. “I want you and your team to focus on automating every possible test case.”

Sarah replied, “John, we’ve tried automating everything before, but it’s not always practical. Automating every test case takes a lot of time and effort, and there are some tests that just can’t be automated.”

John was persistent. “We can’t afford to leave anything to chance. Automating everything will make sure that we catch any issues before they make it to production. I want our customers to receive high-quality software every time.”

Sarah and her team set out to automate every possible test case, but they soon realized that it wasn’t as simple as John had made it seem. Along the way they ran into long execution times, heavy maintenance effort, and a high number of false positives. Despite all that work, they were still leaking bugs to production.

The bugs that leaked to production were not straightforward ones; they involved complex scenarios and edge cases that the team had not considered. For example, the team had automated the basic checkout process but had missed rare edge cases, such as a user entering an invalid coupon code or trying to use a special discount code that had already expired. These scenarios were not covered by the automated tests, and the bugs slipped through to production.

Sarah called a meeting with John to discuss their findings. “John, we’ve automated everything, but we’re still leaking bugs to production. The cost of automating everything was too high, and it didn’t give us the results we were looking for. We need to find a balance between automation and manual testing to make sure that we catch all the bugs.”

John listened to Sarah’s concerns and finally realized that automation couldn’t solve all their problems. “Sarah, you’re right. I see now that automating everything is not the answer.”

The team went back to the drawing board and developed a plan that included a mix of manual and automated testing. They focused on automating the most critical test cases while leaving some of the less important ones for manual testing. They also introduced exploratory and ad hoc testing for edge cases and complex scenarios, where human intelligence and creativity are required to identify and uncover potential issues.

This approach allowed them to catch more bugs while reducing the overall cost of testing. They were able to detect edge cases and complex scenarios that they had missed before, and they found that human testers were better at identifying these issues than automation.

In the end, the team delivered a successful release that met all the quality standards, and John learned a valuable lesson about the limits of test automation. The moral of the story is that while test automation is an important tool, it is not a silver bullet that can solve all testing problems. A balanced approach that incorporates both automation and manual testing is the key to delivering high-quality software. Automation can handle regression cases and straightforward scenarios, while exploratory and ad hoc testing are required for complex scenarios and edge cases.

What is RUM (Real User Monitoring) and how does it help in performance testing?

Real User Monitoring (RUM) is a method of monitoring the performance of a website or web application as experienced by actual users. This type of monitoring involves tracking various metrics, such as page load times, server response times, and JavaScript execution times, from the perspective of the user’s web browser.

RUM uses data collected from real user interactions with the application to provide insight into how the application is performing in the real world.

RUM typically works by adding a small piece of JavaScript code to the application’s web pages. This code collects data on the performance of the application, including page load times, user interactions, and any errors that occur. The data is then sent to a central server for analysis, where it can be used to identify potential performance issues and improve the overall user experience.
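
To make this concrete, here is a minimal sketch of such a snippet, written against the standard browser Navigation Timing and sendBeacon APIs. The /rum collector endpoint is an assumption for illustration; commercial RUM products collect far more than this.

    // Minimal RUM beacon sketch. Assumes a collector endpoint at /rum
    // (hypothetical) and a browser that supports Navigation Timing Level 2.
    window.addEventListener('load', function () {
      // loadEventEnd is only filled in after the load event completes,
      // so defer the measurement by one tick.
      setTimeout(function () {
        var nav = performance.getEntriesByType('navigation')[0];
        if (!nav) return; // very old browser: nothing to report

        var metrics = {
          page: location.pathname,
          // time from request start to the first byte of the response
          ttfb: nav.responseStart - nav.requestStart,
          // milliseconds until DOMContentLoaded finished
          domContentLoaded: nav.domContentLoadedEventEnd,
          // milliseconds until the load event finished
          pageLoad: nav.loadEventEnd
        };

        // sendBeacon posts the data without delaying navigation or unload
        navigator.sendBeacon('/rum', JSON.stringify(metrics));
      }, 0);
    });

    // JavaScript errors can be reported through the same channel
    window.addEventListener('error', function (e) {
      navigator.sendBeacon('/rum', JSON.stringify({
        page: location.pathname,
        error: e.message
      }));
    });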

In the context of Application Performance Management (APM), RUM can be used to supplement traditional server-side monitoring techniques. By providing a more complete picture of how an application is performing from the user’s perspective, RUM can help organizations identify and resolve performance issues more quickly and effectively.

RUM is useful for performance testing because it provides valuable insights into how a website or web application is actually performing in the wild. By tracking metrics like page load times and server response times, RUM can help identify performance bottlenecks and other issues that may not be apparent when testing in a controlled environment. This can help organizations optimize the performance of their websites and web applications, leading to a better user experience.

Practical Tips to Speed Up Software Delivery Performance

Hey guys, I just finished the book “Accelerate”. It gives you plenty of ideas for improving software delivery performance and for measuring it using statistical methods. I would like to share industry-proven, practical tips that will help you speed up your software delivery performance.

Accelerate classifies these capabilities into four categories:

  1. Technology & Automation
  2. Product and Process
  3. Lean Management and Monitoring
  4. Cultural

Technology & Automation Capabilities:

  1. Use version control for all production artifacts
  2. Automate the deployment process
  3. Implement continuous integration
  4. Use trunk-based development methods
  5. Implement test automation (tests are run automatically throughout the development)
  6. Support test data management
  7. Shift Left on Security
  8. Implement continuous delivery (CD)
  9. Use a loosely coupled architecture
  10. Architect for empowered teams

Product and Process Capabilities:

  1. Gather and implement customer feedback
  2. Make the flow of work visible through the value stream
  3. Work in small batches
  4. Foster and enable team experimentation

Lean Management and Monitoring Capabilities:

  1. Have a lightweight change approval process
  2. Monitor across application and infrastructure to inform business decisions
  3. Check system health proactively
  4. Improve processes and manage work with work-in-process (WIP) limits
  5. Visualize work to monitor quality and communicate throughout the team

Cultural Capabilities:

  1. Support a generative culture (Performance Oriented)
  2. Climate for learning
  3. Collaboration among teams
  4. Provide resources and tools that make work meaningful
  5. Support transformational leadership

In the next couple of articles, we will go over these capabilities one by one.

~Happy testing

Why Should You Consider the Testing Pyramid Structure?

The purpose of the test pyramid is to provide a structure for designing and executing tests. It is also the cornerstone of testing at Google.

The test pyramid has four layers: unit tests, service tests, integration tests, and end-to-end or acceptance tests. Dividing the different types of tests into these levels makes it easier to decide what kind of testing needs to be done when building and maintaining software.
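
As a minimal illustration of the pyramid's two extremes, the sketch below shows a unit test and an end-to-end test in JavaScript, assuming Jest as the test runner and selenium-webdriver for the browser test; the ./pricing module and the checkout URL are hypothetical.

    // Base of the pyramid: a unit test. Isolated and fast (milliseconds),
    // so a codebase can afford thousands of them.
    const { priceWithTax } = require('./pricing'); // hypothetical module

    test('applies 10% tax to the base price', () => {
      expect(priceWithTax(100, 0.1)).toBe(110);
    });

    // Tip of the pyramid: an end-to-end test. It drives a real browser
    // against the whole deployed stack, so it is slow and fragile; keep
    // only a handful of these for critical user journeys.
    const { Builder, By } = require('selenium-webdriver');

    test('checkout reaches the confirmation page', async () => {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://shop.example.com/checkout'); // hypothetical
        await driver.findElement(By.id('place-order')).click();
        const heading = await driver.findElement(By.css('h1')).getText();
        expect(heading).toContain('Thank you');
      } finally {
        await driver.quit();
      }
    }, 60000); // e2e tests need far longer timeouts than unit tests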

Free Course – Applitools – Next Gen Visual Testing Tool (Powered by AI)

Applitools is a next-generation test automation platform powered by Visual AI. It can help you increase quality, accelerate delivery, and reduce costs.

This tool is now one of the most in-demand testing tools in the market. UltimateQA is offering a free course on Applitools, and anyone currently working with Selenium should take this course.
Link to course: https://courses.ultimateqa.com/courses/applitools (the website still seems to be under development, but you can enroll in the course)
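
To give a flavor of what the course teaches, here is a minimal visual-checkpoint sketch. The course itself uses C# and Visual Studio; this sketch assumes the @applitools/eyes-selenium JavaScript SDK instead (method names may differ between SDK versions, so check the official docs), and the app name, test name, and URL are hypothetical.

    // Minimal Applitools Eyes sketch, assuming the @applitools/eyes-selenium SDK.
    const { Builder } = require('selenium-webdriver');
    const { Eyes, Target } = require('@applitools/eyes-selenium');

    (async () => {
      const driver = await new Builder().forBrowser('chrome').build();
      const eyes = new Eyes();
      eyes.setApiKey(process.env.APPLITOOLS_API_KEY); // from your Applitools account

      try {
        // Start a visual test: app name, test name, viewport size
        await eyes.open(driver, 'Demo App', 'Login page', { width: 1024, height: 768 });

        await driver.get('https://example.com/login'); // hypothetical page

        // Take a visual checkpoint of the window; Visual AI compares it
        // against the stored baseline and flags meaningful differences
        await eyes.check('Login window', Target.window());

        // Close the test; this fails if visual differences were found
        await eyes.close();
      } finally {
        await eyes.abortIfNotClosed(); // clean up if the test did not finish
        await driver.quit();
      }
    })();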

Training Content

Part 1: Getting started
  • Course overview
  • Course prerequisites
  • Course syllabus
  • How to download and install Visual Studio
  • Create an Applitools account and install NuGet packages
  • Why we must automate visual testing
  • Benefits of Applitools

Part 2: Baselines and Dashboard
  • What is a baseline in Applitools
  • Code overview
  • Baseline examples
  • First comparison
  • Zooming, resizing and layers
  • Toggling and Floating region

Part 3: Match Levels and Regions
  • Exact match level
  • Strict match level
  • Content match level
  • Layout match level
  • Introduction to ignore regions
  • Ignore regions in code
  • Ignoring multiple regions
  • Floating region
  • Strict region
  • Content region
  • Layout region

Part 4: Test Manager
  • Test Manager UI
  • Batches
  • Full-page screenshots with CSS stitching
  • CSS stitching vs standard scroll
  • Bugs and test steps in UI
  • Test steps code
  • Fluent API
  • How programmatic regions look
  • Conclusions