Simplified – Shift Left in Software Testing


– Article by Sumit Kalra

Shift Left is a buzzword in software testing. It is not new; in fact, it has always been around. Shift Left is all about creating a culture where testers are involved early in the software development life cycle, so that testing activities start early. The idea is to reduce risk.

Perhaps inspired by the maxim “a stitch in time saves nine”, Shift Left is a practical attempt to actually ensure a timely stitch: to check for errors in the software earlier than the conventional time to do so. Shift Left testing means testing earlier in the software development cycle, so that risks and unknowns are reduced, which enables smooth deliveries to clients.

Performing testing activities late in the cycle results in failure to manage evolving demands and requirements, which soon produces unhealthy consequences for the organization, ranging from higher production costs to extended time to market and accidental defects.

Traditional Software Testing Practices

Studies explicitly indicate that the cost of fixing bugs late in the software development life cycle is very high. A must read – http://softwaretestingtimes.com/2010/04/why-start-testing-early.html

When Shift Left practices are put into the SDLC, system testing activities take place much earlier in the software development life cycle. The goal is to fix minor looming issues before they grow with the project, thereby meeting the quality marks at delivery. So when organizations adopt the Shift Left strategy, they are able to test, analyze the project, pass judgment on the system, and refine it into something much better bit by bit.

Shift Left in Software Testing Practices

A few examples of Shift Left:

  • Pair with the developers – More collaboration and brainstorming on the requirements / test scenarios with the team (including Devs and the PO) so that unknowns and risks are discovered earlier in the phase. Both Devs and QEs will have the same understanding of the requirements.
    • Rework (issue fixing & retesting) will be less.
    • Scope creep will be less.
  • Test different layers – In SOA applications, APIs are developed first. The team should plan API testing so that issues can be identified early in the cycle (rather than testing only from the UI).
  • Plan non-functional testing early in the cycle (performance testing, security testing, etc.) – Identifies issues early and reduces risk.
  • Automate the “Automation Test Case Execution” – Integrate the automation scripts with Jenkins or any other build automation tool so that they run in the CI environment before new code is deployed. This helps verify whether a new code change is safe.
  • Automate unit tests, integration tests, API tests – These tests run faster and help identify defects early in the cycle.
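The last two bullets can be made concrete with a small sketch. Below is a minimal, hypothetical example of a fast, UI-independent test at the service layer: the kind of check a CI job (Jenkins or any other build tool) can run on every build before deployment. The function and the business rule are invented for illustration.

```python
# Minimal sketch (hypothetical rule): a fast service-layer test that a CI
# pipeline can run before every deployment, with no UI involved.

def calculate_discount(order_total, is_member):
    """Business rule under test: members get 10% off orders above 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0

def test_member_discount_applied():
    assert calculate_discount(150, is_member=True) == 15.0

def test_no_discount_for_guests():
    assert calculate_discount(150, is_member=False) == 0.0

def test_no_discount_below_threshold():
    assert calculate_discount(80, is_member=True) == 0.0

if __name__ == "__main__":
    # In CI this would be run by a test runner (e.g. pytest); a failing
    # assertion fails the build and blocks the deployment.
    test_member_discount_applied()
    test_no_discount_for_guests()
    test_no_discount_below_threshold()
    print("all checks passed")
```

Because tests like these run in seconds, they can gate every commit, whereas an equivalent UI check would be far slower and more brittle.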

Advantages of Shifting Left:

  • Faster delivery – Accelerate the release cycles – Ensure smooth deliveries.

By carrying out a Shift Left operation, a team can efficiently improve the speed of project delivery. This is because it gets to find flaws early enough in the development phase, reduce the interval between releases while making the necessary adjustments, and finally produce refined, quality software. A well-calculated procedure can in fact lead to quality development and timely completion of tasks. The idea of Quality Assurance is the eventual improvement of these procedures after their development and documentation.

  • Hassle-free Software Development

All the project requirements become clear to every member of the organization when errors are detected earlier, in the requirements phase.

  • Meeting Customer Demands sufficiently

The Shift Left technique improves your client satisfaction score, as the approach enables you to deliver faster and with higher quality.

  • Rework is reduced – Human-introduced defects are minimized.
  • Time is saved and the team can focus more on automation testing.
  • Risk is reduced – Cuts down complications that could surface in production.

To Summarize, Shift Left is a step towards:

Delivering QUALITY @ SPEED

Reducing COST without cutting RESOURCES

Here is an awesome video (less than 5 minutes) to understand the Shift Left concept –



Sprint Retrospectives – From Testing Team perspective


Retrospectives are carried out after every sprint. The basic purpose of the retrospective meeting is to capture –

  • What went well – [Continue doing]
  • What didn’t go well – [Stop doing]
  • Improvement Areas – [Start doing]

Some common problems are seen across all teams –

  • Team members are not giving enough input in the retrospective. How do we get the desired input from team members?
  • Improvement areas are identified during the retrospective, but how much improvement is made by the team after each sprint? The answer is – the team does not know.
  • The retrospective meeting ends up in a blame game – Testers vs Devs. How do we stop this?

In this post we will focus on problem 1: team members are not giving enough input in the retrospective. How do we get the desired input from team members?
There is a famous quote – if you don’t ask, you don’t get. Similarly, if you don’t ask your team the right questions, you will not get the desired input from the team. The Manager/Scrum Master needs to go through all phases of the sprint and get input from the team, asking the following questions:

Note –
1. You may need to tailor these questions to your needs.
2. I understand that in many projects there are time limitations and all these questions cannot be asked of every individual in one meeting. In that case, you can give this questionnaire to the team in advance so that they can participate effectively in the retrospective.
3. In upcoming posts, we will discuss how we can automate most of the points in the following questions, so that you can get the information from the Test Management/Project Management tools you are using.

 

Requirement Analysis Phase:

  • Are you satisfied with the time given for R&D?
  • Have all necessary trainings been provided?
  • Have major conflicts and impacted areas been identified?
  • Have functional and non-functional requirements been taken into consideration?
  • Have query sessions been planned with the BA/Product Owner?
  • Are we able to understand the functional and technical design?
  • Overall – any learnings/challenges from this phase?
  • Are you satisfied with the quality of requirements/user stories? Any rejected requirements?

Test Case Writing Phase:

  • Did we meet TCW deadlines?
  • Were reviews done on time?
  • Were we able to complete TCW before the feature/functionality was delivered for testing?
  • Any major functionality-related issues reported which are not covered in the test cases? [I understand that testers cannot cover all scenarios in test cases. However, it’s good to keep a count of issues/scenarios which are not covered in test cases. Will discuss the same in upcoming posts.]
  • Overall – any learnings/challenges from this phase?

Test Planning & Control:

  • Estimations – Were team members involved in providing estimations?
  • Did the planning meeting happen on time?
  • Did the Test Lead/Manager identify risks, planned leaves, etc. and communicate them to stakeholders on time?
  • Have team members identified risks (not meeting deadlines, etc.) and communicated them to stakeholders on time?
  • Have we taken functional and non-functional testing into consideration?
  • Was the team able to meet deadlines?
  • Overall – any learnings/challenges from this phase?

Test Execution:

  • Was test execution (functional testing, regression, integration, staging) completed on time?
  • Any challenges faced while running automation testing (in case of automated regression)?
  • Were any cases missed?

Bug Reporting Quality:

  • Has the testing team provided all required details in issue reports?
  • What is the count of invalid defects?
  • Has impact analysis been done by the testing team based on product knowledge? (A common pain point of developers: while reporting a bug, testers write that the “bug persists on xyz page”, and then the bug is reopened because it also occurs on a different page. So it’s very important for testers to list all possible pages/areas where the bug occurs.)

Overall Quality of application under test:

  • Any critical/complex bugs reported in a later phase (which could have been found and reported earlier)?
  • Any issues reported from sprint demos which were not caught during testing?
  • Any concerns with the quality delivered by developers? Any straightforward blockers encountered during testing?

Communication & co-ordination:

  • How were communication & co-ordination between the Test Team and Dev Team, the Test Team and BA/Product Owner, and tester and tester?
  • Is enough collaboration happening between testers and developers to understand the application/feature design and the testing strategy?
  • Overall – any learnings/challenges from this phase?
  • Any concerns with daily Scrum meetings/stand-ups? Were they completed on time? Are we able to focus on what each person accomplished?

Others:

  • Test Environment stability?
  • Any issues with Builds and deployments?
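Several of the questions above (count of invalid defects, bugs escaping to later phases, open vs closed status) can be answered from tracker data instead of memory, as point 3 of the note suggests. A rough sketch, assuming defect records exported from your test management tool; the field names below are assumptions, not any real tool's schema:

```python
# Hypothetical sketch: derive retrospective inputs from exported defect
# records instead of asking the team to recall them.
from collections import Counter

defects = [  # e.g. a sprint export from your test management tool
    {"id": 1, "status": "closed", "valid": True,  "found_in": "sprint"},
    {"id": 2, "status": "open",   "valid": False, "found_in": "sprint"},
    {"id": 3, "status": "closed", "valid": True,  "found_in": "UAT"},
]

invalid_count = sum(1 for d in defects if not d["valid"])    # bug-report quality
escaped = sum(1 for d in defects if d["found_in"] == "UAT")  # bugs caught late
by_status = Counter(d["status"] for d in defects)            # open vs closed

print(f"invalid defects: {invalid_count}, escaped to UAT: {escaped}")
print(dict(by_status))
```

Numbers like these, collected every sprint, also make the improvement question (problem 2 above) measurable instead of a matter of opinion.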

For Scrum Masters / Managers – If you don’t get the desired information from the team in the retrospective, do not blame the team. Make sure you are asking the right questions to get the desired input from the team.
 
In Upcoming posts we will discuss the following:

  • Ways to measure the improvement made by Test team in each sprint?
  • Sprint Metrics & how to automate?
  • Stop blame game – Testers vs Devs. Self-managed Teams.

– Happy Testing


What if the outstanding defect count is high and the production push date is near?


This article is in response to a LinkedIn question: “What to do when we are out of time, the software is not bug free, and tomorrow it’s going to production?”

Here are my thoughts on this:
1. Make a list of all open defects and share it with all key stakeholders, such as the Requirement Owner (PO, Business Analyst), Dev Lead, Dev Manager, Test Lead, Test Manager, and Delivery Manager (stakeholders may vary depending on your project).

2. Schedule a defect triage meeting to prioritize all defects into the following categories –

Category A: Issues which must be fixed. The software can’t be released without fixing them.
Category B: High-priority issues that should be included in the solution if possible.
Category C: Issues which are considered desirable but not necessary.
Category D: Defects which will not be fixed (issues which do not add any value to the software).

3. All stakeholders should decide go/no-go based on the following:
– Can we accommodate Category A issues within the deadlines?

  • If yes – Can we also accommodate Category B issues within the deadlines? If not – can Category B issues be accommodated post-release?
  • If no – Which issues from Category A can be accommodated? Can the release date be postponed? If yes, are there any contractual obligations from the customer?

Note – “accommodate” means the issue is fixed, code reviewed, and tested along with its impact areas.
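Once categories are assigned in the triage meeting, the go/no-go signal in step 3 follows mechanically: any open Category A defect blocks the release. A small illustrative sketch (defect IDs and names are hypothetical):

```python
# Illustrative tally of the triage outcome: Category A items are release
# blockers; B/C/D items do not block the release on their own.

open_defects = [
    {"id": 101, "category": "A"},  # must fix; cannot release without it
    {"id": 102, "category": "B"},  # include if deadlines permit
    {"id": 103, "category": "C"},  # desirable but not necessary
    {"id": 104, "category": "D"},  # will not fix
]

def release_signal(defects):
    """Return ("no-go", blocker ids) if any Category A defect is open."""
    blockers = [d["id"] for d in defects if d["category"] == "A"]
    return ("no-go", blockers) if blockers else ("go", [])

decision, blockers = release_signal(open_defects)
print(decision, blockers)
```

A “go” here still assumes every blocker was fixed, code reviewed, and retested with its impact areas, as the note above defines.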

Remember that last-minute major fixes are usually dangerous, so make a wise decision. This decision has to be taken by the key stakeholders (not just the test team). Make sure the test team gives its input on impact and testing effort based on experience.

Also remember – the testing team has only “some” control over quality. The entire team (dev / test / PMs etc.) is responsible for quality.

How to avoid these situations? These situations generally occur when Project Managers / Scrum Masters / Dev Managers are not seriously tracking the project’s progress.

But this does not mean that the software test team should do nothing. At least one week before the testing deadline (depending on your project length), the testing team should start raising concerns about the health of the software.
Start informing the stakeholders which features are blocked from testing by these issues.
Make sure developers give priority to issue fixes rather than working on other stuff.

Make sure there is no communication gap on the testing team’s side.

Many other factors software testers need to take care of:

  • report valid defects, 
  • test critical features first, 
  • make sure devs deliver critical features first, 
  • break down large features with more story points into sub-features and deliver them accordingly, 
  • clear doubts in the requirement walk-through phase, etc. 
  • + many more… will cover these some time later
 
Happy Testing


Evaluating / Interviewing Software Test Leads and Managers – Test Management Questions


Some unique and useful questions (specific to Test Management) to evaluate Software Test Leads and Managers:


1. Under extreme conditions, how can you keep yourself and your team energized?

2. What are your criteria to evaluate a software tester?

3. Your thoughts on limiting the scope of testing by measuring testability?

4. How can you encourage your team (testers) to practice bug advocacy?

5. Any thoughts on reducing the cost of testing without cutting the resources?

6. How do you keep yourself updated on the latest testing techniques, strategies, testing tools / test frameworks, and so on? (If the answer is something like “reading websites… blogs…”, ask for the names of those blogs/sites.)

7. How can we achieve time-boxing in testing?

8. When there is a time crunch, do you think forcing tradeoff(s) in testing is the right step?

9. Experts say that “studying epistemology helps you test better”. Do you agree? What are your thoughts on epistemology?


Defect Trends report in Software Testing


The Defect Trends report calculates a rolling average of the number of bugs that the team has opened, resolved, and closed, based on the filters that you specify. The rolling average is calculated per QA build. Defect Trends reports are very important for development/test managers and senior management to understand the bug/defect resolve and close rate.

The following illustration displays an example of the Defect Trends report.

[Image: Defect Trends report]

The following is an example of an unhealthy defect trend.

[Image: Unhealthy Defect Trends report]
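A rolling average like the one behind these reports is easy to compute from per-build counts. A minimal sketch, assuming opened/closed defect counts per QA build and an illustrative window of three builds:

```python
# Minimal sketch: rolling mean of per-build defect counts. The window size
# and the sample counts below are illustrative, not from a real project.

def rolling_average(counts, window=3):
    """Average each count with up to (window - 1) preceding builds."""
    result = []
    for i in range(len(counts)):
        chunk = counts[max(0, i - window + 1): i + 1]
        result.append(round(sum(chunk) / len(chunk), 2))
    return result

opened_per_build = [5, 8, 6, 4, 2]   # new bugs found in each QA build
closed_per_build = [1, 3, 7, 6, 5]   # bugs closed by each QA build

print(rolling_average(opened_per_build))
print(rolling_average(closed_per_build))
```

In a healthy trend the opened average falls across successive builds while the closed average rises toward it; an unhealthy trend shows opened counts staying flat or climbing while the close rate lags behind.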


Excuses for testers when bugs are caught in later testing cycles/UAT/Production


[Note – This post might not be helpful for testers who work in independent test companies]
One of the biggest pain points for testers is that whenever a bug is caught in UAT or in later stages, the blame is unfairly put on the testing team. This further results in unfair yearly reviews and appraisals for testing teams. In many mid-size companies, less value is given to testing teams; a testing culture is lacking in those companies. Management should understand that the testing team is “also responsible” [not “only responsible”] when a bug is caught in production.

Testers should be proactive and should be able to deal with such situations. They should have excuses ready for when they are asked, “HOW DID YOU MISS THAT BUG?”. In this post, I am writing down some excuses so that testers can put themselves on the safer side whenever bugs are found in UAT/Production. Each excuse depends on the situation, so use the excuses below carefully.

Excuse 1: The bug was missed because this scenario is not in the test cases. These test cases were reviewed and approved by the Business Analyst/Product Manager/XXXX person.

Excuse 2: The testing team already reported a similar bug which has not been fixed yet. That’s why we got this new bug in UAT/Production. [Most common excuse.]

Excuse 3: The bug is occurring because of last-minute changes to the application by the development team. The project management team should come up with a strategy so that we can avoid last-minute changes.

Excuse 4: The bug was missed because this scenario/rule is not mentioned in the requirement document. [Most common excuse.]

Excuse 5: Testing was done in a hurry because enough time was not given to the testing team. The project management team should be proactive and make an effective plan.

Excuse 6: The bug was missed because we (the testers) did not test this functionality in later testing cycles. This functionality was not included in the testing scope.

Excuse 7: The bug was missed because we tested only the functionality listed in the impacted areas. Whenever a change is made to existing functionality, the Development Team/Development Lead/Manager should give the testing team a detailed list of impacted areas so that all of them can be tested and bugs can be avoided in UAT/Production.

Excuse 8: This is the same bug which we got in our testing environment, but at that time it was inconsistent. We reported it once, but then neither the dev team nor the testing team was able to replicate it again.

Excuse 9: This bug might be occurring because developers were fixing bugs on the testing environment at the same time the testers were testing. The testing team cannot be blamed. The project management team should come up with a strategy so that we can avoid changes directly on the QA/testing environment.

Excuse 10: Why is this a bug? This is working as designed. Please show us which section of the requirement/specification document states this rule. [Attn testers: Use this excuse only when you are sure that there are discrepancies in the specification document.]

Excuse 11: This bug occurs only when the user selects a specific value in the dropdown/test data. It works fine with other values. Exhaustive testing is impossible.

Well, these are not actually excuses. These can be the actual reasons why an application is shipped to the client with major bugs.

Do you have good excuses 🙂? Share them in the comments.

Long Live Testers | Happy Testing