Retrospectives are carried out after every sprint. The basic purpose of the retrospective meeting is to identify:

  • What went well – [Continue doing]
  • What didn’t go well – [Stop doing]
  • Improvement areas – [Start doing]

Some common problems are seen across teams:

  • Team members do not give enough input in the retrospective. How do we get the desired input from team members?
  • Improvement areas are identified during the retrospective, but how much improvement does the team actually make after each sprint? The answer, usually, is that the team does not know.
  • The retrospective meeting ends up in a blame game – Testers vs. Devs. How do we stop this?

In this post we will focus on problem 1: team members are not giving enough input in the retrospective. How do we get the desired input from them?
There is a famous quote – if you don’t ask, you don’t get. Similarly, if you don’t ask your team the right questions, you will not get the desired input. The manager/Scrum Master needs to walk through every phase of the sprint and collect input from the team by asking the following questions:

Note –
1. You may need to tailor these questions to your needs.
2. I understand that many projects have time limitations and not all of these questions can be asked of every individual in one meeting. In that case, you can share this questionnaire with the team in advance so that they can participate effectively in the retrospective.
3. In upcoming posts, we will discuss how we can automate most of these points, so that you can pull the information from the Test Management/Project Management tools you are already using (a minimal sketch follows this note).
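
As a small preview of that automation, here is a minimal sketch that pulls the current sprint’s issues from Jira’s REST API. This is illustration only: the base URL, the credentials, and the project key ABC are assumptions, and your team may use a different tool entirely.

    import requests

    # Assumed values - replace with your own Jira instance, API token, and project key.
    JIRA_URL = "https://your-company.atlassian.net"
    AUTH = ("retro.bot@example.com", "your-api-token")  # hypothetical service account

    # JQL: every issue in the currently open sprint of project "ABC".
    jql = "project = ABC AND sprint in openSprints()"
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,status,issuetype", "maxResults": 100},
        auth=AUTH,
    )
    resp.raise_for_status()

    # One line per issue: key, type, current status, summary.
    for issue in resp.json()["issues"]:
        f = issue["fields"]
        print(issue["key"], f["issuetype"]["name"], f["status"]["name"], "-", f["summary"])

With the raw sprint data in hand, many of the questions below reduce to simple filters and counts.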


Requirement Analysis Phase:

  • Are you satisfied with the time given for R&D?
  • Has all necessary training been provided?
  • Have major conflicts and impacted areas been identified?
  • Have both functional and non-functional requirements been taken into consideration?
  • Have query sessions been planned with the BA/Product Owner?
  • Are we able to understand the functional and technical design?
  • Overall – any learnings/challenges from this phase?
  • Are you satisfied with the quality of requirements/user stories? Any rejected requirements?

Test Case Writing Phase:

  • Did we meet test case writing (TCW) deadlines?
  • Were reviews done on time?
  • Were we able to complete TCW before the feature/functionality was delivered for testing?
  • Were any major functionality-related issues reported that were not covered in the test cases? [I understand that testers cannot cover all scenarios in test cases. However, it’s good to keep a count of issues/scenarios that are not covered in the test cases. We will discuss this in upcoming posts.]
  • Overall – any learnings/challenges from this phase?

Test Planning & Control:

  • Estimations – were team members involved in providing the estimations?
  • Did the planning meeting happen on time?
  • Did the Test Lead/Manager identify risks, planned leaves, etc., and communicate them to stakeholders on time?
  • Did team members identify risks (e.g., not meeting deadlines) and communicate them to stakeholders on time?
  • Did we take both functional and non-functional testing into consideration?
  • Was the team able to meet deadlines?
  • Overall – any learnings/challenges from this phase?

Test Execution:

  • Was test execution (functional testing, regression, integration, staging) completed on time?
  • Were any challenges faced while running automated tests (in the case of automated regression)?
  • Were any cases missed? (A quick way to summarize execution results is sketched below.)
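
Several of these execution questions can be answered with numbers rather than gut feeling. Most test management tools can export run results as CSV; here is a sketch assuming a hypothetical export file sprint_42_test_runs.csv with a Status column (values such as Passed, Failed, Blocked, Not Run) – match the names to your tool’s format.

    import csv
    from collections import Counter

    # Assumed file and column names - adjust to your tool's CSV export format.
    with open("sprint_42_test_runs.csv", newline="") as f:
        statuses = Counter(row["Status"] for row in csv.DictReader(f))

    total = sum(statuses.values())
    executed = total - statuses.get("Not Run", 0)  # "missed" cases are the Not Run ones
    print(f"Executed {executed}/{total} cases ({executed / total:.0%})")
    for status, count in statuses.most_common():
        print(f"  {status}: {count}")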

Bug Reporting Quality:

  • Has the testing team provided all required details in issue reports?
  • What is the count of invalid defects? (A quick way to pull this number automatically is sketched after this list.)
  • Has impact analysis been done by the testing team based on product knowledge? (A common pain point for developers: while reporting a bug, testers write that the “bug persists on xyz page”, and the bug is later reopened because it also occurs on a different page. It is therefore very important for testers to list all possible pages/areas where the bug occurs.)
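
The invalid-defect count in particular is easy to pull automatically. Here is a sketch, again assuming Jira (same hypothetical URL and credentials as the earlier snippet) and assuming your team closes rejected bugs with resolutions such as Cannot Reproduce, Invalid, or Duplicate:

    import requests

    JIRA_URL = "https://your-company.atlassian.net"     # assumed instance
    AUTH = ("retro.bot@example.com", "your-api-token")  # hypothetical service account

    # Bugs in the current sprint that were rejected rather than fixed.
    jql = ('project = ABC AND issuetype = Bug AND sprint in openSprints() '
           'AND resolution in ("Cannot Reproduce", "Invalid", "Duplicate")')
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0: we only need the total
        auth=AUTH,
    )
    resp.raise_for_status()
    print("Invalid defects this sprint:", resp.json()["total"])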

Overall Quality of the Application Under Test:

  • Were any critical/complex bugs reported in a later phase that could have been found and reported earlier?
  • Were any issues reported from sprint demos that were not caught during testing?
  • Any concerns with the quality delivered by developers? Were any straightforward blockers encountered during testing?

Communication & Co-ordination:

  • How was communication & co-ordination between the test team and the dev team, the test team and the BA/Product Owner, and tester and tester?
  • Is enough collaboration happening between testers and developers to understand the application/feature design and the testing strategy?
  • Overall – any learnings/challenges from this phase?
  • Any concerns with daily Scrum meetings/stand-ups? Are they completed on time? Are we able to focus on what each person accomplished?

Others:

  • Was the test environment stable?
  • Were there any issues with builds and deployments?

For Scrum Masters/Managers – if you don’t get the desired information from the team in the retrospective, do not blame the team. Make sure you are asking the right questions to get the desired input.
In upcoming posts we will discuss the following:

  • Ways to measure the improvement made by the test team in each sprint
  • Sprint metrics and how to automate them
  • Stopping the blame game – Testers vs. Devs – and building self-managed teams

– Happy Testing