Learn how to leverage the platform to streamline your sprint retrospective sessions. This article walks through the adoption journey for a variety of users: tips for aligning your sprint retrospective sessions with your broader platform adoption goals, a checklist of key questions, and practical guidance on using the platform to find the answers.
How can you align your sprint retrospective sessions with the broader platform adoption goals?
The following questions map directly to the goals and objectives of using the platform, and can be reviewed daily to keep things on track:

| Question | Where to look on the platform |
| --- | --- |
| Is the team planning their work well? | Issue status report, sprint progress report, overlapping requirements flag |
| Are the resources productive and well utilized? | Overloaded resources flag, team reports |
| How is the quality of the product (designs, requirements, code, and test cases)? | Design handoff quality flag, tasks benchmark flag, missing test cases flag, feature branch/PR technology flags |
| Are there release risks with the product? | Changing specifications flag, missing test cases flag, feature branch/PR technology flags, code review summary/dashboard, issue extended beyond sprint flag, planned hours exceeded flag |
| Has technical debt been reduced? | Feature branch/PR technology flags |
During sprint retrospective sessions, the Scrum team can use Cubyts in the following ways:
- Discuss any blockers related to using the platform
- Use your observations from the platform during the sprint to validate the retrospective discussion
- Document your observations by taking or sharing snapshots and saving them in your Slack/Teams channel (see the sketch after this list for one way to automate this)
- Include the breakdown of flag types as a key discussion point
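If your team wants to automate the snapshot-sharing step, the snippet below is a minimal sketch that posts a retrospective note to a Slack channel via Slack's standard incoming-webhook API. The webhook URL, the `post_retro_note` helper, and the example summary text are illustrative placeholders, not values or APIs provided by the platform.

```python
import requests

# Placeholder: replace with your team's Slack incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_retro_note(summary: str) -> None:
    """Post a retrospective observation to the team's Slack channel."""
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": summary})
    response.raise_for_status()  # fail loudly if Slack rejects the payload

# Hypothetical usage: summarize the sprint's flags for the retro channel.
post_retro_note(
    "Sprint retro: 3 'missing test cases' flags and 1 "
    "'issue extended beyond sprint' flag - root causes discussed."
)
```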
Miscellaneous questions and where to find the answers on the platform:
The Scrum team can use these questions as a guide when preparing for a retrospective session. They help the team focus on the relevant topics and keep the conversation evidence-based and solution-oriented.
- Were my requirements up to the benchmarks?
- Were the story definitions clear? Were they technically ready?
- Did the specifications change during the course of the sprint?
- Were there overlapping requirements/stories we could have merged?
- Were test cases available on time?
- Did an issue spill over into this sprint as well? What was the root cause?
- Why are only certain developers always overloaded?
- If a story was finished but not released, why? What were the root causes?
- How effective was the review process?
- How are developers responding to the reviews?
- Was there a discrepancy between the actual size of the work done and the planned estimates?
- Who is delaying development: the reviewers or the developers? How can we solve this problem?
- Did the developers progress well?
Using the platform to answer the questions above:
The Scrum team can use the following sources on the platform to find the answers to the questions in the section above:
- Design handoffs below quality levels flag
- Tasks planned not meeting benchmarks flag
- Changing specifications flag
- Overlapping requirements in design issues flag
- Missing test cases flag
- Issue extended beyond sprint flag
- Planned hours exceeded flag
- Overloaded resources flag, team reports
- PR pending release flag
- Feature branch/PR technology flags
- PR summary, PR status, code review summary, and code review status reports