Target Audience
PR Status is a set of widgets designed for Scrum masters, engineering managers, and engineering leads. It presents a wide spectrum of metrics that collectively indicate code progress, code quality, and the efficiency of the team building the codebase.
PR age widget:
Overview:
This widget enables users to analyze the age of pull requests (PRs) across multiple dimensions, such as developer, feature, reviewer, and PR complexity. By tracking PR age alongside related metrics (e.g., review time, cycle time), this widget offers a comprehensive view of the development process. It helps teams identify inefficiencies and implement targeted improvements.
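As a rough illustration of the underlying metric, PR age can be computed as the elapsed time from when a PR was opened until it was merged (or until now, for still-open PRs). The sketch below assumes a simple list of PR records with hypothetical field names (opened_at, merged_at); the widget's actual data schema may differ.

```python
from datetime import datetime, timezone

# Hypothetical PR records; field names are illustrative, not the product's schema.
prs = [
    {"id": 101, "developer": "alice", "opened_at": "2024-05-01T09:00:00+00:00",
     "merged_at": None},
    {"id": 102, "developer": "bob",   "opened_at": "2024-05-03T14:30:00+00:00",
     "merged_at": "2024-05-06T10:00:00+00:00"},
]

def pr_age_days(pr, now=None):
    """Age of a PR in days: from opening until merge, or until now if still open."""
    opened = datetime.fromisoformat(pr["opened_at"])
    end = (datetime.fromisoformat(pr["merged_at"]) if pr["merged_at"]
           else (now or datetime.now(timezone.utc)))
    return (end - opened).total_seconds() / 86400

for pr in prs:
    print(pr["id"], pr["developer"], round(pr_age_days(pr), 1), "days")
```

The same per-PR ages can then be grouped by any of the dimensions above (developer, reviewer, and so on) to produce the widget's breakdowns.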
Insights for users:
This widget provides actionable insights to streamline the PR lifecycle, enhance team collaboration, and accelerate feature delivery. Teams can prioritize actions such as:
Prioritizing Reviews: Encourage reviewers to allocate time for timely feedback and approvals.
Enforcing SLAs for PRs: Implement service-level agreements to prevent PRs from stalling and ensure consistent progress.
Breaking Down Large PRs: Decide on splitting large, complex PRs into smaller, more manageable units to simplify reviews.
Promoting Early and Frequent Reviews: Foster a culture of early and continuous feedback to catch issues sooner and reduce the risk of delays.
PRs by reviewer and PRs by developer widgets:
Overview:
These widgets enable users to analyze the distribution of PRs across reviewers and developers.
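To illustrate what such a distribution looks like, the sketch below tallies PRs per developer and per reviewer from hypothetical PR records; real schemas vary by source control system.

```python
from collections import Counter

# Hypothetical PR records; field names are illustrative.
prs = [
    {"developer": "alice", "reviewers": ["bob", "carol"]},
    {"developer": "bob",   "reviewers": ["carol"]},
    {"developer": "alice", "reviewers": ["bob"]},
]

# Count PRs authored per developer, and reviews assigned per reviewer.
by_developer = Counter(pr["developer"] for pr in prs)
by_reviewer = Counter(r for pr in prs for r in pr["reviewers"])

print(by_developer)  # Counter({'alice': 2, 'bob': 1})
print(by_reviewer)   # Counter({'bob': 2, 'carol': 2})
```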
Insights for users:
Workload balancing: Identify developers or reviewers who are overburdened with PRs (enabling leads to reprioritize and redistribute tasks to ensure consistent productivity & efficiency).
Team collaboration: Assess whether the review workload is shared equitably among team members, fostering a collaborative culture.
Resource allocation: If certain reviewers or developers are handling a disproportionately high volume of PRs, it could indicate a need for additional team members or specialized roles.
Comment count by PR and Comments addressed:
These two widgets enable users to analyze PRs through the lens of comments raised and comments addressed, across multiple dimensions such as developer, feature, reviewer, and PR complexity.
Overview:
Comment Count per PR: Reflects the feedback developers receive for the code introduced in their PRs.
Percentage of Comments Addressed: Highlights the effectiveness of developers in responding to feedback and implementing the necessary changes.
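As an illustration of how these two numbers relate, the sketch below counts comments per PR and the percentage addressed. The addressed flag is a hypothetical field; real tools may infer it from resolved review threads or follow-up commits.

```python
# Hypothetical review-comment records; the "addressed" flag is illustrative.
comments = [
    {"pr": 101, "addressed": True},
    {"pr": 101, "addressed": True},
    {"pr": 101, "addressed": False},
    {"pr": 102, "addressed": True},
]

def comment_metrics(comments, pr_id):
    """Return (comment count, percentage of comments addressed) for one PR."""
    relevant = [c for c in comments if c["pr"] == pr_id]
    count = len(relevant)
    addressed = sum(c["addressed"] for c in relevant)
    pct = 100 * addressed / count if count else 0.0
    return count, pct

count, pct = comment_metrics(comments, 101)
print(f"PR 101: {count} comments, {pct:.0f}% addressed")  # 3 comments, 67% addressed
```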
Insights for users:
- High Comment Counts (per PR):
- May indicate that the PR is large or complex.
- Could also point to areas needing improvement in code quality.
- Low Comment Counts:
- Could suggest a lack of depth in the review process.
- High Percentage of Comments Addressed:
- Demonstrates that developers are effectively responding to feedback, which is a positive indicator for code quality.
- No Comments Addressed:
- May indicate low-quality or insufficient feedback during reviews.
- Alternatively, it could suggest reluctance or disengagement from the developer in addressing comments.
Time to first review and Approval on first review:
These two widgets provide a detailed view of PR reviews in terms of time to first review and approval on the first review, across multiple dimensions such as developer, feature, reviewer, and PR complexity.
Overview:
Time to first review: Tracks how quickly a PR gets its first review after submission.
Approval on first review: Tracks whether a PR gets approved in the first review cycle.
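For illustration, the sketch below derives both metrics from a PR's ordered review events; the record layout (opened_at, reviews, state) is hypothetical. Note how a PR with no review events surfaces as the "No reviews" case discussed below.

```python
from datetime import datetime

# Hypothetical records: each PR lists its review events in chronological order.
prs = [
    {"id": 101, "opened_at": "2024-05-01T09:00:00",
     "reviews": [{"at": "2024-05-02T11:00:00", "state": "approved"}]},
    {"id": 102, "opened_at": "2024-05-03T14:30:00", "reviews": []},
]

for pr in prs:
    if not pr["reviews"]:
        print(pr["id"], "no reviews yet")  # the "No reviews" bucket
        continue
    first = pr["reviews"][0]
    hours = (datetime.fromisoformat(first["at"])
             - datetime.fromisoformat(pr["opened_at"])).total_seconds() / 3600
    approved_first = first["state"] == "approved"
    print(pr["id"], f"first review after {hours:.1f}h, "
                    f"approved on first review: {approved_first}")
```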
Insights for users:
- Quick reviews: Suggests that the team has a good process for prioritizing PR reviews, which helps maintain development momentum.
- PRs with "No reviews": Indicates bottlenecks or areas where the review process might be lagging. Leads can investigate whether this is due to resource constraints, lack of clarity in the ownership of reviews, or other blockers.
- High approval rate on the first review: Indicates that developers are submitting high-quality PRs with fewer issues, leading to faster approvals.
- Low approval rate (or repeated "No" approvals): Suggests areas for improvement in PR quality or alignment with coding standards. Leads might consider introducing better pre-review processes, such as automated checks or team-wide guidelines.
- Developer trends:
- If some developers consistently experience delayed reviews, it may highlight process inefficiencies or team-specific dependencies.
- Trends by individual developers can help identify those who may need additional support, training, or mentorship to improve their PR submissions.
Lines of code changed and Lines of code changed due to review comments:
These widgets analyze lines of code (LOC) changed per PR and LOC changes due to review comments, offering insights into development practices and review processes.
Overview:
Lines of code changed: Tracks the magnitude of changes (both additions and deletions) per PR, measured from the PR's inception.
Lines of code changed due to review comments: Tracks how much of the code was modified following review comments (additions or deletions).
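As a rough sketch of how the two measures differ, the example below sums additions and deletions per PR, then isolates the changes made after review comments. The after_review flag is a hypothetical stand-in for however a real tool ties commits to review feedback; the net delta shows whether review-driven changes mostly added or removed code, which the insights below interpret.

```python
# Hypothetical commit records per PR; the "after_review" flag is illustrative.
commits = [
    {"pr": 101, "additions": 120, "deletions": 30, "after_review": False},
    {"pr": 101, "additions": 15,  "deletions": 40, "after_review": True},
]

# Total LOC changed across the PR's lifetime.
total = sum(c["additions"] + c["deletions"] for c in commits if c["pr"] == 101)

# LOC changed only in commits that respond to review comments.
due_to_review = sum(c["additions"] + c["deletions"]
                    for c in commits if c["pr"] == 101 and c["after_review"])

# Net delta of review-driven changes: positive = code added, negative = removed.
net_review_delta = sum(c["additions"] - c["deletions"]
                       for c in commits if c["pr"] == 101 and c["after_review"])

print(f"PR 101: {total} LOC changed, {due_to_review} due to review "
      f"(net {net_review_delta:+d})")  # net -25: review led mostly to removals
```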
Insights for users:
- High LOC:
- Indicates large or complex PRs.
- Potential challenges in review depth and quality, as larger PRs are harder to review effectively.
- Suggests a need to encourage breaking work into smaller, manageable PRs for better review cycles.
- Moderate LOC:
- Ideal size range for PRs, balancing meaningful work with effective review cycles.
- Teams should aim for this range where possible.
- Low LOC:
- Indicates small, incremental changes.
- May suggest adherence to a CI/CD pipeline or a preference for frequent, smaller PRs.
- Consistently low LOC could also indicate less impactful contributions and should be reviewed in context.
- Positive LOC Changes:
- Indicates that review comments led to constructive additions or improvements in the code.
- A high proportion of positive changes suggests an effective feedback loop where reviews enhance the PR quality.
- Negative LOC Changes: Indicates that significant code was removed due to review comments, which could imply:
- Over-engineering or unnecessary code in the initial PR.
- Misalignment in requirements or scope between the author and reviewers.
- Developer trends:
- Variations in LOC across developers might reflect differences in task assignments, coding style, or workload distribution.
- Developers with frequent negative LOC changes might need additional support in understanding project requirements or improving initial PR quality.
Delayed stories and Deployment errors:
These widgets analyze delayed stories and deployment errors, highlighting areas of concern for delivery timelines and deployment reliability.
Overview:
Delayed stories: Tracks the number of stories (or tasks) delayed beyond their expected timeline per developer.
Deployment errors: Tracks errors during deployment for stories worked on by each developer.
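For illustration, the sketch below counts both metrics per developer from hypothetical story and deployment records; the field names are assumptions, not the product's schema.

```python
from collections import Counter
from datetime import date

# Hypothetical story and deployment-error records; field names are illustrative.
stories = [
    {"developer": "alice", "due": date(2024, 5, 1), "done": date(2024, 5, 4)},
    {"developer": "bob",   "due": date(2024, 5, 2), "done": date(2024, 5, 2)},
]
deploy_errors = [{"developer": "alice", "story": "S-17"}]

# A story is delayed if it finished after its expected date.
delayed = Counter(s["developer"] for s in stories if s["done"] > s["due"])
errors = Counter(e["developer"] for e in deploy_errors)

print("Delayed stories:", dict(delayed))   # {'alice': 1}
print("Deployment errors:", dict(errors))  # {'alice': 1}
```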
Insights for users:
Interpreting Delayed Stories: Frequent delays for a developer could indicate:
- Overloading certain developers with complex tasks or excessive work.
- Challenges in estimating task timelines effectively.
- Dependency issues where stories rely on delayed inputs from others.
Interpreting Deployment Errors: Repeated errors could indicate issues with:
- Lack of adequate testing (unit, integration, or end-to-end).
- Gaps in understanding deployment pipelines or configurations.
- Incomplete or incorrect requirements at the implementation stage.
Actions for Improvement:
- Balance workload distribution more effectively.
- Provide support or mentorship to developers frequently associated with delays.
- Investigate whether systemic issues, such as unclear requirements or dependencies, are causing delays.
- Encourage automated testing to catch issues earlier in the pipeline.
- Provide targeted training on deployment processes and error resolution.
- Conduct postmortems for recurring deployment errors to identify root causes and prevent repetition.