Code review velocity
Engineering throughput
Metric Definition
Code Review Velocity = Review Completed Time − PR Opened Time
Code review velocity measures the elapsed time from when a pull request is opened to when the review is completed. It captures how quickly a team provides feedback on proposed code changes, which directly influences how fast work moves from development to deployment. Slow reviews create bottlenecks, force context switching, and inflate lead times far beyond what the actual coding effort requires.
What is code review velocity?
Code review velocity is the time it takes for a pull request to receive a completed review after being opened. A review is considered complete when the final approver marks it as approved, or when the review process concludes with an actionable outcome (approved, changes requested, or rejected).
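As a minimal sketch, the metric is a simple wall-clock difference between two timestamps. The variable names and dates below are illustrative, not part of any particular tool's API:

```python
from datetime import datetime, timedelta

def review_velocity(opened_at: datetime, completed_at: datetime) -> timedelta:
    """Elapsed wall-clock time from PR opened to review completed."""
    return completed_at - opened_at

# Hypothetical PR: opened one morning, approved the next afternoon
pr_opened = datetime(2024, 3, 4, 9, 30)
review_done = datetime(2024, 3, 5, 14, 0)
print(review_velocity(pr_opened, review_done))  # 1 day, 4:30:00
```

In practice these timestamps come from the version control platform (for example, a PR's created-at time and the submitted-at time of its final review).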
The metric matters because code review is one of the largest sources of wait time in modern software development. A developer who finishes coding a feature in four hours may then wait two days for a review. During that wait, the developer starts other work and loses the mental context of the original feature. When review feedback finally arrives, switching back to address comments takes longer and produces lower-quality responses than if the feedback had come within hours.
Code review velocity is distinct from review thoroughness. Speed without quality is counterproductive. The goal is not to rush reviews but to reduce the idle time between when a review is requested and when a reviewer begins. Most of the delay in code reviews is not the time spent reading code; it is the time the pull request spends waiting in a queue.
Teams typically track code review velocity as a median or percentile rather than an average, because a small number of very slow reviews (complex changes, holiday periods, understaffed teams) can skew the mean. The p50 tells you what typical looks like, while the p90 reveals how bad it gets for the slow tail.
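A short sketch of why the percentile view matters, using the Python standard library and an illustrative batch of review times. Note how a single 72-hour outlier drags the mean far above the median:

```python
import statistics

# Review times in hours for a batch of merged PRs (illustrative numbers)
review_hours = [2, 3, 3, 4, 5, 6, 8, 12, 30, 72]

p50 = statistics.median(review_hours)                 # typical case: 5.5 hours
p90 = statistics.quantiles(review_hours, n=10)[-1]    # the slow tail: ~68 hours
mean = statistics.mean(review_hours)                  # skewed by outliers: 14.5 hours

print(f"p50={p50}h  p90={p90:.1f}h  mean={mean}h")
```

The mean suggests reviews take roughly two working days; the median shows a typical review lands the same day, and the p90 exposes the tail worth investigating.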
Code review velocity measures wall-clock time from PR opened to review completed, not the time the reviewer spends reading code. The goal is to shrink the queue time, not to rush the actual review. A reviewer who picks up a PR within an hour and spends 45 minutes on a thorough review delivers far better velocity than one who waits three days before spending the same 45 minutes.
Decomposing code review velocity with a metric tree
Breaking code review velocity into its components reveals where the time actually goes and which interventions will have the greatest effect.
In most teams, queue time dominates. The pull request sits idle waiting for a reviewer to pick it up. Active review time is typically measured in minutes, not days. Iteration cycles add time when the initial review requests significant changes and the author must rework and resubmit.
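The decomposition above can be sketched from four timestamps in a PR's review lifecycle. The timestamps are hypothetical, but the arithmetic shows the typical shape: queue time in days, active review in minutes:

```python
from datetime import datetime

# Illustrative timestamps for one PR (hypothetical data)
opened         = datetime(2024, 3, 4, 9, 0)
review_started = datetime(2024, 3, 5, 16, 0)   # reviewer picks it up
first_feedback = datetime(2024, 3, 5, 16, 45)  # initial review posted
approved       = datetime(2024, 3, 6, 11, 0)   # after one rework cycle

queue_time     = review_started - opened           # idle in the queue: 31h
active_review  = first_feedback - review_started   # reading the code: 45m
iteration_time = approved - first_feedback         # rework and re-review: 18h 15m
total          = approved - opened

# The components sum exactly to the headline metric
assert total == queue_time + active_review + iteration_time
```

Here over 60% of the total elapsed time is pure queue time, which is why reviewer availability, not review effort, is usually the first lever to pull.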
Connecting code review velocity to lead time for changes and deployment frequency shows the downstream impact. A team with a two-day median review time adds at least two days to the lead time of every change, regardless of how quickly the code itself is written. Review velocity often sets the ceiling on overall delivery speed.
Benchmarks by team maturity
| Review velocity (median) | Team profile | Typical practices |
|---|---|---|
| Under 4 hours | Elite. Reviews are treated as a top priority. | Dedicated review windows, small PRs, automated checks, strong ownership culture. |
| 4 to 24 hours | High performing. Reviews happen within the same working day. | Review-first culture, reasonable PR sizes, clear review assignments. |
| 1 to 3 days | Average. Reviews compete with other work. | Ad hoc review assignments, larger PRs, reviews fitted around development work. |
| Over 3 days | Slow. Reviews are a significant bottleneck. | Unclear ownership, large PRs, reviews deprioritised, insufficient reviewer capacity. |
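A small helper that maps a median review time onto the benchmark tiers above. The exact boundary handling (whether 24 hours counts as "high performing" or "average") is an assumption; the table gives ranges, not precise cut-offs:

```python
def review_velocity_tier(median_hours: float) -> str:
    """Map a team's median review time in hours to a benchmark tier.

    Boundary behaviour at exactly 4h, 24h, and 72h is an assumption.
    """
    if median_hours < 4:
        return "Elite"
    if median_hours <= 24:
        return "High performing"
    if median_hours <= 72:   # 1 to 3 days
        return "Average"
    return "Slow"

print(review_velocity_tier(2))    # Elite
print(review_velocity_tier(48))   # Average
```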
Research from the DORA programme and industry surveys consistently finds that elite-performing teams complete reviews within hours. The gap between elite and average is not about review quality; it is about prioritisation and process design. Teams that treat reviews as interruptions will always be slower than teams that treat reviews as first-class work.
Strategies to improve code review velocity
1. Keep pull requests small
Small PRs (under 400 lines of change) are faster to review, produce better feedback, and have fewer iteration cycles. A reviewer can complete a small PR in 15 to 30 minutes, making it easy to fit into the flow of the day. Large PRs create a psychological barrier that causes reviewers to procrastinate.
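The 400-line guideline is easy to enforce as an automated check. This is a hypothetical helper, not part of any CI product; the threshold and the decision to count added plus deleted lines are assumptions a team would tune:

```python
# Assumed guideline from the text: PRs over ~400 changed lines are hard to review
MAX_REVIEWABLE_LINES = 400

def should_split(lines_added: int, lines_deleted: int) -> bool:
    """Flag a PR whose total diff size exceeds the reviewability guideline."""
    return lines_added + lines_deleted > MAX_REVIEWABLE_LINES

print(should_split(350, 120))  # True: 470 changed lines, consider splitting
print(should_split(180, 60))   # False: 240 changed lines, reviewable in one pass
```

A check like this typically posts a warning comment rather than blocking the merge, since some large diffs (generated code, bulk renames) are legitimately atomic.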
2. Establish review SLAs
Set a team expectation for initial review response time, such as four working hours. Making the expectation explicit and visible changes behaviour. Track adherence to the SLA as part of the team's process health metrics.
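SLA adherence can be tracked as the fraction of PRs whose first review response landed inside the target. This sketch uses wall-clock time for simplicity; a real implementation would count working hours only, which is why the four-hour figure in the text says "working hours":

```python
from datetime import timedelta

REVIEW_SLA = timedelta(hours=4)  # illustrative target; teams choose their own

def sla_adherence(first_response_times: list[timedelta]) -> float:
    """Fraction of PRs whose first review response met the SLA."""
    met = sum(1 for t in first_response_times if t <= REVIEW_SLA)
    return met / len(first_response_times)

times = [timedelta(hours=1), timedelta(hours=3),
         timedelta(hours=9), timedelta(hours=2)]
print(sla_adherence(times))  # 0.75: three of four PRs met the 4-hour target
```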
3. Automate what machines can check
Linting, formatting, type checking, and test execution should happen in CI before a human reviewer sees the PR. When reviewers spend time on issues that a linter could have caught, both velocity and morale suffer. Automated checks free reviewers to focus on design, logic, and correctness.
4. Rotate and distribute review load
Concentrated review responsibilities create bottlenecks. If one senior engineer reviews all PRs, their availability becomes the constraint. Distribute review responsibilities across the team, pair junior reviewers with seniors for learning, and use automated assignment tools to balance load.
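A minimal sketch of load-balanced assignment: route each new PR to whichever eligible reviewer currently has the fewest open reviews. Real assignment tools also weight by expertise, time zone, and recent history; this only shows the balancing idea:

```python
def assign_reviewer(open_review_counts: dict[str, int]) -> str:
    """Pick the reviewer with the fewest open reviews.

    Ties are broken alphabetically so assignment is deterministic.
    """
    return min(open_review_counts, key=lambda r: (open_review_counts[r], r))

load = {"bo": 2, "ada": 2, "cy": 5}
print(assign_reviewer(load))  # ada (tied with bo at 2, wins alphabetically)
```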
5. Block dedicated review time
Some teams find success with dedicated review windows, such as the first 30 minutes of each morning. This creates a predictable rhythm where authors know when to expect feedback and reviewers can batch their review work without constant context switching.
Tracking code review velocity with KPI Tree
KPI Tree lets you model code review velocity as part of a broader engineering delivery tree, connecting review speed to its upstream causes and downstream effects. The tree can segment review velocity by team, repository, reviewer, and PR size to pinpoint where bottlenecks concentrate.
Linking review velocity to cycle time, lead time for changes, and deployment frequency creates a complete delivery pipeline view. When review velocity degrades, you can trace the impact through to deployment delays and quantify the cost of slow reviews in terms of delayed value delivery.
Each team can own their review velocity node and set improvement targets. Over time, the tree builds a record of whether process changes, such as smaller PR sizes or dedicated review windows, are actually moving the metric.
Related metrics
Lead time for changes
DORA metric
Lead Time for Changes = Production Deploy Time - Code Commit Time
Lead time for changes measures the elapsed time from when a developer commits code to when that code is successfully running in production. It is one of the four DORA (DevOps Research and Assessment) metrics and captures the full latency of the software delivery pipeline. Shorter lead times mean faster feedback, lower risk per release, and a tighter connection between engineering effort and user value.
Deployment frequency
DORA metric
Deployment Frequency = Number of Production Deployments / Time Period
Deployment frequency measures how often an organisation successfully releases code to production. It is one of the four DORA (DevOps Research and Assessment) metrics that predict software delivery performance and organisational outcomes. Teams that deploy more frequently deliver value to users faster, reduce the risk of each individual release, and create tighter feedback loops between development and production.
Cycle time
Process speed
Cycle Time = Process End Time − Process Start Time
Cycle time measures the total elapsed time from the start to the end of a process. It is a fundamental operations metric used in manufacturing, software development, service delivery, and any context where the speed of a process directly affects throughput, cost, and customer satisfaction.
Code churn rate
Engineering quality
Code Churn Rate = (Lines Changed Within N Days of Being Written / Total Lines Written) x 100
Code churn rate measures the percentage of code that is rewritten or deleted shortly after being written. It captures how much rework occurs within a codebase over a given period, revealing instability in requirements, design decisions, or development practices. A moderate level of churn is normal and healthy, but persistently high churn signals wasted effort and process problems that deserve investigation.
Optimise code review velocity with KPI Tree
Build a delivery pipeline metric tree that connects review speed to deployment frequency and lead time. See where reviews queue up, which teams need capacity, and track improvement over time.