GitHub Metric
Engineering
Code Review Velocity = Median(First Review Timestamp − PR Ready Timestamp)
Code Review Velocity measures the elapsed time from when a pull request is opened or marked ready for review to when the first substantive review is submitted. It is a key driver of lead time for changes. Long review waits are one of the most common causes of developer context-switching.
How to calculate code review velocity
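Applying the formula above, a minimal sketch of the calculation, assuming pull request events have already been synced as pairs of "ready for review" and "first review" timestamps (the dict keys `ready_at` and `first_review_at` are illustrative, not GitHub's API schema):

```python
from datetime import datetime, timedelta
from statistics import median

def review_velocity_hours(prs):
    """Median hours from 'ready for review' to first substantive review.

    `prs` is a list of dicts with `ready_at` and `first_review_at`
    datetimes; PRs still awaiting a first review are excluded.
    """
    waits = [
        (pr["first_review_at"] - pr["ready_at"]) / timedelta(hours=1)
        for pr in prs
        if pr.get("first_review_at") is not None
    ]
    return median(waits) if waits else None

prs = [
    {"ready_at": datetime(2024, 5, 1, 9),  "first_review_at": datetime(2024, 5, 1, 13)},
    {"ready_at": datetime(2024, 5, 1, 10), "first_review_at": datetime(2024, 5, 2, 10)},
    {"ready_at": datetime(2024, 5, 2, 15), "first_review_at": None},
]
print(review_velocity_hours(prs))  # 14.0 (median of a 4-hour and a 24-hour wait)
```

The median is used rather than the mean so that a handful of long-forgotten pull requests does not swamp the typical experience.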
Why code review velocity matters for GitHub users
Every hour a pull request waits for review is an hour the author must context-switch to other work, reducing flow state and overall throughput. Fast review velocity keeps the development pipeline moving and prevents work-in-progress from piling up.
For GitHub teams, this metric highlights whether review load is evenly distributed or concentrated on a few individuals. Addressing imbalances can dramatically improve cycle time without requiring anyone to work harder.
Understand and act on code review velocity with KPI Tree
Sync pull request timeline events from GitHub into your warehouse and compute review velocity in KPI Tree. Position it upstream of lead time for changes in your metric tree to visualise causal impact.
Assign RACI ownership to team leads and set alerts when median review time exceeds your SLA, prompting redistribution of review load.
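As a sketch of the alerting step, assuming the median from the calculation above and an SLA threshold in hours (the function name and threshold are placeholders, not a KPI Tree API):

```python
def review_sla_breached(median_hours, sla_hours=24.0):
    """Return True when the median review wait exceeds the SLA threshold.

    A None median (no reviewed PRs in the window) is treated as no breach.
    """
    return median_hours is not None and median_hours > sla_hours

print(review_sla_breached(30.5))  # True: time to redistribute review load
print(review_sla_breached(6.0))   # False: within SLA
```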
Get started with your GitHub data
Pull metrics from GitHub directly through the Model Context Protocol.
Connect your existing warehouse where GitHub data already lands.
Our professional services team can build you turnkey AI foundations in a matter of weeks: a data warehouse on Snowflake or BigQuery, ELT with Fivetran, all modelled in dbt with a semantic layer.
Related GitHub metrics
Code Review Quality Score
Engineering · Metric Definition
Code Review Quality Score evaluates the substantiveness of pull request reviews by weighting factors such as comment depth, suggestions made, files reviewed versus files changed, and time spent. It distinguishes meaningful reviews from rubber-stamp approvals. Higher scores correlate with fewer post-merge defects.
Lead Time for Changes
Engineering · Metric Definition
Lead Time = Production Deployment Timestamp − Commit Timestamp
Lead Time for Changes measures the elapsed time from when a code change is committed to when it is successfully running in production. It is one of the four DORA metrics and a key indicator of delivery pipeline efficiency. Elite performers achieve lead times measured in hours rather than days or weeks.
Pull Request Bottleneck Analysis
Engineering · Metric Definition
Pull Request Bottleneck Analysis examines the stages of the PR lifecycle (authoring, review wait, review in progress, CI execution, and merge) to identify where delays accumulate. It transforms aggregate cycle time into an actionable breakdown that pinpoints specific process failures.
Team Collaboration Index
Engineering · Metric Definition
Team Collaboration Index quantifies the degree of cross-functional and cross-team interaction on GitHub, including cross-team code reviews, co-authored commits, discussion participation, and issue triage across repository boundaries. It measures whether knowledge and responsibility are shared or siloed.
Explore code review velocity across integrations
All GitHub metrics
Empower your team to understand and act on GitHub data
Map what drives your metrics, measure progress at any grain, prove what works statistically, and deliver personalised action plans to every team member.