KPI Tree

Metric Definition

DORA metric

Lead Time for Changes = Production Deploy Time − Code Commit Time

Production Deploy Time: the timestamp when the change is successfully running in the production environment.
Code Commit Time: the timestamp when the developer commits the code change to the version control system.

Lead time for changes

Lead time for changes measures the elapsed time from when a developer commits code to when that code is successfully running in production. It is one of the four DORA (DevOps Research and Assessment) metrics and captures the full latency of the software delivery pipeline. Shorter lead times mean faster feedback, lower risk per release, and a tighter connection between engineering effort and user value.

What is lead time for changes?

Lead time for changes is the duration between a code commit and its arrival in production. It encompasses every stage of the delivery pipeline: code review, automated testing, staging deployments, manual approvals, and the production deployment itself. It is the developer-facing equivalent of asking "how long until my work reaches users?"

This metric is distinct from the broader concept of lead time used in operations, which measures the total time from when a request is made to when it is fulfilled. In the DORA context, lead time for changes starts at the commit, not at the moment someone decides to build a feature. Product lead time (idea to production) is a valuable metric but a separate one.

The metric is also distinct from cycle time, which measures the active working time on a task. Lead time for changes includes all the waiting time between stages: the time code sits in a pull request awaiting review, the time a build queues behind other builds, the time a change waits for a deployment window. Often, this waiting time dominates the total.

Elite-performing teams, as defined by the DORA research, achieve a lead time for changes of less than one hour. This does not mean they write code faster. It means their pipeline has minimal wait states, high automation, and no unnecessary gates between commit and production.

Lead time for changes is measured per change, not per deployment. If a deployment batches 10 commits, each commit has its own lead time calculated from its individual commit timestamp to the deployment timestamp. This prevents batch deployments from masking long wait times for early commits in the batch.
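
Per-change measurement can be sketched in a few lines. The commit hashes and timestamps below are hypothetical; each commit in a batched deployment gets its own lead time against the shared deploy timestamp:

```python
from datetime import datetime, timedelta

def lead_times_for_deployment(deploy_time, commit_times):
    """Lead time is computed per change: every commit in the batch
    gets its own duration from commit time to the shared deploy time."""
    return {sha: deploy_time - t for sha, t in commit_times.items()}

# Hypothetical batch of three commits deployed together.
deploy = datetime(2024, 5, 1, 17, 0)
commits = {
    "a1b2c3": datetime(2024, 4, 29, 9, 30),   # waited longest in the batch
    "d4e5f6": datetime(2024, 4, 30, 14, 0),
    "0789ab": datetime(2024, 5, 1, 16, 10),   # landed just before the deploy
}

for sha, lt in lead_times_for_deployment(deploy, commits).items():
    print(sha, lt)
```

Averaging these per-commit durations, rather than timing the deployment as a whole, is what keeps early commits in a batch from being hidden.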

Lead time for changes within the DORA framework

Within the DORA framework, lead time for changes and deployment frequency together measure throughput, while change failure rate and mean time to recovery measure stability. The research consistently demonstrates that these are not trade-offs: elite teams achieve the best scores across all four metrics simultaneously.

Lead time for changes and deployment frequency are correlated but measure different things. A team might deploy frequently but still have long lead times if changes are batched and queued. Conversely, a team might have short lead times for individual changes but deploy infrequently because they produce few changes. Tracking both metrics reveals whether the delivery pipeline is both fast and regularly exercised.

The interaction between lead time and change failure rate is particularly important. Long lead times often indicate large batch sizes, and large batches have higher failure rates because they contain more changes and are harder to test thoroughly. Reducing lead time naturally reduces batch size, which tends to reduce the change failure rate as a side effect.

| Performance level | Lead time for changes | Typical characteristics |
| --- | --- | --- |
| Elite | Less than one hour | Trunk-based development. Fully automated CI/CD. No manual approval gates. Feature flags for risk management. |
| High | Between one day and one week | Strong automation with short-lived branches. Automated tests. Some manual review steps. |
| Medium | Between one week and one month | Partial automation. Longer code review cycles. Manual testing phases. |
| Low | Between one month and six months | Large batch releases. Manual testing. Change advisory boards. Deployment windows. |
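
A median lead time can be mapped onto these performance bands programmatically. Note that the published bands leave gaps (for example, between one hour and one day); this sketch folds each gap into the adjacent slower band, which is a modelling assumption rather than part of the DORA definition:

```python
from datetime import timedelta

def dora_lead_time_level(lead_time: timedelta) -> str:
    """Map a median lead time onto the DORA performance bands.
    Gaps between published bands are folded into the slower band
    (an assumption; the DORA reports do not specify gap handling)."""
    if lead_time < timedelta(hours=1):
        return "Elite"
    if lead_time <= timedelta(weeks=1):
        return "High"
    if lead_time <= timedelta(days=30):
        return "Medium"
    return "Low"
```

Using the median rather than the mean keeps one pathological change (say, a PR that sat open for a quarter) from distorting the team's classification.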

Decomposing lead time for changes with a metric tree

Lead time for changes is the sum of time spent in each stage of the delivery pipeline. A metric tree breaks the total into stage-by-stage durations and surfaces the bottlenecks.

In most organisations, the majority of lead time is spent waiting rather than processing. Code review time often dominates because it depends on human availability. A pull request opened at 4pm might not receive its first review until the next morning, adding 16 hours of wait time for just a few minutes of actual review work.

The CI pipeline time is typically the most measurable and improvable stage. Build and test duration can be optimised through parallelisation, caching, and test selection. Queue wait time can be reduced by scaling CI runners. These are infrastructure investments with predictable returns.

Approval gates often represent the largest hidden cost. A change advisory board that meets weekly adds up to 7 days of lead time regardless of how fast the rest of the pipeline operates. Replacing these manual gates with automated policy checks and post-deployment monitoring can eliminate days of waiting while maintaining or improving safety.
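
The stage-by-stage decomposition is just a sum of durations, which makes the bottleneck fall out immediately. The stage names and numbers below are hypothetical, chosen to mirror the examples above:

```python
from datetime import timedelta

# Hypothetical stage durations for a single change, in the order
# a metric tree would break them down.
stages = {
    "code_review_wait": timedelta(hours=16),   # PR sat idle overnight
    "code_review_active": timedelta(minutes=20),
    "ci_queue": timedelta(minutes=15),
    "ci_build_and_test": timedelta(minutes=25),
    "approval_gate": timedelta(days=3),        # waiting for the weekly CAB
    "deployment": timedelta(minutes=10),
}

total = sum(stages.values(), timedelta())
bottleneck = max(stages, key=stages.get)
print(total, bottleneck)
```

Even in this toy example, the approval gate and review wait dwarf every processing stage, which is the pattern the metric tree is designed to expose.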

Strategies to reduce lead time for changes

  1. Measure each pipeline stage independently

    Before optimising, instrument each stage of the pipeline to understand where time is actually spent. Many teams assume the build is the bottleneck when code review or approval gates consume far more time. Stage-level measurement directs effort to where it will have the most impact.

  2. Set code review SLAs and reduce batch size

    Establish an expectation that pull requests receive first review within a defined window, such as 4 hours. Smaller pull requests are reviewed faster and with higher quality. Teams that review 50-line changes in minutes often take days to review 500-line changes.

  3. Parallelise and cache CI pipelines

    Run independent test suites and build steps in parallel rather than sequentially. Cache dependencies and build artefacts between runs. Use test impact analysis to run only the tests affected by each change. Target a total CI time under 10 minutes.

  4. Replace manual gates with automated checks

    Audit every manual approval step and ask whether it can be automated. Security checks, compliance policies, and change risk assessments can often be encoded as automated rules that run in the pipeline. Reserve human review for genuinely exceptional cases.

  5. Enable continuous deployment

    If every change that passes automated tests is deployed to production automatically, the deployment stage adds minutes rather than hours or days. Combined with feature flags and canary deployments, continuous deployment is both fast and safe.
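
The review SLA in strategy 2 can be tracked with a simple hit rate. The 4-hour window and the wait durations below are hypothetical:

```python
from datetime import timedelta

def review_sla_hit_rate(first_review_waits, sla=timedelta(hours=4)):
    """Share of pull requests whose first review arrived within the SLA
    window (here an assumed 4 hours, per the example above)."""
    hits = sum(1 for wait in first_review_waits if wait <= sla)
    return hits / len(first_review_waits)

# Hypothetical first-review wait times for five recent pull requests.
waits = [
    timedelta(hours=1),
    timedelta(hours=3),
    timedelta(hours=20),    # opened at 4pm, reviewed next morning
    timedelta(minutes=45),
    timedelta(hours=6),
]
print(review_sla_hit_rate(waits))  # 3 of 5 within the window
```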

Reducing lead time for changes is not about cutting corners on quality. The DORA research shows that elite teams have both the shortest lead times and the lowest change failure rates. The techniques that reduce lead time, such as smaller batches, automated testing, and continuous deployment, also improve quality.

Lead time for changes and business outcomes

Lead time for changes directly affects the speed at which an organisation can respond to opportunities and threats. A team with a one-hour lead time can fix a critical production bug within hours of discovery. A team with a one-month lead time must wait for the next release cycle, leaving the bug in production for weeks.

The same dynamic applies to feature delivery. When a competitor launches a new capability, the organisation with shorter lead times can respond faster. When customer feedback reveals a product problem, shorter lead times enable faster iteration. Over months and years, this speed advantage compounds into a significant competitive moat.

Lead time also affects developer satisfaction and retention. Engineers who see their work reach users quickly feel a stronger connection between effort and impact. Those who wait weeks or months for their code to reach production often experience frustration and disengagement. In a competitive hiring market, engineering velocity is itself a retention tool.

For organisations already tracking throughput and cycle time, lead time for changes provides the specific, measurable view of how long the software delivery pipeline takes from end to end.

Tracking lead time for changes with KPI Tree

KPI Tree lets you model lead time for changes as a stage-by-stage metric tree that connects each pipeline phase to the total. Each stage, from code review to deployment, becomes a node with its own duration data, making bottlenecks visible at a glance.

The tree can be segmented by team, repository, and change type to identify whether lead time patterns are uniform or concentrated in specific areas. A team might have fast lead times for bug fixes but slow lead times for features, revealing that the bottleneck is specific to how feature work flows through the pipeline.

Connecting lead time to deployment frequency and downstream metrics like customer satisfaction score provides a complete view of whether pipeline speed improvements are translating into better delivery outcomes and user experience.

Related metrics

Deployment frequency

DORA metric

Operations Metrics
GitHub

Metric Definition

Deployment Frequency = Number of Production Deployments / Time Period

Deployment frequency measures how often an organisation successfully releases code to production. It is one of the four DORA (DevOps Research and Assessment) metrics that predict software delivery performance and organisational outcomes. Teams that deploy more frequently deliver value to users faster, reduce the risk of each individual release, and create tighter feedback loops between development and production.


Cycle time

Process speed

Operations Metrics
Jira

Metric Definition

Cycle Time = Process End Time − Process Start Time

Cycle time measures the total elapsed time from the start to the end of a process. It is a fundamental operations metric used in manufacturing, software development, service delivery, and any context where the speed of a process directly affects throughput, cost, and customer satisfaction.


Throughput

Output volume

Operations Metrics

Metric Definition

Throughput = Total Units Completed / Time Period

Throughput measures the number of units produced, tasks completed, or transactions processed in a given time period. It is the fundamental measure of an operation's productive capacity and the primary output metric for manufacturing, logistics, software development, and service delivery.


Code review velocity

Engineering throughput

Operations Metrics
GitHub

Metric Definition

Code Review Velocity = Review Completed Timestamp − Pull Request Opened Timestamp

Code review velocity measures the elapsed time from when a pull request is opened to when the review is completed. It captures how quickly a team provides feedback on proposed code changes, which directly influences how fast work moves from development to deployment. Slow reviews create bottlenecks, force context switching, and inflate lead times far beyond what the actual coding effort requires.


Reduce lead time for changes with KPI Tree

Build a delivery pipeline tree that breaks lead time into code review, CI, approval, and deployment stages. See where changes wait, identify bottlenecks, and track the impact of pipeline investments on delivery speed.
