KPI Tree

Five levels from ad hoc reporting to optimised decision-making

The metrics maturity model

Most organisations have more data than they can use and less clarity than they need. The metrics maturity model gives you a framework to assess where you are, understand what is holding you back, and chart a practical path to the next level.


Why metrics maturity matters

Every organisation measures something. The question is whether those measurements actually change behaviour. A company that tracks revenue in a monthly spreadsheet and a company that uses a connected system of leading and lagging indicators to make daily decisions are both "data-driven" in the loosest sense. But the outcomes they produce are worlds apart.

Metrics maturity is the degree to which an organisation uses structured, connected measurement to drive decisions at every level. Low maturity means metrics exist but sit in silos, arrive too late to act on, and generate more confusion than clarity. High maturity means every team knows which numbers they own, how those numbers connect to company-level outcomes, and what to do when something moves.

The gap between these two states is not primarily a technology problem. Organisations do not graduate from ad hoc reporting to optimised decision-making by buying a better dashboard tool. They graduate by building the habits, structures, and shared understanding that turn data into action. That progression is what the metrics maturity model describes.

“Metrics maturity is not about how much data you collect. It is about how effectively your organisation turns measurement into coordinated action.”

The five levels of metrics maturity

The model defines five levels, each representing a qualitative shift in how an organisation relates to its metrics. Progression is not strictly linear. An organisation might be at Level 3 for its revenue metrics but Level 1 for customer experience. The levels describe a general trajectory, not a rigid ladder.

| Level | Name | How metrics are used | Typical tooling | Decision speed |
|---|---|---|---|---|
| 1 | Ad hoc | Metrics are pulled manually when someone asks a question. No standard definitions. Different teams report different numbers for the same thing. | Spreadsheets, email attachments, ad hoc SQL queries | Days to weeks |
| 2 | Defined | Key metrics have agreed definitions and are tracked in dashboards. Reporting is regular but still backward-looking. Teams monitor their own numbers in isolation. | BI dashboards, scheduled reports, basic data warehouse | Days |
| 3 | Connected | Metrics are linked in a hierarchy that shows cause and effect. Teams understand how their numbers feed into company-level outcomes. Ownership is clear. | Metric trees, integrated dashboards, alerting systems | Hours to a day |
| 4 | Predictive | Leading indicators are actively monitored. Teams can forecast the impact of changes before they reach lagging outcomes. Investigation is systematic, not reactive. | Metric trees with correlation analysis, anomaly detection, forecasting tools | Hours |
| 5 | Optimised | Metrics drive continuous experimentation. Teams run structured tests against specific nodes in the metric tree. The organisation learns and adapts faster than competitors. | Experimentation platforms, automated alerting, metric trees as the operating system | Minutes to hours |

Most organisations sit somewhere between Level 1 and Level 2. They have dashboards, but those dashboards do not connect to each other. They have metric definitions, but different teams still argue about what counts as an "active user." Moving beyond Level 2 requires a structural shift: from tracking metrics in isolation to connecting them into a model that reflects how the business actually works.

How to recognise where you are

Self-assessment is difficult because every organisation believes it is more mature than it actually is. The following indicators are based on observable behaviours, not aspirations. Be honest about which description matches your day-to-day reality, not your best-case scenario.

Level 1: Ad hoc

When a senior leader asks "why did revenue drop?", someone spends two days pulling data from three different systems. Two analysts produce different answers because they used different definitions. Metrics live in spreadsheets that are emailed around. Nobody knows which version is current. Decisions are based on gut feel supported by selective data.

Level 2: Defined

Your organisation has a BI tool with dashboards that people actually look at. Key metrics have documented definitions. Monthly or weekly reporting happens on schedule. But when a metric moves, the "why" investigation still takes days and involves several ad hoc queries. Each team monitors their own metrics without understanding how they connect to what other teams measure.

Level 3: Connected

Metrics are organised into a hierarchy. When revenue drops, you can trace the tree downward and identify which branch moved within minutes, not days. Every metric has a named owner. Cross-functional conversations start from shared context because everyone navigates the same model. Teams understand not just their own numbers but how those numbers influence company outcomes.
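To make the "trace the tree downward" idea concrete, here is a minimal sketch in Python of how such an investigation might work. The data model, metric names, owners, and numbers are all illustrative assumptions, not a real KPI Tree API: at each level you follow the child whose relative change is largest until you reach a leaf input.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the tree shape, metric names, and values below
# are illustrative, not taken from any real data model.

@dataclass
class MetricNode:
    name: str
    owner: str
    previous: float  # last period's value
    current: float   # this period's value
    children: list["MetricNode"] = field(default_factory=list)

    def pct_change(self) -> float:
        return (self.current - self.previous) / self.previous

def trace_largest_mover(node: MetricNode) -> MetricNode:
    """Walk down the tree, at each level following the child whose
    relative change is largest, until reaching a leaf input."""
    while node.children:
        node = max(node.children, key=lambda c: abs(c.pct_change()))
    return node

# Toy tree: revenue driven by new and expansion revenue.
revenue = MetricNode("revenue", "CFO", 100.0, 92.0, [
    MetricNode("new_revenue", "Head of Sales", 60.0, 59.0, [
        MetricNode("signups", "Growth lead", 500, 490),
        MetricNode("win_rate", "Sales lead", 0.30, 0.29),
    ]),
    MetricNode("expansion_revenue", "CS lead", 40.0, 33.0, [
        MetricNode("upsell_rate", "CS lead", 0.20, 0.15),
        MetricNode("seat_growth", "CS lead", 1.10, 1.08),
    ]),
])

culprit = trace_largest_mover(revenue)
print(culprit.name, culprit.owner)  # → upsell_rate CS lead
```

Because every node carries an owner, the trace ends not just with a number but with the named person empowered to act on it.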

Level 4: Predictive

You monitor leading indicators that signal changes before they hit lagging outcomes. When activation rate drops, you do not wait for it to show up in churn numbers next quarter. Correlation analysis helps you quantify the strength of relationships in your metric tree. You can forecast the likely impact of a change in one input on the metrics above it.

Level 5: Optimised

Teams run structured experiments targeted at specific nodes in the metric tree. You know which levers have the highest expected impact and you test changes against them systematically. The metric tree is not just a monitoring tool but the operating system for how the organisation learns. Decisions are made quickly because the framework for evaluating them already exists.

How to move from one level to the next

Each transition requires different work. Trying to skip levels rarely succeeds because each level builds the foundations the next one depends on. An organisation that jumps straight to experimentation without first connecting its metrics will optimise inputs that do not actually drive the outcomes it cares about.

  1. From Ad hoc to Defined

    The first transition is about standardisation. Pick your 15 to 20 most important metrics. For each one, document a single definition that every team agrees on: what it measures, how it is calculated, where the data comes from, and who is responsible for its accuracy. Put these metrics into a shared dashboard that updates automatically. The goal is to eliminate the situation where two people produce different numbers for the same metric. This transition is primarily a governance exercise. The technology is secondary.

  2. From Defined to Connected

    The second transition is structural. Take your defined metrics and organise them into a hierarchy that shows cause and effect. Start with your North Star metric at the top and decompose it into the drivers and sub-drivers beneath it. Assign a named owner to every node. This is where metric trees become essential. A dashboard shows you metrics side by side. A metric tree shows you how they relate to each other. This transition changes how people investigate problems: instead of asking "what happened?", they start asking "which input changed?"

  3. From Connected to Predictive

    The third transition adds forward-looking capability. Identify the leading indicators in your tree: the metrics that move before lagging outcomes change. Measure the correlation strength between connected nodes so you can quantify how much a change in one metric is expected to affect its parent. Set up alerts for leading indicators so teams can act before the impact reaches headline numbers. This transition requires analytical maturity and often involves the data team building correlation models on top of the metric tree structure.

  4. From Predictive to Optimised

    The final transition embeds experimentation into the operating model. Teams identify the highest-leverage nodes in the tree and run structured tests to improve them. Results are measured not in isolation but by tracing the effect through the tree to company-level outcomes. The experimentation cadence is continuous, not occasional. This transition requires a cultural shift: from treating metrics as a reporting mechanism to treating them as the primary tool for organisational learning.
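The last two transitions share one mechanical idea: tracing a proposed change upward through the tree to the company-level outcome. A minimal sketch, assuming each edge carries an estimated "elasticity" (how much a 1% change in the child is expected to move its parent); the metric names and coefficients below are illustrative assumptions.

```python
# Hypothetical edges: parent and elasticity for each child metric,
# with elasticities assumed to come from historical correlation analysis.
edges = {
    "activation_rate": ("retention_rate", 0.4),
    "retention_rate": ("revenue", 0.6),
}

def expected_lift(metric: str, pct_change: float) -> dict[str, float]:
    """Trace a proposed change upward through the tree, multiplying
    edge elasticities, until reaching the top-line metric."""
    impacts = {metric: pct_change}
    while metric in edges:
        parent, elasticity = edges[metric]
        pct_change *= elasticity
        impacts[parent] = pct_change
        metric = parent
    return impacts

# An experiment lifting activation by 5% is expected to lift
# revenue by 5% * 0.4 * 0.6 = 1.2%.
print(expected_lift("activation_rate", 5.0))
# → {'activation_rate': 5.0, 'retention_rate': 2.0, 'revenue': 1.2}
```

This is also how experiments get prioritised at Level 5: the node with the highest product of achievable lift and path elasticity is the highest-leverage place to test.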

How metric trees accelerate maturity

The single most impactful thing an organisation can do to advance its metrics maturity is to build a metric tree. This is not because metric trees are inherently magical. It is because the process of building one forces you to do the work that maturity requires.

Building a metric tree requires you to define your metrics consistently, because you cannot connect nodes that are measured differently by different teams. It requires you to model cause and effect, because the tree structure demands that every metric explains what it drives. It requires you to assign ownership, because a tree without owners is just a diagram. And it requires cross-functional alignment, because no single team can build a tree that spans the entire business.

In other words, a metric tree is not a Level 3 tool that you adopt after you have graduated from Level 2. It is the mechanism that gets you from Level 2 to Level 3 and then provides the scaffolding for Levels 4 and 5.

The tree above illustrates how connected metrics naturally surface leading indicators. Once metrics are arranged in a hierarchy, you can see which lower-level inputs move before upper-level outcomes change. That visibility is what makes the jump from Level 3 (connected) to Level 4 (predictive) possible. Without the tree structure, identifying leading indicators is guesswork. With it, the leading indicators reveal themselves through the shape of the model.

Practical tip

You do not need to build the entire tree before you start seeing value. Start with your North Star metric and the first two levels of decomposition. Connect those nodes to live data. That alone moves you from Level 2 to the beginning of Level 3, and gives you a foundation to build on incrementally.

Common stalling points and how to overcome them

Most organisations stall at a specific level and stay there for years. The stalling points are predictable, and understanding them can help you avoid or break through them.

Stalling at Level 1: the data quality trap

Organisations convince themselves they cannot define metrics until their data is perfect. This is backwards. Define the metrics first, then use the definitions to identify and prioritise data quality issues. Waiting for perfect data is a recipe for permanent immaturity. Start with what you have and improve iteratively.

Stalling at Level 2: the dashboard proliferation problem

Teams keep building more dashboards, but none of them connect to each other. Adding another dashboard does not increase maturity. It increases noise. The fix is to stop building sideways and start building upward: connect existing metrics into a hierarchy that shows how they relate. Fewer dashboards with connected context beats more dashboards with isolated numbers.

Stalling at Level 3: the ownership vacuum

The metric tree exists and the connections are clear, but nobody acts on what it shows. This happens when ownership is assigned to teams rather than individuals, or when owners do not have the authority to make changes. Every metric needs a named person who is empowered to investigate and act. Without individual accountability, the tree becomes a visualisation rather than an operating model.

Stalling at Level 4: the culture gap

The data infrastructure is mature but the organisation does not run experiments. Teams have the tools to predict impact but default to opinion-based decisions. Closing this gap requires leadership to model the behaviour: insist on hypotheses before decisions, celebrate learning from failed experiments, and use the metric tree as the framework for evaluating results.

Notice that each stalling point has a different root cause. Level 1 is a governance problem. Level 2 is a structural problem. Level 3 is an accountability problem. Level 4 is a cultural problem. Applying the wrong solution to the wrong stalling point is one of the most common mistakes organisations make. Building more dashboards does not fix an ownership vacuum. Running experiments does not fix undefined metrics. Match the intervention to the actual bottleneck.

A practical assessment checklist

Use the following questions to assess where your organisation sits today. For each question, answer honestly based on current reality, not planned improvements or recent one-off successes. The level where you first answer "no" to most questions is likely where your organisation currently operates.

  1. Level 1 checkpoint: Do you have standard metric definitions?

    Can two people in different departments pull the same metric and get the same number? Is there a single source of truth for how "active user," "conversion rate," or "revenue" is calculated? If not, you are at Level 1. The fix is to create a shared metric dictionary and get cross-functional agreement on definitions.

  2. Level 2 checkpoint: Are metrics reviewed regularly and automatically?

    Do your key metrics update automatically in dashboards that people actually look at on a weekly basis? Is there a standing meeting where performance against metrics is reviewed? If your reporting is still manual or sporadic, you are still solidifying Level 2.

  3. Level 3 checkpoint: Can you trace from outcome to cause?

    When your North Star metric drops, can you identify which specific input changed within an hour? Do your metrics exist in a hierarchy with clear causal relationships? Does every metric have a named owner? If investigation still requires ad hoc analysis across multiple tools, you have not yet reached Level 3.

  4. Level 4 checkpoint: Do you act on leading indicators?

    Do you have alerts on leading indicators that trigger action before lagging outcomes change? Can you quantify the expected impact of a change in one metric on the metrics above it? Do you forecast metric performance rather than only reporting it historically? If you are still primarily reactive, you are not yet at Level 4.

  5. Level 5 checkpoint: Is experimentation embedded in your operating rhythm?

    Do teams run structured experiments targeted at specific nodes in your metric tree? Are experiment results evaluated by tracing effects through the full hierarchy? Is the organisation learning and adapting faster this quarter than last? If experimentation is occasional rather than continuous, you are not yet at Level 5.

Move your metrics maturity to the next level

KPI Tree helps you connect metrics into a hierarchy, assign ownership, and surface the leading indicators that drive action. Start with a guided proof of concept and see the structure your organisation has been missing.

Experience That Matters

Built by a team that's been in your shoes

Our team brings deep experience from leading Data, Growth and People teams at some of the fastest-growing scaleups in Europe, through to IPO and beyond. We've faced the same challenges you're facing now.

Checkout.com
Planet
UK Government
Travelex
BT
Sainsbury's
Goldman Sachs
Dojo
Redpin
Farfetch
Just Eat for Business