Common metric anti-patterns and how to fix them
Most measurement systems fail not because organisations pick the wrong metrics, but because they fall into the same structural traps that every organisation eventually encounters. This guide catalogues the nine most common metric anti-patterns, explains why each one persists, and shows how a metric tree prevents or exposes every one of them.
Why metric anti-patterns matter
An anti-pattern is a common response to a recurring problem that looks reasonable on the surface but produces consistently bad results. In software engineering, anti-patterns are catalogued and taught so that teams can recognise them before they cause damage. The same approach applies to measurement.
Every organisation that uses metrics to guide decisions will encounter the same set of structural traps. Some teams track too many numbers and lose focus. Others track too few and develop blind spots. Some build dashboards that nobody looks at after the first week. Others optimise a single metric so aggressively that they damage everything around it. These are not random failures. They are predictable patterns with well-understood causes, and they can be prevented by anyone who knows what to look for.
The cost of these anti-patterns is rarely dramatic. They do not announce themselves with a crisis. Instead, they erode trust in measurement gradually. Teams stop checking dashboards because the numbers feel disconnected from reality. Leaders make decisions based on metrics that no longer reflect what they were meant to measure. Strategy reviews become ritual exercises in presenting numbers rather than genuine attempts to understand performance. By the time anyone notices, the measurement system has been quietly failing for months.
The compounding cost
Metric anti-patterns rarely cause visible failures on their own. They compound silently. A metric without an owner drifts out of date. A drifted metric produces misleading signals. Misleading signals erode trust. Eroded trust means people stop looking at the data. And an organisation that stops looking at its data is flying blind, regardless of how many dashboards it has.
Nine metric anti-patterns every organisation should recognise
The anti-patterns below appear across industries, company sizes, and functional areas. Some are more common at early-stage startups, others at mature enterprises, but all of them can emerge anywhere that metrics are used to manage performance. Each entry describes the anti-pattern, its symptoms, and how a metric tree either prevents or exposes it.
1. Too many metrics
The organisation tracks dozens or even hundreds of metrics, but nobody can name the five that matter most. Dashboards are sprawling. Review meetings cycle through slides without any discussion changing anyone's behaviour. The symptom is easy to spot: when you ask three leaders which metrics matter most, you get three different answers. The root cause is a fear of missing something important, so everything gets measured "just in case." A metric tree fixes this by forcing a hierarchy. Only metrics that connect to a parent through a causal relationship earn a place in the tree. Everything else is either diagnostic detail or noise.
2. No hierarchy between metrics
Metrics exist as a flat list on a dashboard with no indication of which ones drive which. Revenue sits alongside page views alongside employee satisfaction with no structure explaining how they relate. Teams optimise their own numbers in isolation without understanding how those numbers connect to broader outcomes. The symptom is departments hitting their targets while the company misses its goals. A metric tree solves this directly: every metric is connected to a parent and children through causal or mathematical relationships, making the hierarchy explicit and visible to everyone.
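The difference between a flat list and a hierarchy can be made concrete in code. The sketch below is a minimal illustration of a metric tree node in which every metric must declare its parent, so the path from any metric back to the outcome it drives is always explicit. The metric names and structure are hypothetical examples, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class MetricNode:
    """A metric in the tree: every node except the root declares its parent."""
    name: str
    parent: "MetricNode | None" = None
    children: "list[MetricNode]" = field(default_factory=list)

    def add_child(self, name: str) -> "MetricNode":
        child = MetricNode(name, parent=self)
        self.children.append(child)
        return child

def path_to_root(node: MetricNode) -> list[str]:
    """Trace any metric back to the top-level outcome it drives."""
    path = []
    while node is not None:
        path.append(node.name)
        node = node.parent
    return path

# Hypothetical tree: revenue is the root outcome.
revenue = MetricNode("revenue")
new_business = revenue.add_child("new_business")
retention = revenue.add_child("retention")
leads = new_business.add_child("qualified_leads")

print(path_to_root(leads))  # ['qualified_leads', 'new_business', 'revenue']
```

In a flat dashboard, `qualified_leads` and `revenue` are just two unrelated tiles; in this structure, the question "which broader outcome does this number drive?" always has a mechanical answer.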
3. Vanity metrics masquerading as KPIs
The organisation reports cumulative totals and ever-increasing numbers that feel good but cannot inform decisions. Total registered users, lifetime downloads, and raw page views dominate dashboards. The symptom is that metrics only ever go up, even when the business is struggling. Nobody asks "what should we do about this?" because the numbers never deliver bad news. A metric tree exposes vanity metrics by requiring every number to demonstrate a causal connection to an outcome. Metrics that cannot answer "what do you drive?" have no place in the tree.
4. Gaming and Goodhart's Law
A metric that was once a useful indicator becomes a target, and people optimise for the number rather than the outcome it represents. Call centre agents hang up on difficult calls to reduce average handle time. Marketers lower lead qualification criteria to inflate lead counts. The symptom is the metric improving while the business outcome it was meant to represent stays flat or worsens. A metric tree makes gaming visible by connecting every metric to its neighbours. When lead volume rises but lead-to-opportunity rate drops, the tree exposes the distortion immediately.
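The tree's gaming check can be expressed as a simple rule: flag any period in which a volume metric rises meaningfully while its paired quality metric falls. The sketch below is a hypothetical illustration; the function name, the threshold, and the example deltas are all assumptions for demonstration, not a standard implementation:

```python
def flag_possible_gaming(volume_change: float, quality_change: float,
                         threshold: float = 0.05) -> bool:
    """Flag when a volume metric rose while its paired quality metric declined.

    Changes are fractional period-over-period deltas, e.g. +0.30 means a
    30% increase. The 5% threshold is illustrative, not a recommendation.
    """
    return volume_change > threshold and quality_change < -threshold

# Hypothetical example from the text: lead volume up 30% while the
# lead-to-opportunity rate drops 12% -- the tree surfaces the distortion.
print(flag_possible_gaming(0.30, -0.12))  # True
# Volume up with quality roughly flat is not flagged.
print(flag_possible_gaming(0.30, 0.02))   # False
```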
5. Set-and-forget metrics
Metrics are chosen during an annual planning cycle, added to a dashboard, and never revisited until the next planning cycle. Definitions drift as the business changes. Data pipelines break and nobody notices because nobody is watching. The symptom is a dashboard full of numbers that no longer reflect how the business actually operates. A metric tree resists this because the causal relationships between metrics require active validation. When the business model changes, the tree structure must change with it, which forces regular review rather than allowing silent decay.
6. Metrics without owners
Every metric appears on a dashboard, but nobody is specifically accountable for understanding why it moved, investigating anomalies, or taking action when it drifts off track. The symptom is metrics that decline for weeks without anyone noticing or responding. When someone finally asks "why did this drop?", nobody knows, because nobody was watching. A metric tree pairs naturally with ownership: every node in the tree has an owner who is responsible for understanding and improving that part of the system.
7. Conflicting metrics across teams
Different teams are incentivised on metrics that work against each other. Marketing optimises for lead volume while sales needs lead quality. Product ships features to hit a velocity target while support deals with the resulting bugs. The symptom is constant cross-functional tension where both teams are "hitting their numbers" but the business is not improving. A metric tree prevents this by making trade-offs visible. When two metrics share a parent, their relationship is explicit, and optimising one at the expense of the other shows up as a decline in the parent metric.
8. Measuring what is easy, not what matters
The organisation gravitates toward metrics that are readily available from default analytics tools rather than investing in instrumenting the metrics that would actually inform decisions. Page views are tracked because the analytics snippet provides them. Activation rate is not tracked because it requires defining and instrumenting a custom event. The symptom is a dashboard full of metrics that nobody acts on, alongside a list of unanswered strategic questions that the right metrics could resolve. A metric tree surfaces this gap by starting from outcomes and decomposing downward, which reveals what should be measured regardless of what is currently easy to measure.
9. Over-indexing on a single metric
The organisation becomes so focused on one number that everything else is neglected. A North Star metric is a powerful focusing tool, but when it becomes the only metric that anyone pays attention to, the business develops blind spots. Revenue grows while customer satisfaction erodes. User acquisition accelerates while retention collapses. The symptom is a single metric moving in the right direction while the broader health of the business deteriorates. A metric tree prevents this by decomposing the North Star into its component drivers, ensuring that the inputs to the top-level metric receive attention alongside the headline number.
How to diagnose your measurement system
Most organisations harbour several of these anti-patterns simultaneously. The following diagnostic questions can help you identify which ones are present in your measurement system. Answer honestly; the point is not to pass but to surface the patterns that are silently undermining your decisions.
| Diagnostic question | Anti-pattern if "yes" | What to do next |
|---|---|---|
| Would three leaders, asked separately, name three different sets of top 5 metrics? | Too many metrics / no hierarchy | Build a metric tree from your North Star down and agree on which nodes are KPIs |
| Do any of your core metrics only ever go up? | Vanity metrics | Replace cumulative totals with rates, ratios, or cohort-based measures |
| Has a metric improved while the business outcome it represents has not? | Gaming / Goodhart's Law | Pair every quantity metric with a quality counterbalance in your tree |
| Were your current KPIs last reviewed more than six months ago? | Set-and-forget | Schedule a quarterly metrics review tied to your tree structure |
| Are there metrics on your main dashboard with no named owner? | Metrics without owners | Assign an owner to every node in the tree with clear accountability |
| Do two or more teams regularly blame each other for metric misses? | Conflicting metrics | Map both teams' metrics in a shared tree to make trade-offs visible |
| Are there important strategic questions your dashboard cannot answer? | Measuring what is easy, not what matters | Start from the question and work backward to the metric, then invest in instrumentation |
If you answered "yes" to three or more of these questions, your measurement system likely has structural issues that no amount of dashboard redesign will fix. The underlying problem is usually the absence of a hierarchy that connects metrics to outcomes and to each other.
How metric trees prevent anti-patterns structurally
The nine anti-patterns above share a common root cause: metrics exist in isolation, disconnected from each other and from the outcomes they are meant to represent. A metric tree addresses this root cause directly by imposing three structural requirements that flat dashboards do not.
First, every metric must have a parent. This requirement eliminates vanity metrics (which cannot demonstrate a causal connection to an outcome), prevents the "too many metrics" problem (because the tree only accommodates metrics with a clear role in the system), and makes set-and-forget harder (because a broken or outdated metric creates a visible gap in the tree).
Second, every parent-child relationship encodes a hypothesis about how the business works. This makes trade-offs visible, prevents conflicting metrics from hiding in separate dashboards, and forces teams to confront the question of whether their local optimisation is helping or harming the broader system.
Third, the tree structure naturally suggests ownership. Each branch of the tree maps to a team or individual. When every node has an owner, the "metrics without owners" anti-pattern disappears, and the "set-and-forget" problem is caught early because the owner is accountable for the health of their branch.
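All three requirements are mechanically checkable. The sketch below audits a tree definition for orphaned metrics (no parent, so no demonstrated connection to an outcome) and unowned metrics (no one accountable when they drift). The field names and the example data are assumptions for illustration, not a fixed schema:

```python
# Each metric declares its parent and owner; the root outcome has no parent.
# The data below is a hypothetical example.
metrics = {
    "revenue":         {"parent": None,           "owner": "CEO"},
    "new_business":    {"parent": "revenue",      "owner": "Sales"},
    "retention":       {"parent": "revenue",      "owner": "Customer Success"},
    "qualified_leads": {"parent": "new_business", "owner": "Marketing"},
    "page_views":      {"parent": None,           "owner": None},  # orphan, unowned
}

ROOT = "revenue"  # the single top-level outcome (an assumption of this sketch)

def audit(metrics: dict) -> list[str]:
    """Return structural violations: orphans are vanity-metric suspects,
    unowned nodes are set-and-forget suspects."""
    problems = []
    for name, spec in metrics.items():
        if name != ROOT and spec["parent"] is None:
            problems.append(f"{name}: no parent")
        if spec["owner"] is None:
            problems.append(f"{name}: no owner")
        if spec["parent"] is not None and spec["parent"] not in metrics:
            problems.append(f"{name}: parent '{spec['parent']}' is not in the tree")
    return problems

for issue in audit(metrics):
    print(issue)
# page_views: no parent
# page_views: no owner
```

A check like this can run whenever the tree definition changes, so a metric cannot quietly enter the system without a parent and an owner.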
Consider an example tree in which qualified pipeline, retention revenue, and time-to-value each hold an explicit position, and note how several anti-patterns are structurally prevented. A marketing team that inflates lead volume at the expense of lead quality will see the trade-off surface immediately in the shared parent, qualified pipeline. A product team that over-indexes on feature adoption at the expense of support satisfaction will see retention revenue flag the imbalance. A set-and-forget metric like time-to-value has an owner (the onboarding team) and a visible position in the tree, which means its decay would create a noticeable gap. And the tree as a whole makes it impossible to track hundreds of unrelated metrics, because every metric must justify its position through a connection to the metrics above and below it.
This is not a theoretical benefit. Organisations that build metric trees consistently report that the process of building the tree surfaces more problems than the tree itself. The act of asking "what drives this metric?" and "does this metric actually connect to an outcome?" is a structured audit that catches anti-patterns at the design stage rather than months later when the damage has compounded.
How anti-patterns reinforce each other
These anti-patterns rarely appear alone. They interact and reinforce each other in ways that make them harder to diagnose individually. Understanding these interactions explains why fixing one anti-pattern in isolation often fails, and why a structural solution like a metric tree is more effective than addressing each problem one at a time.
1. Too many metrics creates metrics without owners
When an organisation tracks a hundred metrics, assigning meaningful ownership to each one is impractical. So most metrics end up unowned. Unowned metrics are not maintained, their definitions drift, and they become set-and-forget by default. The proliferation of metrics and the absence of ownership are two symptoms of the same structural problem: no hierarchy to determine which metrics matter enough to warrant an owner.
2. Measuring what is easy feeds vanity metrics
Default analytics tools surface cumulative totals and raw counts by design. When teams measure what is easy rather than what matters, they inevitably end up with vanity metrics, because vanity metrics are precisely the ones that require the least effort to collect. Replacing them requires investing in custom instrumentation, which requires first knowing what should be measured, which requires a model of how the business works. Without that model, the path of least resistance leads directly to vanity.
3. Conflicting metrics and gaming amplify each other
When two teams have conflicting metrics, each team is incentivised to game their own number at the expense of the other. Marketing inflates lead volume because they are measured on it, knowing that the quality problem will show up in sales' numbers, not theirs. The conflict provides cover for the gaming: each team can point to the other as the cause of the downstream problem. In a metric tree, both teams' metrics share a parent, and the gaming surfaces as a decline in that shared parent rather than being hidden across separate dashboards.
4. Set-and-forget enables over-indexing
When metrics are not regularly reviewed, the organisation tends to anchor on whatever metric was most prominent when the targets were last set. Over time, this single metric accumulates more and more weight in decision-making, not because anyone decided it should, but because the other metrics were quietly forgotten. The North Star metric that was meant to be one of several focus areas becomes the only focus area through sheer inertia.
5. No hierarchy makes every other anti-pattern invisible
This is the meta-pattern that underlies all the others. Without a hierarchy connecting metrics to each other and to outcomes, there is no structural mechanism to detect any of the other anti-patterns. Gaming is invisible because the affected neighbouring metrics are on different dashboards. Vanity metrics go unchallenged because there is no requirement for causal connection. Conflicts between teams are hidden because each team's metrics exist in a separate silo. The hierarchy is not just one fix among many. It is the foundation that makes all the other fixes possible.
A practical framework for fixing your metrics
Diagnosing anti-patterns is necessary but insufficient. The organisations that actually improve their measurement systems follow a structured process to move from a collection of isolated metrics to an interconnected system that resists anti-patterns by design. The steps below provide a practical path from diagnosis to resolution.
1. Start with outcomes, not metrics
Before looking at any dashboard, write down the three to five business outcomes that matter most this year. Revenue growth, customer retention, operational efficiency: whatever they are, name them explicitly. These outcomes become the top level of your metric tree. Every metric in your system should trace back to one of these outcomes. If it cannot, it is either diagnostic detail (useful for investigation but not for core reporting) or noise that should be removed.
2. Decompose each outcome into its drivers
For each outcome, ask "what are the two or three things that most directly drive this?" Revenue decomposes into new business, expansion, and retention. Retention decomposes into onboarding success, feature adoption, and support quality. Keep decomposing until you reach metrics that a specific team can directly influence. This decomposition is the metric tree, and building it is the single most valuable exercise in measurement design.
3. Assign an owner to every node
Every metric in the tree needs a named person or team who is accountable for understanding it, investigating anomalies, and improving it. Ownership does not mean that person single-handedly controls the metric. It means they are the one who notices when it moves, understands why, and coordinates the response. Without ownership, the tree is just a diagram. With ownership, it is an accountability structure.
4. Pair quantity with quality at every level
For every metric that measures volume or speed, ensure the tree includes a sibling that measures quality or effectiveness. Lead volume sits alongside lead quality score. Features shipped sits alongside feature adoption rate. Tickets closed sits alongside customer satisfaction. These pairings make gaming structurally visible and ensure that optimising for speed does not come at the expense of value.
5. Schedule regular tree reviews
The tree is a living model of how your business works, and it needs to be reviewed as the business evolves. A quarterly review should check whether the causal relationships still hold, whether any metrics have drifted from their definitions, whether new strategic priorities require new branches, and whether any anti-patterns have crept back in. This is not a long process. An hour per quarter is usually sufficient to keep the tree healthy.
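Several of the steps above are checkable in code. The sketch below encodes a hypothetical tree fragment in which revenue decomposes mathematically into three drivers (step 2), every node carries an owner (step 3), and every node records when it was last reviewed (step 5); a small review function verifies the decomposition and flags stale nodes. All field names, values, and the 120-day freshness window are illustrative assumptions:

```python
from datetime import date

# Hypothetical tree fragment: values are annual revenue figures, and the
# children of "revenue" are expected to sum exactly to their parent.
tree = {
    "revenue":           {"value": 10_000_000, "owner": "CFO",
                          "children": ["new_business", "expansion", "retention_revenue"],
                          "last_reviewed": date(2025, 1, 15)},
    "new_business":      {"value": 4_000_000, "owner": "Sales",
                          "children": [], "last_reviewed": date(2025, 1, 15)},
    "expansion":         {"value": 2_500_000, "owner": "Account Management",
                          "children": [], "last_reviewed": date(2025, 1, 15)},
    "retention_revenue": {"value": 3_500_000, "owner": "Customer Success",
                          "children": [], "last_reviewed": date(2024, 3, 1)},  # stale
}

def review(tree: dict, today: date, max_age_days: int = 120) -> list[str]:
    """Check the mathematical decomposition (step 2) and review freshness (step 5)."""
    findings = []
    for name, node in tree.items():
        if node["children"]:
            child_sum = sum(tree[c]["value"] for c in node["children"])
            if child_sum != node["value"]:
                findings.append(f"{name}: children sum to {child_sum}, not {node['value']}")
        if (today - node["last_reviewed"]).days > max_age_days:
            findings.append(f"{name}: not reviewed in over {max_age_days} days")
    return findings

print(review(tree, today=date(2025, 2, 1)))
# ['retention_revenue: not reviewed in over 120 days']
```

The decomposition check here is the strict mathematical case; for causal (non-arithmetic) parent-child relationships, the equivalent review is a human judgement that the hypothesis still holds.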
“The goal is not a perfect measurement system. It is a measurement system that fails visibly rather than silently. Anti-patterns will always emerge. What matters is whether your structure makes them obvious before they compound.”
The underlying principle
Every anti-pattern in this catalogue exploits the same weakness: isolated metrics that exist without context, without connections, and without accountability. A metric on its own is just a number. It cannot tell you whether it matters, whether it conflicts with another metric, whether it is being gamed, or whether it still reflects the reality it was meant to measure. Only a system of connected metrics can do that.
This is the fundamental insight behind metric trees. A tree is not a better way to organise a dashboard. It is a fundamentally different approach to measurement. Instead of asking "what should we track?", it asks "how does our business work?" Instead of listing metrics, it models the causal relationships between them. Instead of allowing metrics to exist in silos where anti-patterns can hide, it connects every number to its neighbours in a structure where distortions, conflicts, and gaps become visible to everyone.
The organisations that fall into metric anti-patterns are not careless. They are usually data-rich and analytically sophisticated. What they lack is structure. They have the numbers but not the connections between them. They have the dashboards but not the model that explains what the dashboards mean. A metric tree provides that structure, and in doing so, it transforms measurement from a reporting exercise into a tool for understanding and improving how the business actually works.
The nine anti-patterns in this guide will recur in every organisation that uses metrics. They are not bugs in human nature. They are predictable consequences of measurement systems that lack hierarchy, ownership, and causal connections. The good news is that all nine share a common structural fix. Build a metric tree that connects every metric to its drivers and its outcomes. Assign an owner to every node. Review the tree regularly. And treat the tree not as a reporting tool but as a testable model of how your business works. Do this, and the anti-patterns do not disappear entirely, but they become visible early enough to fix before they compound into something much harder to untangle.
Fix your metric anti-patterns with a metric tree
Every anti-pattern in this guide exploits the same weakness: metrics that exist in isolation. A metric tree connects every number to its drivers and outcomes, making distortions visible, ownership clear, and anti-patterns structurally impossible to hide.