Why every target creates an incentive to game it
Goodhart's Law and metric design
When a measure becomes a target, it ceases to be a good measure. Goodhart's Law is the single most important concept in metric design, and most organisations learn it the hard way. This guide explains why metrics get gamed, how to spot it, and how to build measurement systems that resist it.
What is Goodhart's Law?
Goodhart's Law
"When a measure becomes a target, it ceases to be a good measure." This principle, first articulated by the British economist Charles Goodhart in 1975, is the foundational insight behind every metric that has ever backfired.
Charles Goodhart originally formulated his law in the context of monetary policy. He observed that when the Bank of England began targeting specific monetary aggregates, the statistical relationships that had made those aggregates useful as indicators promptly broke down. The act of targeting a measure changed the behaviour of the system being measured, which in turn destroyed the measure's predictive value. What had been a reliable signal became noise the moment it was elevated to a target.
The anthropologist Marilyn Strathern later generalised this insight beyond economics, restating it as: "When a measure becomes a target, it ceases to be a good measure." Her formulation stripped away the domain-specific context and revealed the universal principle beneath. It applies to monetary policy, to education, to healthcare, to software development, and to every organisation that uses metrics to manage performance. The mechanism is always the same: people optimise for what is measured, and in doing so they change the system in ways that decouple the measure from the underlying reality it was meant to represent.
This matters for every organisation that relies on metrics to make decisions, allocate resources, or evaluate performance. If you set a target, people will find ways to hit it. Some of those ways will involve genuinely improving the thing you care about. Others will involve gaming the metric itself, optimising the number while leaving the underlying reality unchanged or even degrading it. The gap between what the metric measures and what you actually care about is where Goodhart's Law lives. Understanding this gap is not optional. It is the difference between a measurement system that guides good decisions and one that actively produces bad ones.
Real-world examples of Goodhart's Law
Goodhart's Law is not a theoretical curiosity. It has produced spectacular failures across industries and centuries. The examples below span governments, corporations, and everyday business operations. Each one follows the same pattern: a reasonable-sounding metric is turned into a target, people optimise for the target rather than the outcome it was meant to represent, and the result is the opposite of what was intended.
The cobra effect
During British colonial rule in India, the government offered a bounty for dead cobras to reduce the snake population. Enterprising citizens began breeding cobras to collect the bounty. When the government discovered this and scrapped the programme, the breeders released their now-worthless cobras, increasing the population beyond its original level. The metric (dead cobras) was a perfect proxy for the goal (fewer cobras) until it became a target.
Soviet nail factories
When Soviet central planners set production targets by weight, factories produced small numbers of absurdly heavy nails that nobody could use. When they switched to targets based on quantity, factories produced millions of tiny, equally useless nails. Each metric was reasonable in isolation. As a target, each produced behaviour that satisfied the number while completely undermining the purpose.
Bank account fraud
A major US bank set aggressive targets for the number of accounts opened per employee. Staff responded by opening millions of unauthorised accounts in existing customers' names. The metric (accounts opened) was meant to proxy for customer acquisition and cross-selling. Instead, it incentivised fraud on an industrial scale, resulting in billions in fines and lasting reputational damage.
Call centre handle time
Many call centres target average handle time, the average duration of a customer call. When agents are evaluated on this metric, the rational response is to end calls as quickly as possible, even if the customer's problem is unresolved. Some agents hang up on difficult calls entirely. The metric goes down, customer satisfaction goes down with it, and repeat call volume goes up.
Standardised testing in schools
When schools are evaluated and funded based on standardised test scores, teachers allocate disproportionate time to test preparation at the expense of broader learning. Subjects not covered by the test are deprioritised. Students learn to pass the test rather than to understand the material. The metric (test scores) rises while the thing it was meant to measure (educational quality) stagnates or declines.
SaaS sign-up targets
A SaaS marketing team targeted on the number of sign-ups will naturally optimise for volume over quality. Free trial sign-ups increase, but lead-to-customer conversion drops because the new sign-ups were never genuinely interested in the product. The marketing team hits their number. The sales team misses theirs. The business is worse off despite the metric improving.
The common thread across all of these examples is not malice. In most cases, the people gaming the metric were responding rationally to the incentive structure they were given. The call centre agent who hangs up on a difficult call is not trying to harm the company. They are trying to meet the target that determines their performance review, their bonus, or their continued employment. The failure is in the metric design, not in the people responding to it. This distinction matters because the solution to Goodhart's Law is not to punish gaming. It is to design measurement systems where gaming is either impossible or indistinguishable from genuine improvement.
Why metrics get gamed
Understanding why metrics get gamed requires going beyond the observation that people respond to incentives. The behavioural science behind Goodhart's Law is well established, and it reveals that the problem is structural, not moral. Several reinforcing mechanisms drive the gap between what a metric measures and what an organisation actually wants.
Self-determination theory, developed by Edward Deci and Richard Ryan, demonstrates that extrinsic motivators such as targets, bonuses, and performance reviews can crowd out intrinsic motivation. When people are intrinsically motivated, they care about doing good work for its own sake. When an extrinsic target is introduced, their focus shifts from the work itself to the number that represents it. A teacher who loves helping students learn becomes a teacher who loves raising test scores. A developer who takes pride in code quality becomes a developer who closes tickets. The metric does not just measure behaviour. It reshapes it.
Campbell's Law, the social science cousin of Goodhart's Law, states that the more any quantitative social indicator is used for decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor. Donald Campbell published this in 1979, and it has been validated repeatedly in education, policing, healthcare, and corporate management. The mechanism is straightforward: when a metric carries high stakes, people allocate their effort toward influencing the metric specifically, even when that effort would be better spent on the broader objective.
The principal-agent problem compounds these effects. The person setting the target (the principal) and the person being measured against it (the agent) have different information and often different interests. The principal cannot perfectly observe the agent's behaviour, so they use the metric as a proxy. But the agent, who understands their own context far better than the principal does, can identify ways to satisfy the metric without satisfying the principal's actual intent. This information asymmetry is the engine that drives gaming. The wider the gap between what is measured and what is valued, the more room there is for the agent to exploit it.
Finally, targets create a narrowing of attention that psychologists call goal fixation. When people are given a specific number to hit, they develop tunnel vision around that number at the expense of everything else. This is not laziness or dishonesty. It is a well-documented cognitive effect. The metric becomes the frame through which all decisions are filtered, and anything outside that frame, no matter how important, fades from view.
How to design metrics that resist gaming
Goodhart's Law cannot be eliminated entirely. As long as metrics carry consequences, people will optimise for them. But the gap between the metric and the outcome it represents can be narrowed dramatically through thoughtful design. The principles below are drawn from behavioural science, systems thinking, and hard-won operational experience.
1. Pair every quantity metric with a quality metric
This is the single most effective defence against gaming. If you measure the number of leads generated, also measure lead-to-opportunity conversion rate. If you measure tickets closed, also measure customer satisfaction per ticket. If you measure lines of code written, also measure defect rate. The pairing makes it impossible to game one metric without exposing the distortion in the other. A call centre that measures both handle time and first-call resolution rate forces agents to find genuinely efficient solutions rather than simply ending calls.
2. Balance leading and lagging indicators
Leading indicators (such as pipeline created or features shipped) are easy to game because they are close to the activity being measured. Lagging indicators (such as revenue retained or customer lifetime value) are harder to game because they reflect real outcomes over time, but they are too slow to guide daily decisions. Using both creates a measurement system where short-term activity is validated against long-term results. If the leading indicators improve but the lagging indicators do not follow, you know something is being gamed.
3. Measure outcomes, not just outputs
Outputs are the things people produce: features shipped, calls made, reports written. Outcomes are the results those outputs are meant to create: customer problems solved, revenue generated, decisions improved. When you target outputs, people will produce more of them regardless of whether they achieve anything. When you target outcomes, people have to think about whether their outputs actually work. This shifts the focus from activity to impact.
4. Rotate metrics periodically
Any metric that remains a target long enough will eventually be gamed. People learn the system, discover its gaps, and optimise for the letter rather than the spirit. Rotating which metrics receive attention (while keeping the core set stable for trend analysis) prevents this ossification. You are not changing what you measure. You are changing which measurement carries the most weight in any given period. This keeps teams focused on the broader system rather than on a single number.
5. Create transparency through metric trees
A metric tree connects every metric to the ones above and below it through causal relationships. This transparency makes gaming visible. If a team games "leads generated" but the parent metric "lead-to-opportunity rate" declines, the tree exposes the distortion immediately. Metric trees also distribute accountability across a system rather than concentrating it on a single number, which reduces the incentive to game any one metric at the expense of others.
6. Build in qualitative checks
Not everything that matters can be quantified, and not everything that is quantified matters equally. Regular qualitative reviews, such as case studies of individual customer interactions, peer code reviews, or deal retrospectives, provide a counterweight to purely numerical targets. They surface the context that numbers alone cannot capture and create accountability for the spirit of the goal, not just the letter of the metric.
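The first principle, pairing a quantity metric with a quality guardrail, can be sketched as a simple automated check. The metric names, values, and the 5% tolerance band below are illustrative assumptions, not figures from any real system:

```python
from dataclasses import dataclass

@dataclass
class MetricPair:
    """A quantity metric paired with its quality guardrail."""
    name: str
    quantity: float       # e.g. leads generated this period
    prev_quantity: float  # same metric, previous period
    quality: float        # e.g. lead-to-opportunity conversion rate
    prev_quality: float

def gaming_suspected(pair: MetricPair, tolerance: float = 0.05) -> bool:
    """Flag when the quantity metric improves while its paired
    quality metric degrades beyond the tolerance band."""
    quantity_up = pair.quantity > pair.prev_quantity
    quality_down = pair.quality < pair.prev_quality * (1 - tolerance)
    return quantity_up and quality_down

# Leads jumped 50% but conversion halved: the pairing exposes it.
leads = MetricPair("leads", quantity=1500, prev_quantity=1000,
                   quality=0.04, prev_quality=0.08)
print(gaming_suspected(leads))  # True
```

The check is deliberately crude: it only compares two periods. In practice the same pairing logic would run over rolling windows, but the design point stands either way, because a quantity metric cannot be inflated without its paired quality metric recording the cost.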
No single principle on this list is sufficient on its own. The power comes from combining them into a measurement system that is resistant to gaming at multiple levels. A well-designed system pairs quantity with quality, balances leading indicators against lagging ones, connects every metric to its neighbours in a tree, and supplements the numbers with qualitative judgement. It is not gaming-proof, because nothing is. But it raises the cost of gaming to the point where genuine improvement becomes the easier path.
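The leading-versus-lagging cross-check from principle 2 can be sketched the same way. The series below are invented for illustration; the only claim is the structural one, that leading indicators rising while lagging indicators stall is a signal worth investigating:

```python
def divergence(leading: list[float], lagging: list[float]) -> bool:
    """Return True when the leading series trends up over the window
    but the lagging series fails to follow."""
    lead_up = leading[-1] > leading[0]
    lag_up = lagging[-1] > lagging[0]
    return lead_up and not lag_up

# Pipeline created grew quarter over quarter; retained revenue did not.
pipeline = [2.0, 2.4, 2.9]     # leading indicator (illustrative, $M)
retention = [1.1, 1.1, 1.05]   # lagging indicator (illustrative, $M)
print(divergence(pipeline, retention))  # True
```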
How metric trees help prevent Goodhart's Law
Goodhart's Law thrives in isolation. When a metric exists as a standalone number, disconnected from the system it is meant to represent, there are no guardrails to prevent it from being gamed. A metric tree changes this dynamic fundamentally. By connecting every metric to its parent (the outcome it contributes to) and its children (the inputs that drive it), a tree creates a web of relationships where gaming one metric produces visible distortions in the metrics around it.
Consider a SaaS marketing team that is targeted on leads generated. In a flat dashboard, they can inflate this number by lowering qualification criteria, running broad campaigns that attract unqualified sign-ups, or even purchasing email lists. The number goes up, the target is hit, and nobody notices the problem until the sales team complains weeks later. In a metric tree, however, leads generated sits beneath lead-to-opportunity rate and alongside lead quality score. The moment the team inflates lead volume through low-quality channels, the conversion rate drops visibly in the tree. The gaming is exposed not by a manager's intuition but by the structure of the measurement system itself.
This structural transparency is the key insight. A metric tree does not prevent people from wanting to game metrics. It prevents them from doing so invisibly. When every metric is connected to its neighbours, gaming one metric without affecting the others requires actually improving the underlying reality, which is the goal in the first place. The tree turns the incentive structure from "hit your number by any means" into "improve your part of the system without degrading the rest."
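The structure described above can be sketched as a small data type. The node names are taken from the SaaS example in this section; the values and the parent/child layout are a minimal illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class MetricNode:
    """One node in a metric tree: a metric, its current value,
    and links to the child metrics that drive it."""
    name: str
    value: float
    children: list["MetricNode"] = field(default_factory=list)

    def report(self, indent: int = 0) -> str:
        """Render the subtree so gaming one node is visible next to
        the parent and sibling metrics it distorts."""
        lines = [f"{'  ' * indent}{self.name}: {self.value}"]
        for child in self.children:
            lines.append(child.report(indent + 1))
        return "\n".join(lines)

# Leads sit beneath conversion rate and alongside quality score,
# so inflating volume shows up as a visible drop one level up.
tree = MetricNode("Marketing-Sourced Revenue", 120_000, [
    MetricNode("Lead-to-Opportunity Rate", 0.08, [
        MetricNode("Leads Generated", 1_000),
        MetricNode("Lead Quality Score", 72),
    ]),
])
print(tree.report())
```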
In such a tree, gaming "Leads Generated" by lowering quality would immediately show up as a decline in "Lead-to-Opportunity Rate" and a drop in "Lead Quality Score." The tree makes the trade-off visible to everyone, not just the person gaming the metric. This is profoundly different from a dashboard where each number exists in its own silo. The tree reveals the causal structure of the business, and causal structure is exactly what Goodhart's Law exploits when it is hidden.
Metric trees also address the problem of tunnel vision. When a team owns a single metric in isolation, that metric becomes their entire world. Every decision is filtered through it, and every adjacent concern is treated as someone else's problem. A metric tree reframes the relationship: your metric is part of a system, and your job is to improve it without degrading the metrics around it. This systemic framing shifts behaviour from local optimisation to global improvement, which is precisely the shift that Goodhart's Law demands.
The deeper lesson
It is tempting to read Goodhart's Law as an argument against measurement. If every metric that becomes a target stops being useful, why measure anything at all? This conclusion is understandable but wrong. Goodhart's Law is not an argument against measurement. It is an argument against naive measurement, against the assumption that a single number can capture a complex reality, that hitting a target is the same as achieving a goal, or that the map is the territory.
The organisations hit hardest by Goodhart's Law are not the ones that measure too much. They are the ones that measure too narrowly. They pick a single metric, elevate it to a target, attach consequences to it, and then wonder why people optimise for the number instead of the outcome. The solution is not less measurement but richer measurement: systems of interconnected metrics where each number is checked by its neighbours, where quantity is balanced by quality, where leading indicators are validated by lagging ones, and where the structure of the measurement system mirrors the structure of the business itself.
“The goal of measurement is not to produce numbers. It is to produce understanding. When your measurement system helps people understand how the business works, what drives outcomes, and where the leverage points are, gaming becomes both harder and less attractive. Understanding is the antidote to Goodhart's Law.”
This is the deeper insight that metric trees embody. A tree does not just organise metrics. It encodes a theory of how the business works: which inputs drive which outputs, how changes in one area ripple through to others, and where the causal relationships are strong or weak. When people interact with this model, they develop a systemic understanding that makes narrow optimisation feel obviously counterproductive. They can see that gaming leads generated will harm lead-to-opportunity rate, which will harm marketing-sourced revenue, which will harm total revenue. The tree does not just prevent gaming through surveillance. It prevents gaming through comprehension.
Goodhart's Law will never be fully overcome. As long as humans design metrics and other humans are evaluated against them, there will be a gap between the measure and the thing it represents. But the size of that gap is a design choice. Organisations that build rich, interconnected measurement systems, that pair quantity with quality, that use trees to expose causal relationships, and that supplement numbers with qualitative judgement, will find the gap small enough to live with. Those that rely on isolated targets and hope for the best will rediscover Goodhart's Law every quarter, wondering each time why the numbers look good and the business does not.
Build a measurement system that resists gaming
Goodhart's Law exploits isolated metrics. A metric tree connects every number to its neighbours, making gaming visible and genuine improvement the path of least resistance. Map your metrics, assign ownership, and see the system as a whole.