Control the inputs. Measure the outputs.
Input metrics vs output metrics
Input metrics measure the controllable activities your teams perform every day. Output metrics measure the business results those activities produce. Understanding the distinction, and connecting the two, is the difference between managing by hope and managing by system.
What are input metrics and output metrics?
Definition
An input metric measures a controllable activity or resource that a team directly influences through their work. An output metric measures a business result or outcome that emerges from the cumulative effect of many inputs. You do the inputs. You get the outputs.
The distinction between input metrics and output metrics is one of the most practical frameworks in performance management. It answers a question that every team faces: which numbers should I actually try to move today, and which numbers should I watch to see whether my efforts are working?
Input metrics are the things your teams do. The number of sales calls made, the number of features shipped, the number of support tickets resolved within SLA, the number of blog posts published. These are activities that someone in your organisation controls directly. If a sales development representative decides to make more calls tomorrow, that input metric goes up. There is a direct causal link between effort and measurement.
Output metrics are the results those activities produce. Revenue, retention rate, customer lifetime value, market share. No single person can decide to increase revenue tomorrow the way they can decide to make more calls. Output metrics are the consequence of hundreds of inputs interacting over time. They are essential for knowing whether the business is succeeding, but they are terrible for telling individuals what to do differently.
The most famous articulation of this distinction comes from Amazon. In his 2009 letter to shareholders, Jeff Bezos wrote that Amazon focuses its energy on "the controllable inputs to our business" because that is "the most effective way to maximise financial outputs over time". Amazon tracks roughly 500 measurable goals, and nearly 80% of them are input metrics. The company spends hundreds of hours iterating on which input metrics to use, refining them until each one genuinely predicts and drives the outputs that matter.
Input and output metrics compared
The table below summarises the key differences between input and output metrics. The most important distinction is controllability: input metrics respond directly to team effort, while output metrics are the downstream consequence of many inputs working together.
| | Input metrics | Output metrics |
|---|---|---|
| Definition | Controllable activities and resources that teams directly influence | Business results and outcomes that emerge from upstream inputs |
| Controllability | Directly controllable. A team can move the number today. | Not directly controllable. The number moves as a consequence of many inputs. |
| Timing | Real-time or near real-time. Changes are visible immediately. | Delayed. Results appear days, weeks, or months after the inputs that caused them. |
| Feedback loop | Fast. Teams see the effect of their actions quickly. | Slow. The gap between action and result obscures cause and effect. |
| Examples | Calls made, features shipped, blog posts published, tickets resolved, demos booked | Revenue, retention rate, NPS, profit margin, market share, customer lifetime value |
| Risk if used alone | Activity without accountability. Teams stay busy but never validate whether their inputs produce the intended results. | Delayed feedback. Problems surface too late to correct course. Teams cannot diagnose what went wrong. |
| Where they sit in a metric tree | Leaf nodes and lower branches. The operational edge of the business. | Root node and upper branches. The strategic outcomes the business exists to achieve. |
How input and output metrics relate to leading and lagging indicators
Input metrics and leading indicators are closely related concepts, but they are not identical. The same applies to output metrics and lagging indicators. Understanding the overlap, and the differences, prevents a common source of confusion.
Leading and lagging indicators are defined by timing: does the metric change before or after the outcome you care about? Input and output metrics are defined by controllability: can a team directly influence the metric, or is it an emergent result? In practice, most input metrics are also leading indicators, and most output metrics are also lagging indicators. But the overlap is not perfect.
Where they overlap
Sales calls made is both an input metric (the team controls it) and a leading indicator (it predicts future pipeline). Revenue is both an output metric (nobody controls it directly) and a lagging indicator (it reflects past activity). For most operational metrics, the two frameworks align neatly.
Where they diverge
Market sentiment is a leading indicator of future revenue, but it is not an input metric because your team does not control it. Conversely, the number of bugs shipped per release is an input metric (engineering controls code quality) but it is not meaningfully leading or lagging until you specify the outcome it predicts. The input/output lens focuses on control. The leading/lagging lens focuses on prediction.
Why both lenses matter
Use the input/output distinction when deciding what teams should focus their energy on. Use the leading/lagging distinction when building predictive models and early warning systems. A metric tree naturally accommodates both: inputs sit at the leaves, outputs sit at the root, and the causal chain from bottom to top maps leading indicators to lagging outcomes.
The practical takeaway is that you should not treat these as competing frameworks. They are complementary lenses on the same system. When someone says "focus on leading indicators", they almost always mean "focus on input metrics you can control that predict the output you care about". The language differs, but the underlying logic is the same: find the controllable upstream activities that reliably drive the downstream results, and pour your attention there.
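The two lenses can be combined into a simple classification. The sketch below is illustrative only: the `classify` helper and the example categorisations are assumptions for demonstration, not a standard taxonomy.

```python
# Hypothetical example: combining the two lenses described above.
# "controllable" is the input/output axis; "leading" is the
# leading/lagging axis. Metric names and labels are illustrative.

def classify(controllable: bool, leading: bool) -> str:
    """Combine the two lenses into a single descriptive label."""
    control = "input" if controllable else "output"
    timing = "leading" if leading else "lagging"
    return f"{control} metric, {timing} indicator"

metrics = {
    "sales calls made": classify(controllable=True, leading=True),
    "market sentiment": classify(controllable=False, leading=True),
    "revenue":          classify(controllable=False, leading=False),
}

for name, label in metrics.items():
    print(f"{name}: {label}")
```

Note that "market sentiment" lands in the leading-but-uncontrollable quadrant, which is exactly why the two frameworks cannot be collapsed into one.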
Amazon and the power of input metrics
No company has championed input metrics more visibly than Amazon. Understanding how Amazon uses them illustrates why the input/output distinction matters so much in practice.
Amazon organises its measurement system around what it calls "controllable input metrics". These are metrics that the company can directly influence through operational decisions: product selection, price, page load speed, delivery speed, in-stock rate. The output metrics (revenue, profit, and free cash flow) are tracked and reported to shareholders, but they are not the primary focus of internal management. As Bezos put it, Amazon spends little time discussing financial results internally because the company believes that focusing on controllable inputs is the most effective way to maximise financial outputs over time.
The discipline Amazon applies to selecting input metrics is instructive. The company does not simply pick obvious activity metrics and assume they drive results. It spends hundreds of hours iterating on each input metric to ensure it genuinely predicts and drives the output it is supposed to influence.
“We believe that focusing our energy on the controllable inputs to our business is the most effective way to maximise financial outputs over time.” — Jeff Bezos, 2009 Letter to Shareholders
A well-known example is how Amazon refined its "selection" metric. Initially, the company tracked the number of product detail pages as a proxy for selection breadth. But more pages did not necessarily mean better selection from the customer's perspective. The metric evolved to track the number of detail page views (pages customers actually visited), then to the percentage of detail page views where products were in stock, and finally to the percentage of detail page views where products were in stock and ready for two-day shipping. Each iteration brought the input metric closer to the customer experience it was supposed to drive.
This evolution reveals a crucial principle: not all input metrics are created equal. A poorly chosen input metric can drive the wrong behaviour just as easily as a well-chosen one can drive the right behaviour. The number of product pages is easy to game by listing low-quality or irrelevant products. The percentage of viewed products available for fast shipping is much harder to inflate without genuinely improving the customer experience.
Amazon reviews both input and output metrics together in its Weekly Business Review, a practice that ensures the connection between inputs and outputs is constantly tested. If the input metrics are improving but the output metrics are not, the causal hypothesis is wrong and the input metrics need to change. If both are moving together, the system is working as designed. This feedback loop between input and output is what makes the framework so powerful.
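That feedback loop can be sketched as a simple check: if an input metric is trending up while its paired output stays flat, flag the causal hypothesis for review. The trend measure (first-half vs second-half means) and the 10% threshold below are assumptions for illustration, not Amazon's method.

```python
# A minimal sketch of the input/output feedback check described above.
# The trend calculation and threshold are illustrative assumptions.

def trend(series: list[float]) -> float:
    """Relative change from the mean of the first half to the second half."""
    half = len(series) // 2
    first = sum(series[:half]) / half
    second = sum(series[half:]) / (len(series) - half)
    return (second - first) / first

def causal_link_suspect(inputs: list[float], outputs: list[float],
                        threshold: float = 0.10) -> bool:
    """True when the input improved materially but the output did not."""
    return trend(inputs) > threshold and trend(outputs) < threshold

calls = [100, 105, 110, 120, 130, 140]   # input metric: clearly improving
pipeline = [50, 52, 49, 51, 50, 52]      # paired output: flat

print(causal_link_suspect(calls, pipeline))  # True -> revisit the input metric
```

A `True` here does not mean the team worked badly; it means the hypothesis that this input drives this output needs to be retested, exactly the question a weekly review is designed to surface.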
How metric trees connect inputs to outputs
A metric tree is, at its core, a map from outputs to inputs. The output sits at the root of the tree. As you decompose it downward, each level becomes more controllable, more operational, and more input-like. The leaf nodes at the bottom of the tree are pure input metrics: the daily activities that individual teams and people perform. The trunk and upper branches are output metrics: the business results that emerge from the collective effect of all the inputs below.
This structure solves the central problem with both input and output metrics used in isolation. Input metrics without connection to outputs become activity for its own sake. Output metrics without connection to inputs become scorecards that nobody can act on. The tree connects them, showing exactly which inputs feed which outputs, and allowing anyone in the organisation to trace the path from their daily work to the business result it serves.
In the tree above, MRR is the output metric that the business ultimately cares about. Nobody can directly control MRR. But every leaf node is something a specific person can do more of, do better, or do faster. A sales development representative controls outbound sequences sent. A customer success manager controls QBRs delivered. A support engineer controls tickets resolved within SLA.
The middle layers of the tree are where input metrics begin to aggregate into intermediate outputs. Qualified pipeline is an output of outbound activity and discovery calls, but it is simultaneously an input to revenue. This is the same relativity that applies to leading and lagging indicators: every metric in the tree is an output of the metrics below it and an input to the metrics above it. The tree makes this chain of dependency visible, navigable, and ownable.
In KPI Tree, this structure is interactive. You can click on any output metric and trace downward to see every input that feeds it. When an output metric moves unexpectedly, you do not need a week-long investigation to find the cause. You walk down the tree and look at which inputs changed. When you want to improve an output, you identify which input levers have the most room to move and assign owners to move them.
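The tree-walking diagnosis described above can be sketched in a few lines. This is a toy model under stated assumptions: the `MetricNode` class, the node names (borrowed from the SaaS example), the week-over-week values, and the 5% materiality threshold are all illustrative, not KPI Tree's implementation.

```python
# A minimal sketch of a metric tree: outputs at the root, controllable
# inputs at the leaves. diagnose() walks down from an output that moved
# and reports which leaf inputs changed materially. All values invented.

class MetricNode:
    def __init__(self, name, current, previous, children=()):
        self.name = name
        self.current = current
        self.previous = previous
        self.children = list(children)

    def change(self) -> float:
        """Relative change since the previous period."""
        return (self.current - self.previous) / self.previous

    def diagnose(self, threshold=0.05):
        """Yield (name, change) for leaf inputs that moved materially."""
        if not self.children:            # leaf node: a pure input metric
            if abs(self.change()) >= threshold:
                yield self.name, self.change()
            return
        for child in self.children:      # branch: recurse toward the inputs
            yield from child.diagnose(threshold)

# Root output (MRR) fed by an intermediate metric and two leaf inputs.
tree = MetricNode("MRR", 98, 104, [
    MetricNode("qualified pipeline", 40, 46, [
        MetricNode("outbound sequences sent", 800, 1000),
        MetricNode("discovery calls completed", 60, 62),
    ]),
])

for name, delta in tree.diagnose():
    print(f"{name}: {delta:+.0%}")   # prints "outbound sequences sent: -20%"
```

The walk surfaces the one input that actually moved, which is the whole point: the decline in the root output is traced to a specific, ownable activity in minutes rather than weeks.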
Common mistakes with input and output metrics
The input/output framework is straightforward in theory. In practice, organisations make predictable mistakes that undermine its value.
1. Only tracking outputs
This is the most common failure mode. The executive team reviews revenue, retention, and profit every month, but nobody tracks the input metrics that drive them. When an output metric declines, the organisation has no diagnostic pathway. It can see that revenue fell but cannot identify whether the cause was fewer leads, lower conversion, smaller deal sizes, or higher churn. Without input metrics, every decline triggers an ad hoc investigation that takes weeks. With input metrics connected in a tree, the diagnosis takes minutes.
2. Only tracking inputs
The opposite failure mode, and equally dangerous. Teams celebrate the number of calls made, features shipped, or campaigns launched without ever checking whether those inputs produce the intended output. This is what Amazon calls "activity without results". A team can hit every input target and still fail to move the output, which means the causal hypothesis linking input to output is wrong. Input metrics must always be paired with output metrics that validate them.
3. Choosing input metrics that are easy to game
When an input metric becomes a target, people will find ways to move it without creating real value. If "calls made" becomes the target, representatives will make short, low-quality calls to inflate the number. If "features shipped" becomes the target, engineers will ship small, inconsequential features. The best input metrics are hard to improve without doing genuinely valuable work. Amazon learned this lesson when it refined its selection metric from "number of product pages" to "percentage of viewed products available for two-day shipping". The latter is much harder to game.
4. Assuming the causal link between input and output is permanent
The relationship between an input metric and its output can weaken or break over time. Blog posts published might reliably predict organic traffic for two years, then stop working as the competitive landscape changes. Input metrics need periodic validation against their outputs. If the input is improving but the output is flat, the causal chain has a broken link and the input metric needs to be revisited.
5. Treating mid-level metrics as pure inputs or pure outputs
Metrics in the middle of the causal chain are both inputs and outputs simultaneously. Qualified pipeline is an output of sales development activity and an input to closed revenue. Treating it as only one or the other creates blind spots. A metric tree avoids this by showing the full hierarchy, making it clear that most metrics occupy a position between the pure inputs at the leaves and the ultimate output at the root.
Input and output metrics by function
Every function in the business has its own set of input and output metrics. The key is to select input metrics that genuinely predict and drive the outputs that matter, rather than simply measuring whatever is easiest to count. The examples below illustrate what well-chosen input/output pairings look like across common business functions.
Sales
Outputs: closed revenue, quota attainment, average deal size. Inputs: discovery calls completed, proposals sent, multi-threaded deals progressed. The best sales input metrics measure quality of activity, not just volume. Tracking "proposals sent" is better than "emails sent" because it is closer to the output it predicts.
Marketing
Outputs: pipeline generated, customer acquisition cost, marketing-sourced revenue. Inputs: content pieces published, campaigns launched, landing page tests run. The gap between marketing inputs and outputs can be long, which makes it tempting to over-index on vanity metrics like impressions. Tie each input to the specific output it is supposed to move.
Product
Outputs: feature adoption rate, retention, net promoter score. Inputs: user research sessions conducted, experiments run, onboarding flows iterated. Product input metrics should measure learning velocity, not just shipping velocity. Shipping features nobody uses is an input that does not connect to the output.
Customer success
Outputs: net revenue retention, churn rate, customer lifetime value. Inputs: QBRs delivered, health check reviews completed, at-risk accounts contacted. Customer success input metrics should be weighted toward proactive engagement rather than reactive support, because prevention is more valuable than rescue.
Engineering
Outputs: system uptime, change failure rate, user-facing defects. Inputs: code reviews completed, automated test coverage added, incident retrospectives conducted. Engineering input metrics work best when they measure practices that improve quality rather than just speed of delivery.
See how your inputs drive your outputs
Build a metric tree that connects the controllable activities your teams perform every day to the business outcomes they produce. Map the causal chain, assign ownership at every level, and stop managing by outputs alone.