Finding the right reference points for your KPIs
How to benchmark your metrics
Benchmarking gives your metrics meaning by placing them in context. But poorly sourced benchmarks or blind comparisons can mislead just as easily as they inform. This guide covers how to find reliable benchmarks, which types to use, and when to ignore them entirely.
Why benchmarking matters
A metric without context is just a number. Knowing that your monthly churn rate is 3.5% tells you nothing until you understand whether that is typical for your industry, your stage, and your business model. A 3.5% churn rate might be excellent for a consumer subscription app and catastrophic for an enterprise SaaS company with annual contracts. The number is identical. The interpretation is entirely different.
This is the fundamental problem that benchmarking solves. It provides reference points that turn raw numbers into meaningful signals. Without benchmarks, teams default to one of two unhelpful patterns. Either they celebrate every improvement regardless of whether their performance is still below average, or they panic about every decline regardless of whether they remain above industry norms. Both responses waste energy and misdirect attention.
Benchmarking also serves a political function within organisations. When a product team claims their activation rate is strong, a benchmark transforms a subjective assertion into a testable claim. When leadership questions whether a 110% net revenue retention rate justifies further investment in customer success, industry data showing that top-quartile SaaS companies achieve 120% or higher provides the answer. Benchmarks do not eliminate judgement, but they ground it in evidence rather than opinion.
Key principle
Benchmarks exist to calibrate your judgement, not replace it. A metric that sits below an industry benchmark is not automatically a problem, and one that sits above is not automatically a strength. The value of benchmarking lies in the questions it prompts, not the answers it provides.
Three types of benchmarks
Not all benchmarks are created equal. The type you choose determines what you learn and how you should respond. Most organisations default to external peer benchmarks because they are the most intuitive, but the most useful benchmarking programmes draw on all three types and use them for different purposes.
| Type | What it compares | Best for | Watch out for |
|---|---|---|---|
| Internal historical | Your current performance against your own past performance | Tracking progress over time, identifying trends and regressions, evaluating the impact of specific initiatives | Anchoring to your own trajectory. If your past performance was poor, improving on it may still leave you well below market norms. Internal benchmarks cannot tell you whether your rate of improvement is fast enough. |
| Peer or industry | Your performance against companies of similar size, stage, industry, or business model | Calibrating expectations, identifying relative strengths and weaknesses, supporting board and investor conversations | Measurement inconsistency. Not everyone defines metrics the same way. One company might include free trial users in its churn denominator while another excludes them, making direct comparison misleading. |
| Best-in-class | Your performance against the top performers in your category or an adjacent one | Setting aspirational targets, understanding what is structurally possible, identifying capability gaps | Context blindness. The best-in-class company may have a fundamentally different cost structure, market position, or strategic focus. Benchmarking against them can set unrealistic expectations if you do not account for these differences. |
The most productive benchmarking exercises use internal historical data as the baseline, peer benchmarks to calibrate, and best-in-class benchmarks to inspire. Internal data tells you where you have been. Peer data tells you where you stand. Best-in-class data tells you what is possible. Each type answers a different question, and conflating them leads to poor decisions.
For example, if your customer acquisition cost has fallen 20% year-on-year, your internal benchmark looks positive. But if the industry median CAC for your segment has fallen 35% over the same period, your relative position has actually worsened. You are improving, but more slowly than the market. That distinction matters for resource allocation and strategic planning, and you only see it when you layer peer benchmarks on top of internal ones.
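That layering is simple arithmetic. A minimal sketch with made-up figures (all numbers here are illustrative, not real benchmark data) shows how an improving internal trend can hide a worsening relative position:

```python
# Hypothetical CAC figures: yours improved 20%, the market median improved 35%.
your_cac_last_year, your_cac_now = 500.0, 400.0
market_cac_last_year, market_cac_now = 520.0, 338.0

your_change = your_cac_now / your_cac_last_year - 1        # about -20%
market_change = market_cac_now / market_cac_last_year - 1  # about -35%

# Relative position: your CAC as a multiple of the market median.
ratio_then = your_cac_last_year / market_cac_last_year  # below 1.0 = better than median
ratio_now = your_cac_now / market_cac_now               # above 1.0 = worse than median

print(f"Your CAC moved {your_change:.0%}, the market moved {market_change:.0%}")
print(f"CAC vs market median went from {ratio_then:.2f}x to {ratio_now:.2f}x")
```

With these figures you start slightly better than the median and end meaningfully worse, despite a 20% internal improvement.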
How to find reliable benchmarks
The quality of a benchmarking exercise depends entirely on the quality of the data you compare against. Unfortunately, most publicly available benchmark data is produced by organisations with an incentive to attract attention rather than ensure accuracy. Vendor reports cherry-pick flattering data points. Survey-based studies suffer from self-selection bias because the companies that respond tend to be the ones performing well. Aggregated data sets often lack the segmentation needed to make meaningful comparisons.
This does not mean benchmarking is futile. It means you need to be deliberate about where your data comes from and how you evaluate its reliability.
1. Start with your own data
Your internal historical performance is the most reliable benchmark you have because you control the definitions, the measurement methodology, and the data quality. Before looking externally, establish your own baselines across at least four to six quarters. This gives you trend lines that account for seasonality and one-off events. When you later compare against external data, your internal baseline provides a sanity check.
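A baseline can be as simple as a robust centre plus a direction of travel. The sketch below uses invented quarterly churn figures; the median absorbs one-off spikes (such as a seasonal Q4) and a least-squares slope shows the trend:

```python
# Hypothetical churn rates over six quarters; the Q4 value may be a seasonal spike.
churn = [0.042, 0.038, 0.041, 0.051, 0.039, 0.036]

def baseline(series):
    """Median of the window -- robust to one-off events."""
    s = sorted(series)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def slope(series):
    """Least-squares slope per quarter: direction and pace of the trend."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

print(f"Baseline churn: {baseline(churn):.3f}, trend per quarter: {slope(churn):+.4f}")
```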
2. Prioritise platform-derived data over survey data
The best external benchmarks come from platforms that aggregate anonymised data from their user base rather than from surveys. Tools like ChartMogul, ProfitWell, and Benchmarkit draw on thousands of real companies and provide segmented benchmarks by ARR band, growth rate, and business model. This data is less prone to self-reporting bias because it is extracted from systems of record rather than questionnaires.
3. Segment aggressively
A benchmark that compares a £2M ARR startup against a £200M ARR enterprise is worse than useless. Always segment by company size, growth stage, business model, geography, and customer type. The more specific the comparison group, the more useful the benchmark. If a report does not offer segmentation, treat its headline numbers with scepticism.
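In practice, segmentation means filtering the benchmark data to your like-for-like peer group before comparing anything. A sketch, with an invented set of benchmark records and segmentation fields:

```python
# Hypothetical benchmark records; field names and values are illustrative.
benchmarks = [
    {"arr_band": "<5M",   "model": "plg",       "median_churn": 0.045},
    {"arr_band": "<5M",   "model": "sales-led", "median_churn": 0.025},
    {"arr_band": "5-20M", "model": "plg",       "median_churn": 0.032},
    {"arr_band": ">20M",  "model": "sales-led", "median_churn": 0.012},
]

def segment(records, **criteria):
    """Keep only records matching every segmentation criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

peers = segment(benchmarks, arr_band="<5M", model="plg")
print(peers)  # only the genuinely comparable group remains
```

The headline median across all four records would blend churn rates from 1.2% to 4.5%; the segmented figure is the only one worth comparing against.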
4. Check the methodology
Before citing any benchmark, understand how the metric was defined, how the sample was selected, and when the data was collected. A churn benchmark that includes involuntary churn from failed payments is not comparable to one that only counts voluntary cancellations. A retention benchmark from 2020 may not reflect current market conditions. Transparent methodology is a signal of trustworthy data.
5. Build a peer network
Some of the most valuable benchmarking data comes from informal peer groups where founders or functional leaders share metrics confidentially. These groups provide context that published reports cannot: the story behind the numbers, the trade-offs that produced them, and the initiatives that moved them. If you do not have a peer group, consider joining an industry association or a benchmarking consortium such as those run by APQC or SaaS-specific communities.
Common benchmarking mistakes
Benchmarking is deceptively simple in concept and surprisingly difficult in practice. The gap between "compare your numbers to someone else's" and "draw a valid conclusion from the comparison" is where most organisations stumble. The following mistakes are pervasive enough to deserve explicit attention.
Comparing different definitions
The same metric name can mean very different things. Net revenue retention might include or exclude downgrades. Customer count might include or exclude free tiers. Gross margin might be calculated before or after hosting costs. If you do not verify that you and your benchmark source are measuring the same thing in the same way, the comparison is noise dressed up as signal.
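The size of the definitional gap is easy to underestimate. This sketch, using invented counts, computes "churn" two defensible ways from identical raw data:

```python
# Hypothetical account counts for one month -- same data, two churn definitions.
paying_start = 1000
trial_start = 400
paying_cancelled = 35
trial_lapsed = 120

# Definition A: cancellations among paying customers only.
churn_paying_only = paying_cancelled / paying_start

# Definition B: all lost accounts over all accounts, free trials included.
churn_incl_trials = (paying_cancelled + trial_lapsed) / (paying_start + trial_start)

print(f"{churn_paying_only:.1%} vs {churn_incl_trials:.1%} from identical data")
```

Here the same month yields 3.5% under one definition and roughly 11% under the other. Neither number is wrong; comparing one against a benchmark built on the other is.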
Ignoring the context behind the number
A competitor with half your churn rate may also have twice your price point, locking customers into higher switching costs. A company with a lower CAC may be operating in a less competitive market or measuring a different stage of the funnel. Numbers alone tell you what someone achieved. They do not tell you how, or whether their approach is replicable in your context.
Benchmarking everything
Not every metric benefits from external comparison. Operational metrics that are highly specific to your product, architecture, or workflow have no meaningful external benchmark. Trying to benchmark them anyway wastes time and produces misleading conclusions. Focus your benchmarking effort on the metrics where external context genuinely changes how you interpret performance.
Treating benchmarks as targets
A benchmark tells you where the market sits. A target tells you where you want to be. These are different things. The industry median churn rate is not the right target for every company. A company investing heavily in customer success should aim well below the median. A company optimising for rapid growth might tolerate above-median churn temporarily. Benchmarks inform targets, but they should not dictate them.
Using stale data
Markets move. A benchmark from two years ago may reflect conditions that no longer exist. Customer acquisition costs have risen steadily across most digital channels. Retention benchmarks shifted significantly during and after the pandemic. If your benchmark data is more than twelve to eighteen months old, verify that it still reflects current conditions before drawing conclusions.
Cherry-picking flattering comparisons
It is tempting to benchmark against the segment where you look best. A company might compare its growth rate against mature enterprises while comparing its margins against early-stage startups. This produces a flattering but fictional picture. Honest benchmarking means choosing a consistent comparison group and accepting the full picture, including the metrics where you underperform.
Using metric trees to contextualise benchmarks
One of the most common benchmarking failures is comparing a top-level metric without understanding the drivers beneath it. Two companies can have the same revenue growth rate for entirely different structural reasons. One might be growing through new customer acquisition with high churn. The other might be growing through expansion revenue with minimal new logos. The headline number is identical, but the health of each business is radically different.
A metric tree solves this by decomposing each benchmarked metric into its component parts, letting you compare not just the outcome but the structure that produces it. When you benchmark revenue growth, the tree prompts you to also benchmark the drivers: customer acquisition rate, retention rate, expansion revenue, and ARPU. This structural comparison reveals where your performance is genuinely strong, where it is genuinely weak, and where a headline benchmark is masking an underlying problem.
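The two-companies example above can be sketched with a simplified decomposition (net growth = new + expansion − churned, over base revenue; all figures hypothetical):

```python
# Simplified driver decomposition; revenue figures are invented for illustration.
def growth_drivers(new_rev, expansion_rev, churned_rev, base_rev):
    return {
        "new": new_rev / base_rev,
        "expansion": expansion_rev / base_rev,
        "churned": churned_rev / base_rev,
        "net_growth": (new_rev + expansion_rev - churned_rev) / base_rev,
    }

# Same 20% headline growth, structurally different businesses.
a = growth_drivers(new_rev=400, expansion_rev=50, churned_rev=250, base_rev=1000)
b = growth_drivers(new_rev=100, expansion_rev=150, churned_rev=50, base_rev=1000)

print(f"A grows {a['net_growth']:.0%} while churning {a['churned']:.0%} of its base")
print(f"B grows {b['net_growth']:.0%} while churning {b['churned']:.0%} of its base")
```

Benchmarking only the headline would rate these two businesses identically; benchmarking the drivers exposes that one is replacing a quarter of its base every period.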
The tree also reveals which metrics should and should not be benchmarked externally. Leaf-level metrics that are highly specific to your product, market, or operational model often have no meaningful external comparison. Your onboarding completion rate depends on your product complexity, your user base, and your onboarding design. Benchmarking it against a different product in a different category produces noise rather than insight.
Conversely, structural metrics like conversion rates, retention rates, and unit economics tend to be more comparable across companies within the same segment. These are the metrics where external benchmarks add real value because they reflect fundamental dynamics of customer behaviour and business economics that are broadly consistent within a category.
The practical rule is this: benchmark the trunk and main branches of your metric tree externally, where the metrics are well-defined and widely measured. Benchmark the leaves internally, against your own historical performance. And for every external benchmark, walk down the tree to understand which sub-drivers are pulling your performance above or below the reference point. A metric tree turns benchmarking from a surface-level comparison into a diagnostic exercise.
Practical tip
When presenting benchmarks to your team or board, always show the metric tree decomposition alongside the benchmark comparison. A slide that says "our NRR is 105% vs the industry median of 110%" is far less useful than one that shows whether the gap is driven by higher churn, lower expansion, or both. The decomposition turns a judgement into a diagnosis.
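One way to sketch that diagnosis, modelling NRR as 1 + expansion − contraction − churn with illustrative driver values (not real benchmark data):

```python
# Hypothetical driver values; NRR modelled as 1 + expansion - contraction - churn.
def nrr(expansion, contraction, churn):
    return 1 + expansion - contraction - churn

ours = {"expansion": 0.12, "contraction": 0.02, "churn": 0.05}
median = {"expansion": 0.17, "contraction": 0.02, "churn": 0.05}

print(f"Our NRR {nrr(**ours):.0%} vs median {nrr(**median):.0%}")
for driver in ours:
    print(f"  {driver}: {ours[driver] - median[driver]:+.0%} vs median")
```

With these numbers the entire five-point gap sits in expansion, which points at pricing and upsell motion rather than at retention, a very different conversation than the headline alone would prompt.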
When to ignore benchmarks
Benchmarking is a tool for calibration, not a substitute for strategy. There are situations where the most productive response to a benchmark is to deliberately ignore it. Recognising these situations is as important as knowing how to benchmark well.
1. When you are pursuing a structurally different strategy
If your strategy depends on doing something fundamentally different from your peers, peer benchmarks may be actively misleading. A company that chooses to invest heavily in customer success to achieve best-in-class retention should not be concerned that its sales and marketing spend as a percentage of revenue is above the industry median. The elevated spend is the mechanism through which the strategy works. Benchmarking it against companies with a different strategic emphasis misses the point.
2. When the benchmark data is unreliable
If you cannot verify the methodology, sample size, or recency of a benchmark, it is safer to ignore it than to act on it. A benchmark based on a survey of 30 self-selected companies is not a benchmark; it is an anecdote with a sample size attached. Acting on unreliable data creates a false sense of certainty that is more dangerous than having no external reference point at all.
3. When you are creating a new category
Companies that are genuinely creating new markets or product categories have no meaningful peer group to benchmark against. In these cases, internal historical benchmarks and first-principles models are more useful than forcing a comparison with adjacent but fundamentally different businesses. The benchmark data that matters will emerge as the category matures.
4. When a benchmark would distort incentives
If hitting an external benchmark would require trade-offs that conflict with your values, your customer experience, or your long-term strategy, the benchmark is the wrong reference point. A company that prides itself on premium support should not cut its support costs to match an industry median that includes companies with no live support at all. The benchmark is accurate. It is just irrelevant to the strategic choice you have made.
5. When you are already an outlier for good reasons
If your metric significantly exceeds the best-in-class benchmark, that is usually a signal of genuine competitive advantage, not a reason to ease off. Conversely, if your metric is below the benchmark because you are investing in a capability that has not yet matured, the benchmark may cause premature alarm. In both cases, understanding why you are an outlier matters more than the fact that you are one.
“The most dangerous benchmarks are the ones that are accurate, well-sourced, and completely irrelevant to your situation. A good benchmark applied without judgement is worse than no benchmark at all.”
A practical benchmarking process
Benchmarking should be a recurring discipline, not a one-off exercise. Markets shift, your business evolves, and the comparison group that was relevant last year may not be relevant next year. The following process provides a repeatable framework that avoids the most common pitfalls while keeping the effort proportional to the value.
1. Select five to eight metrics that matter most
Do not try to benchmark everything. Focus on the metrics that are most important for your current strategic priorities and where external context would genuinely change how you interpret performance. For most companies, this includes some combination of revenue growth, retention, unit economics, and one or two operational efficiency metrics. Use your metric tree to identify which trunk and branch-level metrics are most comparable across companies in your segment.
2. Establish internal baselines first
Before looking externally, document your own performance over the past four to six quarters. Calculate trend lines, note seasonality, and flag any one-off events that distort the data. This gives you a clear picture of your own trajectory and prevents external benchmarks from overshadowing genuinely positive or negative internal trends.
3. Source two to three external reference points per metric
For each metric, find at least two independent sources of benchmark data. Compare them against each other. If they broadly agree, you can have reasonable confidence in the range. If they diverge significantly, investigate why before using either. Prioritise platform-derived data, well-documented industry reports, and peer group discussions over vendor marketing materials and unsegmented surveys.
4. Present benchmarks as ranges, not points
Industry data is inherently imprecise. Present benchmarks as ranges showing the 25th, 50th, and 75th percentiles, and plot your own performance within that range. This communicates the distribution of performance rather than suggesting a single "correct" number. It also helps teams understand whether closing the gap to the median is a realistic objective or whether they are already performing well within the distribution.
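A range presentation can be computed directly from whatever peer figures you have gathered. A sketch using Python's standard-library `statistics.quantiles` and invented peer NRR values:

```python
from statistics import quantiles

# Hypothetical peer NRR figures assembled from two or three sources.
peer_nrr = [0.92, 0.98, 1.01, 1.04, 1.06, 1.08, 1.10, 1.13, 1.18, 1.25]
our_nrr = 1.05

p25, p50, p75 = quantiles(peer_nrr, n=4)  # quartile cut points
print(f"Peer NRR range: {p25:.0%} / {p50:.0%} / {p75:.0%}; ours: {our_nrr:.0%}")

if our_nrr < p25:
    position = "bottom quartile"
elif our_nrr < p50:
    position = "below median"
elif our_nrr < p75:
    position = "above median"
else:
    position = "top quartile"
print(f"We sit {position} within the distribution")
```

Plotting your number inside the quartile band, rather than against a single point, makes the "is closing the gap realistic?" conversation much easier to have.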
5. Review and refresh quarterly
Set a quarterly cadence for reviewing your benchmarked metrics. Update external data annually or whenever a major new report is published. Use each review to ask three questions: has our position relative to the benchmark changed? Do we understand why? And does the benchmark itself still represent the right comparison group? Over time, this builds an institutional understanding of where you sit relative to the market and why.
The goal of this process is not to produce a benchmarking report that sits in a shared drive. The goal is to build a habit of contextualised performance assessment. When your leadership team discusses a metric, the conversation should naturally include both the internal trend and the external reference point. When a team proposes a target, they should be able to articulate how it relates to the relevant benchmark. When you miss a target, the benchmark helps you distinguish between a team underperforming and a market shifting. That is the practical value of benchmarking done well: it makes every conversation about metrics a little more grounded and a little more productive.
Put your benchmarks in context
A metric tree decomposes every benchmark into its structural drivers, showing you not just where you stand but why. Stop comparing headline numbers in isolation. Start diagnosing what is actually driving your performance.