KPI Tree

Most KPI dashboards display data. The good ones drive action.

How to build a KPI dashboard that drives decisions

The average organisation has dozens of dashboards, yet leadership still cannot answer basic questions about why a metric changed. The problem is not the tooling. It is the approach. Dashboards built without a structural foundation become walls of charts that nobody trusts and everyone ignores. This guide walks through how to build KPI dashboards that are grounded in a metric hierarchy, designed for a specific audience, and stripped of everything that does not drive a decision.



Why most KPI dashboards fail

Building a dashboard is easy. Building one that people actually use to make decisions is remarkably hard. Most organisations discover this the painful way: they invest weeks in a beautifully designed dashboard, launch it with fanfare, and watch usage drop to near zero within a month. The dashboard is not broken. It is simply not useful.

The root cause is almost always the same. The dashboard was designed around data availability rather than decision-making needs. Someone asked "what data do we have?" instead of "what decisions does this audience need to make, and what information would support those decisions?" The result is a dashboard that reflects the structure of the data warehouse, not the structure of the business.

This distinction matters because data structures and decision structures are fundamentally different. A data warehouse organises information by source system: marketing data here, product data there, finance data somewhere else. But the decisions people make cut across those boundaries. An executive trying to understand why revenue dropped needs to see acquisition, conversion, and retention on the same screen, not on three different dashboards owned by three different teams.

The second failure mode is overloading. When the brief is vague, the natural response is to include everything. Every metric that might be relevant gets a chart. The dashboard grows to 30, 40, or 50 tiles, and the signal disappears in the noise. Research on cognitive load consistently shows that people can effectively process five to seven items at a time. A dashboard with 40 charts is not a dashboard. It is a report disguised as a dashboard, and reports serve a different purpose entirely.

No metric hierarchy

Every chart sits at the same level with no parent-child relationships. Revenue is next to email open rate. Customer lifetime value is next to page load time. Without hierarchy, there is no way to tell which metrics drive which outcomes, and the viewer is left to reconstruct the logic in their head every time they look at the screen.

Wrong audience

A single dashboard tries to serve executives, managers, and analysts simultaneously. Executives drown in operational detail. Analysts cannot drill into the data they need. The compromise satisfies nobody because each audience has fundamentally different questions, time horizons, and levels of detail.

No context for numbers

Metrics are displayed as isolated numbers without targets, trends, or comparisons. A conversion rate of 3.2% means nothing in isolation. Is that good? Bad? Improving? Declining? Without context, every number on the dashboard raises a question instead of answering one.

No path to action

The dashboard shows what happened but provides no pathway to understanding why or deciding what to do. A red number creates anxiety. A green number creates complacency. Neither drives a specific action because the dashboard does not connect the symptom to its cause or the cause to a responsible owner.

Start with structure, not charts

The single most effective thing you can do before building a KPI dashboard is to build the metric tree that sits behind it. A metric tree decomposes your most important outcome into its drivers, and those drivers into their sub-drivers, creating a hierarchy that shows how every metric in the business connects to the top-level result.

This matters for dashboard design because the tree answers three questions that most dashboard projects skip entirely. First, which metrics belong on this dashboard? The tree makes the answer obvious: pick the branch that corresponds to this audience. Second, how should the metrics be arranged? The tree defines the hierarchy, so the layout follows the parent-child structure rather than an arbitrary grid. Third, what is missing? The tree reveals gaps in your measurement that would otherwise go unnoticed until someone asks a question the dashboard cannot answer.

Consider the difference between two approaches. In the first, a product team sits down and brainstorms which metrics belong on their dashboard. They end up with 15 charts covering everything from daily active users to server response time. The dashboard is comprehensive but incoherent because the metrics were selected individually, not as a connected system.

In the second approach, the product team starts with their branch of the metric tree. Their North Star is monthly active users. That decomposes into new user acquisition, activation rate, and retention rate. Each of those decomposes further into operational metrics the team controls. The dashboard shows the branch, organised by level, with the outcome at the top and the drivers beneath it. Every metric on the screen has a reason for being there, and the hierarchy tells the viewer where to look when something moves.
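As a rough sketch, the branch described above can be represented as a small tree structure in code. The metric names come from the example; the `Metric` class, the owner names, and the traversal method are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A node in the metric tree: an outcome and the drivers beneath it."""
    name: str
    owner: str  # who is accountable for this metric (names below are hypothetical)
    children: list["Metric"] = field(default_factory=list)

    def depth_first(self):
        """Yield this metric and every metric beneath it."""
        yield self
        for child in self.children:
            yield from child.depth_first()

# The product team's branch: monthly active users decomposes into
# acquisition, activation, and retention, each with an accountable owner.
mau = Metric("monthly_active_users", "head_of_product", [
    Metric("new_user_acquisition", "growth_lead", [
        Metric("signups_per_week", "growth_lead"),
    ]),
    Metric("activation_rate", "onboarding_lead"),
    Metric("retention_rate", "lifecycle_lead"),
])

print([m.name for m in mau.depth_first()])
```

Because every metric carries an owner and a position in the hierarchy, the dashboard layout and the escalation path can both be derived from this one structure.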

The tree-first principle

Build the metric tree before you build the dashboard. The tree defines which metrics matter, how they relate to each other, and who is responsible for each one. The dashboard is simply a visual layer on top of that structure. If you skip the tree, you are designing a layout without a blueprint.

Design for your audience

A dashboard designed for everyone is a dashboard designed for no one. The most common mistake in dashboard design is building a single view that attempts to serve executives, department heads, team leads, and individual contributors simultaneously. Each of these audiences has different questions, different time horizons, and different thresholds for detail. Serving them all from one screen means compromising on everything.

The metric tree makes audience segmentation straightforward. Each level of the tree corresponds to a different audience. Executives look at the top of the tree: the North Star and its first-level drivers. Department heads look at their branch: the drivers they are responsible for and the sub-drivers that feed into them. Team leads and individual contributors look at the operational metrics at the leaves of their branch.

This is not about restricting access. Everyone should be able to see the full tree if they want to. It is about designing the default view for each audience so that when they open their dashboard, they see the five to seven metrics most relevant to their decisions, arranged in a hierarchy that makes the relationships obvious.

| Audience | What they need | Metrics (from the tree) | Refresh cadence |
| --- | --- | --- | --- |
| Executive / Board | Strategic health at a glance. Can I see whether we are on track in 30 seconds? | North Star + 3-4 first-level drivers with targets, trends, and year-over-year comparisons | Monthly or quarterly |
| Department head | Branch performance. Which drivers in my area are on track and which need attention? | 5-7 metrics from their branch, with drill-down into sub-drivers | Weekly or fortnightly |
| Team lead | Operational levers. Are the inputs I control moving in the right direction? | Leading indicators and activity metrics at the leaf level of their branch | Daily or weekly |
| Individual contributor | Personal performance. Am I making progress on the metric I own? | 1-3 owned metrics with targets and recent trend | Daily |

Notice that the number of metrics decreases as you move up the organisation, while the time horizon increases. This is intentional. Executives should not be looking at daily operational metrics because the noise-to-signal ratio at that cadence is too high for strategic decisions. Equally, individual contributors should not be tracking quarterly strategic outcomes because the feedback loop is too slow to guide their daily work.

The practical implication is that most organisations need three to four dashboards, not one. An executive dashboard, a department dashboard for each major function, and team-level dashboards for operational monitoring. Each one is a different slice of the same underlying metric tree, which ensures consistency. The numbers align because they come from the same model. The hierarchy is consistent because it is defined once in the tree and reflected in every view.
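One way to derive each audience's default view from the same underlying tree is to slice a branch to a maximum depth: executives see the root plus first-level drivers, teams see down to the operational leaves. A minimal sketch, assuming the branch is held as nested dicts (the metric names are illustrative):

```python
# A branch of the metric tree as nested dicts (names are illustrative).
tree = {
    "name": "monthly_active_users",
    "children": [
        {"name": "new_user_acquisition",
         "children": [{"name": "signups_per_week", "children": []}]},
        {"name": "activation_rate", "children": []},
        {"name": "retention_rate", "children": []},
    ],
}

def audience_view(node, max_depth, depth=0):
    """Collect metric names from the branch root down to max_depth.
    Depth 1 is the executive slice; deeper slices add operational metrics."""
    if depth > max_depth:
        return []
    names = [node["name"]]
    for child in node["children"]:
        names += audience_view(child, max_depth, depth + 1)
    return names

print(audience_view(tree, 1))  # executive slice: North Star + first-level drivers
print(audience_view(tree, 2))  # team slice: includes the operational leaves
```

Because every view is a slice of one tree, the numbers and the hierarchy stay consistent across the executive, department, and team dashboards.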

Dashboard design principles that matter

Once you have the right metrics for the right audience, the design of the dashboard itself determines whether people will actually use it. Visual design is not decoration. It is a communication system. Every layout choice, colour decision, and chart type either helps or hinders the viewer in extracting meaning from the data. The following principles are grounded in information design research and reflect what consistently works across organisations of different sizes and industries.

1. Put the most important metric in the top-left corner

   Eye-tracking research consistently shows that viewers scan dashboards in an F-pattern or Z-pattern, starting from the top left. Your North Star metric or the single most important KPI for this audience belongs there. Everything else flows from it. If someone glances at your dashboard for five seconds and only registers one number, it should be that one.

2. Show every metric with context

   A number without context is a number without meaning. Every metric on the dashboard should include at least two of the following: a target or target range, a trend line showing recent direction, a comparison to the same period last year, or a status indicator (on track, at risk, off track). The goal is to eliminate the question "is this good or bad?" before the viewer has to ask it.

3. Use colour as a signal, not as decoration

   Colour should encode exactly one thing: status. Use a neutral palette for the dashboard chrome and reserve colour for meaning. Green for on track, amber for at risk, red for off track. Resist the temptation to make each chart a different colour for visual variety. When colour is used inconsistently, it becomes noise rather than signal, and the viewer has to work harder to extract meaning.

4. Limit each dashboard to five to nine metrics

   Cognitive load research is unambiguous: people cannot effectively process more than about seven items simultaneously. A dashboard with 30 charts is a wall of noise. If you need more than nine metrics for an audience, split them across multiple views with clear navigation rather than cramming everything onto one screen. A dashboard that shows less but communicates more is always better.

5. Match the chart type to the question

   Line charts answer "how has this changed over time?" Bar charts answer "how do these categories compare?" Scorecards answer "what is the current value relative to a target?" Pie charts almost never answer anything useful. Choose the chart type based on the question the viewer is trying to answer, not based on what looks visually appealing. A mismatched chart type forces the viewer to do mental translation, which slows comprehension and increases the chance of misinterpretation.

6. Group related metrics visually

   Metrics that are related in the tree should be adjacent on the dashboard. If conversion rate is a driver of revenue, those two metrics should be near each other so the viewer can see the relationship at a glance. Use whitespace, borders, or section headers to create visual groupings that reflect the tree structure. The layout should make the hierarchy feel intuitive even to someone who has never seen the metric tree.
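The colour-as-status rule above reduces to a single mapping from value and target to one of three states. This is an illustrative sketch only: the 10% at-risk band and the assumption that higher values are better are hypothetical choices, not standards:

```python
def status(value, target, at_risk_band=0.1):
    """Map a metric value to a single status colour.
    Assumes higher is better. At or above target: green.
    Within the at-risk band below target: amber. Otherwise: red.
    The 10% band is an illustrative threshold, not a standard."""
    if value >= target:
        return "green"   # on track
    if value >= target * (1 - at_risk_band):
        return "amber"   # at risk
    return "red"         # off track

print(status(3.4, 3.2))  # green: at or above target
print(status(3.0, 3.2))  # amber: within 10% of target
print(status(2.5, 3.2))  # red: well below target
```

Metrics where lower is better (page load time, churn) would need the comparison inverted, which is a good argument for storing the direction of "good" alongside each metric's definition in the tree.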

What to include and what to leave out

The hardest part of dashboard design is not deciding what to put on the screen. It is deciding what to leave off. Every metric you add dilutes the attention available for every other metric. The discipline of exclusion is what separates a useful dashboard from a data dump.

The metric tree provides the filtering mechanism. For any given audience, the relevant metrics are the ones on their branch of the tree, at the appropriate level of depth. Everything else is noise for that audience, even if it is perfectly valid data. The executive dashboard does not need page load time. The engineering dashboard does not need customer acquisition cost. Each audience gets the slice of the tree that supports their decisions and nothing more.

| Include | Leave out |
| --- | --- |
| Metrics with a clear owner who will act on them | Metrics nobody is accountable for (they will be ignored regardless) |
| Leading indicators that provide early warning of change | Lagging indicators that cannot be influenced at this level (put them on a higher-level dashboard) |
| Metrics with defined targets or target ranges | Metrics you track "just in case" but have no target for |
| Metrics that change at the cadence of this dashboard | Metrics that move too slowly to show meaningful change between reviews |
| Metrics that are direct drivers in the tree for this audience | Vanity metrics that look impressive but do not inform decisions |
| Contextual elements: targets, trends, comparisons | Decorative elements: 3D charts, excessive colour, unnecessary animation |

“The test for every metric on a dashboard is simple: if this number changes tomorrow, will the person looking at this dashboard take a specific action? If the answer is no, the metric does not belong here.”

One common objection is "but what if someone needs that data?" The answer is that removing a metric from a dashboard does not mean removing it from the organisation. It means moving it to the appropriate place. Detailed operational data belongs in operational views. Exploratory analysis belongs in a query tool. Historical trends belong in a report. The dashboard is the top of the information hierarchy: it shows the metrics that demand regular attention from a specific audience. Everything else should be accessible but not prominent.
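The inclusion criteria in the table (clear owner, defined target, matching cadence) can be expressed as a simple filter over candidate metrics. A hedged sketch: the field names, the `DASHBOARD_CADENCE` constant, and the candidate metrics are all hypothetical:

```python
DASHBOARD_CADENCE = "weekly"  # this dashboard's review rhythm (hypothetical)

def belongs_on_dashboard(metric):
    """Apply the inclusion tests from the table above: a metric needs
    an accountable owner, a defined target, and a cadence that matches
    the dashboard's review rhythm."""
    return (metric.get("owner") is not None
            and metric.get("target") is not None
            and metric.get("cadence") == DASHBOARD_CADENCE)

candidates = [
    {"name": "activation_rate", "owner": "onboarding_lead",
     "target": 0.4, "cadence": "weekly"},
    {"name": "page_load_time", "owner": None,   # no owner, no target: excluded
     "target": None, "cadence": "daily"},
]
kept = [m["name"] for m in candidates if belongs_on_dashboard(m)]
print(kept)  # ['activation_rate']
```

Excluded metrics are not deleted; they simply live in the operational views, reports, and query tools described below, where they can be pulled up on demand.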

Common dashboard anti-patterns

Knowing what to do is useful. Knowing what to avoid is equally valuable, because anti-patterns are often invisible to the people practising them. These patterns appear in nearly every organisation and are so normalised that teams rarely question them until someone points out the cost.

The "everything" dashboard

Forty metrics, twelve chart types, no hierarchy. This dashboard was designed by committee, with every stakeholder adding "just one more metric" until the screen became unreadable. The owner is proud of its comprehensiveness. Nobody uses it because it takes twenty minutes to find anything and the viewer has no idea where to start. The fix: split it into audience-specific views of three to seven metrics each.

The "so what?" dashboard

Every metric is displayed as a number without a target, trend, or comparison. Conversion rate is 3.2%. Revenue is 1.4 million. There is no way to tell whether these numbers are good, bad, improving, or declining. Every single number raises a question instead of answering one. The viewer leaves the dashboard knowing less than when they arrived because the numbers without context create more uncertainty. The fix: add targets, trends, and status indicators to every metric.

The "orphan metrics" dashboard

A collection of metrics that have no relationship to each other. NPS sits next to server uptime sits next to marketing qualified leads. There is no model connecting them, no hierarchy, and no way to trace cause and effect. Each metric is an island, and the dashboard is an archipelago with no map. The fix: use the metric tree to define relationships before designing the layout.

The "pretty but useless" dashboard

Gradient backgrounds, animated transitions, 3D pie charts, and custom colour palettes. The dashboard won a design award but nobody can find the information they need. Visual appeal was prioritised over information density. Every design choice that does not aid comprehension is a design choice that hinders it. The fix: strip the dashboard back to essential elements and use design to serve clarity, not aesthetics.

The "stale" dashboard

Built six months ago with great enthusiasm, it now shows data that is three weeks old. The data pipeline broke and nobody noticed because nobody was looking at the dashboard. Stale dashboards are worse than no dashboards because they create a false sense of being informed. The fix: assign an owner to the dashboard itself, not just the metrics, and set up alerts for data freshness.
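A data-freshness alert of the kind suggested here can be as simple as comparing the last refresh timestamp against the dashboard's expected cadence. A minimal sketch, assuming refresh timestamps are recorded in UTC:

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_refreshed, max_age=timedelta(days=1)):
    """Flag a dashboard whose data is older than its refresh cadence,
    so the dashboard's owner hears about it before viewers do.
    max_age should match the cadence: a day for daily dashboards,
    a week for weekly ones."""
    return datetime.now(timezone.utc) - last_refreshed > max_age

# A daily dashboard last refreshed three days ago is stale.
three_days_ago = datetime.now(timezone.utc) - timedelta(days=3)
print(is_stale(three_days_ago))               # True
print(is_stale(datetime.now(timezone.utc)))   # False
```

Running a check like this on a schedule and routing failures to the dashboard's owner closes the gap where a broken pipeline goes unnoticed because nobody happens to be looking.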

The "competing truth" dashboards

Marketing, product, and finance each built their own dashboard with their own definition of the same metric. Revenue is calculated three different ways. Conversion rate has three different denominators. When the numbers disagree, credibility collapses and meetings become arguments about data rather than discussions about strategy. The fix: establish a single metric tree as the authoritative source of definitions and derive all dashboards from it.

From dashboard to decision system

A well-built dashboard is necessary but not sufficient. The dashboard shows you what is happening. The decision system around it determines whether anyone does something about it. Without a system, even a perfectly designed dashboard becomes a passive display that people glance at occasionally and rarely act on.

The system has four components. First, a review cadence. Each dashboard needs a scheduled review where the audience examines the metrics, discusses deviations from targets, and decides on actions. Without a rhythm, reviews happen only when something goes obviously wrong, which means you are always reacting and never anticipating.

Second, an escalation path. When a metric moves outside its target range, who investigates? What is the threshold for escalation? These rules should be defined before the dashboard goes live, not invented in the moment when something breaks. The metric tree provides a natural escalation path: trace the issue down the tree to find the driver, then route it to the owner of that branch.
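Tracing an issue down the tree to the responsible owner can be sketched as a recursive walk that follows off-track drivers until it reaches the deepest one. The metric names, owners, and status values below are illustrative:

```python
# A branch with an owner and a status per metric (all names illustrative).
tree = {
    "name": "revenue", "owner": "cfo", "status": "red",
    "children": [
        {"name": "conversion_rate", "owner": "product_lead",
         "status": "red", "children": [
             {"name": "checkout_completion", "owner": "checkout_team",
              "status": "red", "children": []}]},
        {"name": "traffic", "owner": "marketing_lead",
         "status": "green", "children": []},
    ],
}

def escalate(node):
    """Follow off-track drivers down the tree and return the deepest
    off-track metric and its owner: the natural escalation target.
    Assumes the node passed in is itself off track."""
    for child in node["children"]:
        if child["status"] != "green":
            return escalate(child)
    return node["name"], node["owner"]

print(escalate(tree))  # ('checkout_completion', 'checkout_team')
```

The walk encodes the article's point: the tree turns "revenue is red" from a room-wide argument into a routed question for one named owner.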

Third, action tracking. When the review identifies a problem and someone commits to an action, that action needs to be recorded against the metric it addresses. Over time, this creates organisational memory: a record of what the team tried, what worked, and what did not. Without action tracking, the same problems get investigated from scratch every time they recur.

Fourth, regular pruning. Dashboards accumulate metrics over time as new requests come in and old metrics are never removed. Schedule a quarterly review of the dashboard itself to ask: is every metric here still driving a decision? Are the targets still relevant? Has the audience changed? A dashboard that is not maintained degrades into the anti-patterns described above.

The complete picture

A KPI dashboard is one layer of a larger system. The metric tree provides the structural model: which metrics exist, how they connect, and who owns them. The dashboard provides the visual layer: the right metrics for the right audience in a format optimised for quick comprehension. The review cadence provides the operational layer: the rhythm of examining metrics, deciding on actions, and tracking results. All three layers are needed. A dashboard without a tree has no structure. A tree without a dashboard has no visibility. Either without a review cadence has no action.

Build dashboards that drive decisions, not confusion.

Start with a metric tree that defines your KPI hierarchy, assigns ownership, and connects every metric to the outcome it serves. Then design dashboards as views of that tree, tailored to each audience. The result is fewer dashboards that do more.
