KPI Tree

Agentic analytics: fast answers, missing context

Every major analytics vendor now claims the "agentic" label. Gartner has published a Market Guide. The premise is compelling: AI agents that autonomously explore data, build queries, and surface insights without waiting for a human analyst. But speed without direction is just expensive noise. This guide examines what agentic analytics does well, where it falls short, and why the missing piece is not better models but better business context.


What is agentic analytics?

Agentic analytics describes a class of AI systems where autonomous agents interact with data on behalf of a user. Rather than waiting for someone to write a query, build a chart, or configure a dashboard, an AI agent receives a natural language question, determines which data sources are relevant, constructs the appropriate queries, executes them, interprets the results, and presents a synthesised answer. The agent can iterate: if the first query does not answer the question, it refines its approach, explores adjacent data, and follows threads without human intervention.

This is a genuine step forward from the chatbot-over-a-database pattern that preceded it. Earlier generations of analytics AI could translate a question into SQL and return a result. Agentic systems go further. They plan multi-step investigations, maintain context across a conversation, and can orchestrate across multiple data sources. An agent might query your data warehouse for revenue trends, cross-reference with your CRM for deal pipeline changes, and check your product analytics for usage pattern shifts, all from a single question.

The promise is real. Analysts spend a significant portion of their time on routine investigative work: pulling data, joining tables, segmenting results, and formatting outputs. If an AI agent can handle the mechanical parts of analysis, human analysts can focus on the interpretive and strategic work that requires judgement, context, and creativity. The question is not whether agentic analytics is useful. It is whether it is sufficient.

The core idea

Agentic analytics uses AI agents that autonomously plan, query, and interpret data to answer business questions. The agent handles the mechanical work of analysis. The open question is what happens after the answer arrives.

Why every vendor is racing to claim this label

In the space of twelve months, virtually every analytics platform has rebranded around the word "agentic." Self-serve BI tools, semantic layers, collaborative notebooks, embedded analytics platforms: they have all adopted the label with varying degrees of substance behind it. This convergence is not coincidental. It is driven by three forces.

First, Gartner validation. When Gartner publishes a Market Guide for a category, it signals to enterprise buyers that the category is real, investable, and worth evaluating. Vendors that are not in the guide risk being excluded from shortlists. The 2026 Market Guide for Agentic Analytics created a land grab. Every vendor needed to be inside the category or risk being perceived as a generation behind.

Second, the AI arms race. Large language models have made it possible to build conversational interfaces over data that genuinely work. The underlying technology, retrieval-augmented generation over structured data, has matured enough that the basic capability is table stakes. If your competitor offers natural language querying and you do not, you look outdated regardless of what else your platform does well. The pressure to ship an agentic feature is existential, not optional.

Third, the dashboard fatigue narrative. Organisations have spent a decade building dashboards and are increasingly vocal about the limitations. Dashboard sprawl, metric inconsistency, the diagnostic gap between seeing a number and understanding it: these are well-documented problems. Agentic analytics positions itself as the answer to dashboard fatigue, replacing static charts with dynamic, conversational exploration. The narrative is appealing because the pain is real.

The result is a market where every vendor claims the same label but delivers different things. Some offer genuine multi-step agents that plan and execute complex investigations. Others have wrapped a chatbot around their existing query engine and called it agentic. Buyers face the challenge of distinguishing substance from positioning in a category where the definition is still forming.

None of this means the category is hollow. The underlying capability is real and valuable. But the rush to claim the label has outpaced the work of understanding what agents actually need to be useful beyond answering questions. Answering questions faster is an improvement. Driving better decisions is a transformation. The gap between the two is where the interesting problems live.

What agentic analytics does well

Before examining the limitations, it is worth being precise about what agentic analytics genuinely solves. The capabilities are real, and dismissing them would be dishonest. For organisations drowning in data but starving for insight, agentic analytics addresses several painful bottlenecks.

Faster time to answer

A question that previously required filing a ticket with the analytics team, waiting in a queue, and receiving a chart two days later can now be answered in seconds. The agent translates natural language into queries, executes them, and returns a synthesised response. This compresses the cycle from question to answer from days to minutes, which means decisions can be informed by data rather than intuition or stale reports.

Democratised data access

Business users who cannot write SQL or navigate a BI tool can now ask questions in plain English and receive meaningful answers. This removes the analyst bottleneck that constrains most organisations. Marketing managers, sales leaders, and operations heads can explore data directly without waiting for a technical intermediary. The data team shifts from answering routine questions to building the infrastructure that makes self-serve exploration reliable.

Multi-source exploration

Agents can orchestrate queries across multiple data sources in a single investigation. Rather than requiring a user to know which database contains which data, the agent navigates the data landscape autonomously. It can join insights from a warehouse, a CRM, and a product analytics platform without the user needing to understand the underlying architecture. This is particularly valuable in organisations with fragmented data stacks.

Iterative investigation

Unlike a static dashboard or a single query, an agent can follow a thread. If the first answer raises a follow-up question, the agent can refine its approach, explore a different dimension, or drill into a specific segment. This mimics the iterative process that good analysts follow but at machine speed. The conversation builds context over multiple turns, narrowing from a broad question to a specific insight.

These are meaningful improvements to the analytics workflow. Any organisation that adopts agentic analytics will see faster answers, broader data access, and fewer routine requests landing on the analytics team. The technology delivers what it promises at the query layer.

The question is what happens next. An agent tells you that revenue dropped 8% last week, concentrated in the enterprise segment, driven by a decline in expansion deals. That is a good answer. But now what? Who should act on it? Is the enterprise segment decline a symptom of a deeper problem or an isolated event? Has this happened before, and if so, what was tried? Did it work? The answer gets you to the starting line. It does not run the race.

The gap: answers without action

The fundamental limitation of agentic analytics is not technical. It is structural. AI agents operate on data. They query tables, compute aggregates, identify trends, and detect anomalies. What they cannot do is understand the business context that turns an observation into an action. This is not a limitation of current models that will be solved by the next generation. It is a limitation of what the agent has access to.

Consider a concrete example. An agent detects that churn rate has increased by 1.5 percentage points this month, concentrated in mid-market accounts that joined in the last six months. This is a correct and useful observation. A good analyst would reach the same conclusion, though more slowly. But the agent has no way to answer the questions that matter next.

1. A causal model of how metrics relate

    The agent knows that churn went up, but it does not know that churn is driven by product engagement, which is driven by onboarding completion, which is driven by time-to-value. Without a causal model, the agent cannot trace the symptom to its root cause. It can correlate, but correlation without causation leads to interventions that treat symptoms rather than causes. A metric tree encodes these causal relationships explicitly, giving any system, human or AI, a map from outcome to driver.

2. Ownership of who is responsible

    The agent surfaces the churn increase, but it does not know who should act on it. Is this the Customer Success team's problem? Product's problem? Sales's problem, because they are closing deals with poor-fit customers? Without an ownership model, the insight lands in a shared channel where everyone sees it and nobody owns it. The diffusion of responsibility that plagues dashboards is not solved by making the alert smarter. It is solved by connecting every metric to a named person who is accountable for it.

3. History of what has been tried

    This is not the first time churn has spiked. Six months ago, the same pattern appeared and the team ran an intervention: a proactive outreach campaign to at-risk accounts. It partially worked. Three months before that, they tried adjusting the onboarding sequence. It did not work. This history is critical for deciding what to do next, but it lives in Slack threads, meeting notes, and the memories of people who may no longer be on the team. The agent has no access to it because no system captures it in a structured way.

4. Verification of whether it worked

    Even when an intervention is taken, most organisations have no systematic way to verify whether it actually moved the metric. Did the outreach campaign reduce churn, or did churn decline for an unrelated reason? Without a feedback loop that connects actions to outcomes, the organisation cannot learn from its own experience. It repeats interventions that failed and abandons interventions that worked, because it never measured the causal impact of either.

These four gaps are not features that agentic analytics vendors have overlooked. They are outside the scope of what a query-and-answer system is designed to provide. A semantic layer tells the agent how metrics are calculated. A metric tree tells the agent how metrics drive each other. These are fundamentally different types of knowledge, and an agent without the second is fast but directionless.
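To make the four gaps concrete, here is a minimal sketch of the structure a metric tree would need to encode: causal drivers, a named owner, and intervention history, using the churn example above. All class names, fields, and values are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass, field

@dataclass
class Intervention:
    """A past action taken against a metric, and whether it worked."""
    description: str
    effective: bool  # outcome of the verification loop

@dataclass
class MetricNode:
    """One node in a metric tree: a metric, its accountable owner,
    its causal drivers, and the record of what has been tried."""
    name: str
    owner: str
    drivers: list["MetricNode"] = field(default_factory=list)
    history: list[Intervention] = field(default_factory=list)

    def root_cause_path(self) -> list[str]:
        """Walk outcome -> driver: the traversal an agent cannot
        perform without an explicit causal model."""
        path = [self.name]
        node = self
        while node.drivers:
            node = node.drivers[0]  # toy: follow the first driver
            path.append(node.name)
        return path

# The churn example from the text, encoded as a tree:
time_to_value = MetricNode("time_to_value", owner="Onboarding")
onboarding = MetricNode("onboarding_completion", owner="Onboarding",
                        drivers=[time_to_value])
engagement = MetricNode("product_engagement", owner="Product",
                        drivers=[onboarding])
churn = MetricNode(
    "churn_rate", owner="Customer Success", drivers=[engagement],
    history=[Intervention("proactive outreach to at-risk accounts", True),
             Intervention("adjusted onboarding sequence", False)])

print(churn.root_cause_path())
# ['churn_rate', 'product_engagement', 'onboarding_completion', 'time_to_value']
```

The point of the sketch is that every question in the list above becomes a lookup: root cause is a tree traversal, ownership is a field on the node, and history is data rather than tribal memory.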

“An AI agent without a causal model is like a doctor who can read every blood test but has never studied anatomy. The readings are accurate. The diagnosis is a guess.”

The business context layer

The analytics industry has spent the last several years building the semantic layer: a shared definition of how metrics are calculated, which tables they come from, and how dimensions and measures relate to each other. This is essential infrastructure. It ensures that when an agent queries "revenue by region," it uses the same definition of revenue that the finance team uses. Without a semantic layer, agents hallucinate metric definitions and return plausible but incorrect answers.

But the semantic layer solves only half the problem. It tells the agent how to compute a metric. It does not tell the agent how that metric connects to the rest of the business. The semantic layer says "revenue equals price times quantity, filtered by confirmed orders." It does not say "revenue is driven by conversion rate and average order value, conversion rate is driven by checkout completion and add-to-cart rate, and checkout completion is owned by the Payments team who tried a one-click checkout experiment last quarter that lifted completion by 2.3 points."

This second type of knowledge, the causal structure, the ownership, the intervention history, is what we call the business context layer. It sits above the semantic layer and below the decision-maker. It is the missing middle between data infrastructure and business action.
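The distinction between the two layers can be shown as data, using the revenue example above. The dictionary shapes here are a toy illustration, not any particular semantic-layer tool's schema.

```python
# Semantic layer: HOW to compute the metric.
semantic_layer = {
    "revenue": {
        "expression": "price * quantity",
        "filter": "order_status = 'confirmed'",
    }
}

# Business context layer: how the metric connects to the business --
# drivers, ownership, and intervention history.
business_context = {
    "revenue": {"drivers": ["conversion_rate", "average_order_value"]},
    "conversion_rate": {"drivers": ["checkout_completion", "add_to_cart_rate"]},
    "checkout_completion": {
        "owner": "Payments",
        "history": [{"intervention": "one-click checkout experiment",
                     "lift_points": 2.3, "effective": True}],
    },
}

def trace(metric: str, context: dict) -> list[str]:
    """Depth-first walk from an outcome metric to its leaf drivers --
    a traversal that needs the context layer, not the semantic layer."""
    drivers = context.get(metric, {}).get("drivers", [])
    return [metric] + [m for d in drivers for m in trace(d, context)]

print(trace("revenue", business_context))
# ['revenue', 'conversion_rate', 'checkout_completion', 'add_to_cart_rate', 'average_order_value']
```

The semantic layer alone can answer "what is revenue?"; only the second structure can answer "what moves revenue, and who owns the lever?"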

Layer | What it provides | What it enables
Data infrastructure | Tables, columns, joins, pipelines, data quality | Reliable, queryable data that agents can access
Semantic layer | Metric definitions, dimensions, measures, business logic | Consistent answers: every query returns the same number for the same question
Business context layer | Causal relationships, metric ownership, intervention history, verification loops | Actionable answers: the agent knows why a metric matters, who should act, and what has been tried
Decision-maker | Judgement, priorities, strategy, values | The human choices that no layer of technology should automate

Most organisations today have invested heavily in the first two layers. Their data infrastructure is solid. Their semantic layer, whether through a dedicated tool or through well-maintained dbt models, ensures consistent metric definitions. But the business context layer is either absent or trapped in people's heads. The causal model lives in the intuition of experienced leaders. The ownership model lives in tribal knowledge. The intervention history lives in archived Slack channels.

When an agentic analytics tool queries data, it moves through the first two layers fluently. It accesses the infrastructure, applies the semantic definitions, and returns a correct number. Then it stops. It cannot traverse the causal model because no system encodes it. It cannot route the insight to an owner because no system maps ownership to metrics. It cannot reference past interventions because no system captures them. The agent is literate in data but illiterate in business context.

The distinction that matters

A semantic layer tells the agent how to calculate a metric. A metric tree tells the agent how that metric drives the business, who owns it, and what has been tried when it moved. Both are necessary. Only together do they give an agent enough context to be genuinely useful.

Making agentic analytics useful

The framing of agentic analytics as a replacement for existing tools misses the point. Agentic analytics is a powerful query and exploration layer. It makes the mechanical work of analysis faster and more accessible. The mistake is expecting it to also provide the structural understanding of how the business works. That is a different problem, requiring a different solution.

The pattern that works is layered. Agentic analytics handles the question-to-answer workflow: translating natural language into queries, exploring data across sources, and synthesising findings. The business context layer, encoded in a metric tree, handles the answer-to-action workflow: tracing observations to root causes, routing insights to owners, surfacing relevant intervention history, and closing the loop on whether actions worked.

Neither layer is sufficient on its own. Agentic analytics without business context produces fast answers that nobody acts on. A metric tree without agentic analytics requires manual investigation that scales poorly. Together, they form a system where AI handles the data mechanics and the causal model provides the business intelligence that turns observations into decisions.

Agent surfaces the observation

The agentic analytics layer detects that expansion revenue declined 12% this month, concentrated in the mid-market segment. It identifies the trend, quantifies the impact, and segments the data. This is the work that previously required an analyst and a day of investigation. The agent delivers it in seconds.

The metric tree provides the context

The metric tree shows that expansion revenue is driven by upsell conversion rate, which is driven by feature adoption depth, which is owned by the Product team. It shows that a similar decline occurred in Q3, when the team ran an in-app upgrade prompt experiment that lifted upsell conversion by 1.8 points. The tree transforms the observation into a navigable chain of cause, ownership, and history.

The owner takes informed action

The Product team, identified as the owner through the metric tree, reviews the observation and the historical context. They decide to rerun the in-app upgrade prompt with revised targeting criteria, informed by both the agent's data and the tree's record of what worked previously. The action is specific, assigned, and grounded in evidence.

The system verifies the outcome

The intervention is logged against the metric. Over the following weeks, the metric tree tracks whether upsell conversion recovers. If it does, the intervention is recorded as effective. If it does not, the team iterates with the knowledge that this approach was insufficient. The organisation learns from its own experience rather than repeating experiments with no memory.

This is the workflow that closes the strategy-execution gap: not faster answers, but a complete loop from observation to action to verification. The agentic analytics layer contributes speed and accessibility. The business context layer contributes structure and memory. The human contributes judgement and priorities. Each element does what it does best.
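The verification step of the loop can be sketched in a few lines. This is deliberately naive: a before/after comparison of means flags an associated change, not a causal one, and a real verification loop would use a control group or counterfactual forecast. The metric names and numbers are hypothetical.

```python
from datetime import date
from statistics import mean

def intervention_worked(series: dict[date, float], intervened_on: date,
                        window: int = 14, min_lift: float = 0.0) -> bool:
    """Did the metric's mean move by more than min_lift in the window
    after the intervention, relative to the window before it?"""
    before = [v for d, v in sorted(series.items()) if d < intervened_on][-window:]
    after = [v for d, v in sorted(series.items()) if d >= intervened_on][:window]
    return bool(before) and bool(after) and mean(after) - mean(before) > min_lift

# Hypothetical daily upsell-conversion series: flat at 4.0%, stepping
# up to 5.8% after an illustrative prompt experiment on 15 Jan.
upsell_conversion = {date(2024, 1, d): 0.040 for d in range(1, 15)}
upsell_conversion |= {date(2024, 1, d): 0.058 for d in range(15, 29)}

print(intervention_worked(upsell_conversion, date(2024, 1, 15), min_lift=0.01))
# True
```

Logging the intervention date against the metric is the cheap part; the organisational value comes from recording the verdict so the next person who sees the same pattern inherits the evidence.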

Organisations evaluating agentic analytics tools should ask not only "how good are the answers?" but also "what happens after the answer?" If the answer lands in a Slack channel with no owner, no causal context, and no mechanism for follow-through, the speed of the answer is irrelevant. The bottleneck was never the query. The bottleneck is the space between insight and action, and that space requires structural understanding that no query engine, however intelligent, can provide on its own.

“The organisations that will get the most from agentic analytics are not the ones with the best AI. They are the ones that have already mapped how their business works, who owns what, and how they learn from their own interventions. The agent accelerates the loop. The business context layer defines the loop.”

Give your AI agents the context they need

KPI Tree provides the business context layer that agentic analytics tools cannot: causal relationships between metrics, ownership at every node, intervention history, and verification loops. Connect your data stack to a metric tree and turn fast answers into effective action.

Experience That Matters

Built by a team that's been in your shoes

Our team brings deep experience from leading Data, Growth and People teams at some of the fastest growing scaleups in Europe through to IPO and beyond. We've faced the same challenges you're facing now.

Checkout.com
Planet
UK Government
Travelex
BT
Sainsbury's
Goldman Sachs
Dojo
Redpin
Farfetch
Just Eat for Business