From raw data access to actionable understanding
MCP servers for business performance: how to give AI agents real business context
The Model Context Protocol (MCP) lets AI agents talk to your data tools. Every major vendor now ships one. But when someone asks "Why is Revenue down and who should I talk to?", most MCP servers return a number and leave the AI to guess the rest. This guide explains what each MCP server can actually do, where each one stops, and what it takes to give AI agents enough context to deliver answers rather than data.
What is MCP and why it matters for business metrics
The Model Context Protocol, or MCP, is an open standard created by Anthropic that lets AI agents connect to external tools and data sources. Think of it as a universal adapter. Instead of building custom integrations between every AI application and every data platform, MCP provides a single protocol that both sides can speak. An AI agent that supports MCP can connect to any MCP server and discover what tools are available, what data it can access, and what actions it can perform.
For business metrics, this matters because it determines what your AI agent actually knows when you ask it a question. If you ask Claude, ChatGPT, or any other AI assistant about your company's performance, the quality of its answer depends entirely on what context it can access. Without MCP, the AI is limited to its training data and whatever you paste into the conversation. With MCP, the AI can reach into your live data systems and pull real numbers, real context, and real relationships.
The question is not whether to use MCP. The protocol is becoming standard across the industry. The question is which MCP server your AI agents should talk to, because that choice determines whether the AI gets raw data or genuine business understanding.
The key question
Every major data vendor now ships an MCP server. But they are not equivalent. The MCP server you connect determines whether AI gets raw data or real business context: metric relationships, ownership, comparisons, root causes, and accountability.
The three layers of business context
To understand why different MCP servers produce such different results, it helps to think about business context in three layers. Each layer builds on the one below it, and most MCP servers only cover the first.
The first layer is the data layer. This is where your raw numbers live: tables, columns, rows, SQL queries, and warehouse connections. An MCP server at this layer lets AI agents browse your database schema and run queries. It can answer "What is Revenue?" by generating SQL and returning a number. BigQuery's MCP server operates primarily at this layer.
The second layer is the semantic layer. This is where metric definitions live: what "Revenue" means, how it is calculated, which table and aggregation to use, and what dimensions it can be grouped by. An MCP server at this layer lets AI agents query named metrics without writing SQL. dbt Cloud's MCP server and Snowflake's Cortex Analyst operate at this layer. The AI can ask for "Revenue" and get back the correct number, because the semantic layer knows the formula.
The third layer is the context layer. This is where business understanding lives: how metrics drive each other, who owns each one, what actions are being taken, whether the current value is normal or anomalous, how it compares to last month or last year, and what the statistical relationships between metrics actually are. No data warehouse and no semantic layer stores this information, because it is not about how data is structured. It is about how the business works.
Most MCP servers stop at layer one or two. They give AI agents access to numbers. They do not give AI agents access to understanding.
Context layer: relationships, ownership, comparisons, tasks, and data quality
Semantic layer: metric definitions
Data layer: raw SQL, tables, and warehouse connections
dbt Cloud: the data engineering MCP server
dbt Cloud offers the most feature-rich MCP server of the major data vendors, with over 50 tools across local and remote deployment modes. The local server runs dbt CLI commands such as dbt run, dbt test, and dbt build. The remote server connects to dbt Cloud with no local setup. It covers both the data layer and the semantic layer, which makes it the strongest option for data teams who want AI agents to interact with their dbt project.
The semantic layer tools are the most relevant for business performance. The key tool is query_metrics, which lets an AI agent query named metrics defined in your dbt project. You specify which metrics you want, optionally group them by dimensions such as time or geography, filter with a WHERE clause, and get back a JSON result.
This sounds comprehensive until you try to ask a business question. If you ask "How does Revenue compare to last month?", the AI needs to make two separate query_metrics calls, one for this month and one for last month, and calculate the difference itself. The dbt Semantic Layer API supports secondary calculations like period_over_period and rolling averages, but the MCP query_metrics tool does not expose these parameters. They are simply not available through MCP.
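The two-call pattern can be sketched in a few lines. This is a hypothetical illustration, not real dbt output: the payloads and the filter syntax are invented, and the point is only that the agent itself must stitch the results together and compute the delta.

```python
# Sketch of the two-step pattern an AI agent must follow today, because the
# MCP query_metrics tool does not expose period-over-period calculations.

def month_over_month(current: float, previous: float) -> float:
    """Percentage change between two query_metrics results."""
    return (current - previous) / previous * 100

# Call 1 (hypothetical): query_metrics(metrics=["revenue"], where="month = '2026-02'")
this_month = {"revenue": 2_720_000}
# Call 2 (hypothetical): query_metrics(metrics=["revenue"], where="month = '2026-01'")
last_month = {"revenue": 3_090_000}

# The agent stitches the two results together itself.
change = month_over_month(this_month["revenue"], last_month["revenue"])
print(f"Revenue MoM: {change:.1f}%")  # Revenue MoM: -12.0%
```

Each extra comparison (YoY, QoQ, rolling averages) multiplies the number of round trips, and each round trip is a billable queried metric.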
Beyond the semantic layer, dbt's MCP server excels at data engineering tasks. It can show model lineage, trigger dbt runs, retrieve test results, and even generate boilerplate code. These are genuinely useful for data engineers working with dbt projects. But they do not help when a business leader asks why a metric changed or who is responsible for fixing it.
| Capability | Available via dbt MCP? |
|---|---|
| Query a named metric value | Yes, via query_metrics |
| Period-over-period comparison (MoM, YoY) | No. Secondary calculations are not exposed through MCP |
| Metric relationships or trees | No. Table lineage only, not metric causality |
| Metric ownership or RACI | No. Exposures have owners, but metrics do not |
| Tasks or known issues | No |
| Correlation analysis | No |
| Root cause analysis | No |
dbt Cloud also charges for semantic layer usage. Every successful query_metrics call counts as at least one queried metric, and multi-metric queries count per metric. The Starter plan includes 5,000 queried metrics per month at $100 per user per month. Enterprise includes 20,000 per month. Overage pricing is approximately $0.075 per queried metric, plus your warehouse still executes the underlying SQL. An AI agent that is actively exploring metrics on behalf of a user can burn through this quota quickly.
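Some back-of-envelope quota maths makes the risk concrete. The usage figures below (calls per day, metrics per call) are illustrative assumptions, not dbt numbers; only the allowance and overage rate come from the pricing described above.

```python
# Back-of-envelope quota maths for an AI agent on the Starter plan.
CALLS_PER_DAY = 60       # assumption: one actively exploring AI agent
METRICS_PER_CALL = 3     # multi-metric queries count per metric
INCLUDED = 5_000         # Starter plan monthly allowance
OVERAGE_RATE = 0.075     # approx. $ per queried metric beyond the allowance

queried = CALLS_PER_DAY * METRICS_PER_CALL * 30           # 5,400 per month
overage_cost = max(0, queried - INCLUDED) * OVERAGE_RATE  # ~$30, plus warehouse compute
print(queried, overage_cost)
```

One modestly busy agent exhausts the Starter allowance in a month; a handful of users sharing agents would do it in days.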
Bottom line
dbt Cloud's MCP server is built for data engineers. It can pull a metric value and show you the SQL behind a model. It cannot tell you why a metric changed, who owns it, or what anyone is doing about it.
BigQuery: the SQL execution MCP server
Google launched its fully managed BigQuery MCP server in January 2026, and since March 2026 it has been automatically enabled on every BigQuery project. It exposes eight tools at a single HTTPS endpoint with OAuth and IAM authentication.
Five tools handle metadata browsing: list_datasets, list_tables, get_dataset_info, get_table_info, and search_catalog. The remaining three handle analytics: execute_sql runs arbitrary SQL with a three-minute timeout, forecast produces time-series predictions using BigQuery ML, and analyze_contribution performs key-driver analysis to identify which dimensions contributed most to a change in a numeric column.
The analyze_contribution tool is worth noting because it is the closest any data-layer MCP server gets to root cause analysis. When Revenue drops, this tool can identify that the drop was concentrated in the EMEA region among enterprise accounts. But it requires the AI to specify which column to analyse and which segments to compare. It does not know what your KPIs are, how they relate to each other, or that "Revenue" is even a metric you care about. The AI must bring all of that context itself.
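The idea behind key-driver analysis is easy to show in miniature. This sketch is not Google's implementation of analyze_contribution; it simply ranks segments by how much each contributed to an overall change, using invented figures.

```python
# Simplified key-driver (contribution) analysis: attribute an overall
# metric change to the segments that moved most. Figures are illustrative.

def contribution_by_segment(before: dict, after: dict) -> list:
    """Rank segments by their contribution to the total change, worst first."""
    deltas = {seg: after.get(seg, 0) - before.get(seg, 0)
              for seg in set(before) | set(after)}
    return sorted(deltas.items(), key=lambda kv: kv[1])

before = {"EMEA": 1_200_000, "AMER": 1_500_000, "APAC": 390_000}
after  = {"EMEA":   900_000, "AMER": 1_480_000, "APAC": 340_000}

for segment, delta in contribution_by_segment(before, after):
    print(segment, delta)  # EMEA accounts for most of the decline
```

Note what the agent had to bring to this call: the knowledge that "region" is the right dimension and "Revenue" the right metric. That context lives nowhere in BigQuery.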
Google also offers a separate Looker MCP server with 33 tools that queries through Looker's semantic layer. This means the AI does not need to write SQL, because Looker generates it. But the Looker MCP server requires a full Looker deployment, which is a separate product with its own pricing. And even with Looker, there is no concept of metric hierarchies, ownership, or accountability.
BigQuery charges $6.25 per terabyte of data scanned per query. Every execute_sql and conversational_analytics call through MCP bills against your BigQuery usage. An AI agent exploring data can trigger dozens of queries per conversation, each scanning data. Costs are unpredictable.
| Capability | Available via BigQuery MCP? |
|---|---|
| Query data | Yes, via execute_sql (AI writes SQL) |
| Natural language to SQL | Yes, via conversational_analytics |
| Period-over-period comparison | No. AI must write date-windowing SQL |
| Metric definitions | No. AI must know which table and column to query |
| Metric relationships or trees | No |
| Metric ownership or RACI | No |
| Tasks or known issues | No |
| Key-driver analysis | Yes, via analyze_contribution (single column, not metric tree) |
| Forecasting | Yes, via BigQuery ML forecast tool |
Bottom line
BigQuery's MCP server gives AI a database connection with schema browsing, forecasting, and key-driver analysis. The AI brings all business knowledge itself, which means it guesses.
Snowflake: the natural-language-to-SQL MCP server
Snowflake offers two official MCP servers. The open-source version from Snowflake-Labs is self-hosted and configurable via YAML. The managed version runs natively inside your Snowflake account, is created with a single SQL statement, and has been generally available since November 2025.
The managed server supports six service categories: Cortex Analyst for natural language to SQL over semantic views, Cortex Search for RAG-style search over unstructured data, Cortex Agent as an orchestrator combining both, Object Manager for creating and managing Snowflake objects, Query Manager for executing SQL with permission controls, and Semantic Manager for discovering and querying semantic views.
Cortex Analyst is the standout feature. You define a Semantic View in Snowflake, a native DDL object that maps business concepts to physical tables, and Cortex Analyst translates natural language questions into SQL. It is the strongest natural-language-to-SQL engine among the major vendors.
But Cortex Analyst has important constraints. It is stateless: each request is independent, so it cannot reference prior query results. It can only answer questions that are resolvable with SQL. Semantic views have a practical limit of 50 to 100 columns before performance degrades. Self-referencing tables are not supported, so hierarchies within a single table cannot be modelled. And complex multi-table joins are fragile, often producing errors.
Snowflake Semantic Views define metrics as aggregation expressions and support derived metrics that combine metrics from multiple tables. But metrics are a flat list, not a hierarchy. There is no concept of metric-to-metric decomposition, no causal relationships, and no metric tree structure. There is no built-in time intelligence for period-over-period comparisons, no ownership model, no tasks, and no accountability.
Cortex Analyst costs 6.7 credits per 100 messages, roughly $0.20 per question just for the text-to-SQL generation. The generated SQL then runs against your warehouse at standard compute rates. One documented case saw a single poorly scoped Cortex AI query generate a $5,000 bill. Costs are highly unpredictable because you do not control what SQL Cortex Analyst generates.
| Capability | Available via Snowflake MCP? |
|---|---|
| Query data | Yes, via SQL execution or Cortex Analyst |
| Natural language to SQL | Yes, Cortex Analyst (strongest of the three vendors) |
| Period-over-period comparison | No built-in time intelligence. AI asks in natural language and hopes for correct SQL |
| Metric definitions | Partial. Semantic views define aggregation expressions with names and descriptions |
| Metric relationships or trees | No. Metrics are a flat list. Table joins are defined, but metric causality is not |
| Metric ownership or RACI | No. RBAC for access control only |
| Tasks or known issues | No |
| Correlation analysis | No |
| Unstructured data search | Yes, via Cortex Search (RAG) |
Bottom line
Snowflake's MCP server is the most AI-native of the data-layer servers. Cortex Analyst genuinely improves natural-language-to-SQL. But it still answers "what is the number?" rather than "why did it change, who owns it, and what are they doing about it?"
What AI agents actually need to answer business questions
The pattern across all three data-layer MCP servers is the same. They give AI agents access to numbers, not understanding. The AI can retrieve a metric value, but it cannot explain what drove the change. It can run SQL, but it does not know which metrics matter or how they relate to each other. It can return a result, but it cannot tell you who is responsible for fixing the problem or what actions are already underway.
This is not a limitation of the AI model. It is a limitation of the context the AI receives. A human analyst asking "Why is Revenue down?" does not just look at the Revenue number. They check which input metrics moved, look at period-over-period trends, talk to the metric owner, review ongoing initiatives, and consider the strength of relationships between drivers and outcomes. They bring business context that no data warehouse or semantic layer stores.
For an AI agent to answer business questions at that level, it needs context that exists above the data layer. It needs the third layer: the context layer.
How metrics drive each other
Revenue is driven by Signups, Deal Size, and Retention. Signups are driven by Ad Spend and Conversion Rate. This causal structure is the map of how the business works. No warehouse stores it. No semantic layer defines it. But without it, the AI cannot trace a revenue drop to its root cause.
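The causal structure above is small enough to sketch directly. This is a minimal illustration using the metric names from the example; a real tree would be larger and synced from a system that stores it.

```python
# The causal map from the example: each metric lists its direct drivers.
METRIC_TREE = {
    "Revenue": ["Signups", "Deal Size", "Retention"],
    "Signups": ["Ad Spend", "Conversion Rate"],
}

def drivers(metric: str, tree: dict = METRIC_TREE) -> list:
    """All direct and indirect drivers of a metric, depth-first."""
    out = []
    for child in tree.get(metric, []):
        out.append(child)
        out.extend(drivers(child, tree))
    return out

print(drivers("Revenue"))
# ['Signups', 'Ad Spend', 'Conversion Rate', 'Deal Size', 'Retention']
```

Tracing a Revenue drop means walking exactly this structure: check each driver, and recurse into whichever one moved.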
The strength of each relationship
Knowing that Signups drives Revenue is useful. Knowing that Signups correlates with Revenue at r=0.82 while Retention correlates at r=0.41 is actionable. It tells the AI which lever matters most. Correlation analysis must be pre-computed and continuously updated, not generated ad-hoc with SQL.
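Relationship strength is just a Pearson correlation over historical metric values. The series below are invented for illustration (and happen to be strongly correlated); in practice this would run continuously over values pulled from the warehouse.

```python
# Pearson correlation between two metric series, from first principles.
import statistics

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

signups = [120, 135, 150, 160, 155, 170]       # illustrative monthly values
revenue = [2.4, 2.6, 2.9, 3.1, 3.0, 3.2]       # illustrative, in GBP millions
print(round(pearson(signups, revenue), 2))
```

The computation is trivial; the hard part is operational: deciding which pairs to compute, keeping the results fresh, and serving them to the AI as pre-computed facts rather than ad-hoc SQL.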
Pre-computed period-over-period comparisons
Revenue is £2.72M. Compared to what? Same period last year, it was £3.09M. That is a 12 per cent decline. This comparison should be instant, not the result of two separate queries that the AI has to stitch together. Every period-over-period calculation should be pre-computed and ready for the AI to consume.
Who owns each metric
When Revenue drops because Signups are down, the AI needs to know that Sarah owns Signups and David is accountable. A full RACI matrix per metric (Responsible, Accountable, Consulted, Informed) turns the AI from a reporter into a router: it can direct questions and actions to the right person immediately.
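In data terms, routing is a simple lookup once the RACI matrix exists. The structure and names below are illustrative, mirroring the example above.

```python
# Minimal RACI lookup per metric; names and structure are illustrative.
RACI = {
    "Signups": {
        "responsible": "Sarah",
        "accountable": "David",
        "consulted": ["Data team"],
        "informed": ["VP Sales"],
    },
}

def route_question(metric: str) -> str:
    """Who should the AI direct a question about this metric to?"""
    roles = RACI.get(metric)
    return roles["responsible"] if roles else "unassigned"

print(route_question("Signups"))  # Sarah
```

The lookup is trivial; the discipline of maintaining a named owner per metric is what makes it possible.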
What actions are already underway
Before the AI recommends investigating Signups, it should know that Sarah already has two active tasks: "Fix attribution model" and "Relaunch campaign." This prevents duplicate work and gives the AI the ability to report on what is being done, not just what went wrong.
Whether the data can be trusted
A metric that dropped 40 per cent might be a real crisis or a data quality issue. Outlier detection, gap detection, and staleness checks per metric tell the AI when to trust a number and when to flag it for investigation. No data-layer MCP server provides this.
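The three checks named above (outliers, gaps, staleness) can each be sketched in a few lines. Thresholds and data here are illustrative; production systems would tune these per metric.

```python
# Simple per-metric data quality signals: gap detection, staleness,
# and a crude 2-sigma outlier check. Thresholds are illustrative.
from datetime import date, timedelta
import statistics

def quality_signals(points: dict, today: date) -> dict:
    days = sorted(points)
    expected = (days[0] + timedelta(n) for n in range((days[-1] - days[0]).days + 1))
    gaps = [d for d in expected if d not in points]
    values = list(points.values())
    mean, sd = statistics.fmean(values), statistics.pstdev(values)
    outliers = [d for d in days if sd and abs(points[d] - mean) > 2 * sd]
    return {
        "stale": (today - days[-1]).days > 2,  # no fresh data in 2 days
        "gaps": gaps,
        "outliers": outliers,
    }

points = {
    date(2026, 3, 1): 100, date(2026, 3, 2): 101, date(2026, 3, 3): 99,
    date(2026, 3, 5): 100, date(2026, 3, 6): 102, date(2026, 3, 7): 10,
}
print(quality_signals(points, today=date(2026, 3, 12)))
```

For this sample the checks flag all three problems: a missing day (4 March), a value that collapsed (7 March), and a series that has gone quiet. Exactly the signals that tell an AI "verify before you alarm the CEO."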
The context layer: what a business performance MCP server looks like
A context-layer MCP server does not replace your semantic layer or your warehouse. It sits above them. It connects to dbt, Snowflake, BigQuery, Looker, and other data sources to sync metric definitions and pull raw data. Then it adds the business context that those systems structurally cannot provide.
When an AI agent connects to a context-layer MCP server and asks "Why is Revenue down and who should I talk to?", it gets back a structured response containing every piece of context it needs in a single call: the metric tree showing which inputs drive Revenue, correlation coefficients measuring the strength of each relationship, pre-computed period-over-period comparisons showing the 12 per cent decline, the RACI ownership matrix showing Sarah as the Signups owner, active tasks linked to the underperforming metric, and data quality signals confirming the numbers can be trusted.
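The shape of such a single-call response might look like the following. Every field name and value here is invented for illustration; this is not KPI Tree's actual schema, only a sketch of what "all the context in one payload" means.

```python
# Hypothetical single-call context payload; field names are invented.
context = {
    "metric": "Revenue",
    "value": 2_720_000,
    "comparisons": {"yoy": -0.12},           # pre-computed, no second query
    "drivers": [
        {"metric": "Signups",   "correlation": 0.82, "direction": "down"},
        {"metric": "Retention", "correlation": 0.41, "direction": "flat"},
    ],
    "raci": {"responsible": "Sarah", "accountable": "David"},
    "tasks": ["Fix attribution model", "Relaunch campaign"],
    "data_quality": {"outliers": [], "gaps": [], "stale": False},
}
```

Contrast this with the data-layer pattern, where each of these facts would require a separate query the AI has to plan, execute, and reconcile itself.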
The AI does not write SQL. It does not make multiple queries and calculate deltas. It does not guess at relationships or ownership. It receives structured business context and synthesises it into an answer that a business leader can act on immediately.
This is what KPI Tree's MCP server provides. It exposes six tools, each designed to give AI agents the business context they need:
The get_metrics_metadata tool returns all metrics with their position in the tree, RACI assignments, data source, and relationships. The get_metrics and get_metric tools return metric values with pre-computed comparisons, trends, and health signals. The get_metric_calculations tool returns pre-computed period-over-period comparisons at every granularity, correlation coefficients, outlier flags, and data quality signals. The get_users and get_user tools return user information including which metrics each person owns, is accountable for, or is informed about.
No per-query charges. No warehouse costs generated by MCP queries. All computation (comparisons, correlations, aggregations, root cause analysis) runs in KPI Tree's in-memory compute engine. Your warehouse bill stays flat regardless of how many AI queries hit the system.
The full comparison
The following table compares what each MCP server can provide to an AI agent when asked a business performance question. The differences are structural, not incremental. Data-layer servers are designed to give AI agents access to data. A context-layer server is designed to give AI agents access to understanding.
| Capability | dbt Cloud | BigQuery | Snowflake | KPI Tree |
|---|---|---|---|---|
| Query a metric value | Yes (Semantic Layer) | Via SQL | Via Cortex Analyst | Yes |
| Period-over-period (MoM, YoY) | Not via MCP | AI writes SQL | AI asks in NL | Pre-computed |
| Metric relationships / trees | Table lineage only | No | Flat metric list | Causal metric trees |
| Relationship strength | No | No | No | Correlation coefficients |
| RACI ownership | No | No | No | Full RACI per metric |
| Tasks & known issues | No | No | No | Yes, linked to metrics |
| Root cause analysis | No | Single-column only | No | Traces full tree |
| Data quality per metric | Model-level tests | No | No | Outlier, gap, staleness |
| OKRs & goals | No | No | No | Yes |
| Push notifications | No | No | No | Slack, Email, SMS, WhatsApp |
| Cross-warehouse | dbt warehouses | BigQuery only | Snowflake only | Any warehouse |
| Cost per MCP query | ~$0.075/metric + SQL | $6.25/TB scanned | ~$0.20/question + SQL | Included |
They are complementary, not competing
Data-layer MCP servers and context-layer MCP servers solve different problems. You do not choose between them. You choose which one your AI agents should talk to depending on what question is being asked.
If a data engineer asks "What is the SQL behind this model?" or "Trigger a dbt run for the staging models", the dbt Cloud MCP server is the right tool. If a data scientist asks "Run this forecasting query against our BigQuery tables", the BigQuery MCP server is the right tool. If an analyst asks "Show me total orders by region from our Snowflake data", the Snowflake MCP server is the right tool.
But when a VP of Sales asks "Why is Revenue down and who should I talk to?", when a CEO asks "Are we on track for Q3 and what are the biggest risks?", when a team lead asks "What is the status of the initiatives assigned to my metrics?" — these questions require the context layer. They require metric relationships, ownership, actions, comparisons, and correlations. They require the business understanding that sits above the data.
KPI Tree connects to dbt, Snowflake, BigQuery, Looker, Google Sheets, and more as data sources. It syncs metric definitions from your semantic layer and pulls raw data from your warehouse. The data-layer MCP servers continue to do what they do well. KPI Tree's MCP server adds the layer that makes AI agents genuinely useful for business performance questions.
“Your semantic layer tells AI how metrics are calculated. The context layer tells AI how they drive each other, who owns them, and what is being done about it. That is how AI goes from returning data to delivering answers.”
Getting started
If you are evaluating MCP servers for business performance, start by asking a simple test question: "Why is our North Star metric down, who owns the biggest contributing factor, and what are they doing about it?" Then see which MCP server can answer it.
A data-layer MCP server will return a number. A context-layer MCP server will return an answer: the metric tree with the root cause identified, the correlation strength that proves the relationship, the period-over-period comparison that quantifies the change, the RACI owner who is responsible, and the active tasks that show what is already in motion.
The difference is not incremental. It is categorical. One gives AI agents data. The other gives AI agents understanding.
1. Map your metrics as a tree
Before any MCP server can provide business context, you need the context to exist. Map your North Star metric down to its inputs, and their inputs, until you reach the leading indicators your teams control. This is the causal model of your business.
2. Assign ownership with RACI
Every metric in the tree needs a named owner. Assign Responsible, Accountable, Consulted, and Informed roles so accountability scales with your business and AI agents know who to route questions to.
3. Connect your data sources
Sync metric definitions from your semantic layer, whether that is dbt, Looker, or direct SQL. KPI Tree connects to your existing data stack in minutes, then its compute engine handles comparisons, correlations, and root cause analysis automatically.
4. Connect your AI agents via MCP
Point Claude, your Notion AI, or any MCP-compatible agent at KPI Tree's MCP server. From that point on, every business performance question gets answered with full context: the tree, the comparisons, the owners, and the actions.
Give your AI agents real business context
KPI Tree's MCP server provides the context layer that data-layer servers cannot: metric trees, correlations, RACI ownership, active tasks, and pre-computed comparisons. Connect your AI agents to understanding, not just data.