Metric Definition
Average resolution time
Average resolution time measures the mean elapsed time from when a support ticket is created to when it is fully resolved and closed. It captures the end-to-end customer experience of getting an issue fixed, encompassing wait times, agent work time, escalations, and any back-and-forth exchanges required to reach a solution.
7 min read
What is average resolution time?
Average resolution time (ART, sometimes called mean time to resolution or MTTR in support contexts) is the average elapsed time between a ticket being opened and being marked as resolved. It is the broadest measure of support speed, capturing everything that happens during the life of a ticket: first response wait, agent investigation, customer replies, internal escalations, and final confirmation.
This metric matters because it represents the customer's total wait for a solution. A customer who creates a ticket on Monday and receives confirmation of resolution on Thursday experienced a 3-day resolution time regardless of whether the actual agent work time was 30 minutes. The customer's perception is shaped by the full window, not by the active work time alone.
Resolution time is also a critical capacity metric. Long resolution times mean more tickets remain open simultaneously, raising context-switching costs for agents and creating a growing backlog that can become self-reinforcing. When resolution times increase by even 10%, the cumulative effect on open ticket volume can be dramatic.
However, resolution time must be interpreted carefully. A team that resolves 90% of tickets in 2 hours but has a 10% tail that takes 5 days may show an average of 14 hours. The average hides the fact that most customers have an excellent experience while a small group has a poor one. Tracking percentiles (P50, P90, P95) alongside the average provides a much more accurate picture.
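The arithmetic above can be checked with a short script. This is an illustrative sketch using made-up durations that match the 90%/10% split described; the nearest-rank percentile helper is one common convention, not the only one.

```python
import math
import statistics

# Illustrative durations in hours: 90% of tickets resolved in 2 hours,
# a 10% tail taking 5 days (120 hours), as in the example above.
durations = [2.0] * 90 + [120.0] * 10

average = statistics.mean(durations)  # 13.8 h, close to the 14 h quoted above

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value with at least pct% of
    observations at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct * len(ordered) / 100))
    return ordered[rank - 1]

print(f"average: {average:.1f} h")
print(f"P50: {percentile(durations, 50):.1f} h")  # 2.0 h: the typical experience
print(f"P95: {percentile(durations, 95):.1f} h")  # 120.0 h: the tail the average hides
```

The P50 and the average disagree by a factor of almost seven here, which is exactly why percentiles belong next to the mean on any resolution-time dashboard.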
Measure resolution time in business hours rather than calendar hours if your support team does not operate around the clock. Calendar time penalises overnight and weekend gaps that are not within the team's control and obscures genuine efficiency problems during operating hours.
How to calculate average resolution time
For each resolved ticket, calculate the elapsed time from the moment it was created (or the customer's first message) to the moment it was marked as resolved. Sum those durations across all tickets resolved in the period, then divide by the count of resolved tickets.
Decide whether to measure in calendar time or business hours, and whether to include time spent waiting for the customer to reply. Many teams exclude "waiting on customer" status from the calculation, since the support team cannot control how quickly the customer responds. Others include it, reasoning that the customer still experiences the delay.
| Measurement variant | Includes | Best for |
|---|---|---|
| Calendar time (full) | All elapsed time from creation to resolution | Understanding the customer's actual wait experience |
| Business hours only | Elapsed time during operating hours only | Measuring team efficiency without penalising non-working hours |
| Agent time only | Time in agent-active statuses, excluding customer wait | Isolating support team performance from customer response delays |
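The first two variants in the table can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: it assumes a 09:00 to 17:00 weekday operating window (an arbitrary choice) and steps hour by hour, which is accurate enough for reporting but slow for very large ticket volumes.

```python
from datetime import datetime, timedelta

BUSINESS_START, BUSINESS_END = 9, 17  # assumed 09:00-17:00 operating window

def calendar_hours(created, resolved):
    """Variant 1: all elapsed time from creation to resolution."""
    return (resolved - created).total_seconds() / 3600

def business_hours(created, resolved):
    """Variant 2: elapsed time during weekday operating hours only."""
    total = 0.0
    cursor = created
    while cursor < resolved:
        # advance to the next whole-hour boundary (or the resolution time)
        next_hour = (cursor + timedelta(hours=1)).replace(minute=0, second=0, microsecond=0)
        step = min(resolved, next_hour)
        if cursor.weekday() < 5 and BUSINESS_START <= cursor.hour < BUSINESS_END:
            total += (step - cursor).total_seconds() / 3600
        cursor = step
    return total

# Hypothetical tickets: (created, resolved)
tickets = [
    (datetime(2024, 1, 5, 16, 0), datetime(2024, 1, 8, 10, 0)),  # Fri 16:00 -> Mon 10:00
    (datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 13, 0)),   # same-day Monday ticket
]
avg_calendar = sum(calendar_hours(c, r) for c, r in tickets) / len(tickets)
avg_business = sum(business_hours(c, r) for c, r in tickets) / len(tickets)
print(f"calendar: {avg_calendar:.1f} h, business hours: {avg_business:.1f} h")
```

The Friday-to-Monday ticket illustrates the gap: 66 calendar hours but only 2 business hours, which is why the two variants can tell very different stories about the same queue.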
Decomposing resolution time with a metric tree
Resolution time is the sum of every delay and every working period in a ticket's life. A metric tree breaks it into phases and reveals where the most time is consumed.
Each branch of the tree represents a phase of the ticket lifecycle. If first response wait is the dominant contributor, the fix is staffing and routing. If investigation takes too long, the fix is better tooling and knowledge resources. If back-and-forth exchanges are the problem, the fix is better intake forms and clearer agent communication. If escalation delays dominate, the fix is escalation process improvement or expanded frontline authority.
The tree also reveals interaction effects. Improving intake quality reduces both back-and-forth exchanges and investigation time. Expanding agent authority reduces both escalation delays and first response wait (since fewer escalated tickets means shorter queues at higher tiers). These cascading benefits are visible in the tree but invisible in a single resolution time number.
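The phase decomposition can be sketched directly. The phase names below follow the lifecycle described above, but the durations are hypothetical and the flat dict-per-ticket shape is an illustrative simplification of whatever your ticketing system actually exports.

```python
from collections import defaultdict

# Hypothetical per-ticket phase durations in hours.
tickets = [
    {"first_response_wait": 3.0, "investigation": 1.5, "back_and_forth": 4.0, "escalation": 0.0},
    {"first_response_wait": 6.0, "investigation": 2.0, "back_and_forth": 1.0, "escalation": 8.0},
    {"first_response_wait": 2.0, "investigation": 0.5, "back_and_forth": 0.0, "escalation": 0.0},
]

# Sum each phase across tickets; resolution time is the sum of its phases.
phase_totals = defaultdict(float)
for ticket in tickets:
    for phase, hours in ticket.items():
        phase_totals[phase] += hours

total = sum(phase_totals.values())
avg_resolution = total / len(tickets)
dominant = max(phase_totals, key=phase_totals.get)

for phase, hours in sorted(phase_totals.items(), key=lambda kv: -kv[1]):
    print(f"{phase:20s} {hours:5.1f} h  ({hours / total:.0%} of total)")
print(f"average resolution time: {avg_resolution:.1f} h; largest contributor: {dominant}")
```

In this toy data the first response wait dominates, which (per the branch logic above) would point the fix at staffing and routing rather than tooling.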
Resolution time benchmarks
| Support context | Good | Typical | Needs attention |
|---|---|---|---|
| SaaS (all priorities) | 4 to 8 hours | 12 to 24 hours | 48+ hours |
| E-commerce | 2 to 6 hours | 8 to 16 hours | 24+ hours |
| Enterprise (P1 / critical) | Under 4 hours | 4 to 8 hours | 12+ hours |
| Enterprise (P3 / low) | 1 to 3 business days | 3 to 5 business days | 7+ business days |
| B2C general enquiries | 1 to 4 hours | 6 to 12 hours | 24+ hours |
Resolution time benchmarks must be segmented by priority. Blending critical outages with minor feature requests into a single average produces a number that is neither accurate for urgent issues nor meaningful for routine ones. Track and target each priority level independently.
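A short grouping sketch shows why the blended number misleads. The priorities and durations below are hypothetical, chosen so that a small number of fast P1 tickets sits alongside a larger volume of slow P3 tickets.

```python
from collections import defaultdict

# Hypothetical resolved tickets: (priority, resolution_hours)
resolved = [
    ("P1", 3.0), ("P1", 5.0),
    ("P3", 60.0), ("P3", 90.0), ("P3", 75.0),
]

by_priority = defaultdict(list)
for priority, hours in resolved:
    by_priority[priority].append(hours)

blended = sum(h for _, h in resolved) / len(resolved)
per_priority = {p: sum(v) / len(v) for p, v in by_priority.items()}

print(f"blended average: {blended:.1f} h")  # sits between the two segments
for priority, avg in sorted(per_priority.items()):
    print(f"{priority}: {avg:.1f} h")
```

The blended average of 46.6 hours describes neither segment: it is more than ten times the real P1 average and well under the real P3 average, so it would fail an urgent-issue SLA check and flatter the routine-ticket queue at the same time.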
How to reduce average resolution time
1. **Improve first contact resolution rate**
   Tickets resolved on the first interaction have the shortest possible resolution time. Every percentage point improvement in FCR directly reduces average resolution time by removing multi-day back-and-forth tickets from the calculation.
2. **Capture comprehensive information at intake**
   Require structured information in the ticket creation form: steps to reproduce, error messages, account identifiers, and screenshots. Each missing piece adds a round trip of clarification that can add hours or days to resolution time.
3. **Reduce escalation delays with clear escalation paths**
   Define explicit criteria for when and how tickets should be escalated. Set SLAs for escalation response times. Ensure escalation queues are staffed and monitored with the same rigour as frontline queues.
4. **Implement SLA-driven prioritisation**
   Not all tickets are equal. Assign priority levels based on impact and urgency, and ensure the queue is ordered by SLA proximity rather than arrival order. This prevents low-priority tickets from blocking high-priority ones.
5. **Track and address the long tail**
   Average resolution time can improve while a subset of tickets takes increasingly long. Monitor P90 and P95 resolution times alongside the average. Investigate tickets that remain open beyond a threshold and implement processes to prevent them from stalling.
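The stall check described in the last step reduces to a simple filter over open tickets. The five-day threshold and the ticket shape below are assumptions for illustration; the right threshold depends on your priority mix.

```python
from datetime import datetime, timedelta

STALL_THRESHOLD = timedelta(days=5)  # assumed review threshold

# Hypothetical open tickets: (ticket_id, created_at)
open_tickets = [
    ("T-101", datetime(2024, 3, 1, 9, 0)),
    ("T-102", datetime(2024, 3, 7, 14, 0)),
]

def stalled(tickets, now):
    """Return ids of tickets open longer than the stall threshold, oldest first."""
    overdue = [(tid, now - created) for tid, created in tickets
               if now - created > STALL_THRESHOLD]
    return [tid for tid, _ in sorted(overdue, key=lambda item: item[1], reverse=True)]

now = datetime(2024, 3, 8, 9, 0)
print(stalled(open_tickets, now))  # only T-101 has exceeded the threshold
```

Running a check like this on a schedule, and routing the resulting list to a human reviewer, is what keeps the long tail from drifting out of sight while the average looks healthy.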
Tracking resolution time with KPI Tree
KPI Tree lets you model resolution time as the sum of its component phases: first response wait, investigation, back-and-forth, and escalation. Each phase can be decomposed further by channel, priority, team, and issue category.
Connect resolution time to its upstream drivers (staffing, routing, knowledge base coverage) and downstream impacts (open ticket volume, customer satisfaction score, and cost per ticket). When resolution time increases, the tree shows which phase expanded, in which context, and who is best positioned to address it. This transforms resolution time from a lagging indicator into an actionable diagnostic.
Related metrics
First Contact Resolution
Support effectiveness
Operations Metrics · Metric Definition
FCR Rate = (Issues Resolved on First Contact / Total Issues Handled) × 100
First contact resolution measures the percentage of customer enquiries resolved during the first interaction without requiring follow-up contacts, transfers, or escalations. It is the single most influential metric for customer satisfaction in support operations.
Customer Satisfaction Score
CSAT
Product Metrics · Metric Definition
CSAT = (Satisfied Responses / Total Responses) × 100
Customer satisfaction score measures how satisfied customers are with a specific interaction, product, or experience. Unlike NPS which measures loyalty, CSAT captures satisfaction at a moment in time, making it ideal for evaluating specific touchpoints in the customer journey.
Customer Effort Score
CES
Product Metrics · Metric Definition
CES = Sum of All Effort Ratings / Number of Responses
Customer effort score measures how much effort a customer had to exert to accomplish a goal with your product or service. Research shows that reducing effort is more predictive of customer loyalty than increasing satisfaction, making CES a powerful complement to NPS and CSAT.
Net Promoter Score
NPS
Product Metrics · Metric Definition
NPS = % Promoters - % Detractors
Net Promoter Score measures customer loyalty by asking how likely a customer is to recommend your product or service. It is the most widely used customer experience metric, providing a single number that captures sentiment and predicts growth through word-of-mouth.
Identify and eliminate resolution bottlenecks
Build a metric tree that decomposes resolution time by phase, channel, and priority. Connect it to customer satisfaction and cost metrics to see where faster resolution creates the most value.