Metric Definition
Predicted CSAT (P-CSAT)
Predicted CSAT is a machine-learning-generated satisfaction score that estimates how a customer would rate a support interaction, without waiting for a survey response. It transforms CSAT from a retrospective sample into a real-time, comprehensive quality signal across 100% of interactions.
What is predicted CSAT?
Predicted CSAT (P-CSAT) is a score generated by a machine learning model that estimates the customer satisfaction rating a customer would give to a support interaction, without requiring the customer to complete a survey. The model analyses signals from the interaction itself, the customer's history, and the resolution outcome to produce a predicted score on the same scale as the traditional CSAT survey (typically 1 to 5).
Traditional CSAT has a fundamental limitation: survey response rates typically range from 5% to 25%. This means that 75% to 95% of support interactions have no satisfaction data at all. The responses that do come in are subject to selection bias: customers who had extreme experiences (very good or very bad) are more likely to respond, while those in the middle are underrepresented. P-CSAT addresses both problems by generating a score for every interaction.
The model is trained on historical interactions where both the interaction data and the actual CSAT survey response are available. It learns which interaction patterns, such as long wait times, multiple transfers, negative sentiment in customer messages, or unresolved outcomes, correlate with low satisfaction, and which patterns, such as fast resolution, positive language, and first-contact resolution, correlate with high satisfaction.
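To make the training step concrete, here is a minimal sketch of such a model as a hand-rolled logistic regression over three illustrative features (first response wait, transfer count, resolved flag). The feature set, labels, and toy data are assumptions for illustration; a production model would use a richer feature set and an established ML library.

```python
import math

# Toy training set: (features, actual CSAT survey label).
# Features are illustrative: [first response wait in minutes,
# number of transfers, 1.0 if resolved on first contact else 0.0].
# Label is 1 for a satisfied survey response (4-5), 0 otherwise.
TRAINING_DATA = [
    ([5.0, 0.0, 1.0], 1), ([8.0, 0.0, 1.0], 1), ([12.0, 1.0, 1.0], 1),
    ([45.0, 2.0, 0.0], 0), ([60.0, 3.0, 0.0], 0), ([30.0, 1.0, 0.0], 0),
    ([10.0, 0.0, 1.0], 1), ([50.0, 2.0, 0.0], 0),
]

def sigmoid(z):
    # Clamp to avoid math.exp overflow on extreme inputs.
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.01):
    """Fit logistic-regression weights with plain gradient descent."""
    n_features = len(data[0][0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in data:
            pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
            error = pred - label
            weights = [w - lr * error * x for w, x in zip(weights, features)]
            bias -= lr * error
    return weights, bias

def predict_satisfied(weights, bias, features):
    """Probability that the customer would rate the interaction as satisfied."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

weights, bias = train(TRAINING_DATA)
# Fast, resolved interaction vs slow, escalated, unresolved one.
fast_resolved = predict_satisfied(weights, bias, [6.0, 0.0, 1.0])
slow_escalated = predict_satisfied(weights, bias, [55.0, 3.0, 0.0])
```

Once trained, the same scoring function runs on every new interaction, which is what gives P-CSAT its 100% coverage.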
Once deployed, P-CSAT enables real-time quality management. Supervisors can identify interactions that are likely to result in low satisfaction while they are still in progress, enabling proactive intervention. Quality assurance teams can prioritise review of low-P-CSAT interactions. And leadership can track satisfaction trends across 100% of volume rather than a biased sample.
P-CSAT is a complement to traditional CSAT surveys, not a replacement. Continue collecting actual survey responses to validate the model, retrain it as customer expectations evolve, and capture the qualitative feedback that open-ended survey questions provide.
How P-CSAT models work
P-CSAT models are typically supervised learning classifiers or regressors trained on historical ticket data paired with actual CSAT responses. The model ingests a range of features from each interaction and learns the relationship between those features and the survey outcome. The most impactful input features vary by organisation but generally fall into consistent categories.
| Feature category | Example signals | Impact on prediction |
|---|---|---|
| Timing features | First response time, total resolution time, time between messages, business hours vs out-of-hours | Longer wait times and extended resolution timelines strongly predict lower satisfaction |
| Conversation features | Number of agent replies, customer message count, number of transfers or escalations, channel switches | Higher message counts and transfers predict lower satisfaction due to increased effort |
| Sentiment and language | Customer sentiment trajectory, use of negative keywords, agent empathy markers, tone shifts | Declining customer sentiment through the conversation is a strong predictor of dissatisfaction |
| Resolution features | Whether the issue was resolved, resolution method (self-service, agent, escalation), reopened tickets | Unresolved or reopened tickets are the strongest negative predictor of satisfaction |
| Customer context | Account tenure, plan tier, number of previous tickets, recent product usage decline | Customers with declining product usage and repeated tickets tend to rate lower regardless of interaction quality |
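The categories above can be sketched as a feature-extraction step over a raw ticket record. The `ticket` dict, its field names, and the first-contact-resolution heuristic are all assumptions for illustration; real systems would pull these fields from a helpdesk API and a CRM.

```python
from datetime import datetime

def extract_features(ticket):
    """Map a raw ticket record onto the feature categories above."""
    first_response_min = (
        ticket["first_agent_reply_at"] - ticket["created_at"]
    ).total_seconds() / 60.0
    resolution_hours = (
        ticket["resolved_at"] - ticket["created_at"]
    ).total_seconds() / 3600.0
    return {
        # Timing features
        "first_response_minutes": first_response_min,
        "resolution_hours": resolution_hours,
        # Conversation features
        "agent_replies": ticket["agent_reply_count"],
        "transfers": ticket["transfer_count"],
        # Resolution features (crude first-contact heuristic, assumed)
        "resolved_first_contact": 1.0 if ticket["transfer_count"] == 0
                                  and ticket["agent_reply_count"] <= 2 else 0.0,
        "reopened": 1.0 if ticket["reopen_count"] > 0 else 0.0,
        # Customer context
        "tenure_days": ticket["account_tenure_days"],
        "previous_tickets": ticket["previous_ticket_count"],
    }

ticket = {
    "created_at": datetime(2024, 5, 1, 9, 0),
    "first_agent_reply_at": datetime(2024, 5, 1, 9, 12),
    "resolved_at": datetime(2024, 5, 1, 11, 0),
    "agent_reply_count": 2,
    "transfer_count": 0,
    "reopen_count": 0,
    "account_tenure_days": 400,
    "previous_ticket_count": 1,
}
features = extract_features(ticket)
```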
Predicted CSAT in a metric tree
P-CSAT decomposes into the same drivers as traditional CSAT, but with the advantage of being available for every interaction. This makes the metric tree actionable at a granularity that survey-based CSAT cannot achieve.
The tree shows that P-CSAT is influenced by factors both within and outside the support team's control. Responsiveness and interaction quality are directly manageable through staffing, training, and tooling. Resolution quality depends on agent capability but also on product stability and engineering responsiveness. Customer context reflects the broader customer relationship.
Because P-CSAT is available for every interaction, you can decompose it by agent, team, channel, issue type, and customer segment. This granularity enables targeted interventions: if one channel consistently shows lower P-CSAT scores, it points to channel-specific issues in staffing, tooling, or process.
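Because every interaction carries a score, the decomposition itself is a simple group-by. A minimal sketch, assuming each scored interaction is a dict with the dimensions of interest:

```python
from collections import defaultdict
from statistics import mean

def decompose(scored_interactions, dimension):
    """Average P-CSAT per value of one decomposition dimension
    (agent, team, channel, issue type, customer segment)."""
    groups = defaultdict(list)
    for row in scored_interactions:
        groups[row[dimension]].append(row["p_csat"])
    return {key: round(mean(scores), 2) for key, scores in groups.items()}

scored = [
    {"channel": "chat",  "issue_type": "billing", "p_csat": 4.6},
    {"channel": "chat",  "issue_type": "bug",     "p_csat": 4.2},
    {"channel": "email", "issue_type": "billing", "p_csat": 3.1},
    {"channel": "email", "issue_type": "bug",     "p_csat": 3.5},
]
by_channel = decompose(scored, "channel")
# A persistent gap like chat vs email here points at channel-specific
# staffing, tooling, or process issues.
```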
Predicted CSAT benchmarks
| Model metric | Acceptable threshold | Strong performance |
|---|---|---|
| Model accuracy (classification) | 70% to 75% | 80%+ correct predictions when classifying satisfied vs dissatisfied |
| Mean absolute error (regression) | 0.6 to 0.8 points on a 5-point scale | Under 0.5 points deviation from actual survey scores |
| Correlation with actual CSAT | 0.55 to 0.65 | 0.70+ Pearson correlation between predicted and actual scores |
| Coverage | 100% of interactions scored | Compared to 5% to 25% survey response rate for traditional CSAT |
P-CSAT model performance should be validated monthly against incoming actual CSAT responses. Model drift is common as customer expectations evolve, product changes alter interaction patterns, and support processes are updated. Retrain the model quarterly or when correlation with actual CSAT drops below your threshold.
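The monthly validation check can be sketched directly from the benchmark table: compute MAE and Pearson correlation between predicted scores and the actual survey responses that did come in, and flag retraining when either crosses your threshold. The sample scores and thresholds below are illustrative.

```python
from math import sqrt

def mae(predicted, actual):
    """Mean absolute error on the 5-point scale."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def pearson(xs, ys):
    """Pearson correlation between predicted and actual scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Interactions where a real survey response arrived this month (toy data).
predicted = [4.2, 3.1, 4.8, 2.5, 3.9, 4.4]
actual    = [4,   3,   5,   2,   4,   5]

model_mae = mae(predicted, actual)
model_r = pearson(predicted, actual)
# Thresholds from the benchmark table above.
needs_retraining = model_r < 0.70 or model_mae > 0.8
```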
How to improve predicted CSAT scores
1. Use P-CSAT for real-time supervisor intervention
Configure alerts when an in-progress interaction's predicted satisfaction drops below a threshold. This allows supervisors to offer assistance, suggest approaches, or take over interactions before they result in a negative outcome, rather than discovering the problem days later in a survey.
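A minimal sketch of that alerting check, assuming live interactions arrive as dicts with a continuously updated `p_csat` field and a threshold of 3.0 on the 5-point scale (both assumptions):

```python
P_CSAT_ALERT_THRESHOLD = 3.0  # assumed cut-off on a 5-point scale

def interactions_needing_intervention(in_progress,
                                      threshold=P_CSAT_ALERT_THRESHOLD):
    """Return in-progress interactions whose live prediction has dropped
    below the threshold, worst-first so supervisors triage efficiently."""
    flagged = [i for i in in_progress if i["p_csat"] < threshold]
    return sorted(flagged, key=lambda i: i["p_csat"])

live = [
    {"ticket_id": "T-101", "agent": "dana", "p_csat": 4.4},
    {"ticket_id": "T-102", "agent": "sam",  "p_csat": 2.1},
    {"ticket_id": "T-103", "agent": "ravi", "p_csat": 2.8},
]
alerts = interactions_needing_intervention(live)
```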
2. Prioritise quality assurance reviews by P-CSAT
Instead of reviewing a random sample of interactions, focus QA efforts on the lowest P-CSAT interactions. This concentrates coaching on the interactions that most need improvement and provides agents with specific, actionable feedback on their most challenging cases.
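One way to build that queue is to take each agent's lowest-scoring interactions rather than the lowest team-wide, so every agent gets coaching on their own hardest cases. The per-agent sampling policy and field names here are assumptions, not the only option.

```python
from collections import defaultdict

def qa_queue(scored, per_agent=1):
    """Pick each agent's lowest-P-CSAT interactions so coaching feedback
    is specific to every agent, not only the worst cases team-wide."""
    by_agent = defaultdict(list)
    for row in scored:
        by_agent[row["agent"]].append(row)
    queue = []
    for agent, rows in by_agent.items():
        rows.sort(key=lambda r: r["p_csat"])
        queue.extend(rows[:per_agent])
    return queue

scored = [
    {"ticket_id": "T-1", "agent": "dana", "p_csat": 4.5},
    {"ticket_id": "T-2", "agent": "dana", "p_csat": 2.2},
    {"ticket_id": "T-3", "agent": "sam",  "p_csat": 3.0},
    {"ticket_id": "T-4", "agent": "sam",  "p_csat": 4.8},
]
queue = qa_queue(scored, per_agent=1)
```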
3. Identify and address systemic drivers of low P-CSAT
Segment low-P-CSAT interactions by issue type, channel, and time of day. If certain issue types consistently generate low scores, the problem may be in the product or documentation rather than agent performance. If scores drop during specific shifts, staffing levels may be insufficient.
4. Improve model accuracy with richer feature engineering
The quality of P-CSAT predictions depends on the quality of input features. Invest in sentiment analysis, conversation summarisation, and customer health scoring to provide the model with richer signals. Better inputs produce more accurate and more actionable predictions.
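As one example of such a signal, a sentiment-trajectory feature can be sketched with a crude keyword lexicon: score each customer message, then compare the second half of the conversation to the first. The word lists are illustrative assumptions; production systems would use a proper sentiment model.

```python
NEGATIVE = {"frustrated", "unacceptable", "still", "again", "cancel", "terrible"}
POSITIVE = {"thanks", "great", "perfect", "resolved", "appreciate", "helpful"}

def sentiment_trajectory(customer_messages):
    """Second-half mean sentiment minus first-half mean.
    A negative value means sentiment declined through the conversation."""
    def score(msg):
        words = [w.strip(".,!?") for w in msg.lower().split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    scores = [score(m) for m in customer_messages]
    half = len(scores) // 2
    first, second = scores[:half], scores[half:]
    return (sum(second) / max(1, len(second))
            - sum(first) / max(1, len(first)))

declining = sentiment_trajectory([
    "thanks for the quick reply",
    "this is still broken",
    "unacceptable, I want to cancel",
])
improving = sentiment_trajectory([
    "this is broken again",
    "great, thanks, that resolved it",
])
```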
5. Close the loop between P-CSAT insights and agent training
Use interaction patterns that correlate with high P-CSAT scores to build training materials. Show agents what effective empathy, clear communication, and efficient resolution look like in practice. Turn the model's learned patterns into coaching opportunities.
Related metrics
Customer Satisfaction Score
CSAT
Product Metrics · Metric Definition
CSAT = (Satisfied Responses / Total Responses) × 100
Customer satisfaction score measures how satisfied customers are with a specific interaction, product, or experience. Unlike NPS which measures loyalty, CSAT captures satisfaction at a moment in time, making it ideal for evaluating specific touchpoints in the customer journey.
Net Promoter Score
NPS
Product Metrics · Metric Definition
NPS = % Promoters - % Detractors
Net Promoter Score measures customer loyalty by asking how likely a customer is to recommend your product or service. It is the most widely used customer experience metric, providing a single number that captures sentiment and predicts growth through word-of-mouth.
First Contact Resolution
Support effectiveness
Operations Metrics · Metric Definition
FCR Rate = (Issues Resolved on First Contact / Total Issues Handled) × 100
First contact resolution measures the percentage of customer enquiries resolved during the first interaction without requiring follow-up contacts, transfers, or escalations. It is the single most influential metric for customer satisfaction in support operations.
Customer Effort Score
CES
Product Metrics · Metric Definition
CES = Sum of All Effort Ratings / Number of Responses
Customer effort score measures how much effort a customer had to exert to accomplish a goal with your product or service. Research shows that reducing effort is more predictive of customer loyalty than increasing satisfaction, making CES a powerful complement to NPS and CSAT.