
AI in CX: How to Build Empathy with Your Agentic Workforce

The tension between deploying agentic AI and maintaining customer empathy has become the defining challenge for CX leaders in 2025. A Stanford-backed study reveals that whilst 71% of consumers doubt AI can replicate human connection, 61% will pay premium prices for genuinely empathetic service—and 43% have already abandoned brands perceived as lacking empathy. This creates a paradox: over 90% of service leaders feel compelled to deploy AI this year, yet doing so carelessly risks damaging the very differentiator that drives customer loyalty and lifetime value.

The resolution lies not in choosing between AI and empathy, but in architectural clarity. AI agents should absorb high-volume, transactional work—password resets, appointment scheduling, order tracking—freeing human agents to concentrate on complex cases requiring emotional intelligence and judgment. Gartner projects agentic AI will handle 80% of common service issues autonomously by 2029, yet simultaneously predicts that half of companies initially planning workforce reductions will reverse course by 2027, recognising that empathy and discernment remain irreplaceable.

The question for teams already operating Zendesk or Salesforce Service Cloud is whether their current QA and coaching infrastructure can evolve quickly enough to support this hybrid model, particularly as internal monitoring agents become critical for real-time CSAT estimation and supervisor intervention.
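The architectural split above can be sketched in a few lines. This is a minimal, illustrative example, not any vendor's routing logic: the intent set, the sentiment threshold, and every function name are assumptions made for the sketch.

```python
# Illustrative sketch of the transactional/complex split: high-volume
# routine intents go to an automated agent; anything requiring judgment
# or emotional intelligence escalates to a human. All names and the
# sentiment threshold (-0.3 on a -1..1 scale) are assumptions.

TRANSACTIONAL_INTENTS = {"password_reset", "appointment_scheduling", "order_tracking"}

def route(intent: str, sentiment_score: float) -> str:
    """Return 'ai_agent' for routine, calm interactions; 'human_agent'
    for complex cases or customers showing strong negative sentiment."""
    if intent in TRANSACTIONAL_INTENTS and sentiment_score >= -0.3:
        return "ai_agent"
    return "human_agent"

print(route("password_reset", 0.1))    # routine request -> ai_agent
print(route("billing_dispute", -0.8))  # frustrated customer -> human_agent
```

Note the asymmetry in the design: even a transactional intent escalates when sentiment turns sharply negative, which is the point of the hybrid model rather than a pure deflection filter.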

Implementation discipline separates success from backlash. Airbnb's phased rollout—launching to 50% of US users, then to full US coverage, then expanding geographically—demonstrates that velocity without validation breeds internal resistance and customer alienation. The telecom client cited achieved a 41% NPS lift by replacing manual QA with agentic monitoring that analyses interactions in real time, surfaces training needs, and alerts supervisors to critical calls. This suggests the empathy multiplier isn't the agent itself but the infrastructure that lets human agents perform better.

For support team leads and CX consultants, the operational implication is clear: select one high-volume, low-risk use case; instrument it thoroughly with KPIs spanning time-to-resolution, contact deflection, CSAT, NPS, churn, and employee turnover; then scale only after validating both customer and employee outcomes. The financial case compounds over time—reduced churn lowers customer acquisition costs, extended tenure increases lifetime value, and lower employee turnover creates budget headroom for further capability investment. The risk isn't deploying AI; it's deploying it without the monitoring, coaching, and human-agent support systems that transform it from a cost-cutting tool into a genuine empathy enabler.
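The monitoring loop described above—real-time CSAT estimation feeding supervisor alerts—might look something like this in skeleton form. This is a sketch under heavy assumptions: the keyword-based scorer, the 1–5 scale, and the alert threshold are all placeholders for whatever model and thresholds a real deployment would use.

```python
# Hypothetical sketch of agentic monitoring: estimate CSAT per
# interaction in real time and flag critical calls to a supervisor.
# The naive keyword scorer and the 2.5 threshold are illustrative
# assumptions, not a description of any production system.

from dataclasses import dataclass

NEGATIVE_MARKERS = {"cancel", "frustrated", "unacceptable", "complaint"}
ALERT_THRESHOLD = 2.5  # estimated CSAT (1-5) below which a supervisor is alerted

@dataclass
class Interaction:
    call_id: str
    transcript: str

def estimate_csat(interaction: Interaction) -> float:
    """Naive proxy: start at 4.0 and subtract a point per negative marker."""
    words = interaction.transcript.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?'\"") in NEGATIVE_MARKERS)
    return max(1.0, 4.0 - hits)

def monitor(interaction: Interaction, alerts: list) -> float:
    """Score an interaction; append critical calls to the supervisor queue."""
    score = estimate_csat(interaction)
    if score < ALERT_THRESHOLD:
        alerts.append((interaction.call_id, score))
    return score
```

In practice the scorer would be a sentiment or outcome-prediction model rather than keywords, but the shape—score every interaction, route only threshold breaches to humans—is what keeps supervisor attention focused on the calls where intervention matters.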