Talkdesk's VP of AI has articulated a problem that most contact centre leaders are quietly experiencing but rarely discussing openly: they cannot reliably measure whether their AI investments are actually working. The company's critique centres not on technological limitations but on three interconnected failures that plague current AI deployments: visibility gaps, data quality issues, and misaligned metrics. Andrade's argument is deliberately provocative: vendor hype inflates expectations before implementation, organisations deploy AI against data that was never designed for machine consumption (his colour-coded knowledge base example is particularly damning), and teams then measure success using metrics that actively obscure performance. The newly launched CXA Operations Center addresses the visibility problem directly, through session-level tracing and failure diagnostics, but it is fundamentally a response to a governance vacuum that should never have existed in the first place.
The implications for CX teams are substantial and uncomfortable. If Andrade is correct (and the evidence suggests he is), then many organisations currently reporting AI success may actually be operating blind. This raises a critical question: how many teams have justified continued AI investment on the basis of metrics that are themselves broken? The average handle time problem he identifies is particularly revealing. As AI absorbs routine interactions, human agents inherit the harder cases, driving up AHT. Without recalibrating how agent performance is measured in a hybrid workforce, organisations risk misreading operational success as failure, potentially triggering cost-cutting decisions that undermine the very automation they have invested in. For administrators and team leads already running AI agents, whether through Talkdesk, Salesforce Agentforce, or another platform, this suggests an urgent audit: do you actually know what your agents are doing, and are your success metrics measuring the right things?
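The AHT distortion is a simple selection effect, and a toy calculation makes it concrete. The figures below are illustrative assumptions, not numbers from the article: suppose agents previously handled a mix of short routine calls and long complex ones, and AI then absorbs the routine tier.

```python
# Illustrative only: toy numbers showing how automating routine calls
# raises human average handle time (AHT) even as total workload falls.
# Call volumes and handle times are hypothetical.

routine = {"count": 800, "aht_min": 4.0}    # short, scriptable calls
complex_ = {"count": 200, "aht_min": 12.0}  # harder, judgement-heavy calls

def avg_handle_time(segments):
    """Volume-weighted average handle time across call segments, in minutes."""
    total_minutes = sum(s["count"] * s["aht_min"] for s in segments)
    total_calls = sum(s["count"] for s in segments)
    return total_minutes / total_calls

before = avg_handle_time([routine, complex_])  # humans take everything
after = avg_handle_time([complex_])            # AI absorbs the routine tier

print(f"Human AHT before automation: {before:.1f} min")  # 5.6 min
print(f"Human AHT after automation:  {after:.1f} min")   # 12.0 min
```

Read naively, human AHT more than doubles; read correctly, the cheapest 80% of calls simply disappeared from the human queue. Any per-agent target still anchored to the old blended baseline would flag exactly this success as failure.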
The broader implication is that AI adoption in CX has been proceeding on faith rather than evidence. Talkdesk's positioning as the vendor offering observability is commercially convenient, but the underlying diagnosis appears sound. The real cost of AI deployment isn't the software licence; it's the data preparation, governance frameworks, and operational discipline required to make it work at scale. Organisations that have treated AI as a plug-and-play efficiency play—rather than a fundamental shift in how contact centre work is structured and measured—are likely discovering that their ROI story doesn't hold up under scrutiny. The question now is whether observability tools alone can fix this, or whether the problem runs deeper into how organisations have fundamentally misunderstood what AI implementation actually demands.
Ask most contact centre leaders how their AI is performing, and there’s a decent chance they genuinely don’t know. Not because they aren’t paying attention, but because the tools to answer that question properly haven’t really existed… until now. In a recent conversation…
Talkdesk Calls Out the AI Hype Machine – And Offers a Way Out (CX Today)