The AI capability gap in support reveals a fundamental misalignment between what vendors promise and what organisations can actually execute. Implementation consultants across the sector are encountering the same pattern: tools like Agentforce, Einstein, and comparable AI-native platforms deliver genuine operational value, yet they consistently fail to compensate for upstream failures in process design, knowledge management, and data architecture. The problem isn't the technology itself—it's that AI amplifies existing dysfunction. A poorly structured knowledge base doesn't become useful when wrapped in an LLM; it becomes confidently wrong at scale. Teams deploying these tools without first addressing broken processes and data disconnection are essentially asking AI to solve problems that require organisational discipline, not computational power.
This creates a critical inflection point for CX leaders evaluating their AI roadmap. The question isn't whether to adopt AI—it's whether your foundation can support it. Teams with mature knowledge management, clean data, and documented workflows will see immediate ROI from these tools. Teams without them will watch their AI investment stall, then blame the vendor. For Zendesk administrators and support leads, this means the real work happens before implementation: auditing whether your processes are actually documented, whether your knowledge is actually accessible, and whether your data actually reflects reality. The vendors have solved the hard technical problem. The hard organisational problem—the one that determines whether AI succeeds or fails—remains entirely yours.
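That audit of knowledge accessibility can start very small. The sketch below, a minimal and hypothetical example (the field names and thresholds are assumptions, not any vendor's schema), flags knowledge-base articles an AI agent should not be allowed to answer from: drafts, and anything that hasn't been touched in over a year. In practice the records might come from an export of your help centre.

```python
from datetime import datetime, timezone

# Hypothetical article records -- in a real audit these would come from
# an export of your help centre. Field names here are illustrative.
ARTICLES = [
    {"id": 101, "title": "Reset your password", "updated_at": "2025-01-10", "draft": False},
    {"id": 102, "title": "Legacy billing FAQ",  "updated_at": "2022-03-04", "draft": False},
    {"id": 103, "title": "Beta feature notes",  "updated_at": "2024-11-20", "draft": True},
]

def audit_articles(articles, max_age_days=365, today=None):
    """Return (article_id, reason) pairs for articles that should be
    excluded from an AI agent's knowledge: drafts, and anything not
    updated within max_age_days."""
    today = today or datetime.now(timezone.utc).date()
    flagged = []
    for article in articles:
        updated = datetime.strptime(article["updated_at"], "%Y-%m-%d").date()
        age_days = (today - updated).days
        if article["draft"]:
            flagged.append((article["id"], "draft"))
        elif age_days > max_age_days:
            flagged.append((article["id"], f"stale ({age_days} days old)"))
    return flagged

if __name__ == "__main__":
    for article_id, reason in audit_articles(ARTICLES):
        print(article_id, reason)
```

The point isn't the script; it's that "is our knowledge actually current?" is a question you can answer with an afternoon of work, before any AI contract is signed.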
Is everybody (in the implementation consultant game, at least) battling the AI perception/capability gap? There are tools that can really help get stuff done, but they can't cover for broken processes, missing knowledge, or data disconnection. You need to get the rocket pointed the right way before you launch.