Every support person has dreamed of setting their own customer service policies; AI bots are actually doing it. What can recent, public failures of AI customer service bots tell us about generative AI and the future of service?
When a customer service policy is first dreamed up, often way up in the airless heights of the org chart, it looks stunning. It’s sleek and perfect and has incredible clarity, like the fancy televisions at the big box store that I won’t let my children go near.
When that same policy is implemented in the murky reality of the support queue, things look very different. That beautiful, clear policy turns out to be squishy and vague, full of loopholes and inconsistencies that customers constantly crash into.
It’s not malicious or even surprising. It’s the nature of policies and of support. There will always be edge cases, judgment calls, and unintended, unpredictable consequences. That’s alright; it’s (partly) what your support team is there to do. It’s their job to figure out how and when to apply those policies, taking into account the particular customer’s experience and the company’s needs. They make sensible decisions…but only if they’re allowed to.
We’ve all dealt with the companies that hold that decision-making ability back, refusing to allow any leeway. These are companies where the strict “letter of the law” is applied even when it makes no sense. It’s a frustrating experience for both the customer and the support agent, who are somehow equally powerless to make things better.
The obvious alternative is to give support teams a little more power and authority. It’s not a choice without risk: the wrong call will sometimes be made, and boundaries may be crossed. That makes support a job best suited to experienced, skilled, trustworthy people. When mistakes inevitably happen, customer-centric organizations can respond quickly, make things right, and get the customer back on track.
Generative AI has made a third path possible, one where support professionals and their decision-making are removed from at least some levels of customer service. The benefits are clear: lower costs and more scalable service, including across languages and time zones. But there are risks, too. In one case, Air Canada’s chatbot invented a version of the airline’s bereavement travel policy, and a court ultimately required Air Canada to honor the offer the bot had made.
More recently, a chatbot for Cursor (an AI coding assistant) invented an entirely new business policy, a ban on simultaneous logins, which quickly led to mass confusion and customers cancelling their accounts. A company founder eventually scrambled to correct the record. He carefully placed the blame on their use of “AI-assisted responses as the first filter for email support,” an explanation that doesn’t make entirely clear whether a human was involved at all. There’s an implication embedded in that phrasing: “first filter” is meant to minimize the scope of the problem. But for customers, the first filter might be their first, or even only, communication with your company. That one interaction can create a negative impression that is never redeemed.
Yes, AI can do a good job handling lots of questions, and many of us are already taking advantage of AI capabilities, but let’s not hide from the risk. Cursor, at least, didn’t take the Air Canada route of forcing its customers to go to court before admitting fault and taking corrective action. “Any AI responses used for email support are now clearly labeled as such,” said cofounder Michael Truell, finally taking what would seem to be the most obvious first step toward rebuilding trust.
Customer service really is built on a foundation of trust. Customers often find themselves in a position of deep informational asymmetry: they know they have an issue, but they have no access to the internal company tools or information needed to verify its cause or how it might be resolved. They need to trust the support person to gather that information and share it with them honestly.
Generative AI tools are not trustworthy — or at least, they cannot be trustworthy in the same way a person can. They can’t differentiate reality from hallucination because everything they produce is, from the perspective of generative AI, a hallucination. We as humans just prefer the AI dreams that happen to line up with our perceived reality.
If you’re generating artwork or music, that distinction may not matter at all. Art can’t be “wrong” in the same way the application of a policy can be objectively wrong. Art might be derivative, or ugly, or feature an unsettlingly high fingers-per-person ratio, but it can’t be factually incorrect.
So am I, a person writing on behalf of a customer service platform that sells generative AI tools, saying that AI cannot be used safely in support? No, I absolutely am not. Day by day we are all finding new ways to apply AI tools to the very broad spectrum of tasks that customer support work comprises. We’re saving time, we’re extending our service hours, we’re learning more quickly. These are very real benefits.
What I am saying is this: The stakes are high when you’re talking directly to customers. Things can go much more wrong much more quickly than people realize (especially people who aren’t on the front lines of support every day), and you may not get a second chance to provide correct information to your current or prospective customers.
When dealing with people under stress and with money on the line, a little more caution and thoughtfulness is called for. It’s very easy to graph out the reduction in cost and the gain in efficiency from using AI bots as “the first filter for email support.” Those numbers look great in a board meeting. What’s much harder to measure is the loss of trust when an AI bot gives out a confidently incorrect answer.
You’re not always going to have a viral Hacker News post to tell you when your AI has invented a policy. How many customers will be turned away with the wrong information before someone is noisy enough or popular enough to be heard by a responsible human on your team?
Your support team is doing so much more than you think they are. They’re building customer trust, they’re putting a human face on a corporate brand, and they’re applying customer-centered judgment to complex situations. They’re sending nuanced responses to your VIPs, your prospects, and your loyal customers.
That’s the sort of work that can be hard to notice until it stops. Don’t wait until it is too late to understand what’s really happening in your support inbox. Seek that knowledge now, before you decide where and how to deploy your AI tools.
