AI has quickly become a big part of daily life for many business people. They use it to research challenges, write emails, organize information and even draw up contracts. In most applications, the results have been positive, if somewhat dull (I can already see messages I receive taking on a cookie-cutter quality as AI flattens everybody’s writing into the same voice).
Customer service, however, has not enjoyed the same benefits. Huge tech companies like Meta seem to have gleefully turned customer service over to AI and sent all the humans home before testing the results. Google seems especially committed to this approach. I knew it was coming, of course, but even I was stunned at how quickly Google set everything to “auto” without any human backup.
Historically, even Google (which has had legendarily bad customer service for at least a decade now) would start your customer service experience with an automated system for answering a question, but eventually give you the option to contact a human if the “knowledge base” failed (which it did – a lot). Now, for systems like Google Ads rejections, the company seems to have eliminated the ability to reach a human altogether.
The big problem with AI is that it simply doesn’t understand context – at least not yet.
That’s why a public service ad discouraging underage alcohol use will commonly be rejected under policies designed to prevent advertisers from promoting alcoholic beverages to minors. The AI evaluating the campaign simply cannot understand the wildly different contexts of being for something and being against it.
Here’s a specific example. Google Ads has a policy of not showing too many logos in an image that accompanies an ad. I’m not sure why this is, but it’s an actual policy. Maybe they are neat freaks who hate clutter. At any rate, I’ve had Google’s AI use this policy to reject a photo of a grocery store shelf filled with products. This photo was important to the ad because it showed the product being advertised in use, but the AI just said “too many logos” and prevented it from running.
That’s frustrating, but it’s nothing compared to the migraine-inducing customer service loop that follows as you try to remedy the mistake. Even a year or so ago, Google would let you appeal the automated system’s decision by requesting a human review, along with a message you typed in explaining the problem. That human would look at the image, read your explanation, say to themselves, “oh, I see, there’s no way to show that product being used except in a photo that looks like this,” and reverse the decision. Problem solved.
Today, there is still an “appeal” button, but when you hit it, your appeal is rejected instantaneously – far, far too quickly for any human to have even seen the image. This results in a comedic loop of the AI saying, “this is rejected,” then the user saying, “check again,” then the AI instantly saying, “still rejected,” then the user saying, “look again,” etc., etc.
It’s not really an appeal if you are just asking the same decision maker to make the same decision again with the exact same information, Google.
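To see why that instant “appeal” is hollow, here is a minimal sketch, assuming a purely automated reviewer. The review_ad() function and Verdict class are invented for illustration and are not anything Google has published; the point is simply that if the appeal re-runs the same deterministic check on the same image, the second verdict cannot differ from the first.

```python
# A toy sketch, not Google's real system: review_ad() and Verdict are invented
# names used only to show why re-running the same check can't change anything.

from dataclasses import dataclass

@dataclass(frozen=True)
class Verdict:
    approved: bool
    reason: str

def review_ad(image_features: frozenset) -> Verdict:
    """Stand-in for an automated policy check: same input, same output."""
    if "many_logos" in image_features:
        return Verdict(approved=False, reason="Too many logos")
    return Verdict(approved=True, reason="OK")

ad = frozenset({"grocery_shelf", "many_logos"})

first_pass = review_ad(ad)   # rejected
appeal = review_ad(ad)       # the "appeal": identical reviewer, identical input
assert first_pass == appeal  # so the verdict is guaranteed to be identical
print(appeal.reason)         # "Too many logos" again, instantly
```

The only way the outcome changes is if something new enters the loop: a different reviewer, new information, or a human.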
Whether it is a human or an AI agent, the definition of insanity is doing the same thing again and again while expecting a different result. That’s the problem in these early days of AI customer service. AI works best with a replicable problem and a replicable response. So for the 75% of people whose tech support issues can be solved by switching their laptops off and on again, AI is a boon. For the other 25%, whose issue is a problem they can’t quite articulate (“it’s making a sound like my dog Archie when he wants to go outside”), AI needs human backup to ask insightful follow-up questions, form crazy hypotheses and suggest weird troubleshooting techniques that may not make much sense on paper.
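In code terms, the fix isn’t better AI alone but an explicit handoff. Here is a rough sketch, with every name and number (the playbook, the 0.8 confidence threshold, the escalate_to_human() helper) invented for illustration rather than taken from any real product: the bot answers only when the problem matches a known, replicable fix, and everything else goes to a person.

```python
# A hypothetical triage rule, not any vendor's product: the playbook, the
# confidence threshold, and escalate_to_human() are assumptions for this sketch.

PLAYBOOK = {
    "wifi_dropping": "Restart your router and reconnect.",
    "slow_laptop": "Turn it off and on again.",
}

def escalate_to_human() -> str:
    # The hard 25%: vague symptoms, weird noises, problems with no script.
    return "Connecting you to a human agent who can ask follow-up questions."

def triage(issue_key, confidence: float) -> str:
    """Let the bot answer only replicable problems; route the rest to a person."""
    if issue_key in PLAYBOOK and confidence >= 0.8:
        return PLAYBOOK[issue_key]  # the easy 75%
    return escalate_to_human()

print(triage("slow_laptop", confidence=0.95))
print(triage(None, confidence=0.2))  # "a sound like my dog Archie..." lands here
```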
Unfortunately, big companies have too much faith in their new AI employees, and when they decide that human interaction isn’t necessary – even as a last resort – customer service fails for the people who need it most. That’s a shame, but I’m not sure how we walk it back now that we’ve come this far.
Here’s some good news. We are still very, very early in the age of AI customer service. AI agents that help us and do things for us are only going to get better at their jobs. I predict that in just a few years, the share of customer support problems AI can’t solve will drop from 25% to 5%, meaning that most of the time, we’re all going to be grateful for the software helping us overcome our challenges.
That other 5% of the time? Those are the times we are going to wish there was a human we could call and speak with. Here’s an idea – perhaps an industry will evolve that picks up where AI leaves off when it comes to problem solving. Maybe the next evolution of elite customer service will be human after all.