
Don’t cull the bots! Here’s 7 myths we should bury first

By Ben McCulloch

Some myths linger in the public consciousness for too long.

People said that Yoko Ono broke up the Beatles (and it took 51 years to prove that she didn’t).

They thought that President Kennedy had mistakenly told the world “I’m a jelly doughnut” when he spoke German in a speech in West Berlin in 1963, but he didn’t. Non-German speakers believed that myth, but Germans knew that his words “Ich bin ein Berliner” meant he felt a bond with Berlin.

Hearsay and rumours should never beat objective evidence.

In conversational AI we also have some lingering myths that deserve re-evaluation.

Ian Collins (Founder/CEO) and Jay Athia (Senior Director, Customer Success) of Wysdom AI recently appeared in a VUX World webinar, where they made the case for dispelling 7 lingering myths!

Myth #1 – Your AI platform is the issue, you should change it

Perhaps you use Dialogflow, Watson, Microsoft, Salesforce Einstein, Amazon Lex, or a third party like Kore, Cognigy, Boost and so on. Each of them is excellent in its own way.

No platform does everything though! If you dislike certain features or UI elements on the platform you use, switching to another platform could make you feel better, but you may discover something with the new platform that isn’t exactly perfect for your needs either.

Don’t get us wrong. We’re not saying that you should never change platforms. What we’re saying is you need bulletproof rationale to do so, because whatever platform you use will have strengths and weaknesses. Don’t forget the time, staff and money that will be tied up in the shift either.

What you should do first is some proper digging into your AI assistant’s performance. Gaining visibility, and then optimising performance through effective bot management tools and techniques, might get you the outcome you’re looking for without the headache.

“A mediocre platform managed well is going to deliver better results all day long than the best platform in the world managed badly.”

Myth #2 – In order to save money, customer experience must suffer

There’s a good chance you already have a website or app that customers use to resolve different needs. It’s expected now; every brand has a site and app.

As the benefits of conversational AI become better known, brands are starting to migrate some of that traffic to this new option. Bots can provide 24/7 customer service that quickly resolves specific customer needs, and they can be cheaper than live agents. However, that doesn’t mean you should instantly divert all traffic to your new bot! What about those users whose needs are currently better served by the website or app you already have?

“If the bots are well managed they’re 1/5 to 1/10 of the price of live chat, so the savings are significant. You can see why everybody is motivated to drive into this world! Many times our clients said, ‘You know what, I just assumed I’m gonna save a bunch of money, but my customers aren’t super happy.’”

While starting small and focusing on several use cases is valuable, it is crucial to understand that automation may not be suitable for every interaction.

You could start with an evaluation process to determine which conversations are better handled with automation and which are not. By monitoring and continuously improving the quality of experience for the selected use cases, you apply automation where it adds value while still leaving room for human intervention when necessary.

Analytics play a crucial role in this process, helping identify new automation opportunities. Slowly, the bot can absorb more and more traffic as its use cases expand. This balanced approach ensures that automation enhances customer experiences while maintaining the human touch when needed.
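To make that concrete, here’s a minimal sketch in Python of the kind of per-use-case evaluation described above. All of the costs, volumes and resolution rates are hypothetical; the point is the shape of the decision, not the numbers.

```python
# A minimal sketch (hypothetical numbers throughout) of evaluating use cases:
# estimate the savings from automating each one, but only recommend automation
# where the bot genuinely resolves the customer's need.

LIVE_AGENT_COST = 5.00   # assumed cost per live-chat contact
BOT_COST = 0.75          # assumed cost per bot conversation (1/5 to 1/10 of live chat)

use_cases = [
    # name, monthly volume, share of bot conversations that truly resolve the need
    ("check order status", 12_000, 0.85),
    ("cancel subscription", 3_000, 0.40),
    ("billing dispute",     1_500, 0.15),
]

for name, volume, resolution_rate in use_cases:
    resolved = volume * resolution_rate    # contacts the bot handles end to end
    escalated = volume - resolved          # contacts that still reach an agent
    automated_cost = volume * BOT_COST + escalated * LIVE_AGENT_COST
    baseline_cost = volume * LIVE_AGENT_COST
    savings = baseline_cost - automated_cost
    verdict = "automate" if resolution_rate >= 0.6 else "keep human (for now)"
    print(f"{name}: est. monthly savings ${savings:,.0f} -> {verdict}")
```

Run with these made-up figures, “billing dispute” saves nothing at all: every escalated contact costs full agent price on top of the bot conversation that failed. That’s the balanced, analytics-driven expansion the myth misses.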

Myth #3 – Customer surveys are a good measure of truth

“The assumption is that you can measure the satisfaction users have with an assistant or a bot using surveys. Everyone uses surveys. But I think most people don’t want to admit that surveys are better than nothing, but not much better.”

You can expect roughly 3-5% participation from surveys. It’s human nature. If the experience went well people might be inclined to say something. If it was mediocre, they usually just want to move along and get on with their life. If it was terrible, you’ll definitely hear about it!

So isn’t there a better way?

Sure there is! In automated conversations users reveal how things are going at every moment. Are they saying confused things such as ‘what do you mean,’ or ‘how do I do that’ to the bot? Those are clear signs. Does the bot have to repeat itself often, or do users drop off at specific moments in the conversation? Those are clear signs too.

When asked, users might say “yeah, it’s fine”, because a survey forces people to sum up the whole experience after the fact. They may not want to participate for all sorts of reasons. But when you see them struggle while they’re doing it, that can reveal all sorts of useful feedback for you to improve the conversations.

The best thing about this is that you can analyse every single conversation, not just the answers from the small share of people who were inclined to fill in the survey!
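As a rough illustration of what this kind of analysis can look like, here’s a short Python sketch that scans transcripts for the struggle signals mentioned above. The transcript format and the load_all_transcripts helper are invented for the example.

```python
# A rough sketch of in-conversation struggle signals: confusion phrases,
# the bot repeating itself, and drop-offs. Transcript format is invented.

CONFUSION_PHRASES = ("what do you mean", "how do i do that", "i don't understand")

def struggle_signals(transcript):
    """transcript: list of (speaker, text) tuples, e.g. ('user', 'what do you mean')."""
    user_turns = [t.lower() for s, t in transcript if s == "user"]
    bot_turns = [t.lower() for s, t in transcript if s == "bot"]

    confused = sum(any(p in turn for p in CONFUSION_PHRASES) for turn in user_turns)
    repeats = len(bot_turns) - len(set(bot_turns))  # bot said the same thing twice
    dropped = transcript[-1][0] == "bot" if transcript else False  # user never replied

    return {"confusion_turns": confused, "bot_repeats": repeats, "dropped_off": dropped}

# Unlike a survey, this runs over every conversation, not just the 3-5% who respond.
conversations = load_all_transcripts()  # hypothetical loader for your chat logs
flagged = [c for c in conversations if struggle_signals(c)["confusion_turns"] > 0]
```

A real bot management platform does far more than keyword matching, of course, but even this crude version surfaces conversations worth a human look.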

“There’s two things you care about. One is how effectively did I solve the customer’s problem? Number two, is how did the customer feel? I gotta manage these two things together. I can’t do one in isolation or the other – I need to understand them both.”

 

Myth #4 – Containment’s a solid measure of bot performance

As Jay says, one of their clients was measuring containment (the share of conversations the assistant resolves without handing off to a human agent). They had great results, and thought that should reflect a great experience. However, they were getting heat from their management because customers were complaining. “So you get this metric that says nearly half of your chats are contained, but they’re all going somewhere else and getting higher cost support. There’s more to it!”

And that’s exactly it! When you check your containment results, you need to consider them in the wider context.

Were you imprisoning users so they couldn’t leave or get what they wanted? That’s a terrible experience. Or was their need resolved by the bot alone? That may well have been a great experience! The thing is that both these positive and negative experiences would have contributed to a high containment rating. The only way you’ll know more is if you dig deeper and research what was actually happening in those conversations that the bot contained.

If the only way you measure your bot is by how many calls were deflected from the contact centre, then you’ll have no idea how many customer problems the bot is actually solving. You won’t be using these incredible tools to their full advantage.
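Here’s a toy Python example of how containment can look healthy while resolution tells a different story. The conversation records and their fields are invented for illustration.

```python
# A minimal sketch of why containment alone misleads. Three of the four
# conversations below count as "contained" (no agent handoff), but only
# one of those actually resolved anything. Field names are illustrative.

conversations = [
    {"handed_off": False, "need_resolved": True},   # great experience
    {"handed_off": False, "need_resolved": False},  # user gave up, called later
    {"handed_off": True,  "need_resolved": True},   # agent finished the job
    {"handed_off": False, "need_resolved": False},  # trapped in a loop
]

contained = [c for c in conversations if not c["handed_off"]]
containment_rate = len(contained) / len(conversations)
resolution_rate = sum(c["need_resolved"] for c in conversations) / len(conversations)
resolved_in_containment = sum(c["need_resolved"] for c in contained) / len(contained)

print(f"containment: {containment_rate:.0%}")  # 75% - looks great on a dashboard
print(f"resolution:  {resolution_rate:.0%}")   # 50% - less great
print(f"resolved among contained chats: {resolved_in_containment:.0%}")  # 33%
```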

Read more: why containment is the wrong metric.

Myth #5 – NLU scores are directly related to success

Let’s be clear about this – the bot needs to understand the user. Your NLU (Natural Language Understanding) needs to be properly trained and maintained.

But that’s the starting point. You need that just to begin doing great work in customer experience!

When you’re sure the NLU understands your users well, the question is how well are you solving the customer’s needs? That’s the true measure of success.
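A small, contrived Python example makes the gap visible: a bot can classify every utterance correctly and still leave half of the customers’ needs unresolved. All data below is made up.

```python
# A toy illustration of the gap between NLU accuracy and actual success.
# The bot below classifies every utterance correctly, yet resolves only
# half of the needs. Numbers and fields are invented.

conversations = [
    {"intent_correct": True, "need_resolved": True},
    {"intent_correct": True, "need_resolved": False},  # understood, but the flow dead-ends
    {"intent_correct": True, "need_resolved": True},
    {"intent_correct": True, "need_resolved": False},  # understood, but no backend action
]

nlu_accuracy = sum(c["intent_correct"] for c in conversations) / len(conversations)
resolution = sum(c["need_resolved"] for c in conversations) / len(conversations)

print(f"NLU accuracy: {nlu_accuracy:.0%}")   # 100% - necessary, not sufficient
print(f"needs resolved: {resolution:.0%}")   # 50% - the measure that matters
```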

Myth #6 – You need a team of data scientists to measure AI systems

Bots need regular iterations. Once you’re sure an update has a solid rationale, you should act on it quickly, then analyse whether it worked, and possibly tweak it again.

That’s the rhythm you want to establish: regular analysis and conclusions which lead to updates and improvements.

So then, does it make sense to hand analysis off to a team of data scientists who may take months to check over the bot’s results and recommend the next step to resolve an issue? How much better can they analyse and make conclusions compared to your team, who knows the bot’s inner workings intimately?

The longer it takes to analyse and come to conclusions, the harder it is to keep a sense of what’s really happening in your bot, and all the while your customers keep suffering from the same issue.

Analysis needs to be done at the bot management level, by the team that builds and maintains the bot, and they need the right tools (like Wysdom’s) to do that.

Myth #7 – Chatbot usage always stalls or declines

“I probably talked to 100 bot teams in the last year, and most of them will say ‘my volume is pretty flat, or declining’. And they assume ‘well – that must be because our chatbots are a painful experience’. But we’ve seen the opposite. If it’s managed well, then customers start to come back. We’ve seen many of our clients double their volume over the last year. You’ve got to make it more prominent! You’ve got to make it easy to find the bots!”

Customers need to be able to find the bot, and you need to constantly work to improve it. If it’s effective, and delivering a good experience, people do come back and use it!

Let’s make better assistants

The process of creating AI assistants is really one of continuous improvement. Challenging assumptions, evaluating and iterating… that’s how we get great results.

Challenging the commonly accepted ‘truths’ is one part of that process.

Thanks to Ian, Jay and Wysdom AI for laying these 7 myths to rest!

You can watch the full webinar to find out more.

Learn more about Wysdom AI.
