Why should you measure bots like humans?

By Ben McCulloch

You’ll get a variety of answers if you ask anyone with a call centre bot how they measure its success. They might measure containment rates (how many callers were serviced by the bot without any help from a human agent). They might also measure CSAT (customer satisfaction score) and NPS (net promoter score).
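For concreteness, those three metrics all boil down to simple arithmetic. Here's a minimal Python sketch of how they're typically computed — the function names and example figures are illustrative, not taken from any particular platform:

```python
def containment_rate(total_calls, escalated_calls):
    """Share of calls the bot resolved without handing off to a human agent."""
    return (total_calls - escalated_calls) / total_calls

def csat(scores, satisfied_threshold=4):
    """Percentage of survey scores (on a 1-5 scale) at or above the threshold."""
    return 100 * sum(s >= satisfied_threshold for s in scores) / len(scores)

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), 0-10 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative numbers: 1,000 calls, 370 escalated to a human agent
print(containment_rate(1000, 370))   # → 0.63
print(csat([5, 4, 3, 5, 2]))         # → 60.0
print(round(nps([10, 9, 8, 6, 3, 10]), 1))  # → 16.7
```

Easy to compute — which is exactly why they're popular, and exactly why they can mislead: none of them says anything about what actually happened inside the conversation.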

Those are flawed success metrics, as Frank Schneider explains to Kane Simms in his VUX World interview.

How do you measure any conversation?

We’re too focused on the numbers these days. You could focus on the accuracy of your NLU (Natural Language Understanding) for example – it’s a very important component of your conversational AI system – but what about resolving customer problems?

Bear with me here. I want to look at this from a different angle. Let’s turn this around and take the focus off AI for a moment. How would you measure the success of a human-to-human conversation between customer and brand in any context?

We’ll boil it down to a simple everyday scenario.

Let’s say you go into your neighbourhood bakery to buy a loaf of bread. You don’t know the staff personally, but you’ve been there before. The first baker says “what would you like?”, so you say you want that sesame loaf they bake so well. Another baker overhears you and places your loaf of bread on the counter. They say nothing and return to what they were doing. As you pay, the first baker says “I know you love this sesame seed loaf – we’ve been trying different recipes so you should come back next Monday to try our improved version.”

You leave the shop with a smile on your face, and then someone appears with a clipboard and asks you to rate each baker. How would you do it? Although the total experience was just fine, if you put each interaction under a microscope you could say the bakers were underperforming.

Baker one asked what you wanted, but didn’t give it to you. Baker two said nothing, but gave you what you wanted. Finally, baker one teased you with a new product which you might never buy. So was baker one ineffective, really? They only asked you a question and floated an advert at you, delivering nothing. How highly would you rate them? Probably quite low. Baker two could be seen as antisocial because they said nothing, but in fact they gave you exactly what you wanted.

The combined result was that everything went smoothly and you got what you came for with minimal friction! Measuring each baker’s effectiveness individually gives a slightly different perspective though, doesn’t it?

That’s the challenge

It’s not easy to measure any customer experience, whether it involves humans or bots. In the above scenario baker one didn’t need to get the bread because baker two did it. Baker two didn’t need to say anything – would “here’s the loaf of bread you just asked for” or “there you are” have helped the situation at all?

You could say some call centre bots are like baker one – asking questions and then escalating to an expert. But bots can also be baker two – working in the background, without any communication, delivering what you need so seamlessly you barely notice.

And when someone talks to a call centre, they don’t stay on the same subject. They might suddenly remember something else they wanted, or get distracted and ask to leave so they can return to the conversation later. Can you measure the success of conversations in those scenarios?

[Image: a robot in a shopping mall]

Measure bots the same way you’d measure humans

Speakeasy are thinking differently

As you can see, we’re not dealing with situations that can be assessed by simple criteria. You really should check out Speakeasy AI’s whitepaper. Rather than focusing on metrics such as containment (a poor metric, because a proper bot integration lets the live agent and bot work as a team rather than as competitors), they propose measuring the ‘correctness’ (how closely a response aligns with the truth) and ‘fluency’ (how smooth and effortless the flow of conversation is) of conversational AI.
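To make that idea concrete, here's a hypothetical sketch of what rating conversations on those two axes and aggregating the results might look like. To be clear, the class, the 0–1 scale, and the example figures are my own illustration, not Speakeasy AI's methodology:

```python
from dataclasses import dataclass

@dataclass
class ConversationRating:
    # Both scores use an assumed 0-1 scale.
    correctness: float  # how closely the responses aligned with the truth
    fluency: float      # how smooth and effortless the conversation flow felt

def aggregate(ratings):
    """Average correctness and fluency across a batch of rated conversations."""
    n = len(ratings)
    avg_correctness = sum(r.correctness for r in ratings) / n
    avg_fluency = sum(r.fluency for r in ratings) / n
    return avg_correctness, avg_fluency

# Rate bot and human conversations with the same rubric, then compare.
bot_ratings = [ConversationRating(0.9, 0.8), ConversationRating(0.7, 1.0)]
human_ratings = [ConversationRating(0.95, 0.9)]
print(aggregate(bot_ratings), aggregate(human_ratings))
```

The code itself is trivial – the point is that applying one rubric to both bots and live agents lets you compare them directly, instead of grading bots on containment and humans on something else entirely.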

It’s all about measuring your bots as you would measure your live agents, rather than focusing on out-of-date metrics:

“If you introduce measures around business KPIs, contextual awareness, courtesy, and generosity, then you’ll train your AI to be so much more than just error-proof. When done right, conversational AI can significantly improve the quality of your customer service — we just need to assess and train virtual assistants with as much enthusiasm as we invest in human agents.”

You can download the whitepaper – it’s baked perfectly and full of goodness 😎

You can hear Frank’s full interview here where he gives some great insights into conversational AI.

This article was written by Benjamin McCulloch. Ben is a freelance conversation designer and an expert in audio production. He has a decade of experience crafting natural-sounding dialogue: recording, editing and directing voice talent in the studio. His work includes dialogue editing for Philips’ ‘Breathless Choir’ series of commercials, a Cannes Pharma Grand Prix winner; leading teams localizing voices for Fortune 100 clients like Microsoft; and sound design and music composition for video games and film.




