One amazing thing about the conversational AI ecosystem is the level of innovation we see every week. Here’s a ground-breaking innovation for human-in-the-loop AI.
There have been many successes and failures with conversational AI because there are so many ways to approach it. Practically every service provider has its own process.
The way that human agents and AI agents work together is one area that’s ripe for innovation. AI assistants are great at rapidly dealing with predictable tasks. Receiving the same user request over and over, one that can be answered with a generic response? Use an AI assistant.
But if the requests require analytical thought or interpretation, then humans are best suited for that job.
It’s great when you discover a genuinely novel approach to this challenge that makes perfect sense. For example, Interactions has a unique way of utilising AI and live agents, playing each to its strengths.
It uses live agents to ensure that the AI assistant works effectively from the moment it’s deployed, even while its NLU training still has room for improvement.
Lisa Michaud from Interactions spoke to VUX World about it.
What came first – the assistant or the human?
It’s often said that training an AI assistant’s NLU is a chicken and egg scenario. You need data about what people ask for to know what training data to use, but you need a live, functioning AI assistant in the first place to obtain that data. So where do you start?
Interactions has an innovative approach to this problem, and it’s utterly genius.
The real-time human-in-the-loop scenario
In the early days of an AI assistant going live, it’s monitored by a team of ‘human intent analysts’ who are ready to step in if the assistant isn’t confident about a customer’s need. In other words, they have people behind the scenes to make sure the assistant matches the user’s utterance to the correct intent, should it be thrown a curveball.
They’re not live agents, and customers don’t speak to these analysts. They simply monitor conversations behind the scenes, in real time, waiting for the AI assistant to trip up. When it does, they quickly step in to assign the utterance to an intent, then let the assistant crack on with the conversation.
From the customer’s perspective, it’s the AI assistant doing all the work. They’ll never know there’s a real human-in-the-loop.
A data gold mine
Meanwhile, Interactions collects data from these human-assisted conversations to train the AI assistant. It’s real data from customer conversations – the best kind you can get. No hypotheses from stakeholders in the organisation about what they think people will say.
AI assistants have to be able to understand the terminology people will use – that’s why you need real data from real customers. As Lisa says, with human intent analysts “you’re building it on that user community, you’re getting that feedback right away.”
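As a rough illustration of how those analyst-labelled conversations become training data, the sketch below groups real customer utterances under the intent a human assigned. The log format and the `build_training_set` helper are hypothetical, assumed for this example rather than drawn from Interactions’ pipeline.

```python
from collections import defaultdict

def build_training_set(analyst_log):
    """Group real customer utterances under the intent a human analyst assigned."""
    examples = defaultdict(list)
    for utterance, intent in analyst_log:
        examples[intent].append(utterance)
    return dict(examples)

# Real utterances labelled by analysts, not stakeholder guesses about phrasing:
log = [
    ("my bill looks weird", "billing_query"),
    ("why was I charged twice", "billing_query"),
    ("i forgot my passcode", "password_reset"),
]

training_set = build_training_set(log)
print(training_set["billing_query"])  # real examples of how customers talk about billing
```

Because every example comes from an actual conversation, the retrained NLU learns the user community’s own terminology rather than the phrasing stakeholders expected.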
Isn’t that an incredible approach? It’s about using AI and humans to their strengths, but specifically at one key moment in the conversation design process.
During the early days of AI assistant deployment, teams expect to learn things they’ll need to fix. You need real conversations with real users to understand where your assistant falls short. Usually, though, when your assistant falls short, it has a poor impact on the customer experience. The interaction breaks down. The conversation can fail there and then, leaving the customer’s needs unmet and discouraging them from trying that channel again.
Having a real human in the loop is a brilliant way to make sure the assistant delivers to a high standard from day one, while collecting data that will make it stronger over time.
As Lisa says, “it takes a village to bring these applications to life.” Interactions has found a brilliant way to ensure everyone in the village plays to their strengths.