It’s easy to approach every new problem in conversational AI as if you’re the first person who ever faced it. But the truth is that many organisations are trying to solve the same problems at the same time.
For example, most banks offer mortgages. They can expect their AI assistant will be asked “when’s my next mortgage payment due?”
Kore AI saw an opportunity to get automated conversations up and running quickly, but still allow organisations to create an assistant that’s uniquely their own.
Raj Koneru, Kore AI's CEO, and Prasanna Arikala, its CTO, explained how it works to Kane Simms during a recent VUX World webinar.
Build a great AI assistant in less time
Kore AI’s XO Platform dramatically reduces the time it takes to create an assistant. As Raj Koneru says, “the amount of work gets cut down anywhere between 30% to 70%.”
A product team can now move through the usual stages far more quickly. Any team creating an assistant would identify the use case (a common customer need that can be automated), design the flows (the pathways users follow when talking to the assistant), train the NLU so that the assistant understands users, and then test the assistant, making improvements through iterations.
Whereas those stages would have taken months in the past, with Kore AI’s XO Platform an assistant can be up and running within weeks. That’s incredible.
Using LLMs smartly
One of the hardest challenges every team has is obtaining good data. You need training data for the NLU. It needs to be diverse so that it reflects the various ways customers can say something, and you need a lot of it.
Kore AI have implemented LLMs (Large Language Models), which rapidly generate a variety of semantically similar training phrases. That allows the system to be trained incredibly quickly.
Once the assistant is live, you can collect real examples from actual users to improve the data.
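To see why that diversity matters, here’s a minimal, hypothetical sketch (not Kore AI’s actual implementation, and the intent names and phrases are invented for illustration). A toy intent matcher scores utterances against training phrases by token overlap: with only one phrase per intent, a reworded question can match the wrong intent, but once the mortgage intent is expanded with LLM-style paraphrases, the same question matches correctly.

```python
# Hypothetical sketch: why diverse training phrases help an NLU intent matcher.
# This is NOT Kore AI's algorithm, just a token-overlap illustration.

def tokens(text):
    """Normalise an utterance into a set of lowercase words."""
    return set(text.lower().replace("?", "").split())

def best_intent(utterance, training_data):
    """Return the intent whose training phrase overlaps most with the utterance."""
    best, best_score = None, 0.0
    u = tokens(utterance)
    for intent, phrases in training_data.items():
        for phrase in phrases:
            p = tokens(phrase)
            score = len(u & p) / len(u | p)  # Jaccard similarity
            if score > best_score:
                best, best_score = intent, score
    return best

# Sparse training data: a single phrase per intent.
sparse = {
    "mortgage_due_date": ["when is my next mortgage payment due"],
    "balance_enquiry": ["what is my account balance"],
}

# The same intents after LLM-style paraphrase expansion.
expanded = {
    "mortgage_due_date": [
        "when is my next mortgage payment due",
        "what date do I pay my mortgage",
        "mortgage payment date please",
    ],
    "balance_enquiry": [
        "what is my account balance",
        "how much money do I have",
    ],
}

user = "what date do I pay my mortgage"
print(best_intent(user, sparse))    # → "balance_enquiry" (wrong: too little overlap)
print(best_intent(user, expanded))  # → "mortgage_due_date"
```

Real NLU engines use far richer models than token overlap, but the principle is the same: the more ways of phrasing an intent the training data covers, the more reliably the assistant understands real users.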
Experts are still in demand
However, identifying use cases, designing conversations, training the NLU and testing assistants still require expertise. It’s just that now experts can create a functional assistant in far less time than before.
As Raj Koneru explains, “The skills that are required are no different from what was required before, except there’s less work to be done. So, if I’m building a banking assistant, I need to know the banking domain. The LLM may generate some use cases and may generate some sample dialogues. I need to be a product manager to understand how that would work for my bank.”
We can have more great assistants
There’s not much conversational AI talent around (but now’s a great time to find them if you’re looking for experts).
With the XO Platform, our rare conversational AI talent can make more great assistants, and they can improve them faster.
It used to be hard to get started, but the lion’s share of the work always happened after the assistant was built. That was when the feedback loop began – the constant iterations and improvements as you learn from the customers who talk to it, add the conversations you hadn’t considered, and incorporate the unexpected things they might ask the assistant.
Now we can get started faster and focus more on improving assistants.
It’s always been about people
And what’s more, we can focus more on people. We can ensure we’re building for the best use cases, connecting the assistant so it’s fully functional, and training it so that it always performs well.
With Kore AI’s tool you can have an assistant up and running faster, achieved in part by using LLMs. It’s a little ironic that synthetic data generated by LLMs such as ChatGPT lets us build an assistant quickly and then focus more on the people who use it.
But that’s just the way things are now. The industry moves fast. With products like the XO Platform, Kore AI are pushing conversational AI towards faster delivery and better products.
You can watch the full webinar with loads of insightful demos here.