In this episode, Kane Simms is joined by Katherine Munro, Conversational AI Engineer at Swisscom, for a deep dive into what might sound like an odd pairing: using LLMs to classify customer intents.
Presented by NLX
NLX is a conversational AI platform enabling brands to build and manage chat, voice and multimodal applications. NLX’s patented Voice+ technology synchronizes voice with digital channels, making it possible to automate complex use cases typically handled by a human agent.
When a customer calls, the voice AI guides them to resolve their inquiry through self-service using the brand's digital assets, resulting in automation rates and CSAT scores well above the industry average. Just ask United Airlines.
Large Language Models (LLMs) are powerful, multi-purpose tools. But would you trust one to handle the precision of a classification task?
It’s an unlikely fit for an LLM. Classifiers typically need to be fast, accurate, and interpretable. LLMs are slow, non-deterministic black boxes. Classifiers need to output a single label. LLMs never stop talking.
And yet, there are good reasons to use LLMs for such tasks, and emerging architectures and techniques that make it workable. Many real-world use cases need a classifier, and many data and product development teams will soon find themselves wondering: could GPT handle that?
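To make the idea concrete, here is a minimal sketch of what LLM-based intent classification can look like. The intent labels, prompt wording, and the `call_llm` function are illustrative assumptions, not Swisscom's actual setup; `call_llm` stands in for whatever chat-completion client you use.

```python
# Illustrative sketch: constraining an LLM to act as an intent classifier.
# The label set and prompt are hypothetical examples.

INTENTS = ["billing_question", "cancel_subscription", "technical_support", "other"]

def build_prompt(utterance: str) -> str:
    labels = ", ".join(INTENTS)
    return (
        "You are an intent classifier for a telecom assistant.\n"
        f"Allowed labels: {labels}\n"
        "Respond with exactly one label and nothing else.\n\n"
        f"Customer message: {utterance}"
    )

def classify(utterance: str, call_llm) -> str:
    # call_llm(prompt) -> str is assumed to return the model's raw text output.
    raw = call_llm(build_prompt(utterance)).strip().lower()
    # Guard against the "LLMs never stop talking" problem: accept only a known
    # label, otherwise fall back to a safe default for human handover.
    return raw if raw in INTENTS else "other"
```

The validation step at the end is the key design choice: it turns free-form generation into the single, predictable label a downstream dialogue system expects.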
If that sounds like you, then check out this extended episode to explore how Switzerland’s largest telecommunications provider tackles this issue while building a next-generation AI assistant.
AVAILABLE ON ALL PODCAST PLAYERS.
Show notes
“The Handbook of Data Science and AI: Generate Value from Data with Machine Learning and Data Analytics” – Available on Amazon.
Subscribe to VUX World.
Subscribe to The AI Ultimatum Substack.