Balancing AI and Human Insight

Kane Simms

Unless you’ve been orbiting in a space station with no Wi-Fi, you’ll know there’s a lot of hype around LLMs such as ChatGPT. What everyone craves, though, is best practice for how to use them (and not another ‘here’s 1000 ChatGPT prompts to try before you die’ list on our LinkedIn feeds).

In the latest episode of the VUX World podcast, Kane had the pleasure of hosting Philipp Heltewig, CEO of Cognigy, a trailblazer in the conversational AI landscape. Their discussion revolved around the cutting-edge of AI technology, its integration into the business ecosystem, and the balance between human intuition and automated efficiency.

Where you can use LLMs now

There are risks in using LLMs to generate responses. They can hallucinate. While that’s frustrating for users (because they don’t get the information they need), it can also damage the brand. In some high-risk scenarios, such as providing medical advice, the wrong response could be fatal.

But we already have deterministic bots built around NLUs. As anyone who has worked with NLUs knows, it’s virtually impossible to train them to parse every user input. While it’s financially prohibitive to build an NLU bot that can talk about anything, NLUs are great at giving the right response when they’re sure about the user’s need. This is where LLMs can help: an LLM can sit in front of the NLU and classify the user’s intent. Since your NLU can’t be trained on an infinite number of user utterances, the LLM can pinpoint the user’s need when the NLU struggles, allowing your system to understand more people.

As Philipp said, “You can also use large language models to classify information, or to extract information from text, etc. So you can [have the LLM] do all these underlying tasks, but the output is actually still deterministic.”

Language is complex, too, and it evolves. You could keep training your NLU to understand as many people as possible, but that job will go on until the cows come home. You should maintain a robust NLU, but LLMs can do a great job of supporting it.

Bot vs legal department

Another challenge when using generative AI is compliance. Most people are used to carefully navigating the legal boundaries of what’s acceptable at work (the jury’s still out on those office chair races though). Considering some of the absurd and unexpected things LLMs have come out with, adding one could be a bit like having Machine Gun Kelly answer your customer’s requests.

As of now, organisations are still working this out. How can they utilise LLMs in a safe way? Philipp had some recommendations.

You need to consider the risk level of your intended approach.

Using an LLM to classify a user’s intent is low risk – it’s a background process that is basically just augmenting the NLU.

Using an LLM on top of a knowledge base (a process known as retrieval-augmented generation, or RAG) is a little riskier. You can do your best to curate the resources, but you’re still using an LLM to generate the response, with its familiar potential for hallucinations and wrong assumptions.
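A minimal RAG sketch looks like this. The retrieval step here is a naive keyword-overlap score over a tiny in-memory knowledge base; a real system would use vector search, and the built prompt would be sent to an LLM. The knowledge base entries and function names are all invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve curated
# passages relevant to the question, then ground the LLM in them via the
# prompt, which reduces (but does not eliminate) hallucination.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 working days of receiving the item.",
    "Our support line is open Monday to Friday, 9am to 5pm.",
    "Deliveries to the UK mainland are free on orders over £50.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank passages by how many words they share with the question."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda passage: len(words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Build the prompt an LLM would receive, grounded in retrieved text."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How long do refunds take?"))
```

The "answer only from the context" instruction is the main risk control, but it is an instruction to a probabilistic model, not a guarantee: hence RAG sitting in the middle of the risk scale.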

Another popular use case is having an LLM act as an assistant for live agents: it summarises conversations, retrieves answers, and suggests next steps for a human. The risk here is minimised too, because everything rests on the skill and knowledge of the human agent. They rely on the assistant only to speed up their job, and they can ignore it if they know it’s wrong.
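The key design point of agent assist is that the model's output is advisory. A toy sketch, with `suggest_reply` as a hypothetical stand-in for an LLM call over the transcript:

```python
# Agent-assist sketch: the LLM drafts a suggestion, but the human agent
# decides whether to send it or to write their own reply instead.

def suggest_reply(transcript: list[str]) -> str:
    """Stand-in for an LLM that reads the transcript and drafts a reply."""
    return "Suggested reply: ask the customer for their order number."

def agent_turn(transcript: list[str], accept_suggestion: bool,
               own_reply: str = "") -> str:
    """The suggestion is purely advisory: the agent accepts or overrides it."""
    suggestion = suggest_reply(transcript)
    return suggestion if accept_suggestion else own_reply

transcript = ["Customer: my parcel hasn't arrived"]
print(agent_turn(transcript, accept_suggestion=True))
print(agent_turn(transcript, accept_suggestion=False,
                 own_reply="I'm sorry to hear that - let me look into it."))
```

Because a human sits between the model and the customer, a wrong suggestion costs a few seconds rather than a wrong answer in front of the customer.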

Bot + human

It’s that final example that deserves a little more attention. AI is being used to supplement human skill, intuition and empathy. Philipp voiced a caution against oversimplification, asserting that AI, for all its advancements, cannot entirely replace human processes.

What we’re aiming for is a harmonious balance between AI and humans. That’s crucial for crafting more effective AI solutions.

No matter how much you incorporate AI into your systems, and allow it to communicate with your customers, you’re always going to need intermediaries between AI systems and end-users to ensure accurate and secure interactions.

If you don’t want to be left behind you should start looking into the possibilities of AI now. It’s likely that you’ve already found at least one way that ChatGPT speeds up your daily tasks. Imagine if everyone in your organisation was using it to enhance every job they did, and you also had an army of bots doing those tasks that you can’t do, or don’t want to do. That’s the potential.

As Philipp said, “You might as well get started now, because in two years’ time all of your competitors are going to be doing it, and everyone’s going to be having a great time, and you won’t have even started yet, because you were too scared. Then you’re trying to scramble and you’re gonna make mistakes.”

Listen to the full conversation on Apple Podcasts, Spotify, YouTube, LinkedIn or wherever you get your podcasts.
