
Don’t blame the bot! 10 learnings on safe use of AI

By Ben McCulloch

VUX World hosted a panel on ethics during the European Chatbot Summit in Edinburgh 2023.

Kane Simms hosted the guest speakers Oksana Dambrauskaite, E-commerce Operations Leader at Decathlon UK, and Somnath Biswas, Head of Product – Conversations, Totaljobs Group.

So what came up in that discussion? Here are 10 hot takes!

1 – Nobody got fired yet

It’s no secret that AI and humans have different skills. We know that the best way to use conversational AI is to focus it on the tasks your human team does repetitively and that can be automated. This frees your human team to focus on the things humans do well, such as making deductions from a small dataset, empathising, and serving customers with complex needs.

It’s all about finding the right balance to get the best of both.

As Oksana said, “(AI) allows humans to do something different, with more added value. Something more interesting. Our employees now actually like it. That was one of the biggest concerns that we had.”

2 – Generative AI can reduce workloads but also amplify biases

In amongst the swarm of posts on social media about ChatGPT, there have been people pointing out the ways this new tool has inherent biases. For example, when asked to attach an emoji to job titles, it suggested white males for all the C-suite roles and white females for those working in HR and marketing.

Here’s the dilemma: while you may save time and cost with generative AI, you’re potentially making the user experience worse or damaging your brand. You could be offending users. Your brand could appear stuck in the past. There are huge risks attached.

As Oksana says, “there’s a bias that is already built in the model, in terms of the data that exists.”

Large Language Models (LLMs) such as ChatGPT are only as good as the data they were trained on. That data is historical – how can we have conversations with AI that reflect modern attitudes when it only expresses crowdsourced views from the past?

3 – Be aware of the laws

Two regulatory changes were mentioned during the panel – the European Union’s AI Act, and a new ruling in New York that affects algorithmic recruitment.

What can you take from this? We need to be accountable for our use of AI. We’re going to need to be able to audit our activities so that, when scrutinised, we can explain what happened and why.

What’s the issue? Well, we now have evidence that AI is getting very skilful at generating a response. It’s far less capable of saying why it said whatever it said. When a brand is ultimately responsible for whatever is said to its customers (whether it’s a human-human or a human-bot conversation), it needs to be able to rewind through a conversation to see why certain things were said.

Which leads us to number 4…

4 – Explore ‘chain of thought’ models in AI

One way that you can attempt to understand why an AI came to certain conclusions is to use chain-of-thought approaches.

Consider this: GPT-3 had 175 billion parameters. OpenAI hasn’t disclosed GPT-4’s parameter count, though it’s widely believed to be far larger. How convinced are we that even the creators of those models know what’s happening inside them?

When we’re relying on these models for advice, we need to be confident in what they say. When issues arise, we need to be able to drill down and understand where things went wrong so we can fix them.

So with a chain-of-thought approach, instead of asking for the final result straight away, you ask to see the reasoning as the model works from the initial request to the final result.

As you may imagine, chain-of-thought responses can be slower than the instant magic of asking ChatGPT something and getting an answer straight after, but they have the benefit of accountability.
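Here’s a minimal sketch of what that can look like in practice, assuming the OpenAI Python SDK (openai >= 1.0) and an illustrative model name and question – nothing here is from the panel itself. The important part is the shape of the prompt: it asks for numbered reasoning steps before the final answer, so the output can be logged and audited later.

```python
# Sketch: direct prompt vs chain-of-thought prompt.
# Assumes OPENAI_API_KEY is set; the model name and question are illustrative.
from openai import OpenAI

client = OpenAI()

question = "A customer ordered 3 items at £12.50 each with a 10% discount. What do they pay?"

# Direct prompt: you get an answer, but no visibility into how it was reached.
direct = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: ask for the reasoning first, then the final answer,
# so every step can be reviewed if something goes wrong.
cot = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": question
        + "\nThink through this step by step, numbering each step, "
          "then give the final answer on its own line starting with 'ANSWER:'.",
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)
```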

A photo of the panelists

5 – Don’t be creepy

It’s now possible to track a user’s progress on a website so that, if summoned, the bot knows what to talk about before the user even asks. That can reduce the user’s effort so their experience feels better, but for some it may feel creepy.

Oksana mentioned that Decathlon have a feature where live agents take control of a user’s screen, to help them with any issue they have on the Decathlon website. As Oksana says, “people used to freak out so much! Even though we were asking them ‘can we take control of your screen because you’re struggling to enter the postcode’, for example. Oh, it was creating so much panic!”

According to Somnath, “there’s a fine balance between being a stalker and personalising it.” People want a better experience but not to feel they’re being spied upon. We have the means to improve experiences – to help people do what they want with as little friction as possible. If our methods feel creepy then we’ve failed.

Decathlon learned fast, and changed their approach. As Oksana says, “We are really careful in how we explain to customers what we can actually do. So in this kind of scenario, we would very carefully introduce the topic to them. So first, we would ask, ‘we see you’re struggling – do you need any help?’, and they would say, ‘yes, of course we need help.’ And then, ‘if you don’t mind, we will take control of your screen, we cannot access any of your devices, it is all done through connection virtually. So it is absolutely safe, your data is safe.’” She added, “I think it’s important that agents are able to explain it in a very user-friendly way, human to human, so customers understand that you are there.”

6 – Are APIs the weakest link in the chain?

Every single party that processes customer data must be compliant with the rules; otherwise you’re compromising the security of that data. And that’s that.

Decathlon will drop suppliers if they’re not up to the mark, and you probably should too. Standards must be very high when it comes to customer data.

7 – Don’t train your model on my data

According to Somnath, keeping customer data secure starts with ensuring that any third party involved won’t train their model on it. Then you can anonymise Personally Identifiable Information (PII) so that customer data is hidden from anyone who handles it.
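As an illustration of that second point, here’s a minimal redaction sketch. The regex patterns, the placeholder text and the example transcript are all invented for illustration; a production system would use a dedicated PII-detection service and cover far more entity types.

```python
import re

# Illustrative patterns only – real deployments need much broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace recognised PII with a labelled placeholder before the text is shared."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

transcript = "Hi, I'm at EH1 2NG, call me on +44 7911 123456 or email jo@example.com."
print(redact(transcript))
# Hi, I'm at [POSTCODE REDACTED], call me on [PHONE REDACTED] or email [EMAIL REDACTED].
```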

8 – LLMs could help, once we’ve allayed any fears

This does present a challenge though – we occasionally need to use customer data in conversations, for example when authenticating users. For that system to work, the language model needs to be trained on examples of user IDs so that it knows what to look for.

LLMs could potentially be utilised there – for example, if one were trained on a few user IDs it could generate a multitude of fake ones. But could this be done safely?
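One way this might look – and this is a sketch only, using few-shot prompting rather than training, with an invented ID format, an illustrative model name and the OpenAI Python SDK – is to show the model a handful of made-up examples and have it produce synthetic IDs, so real customer IDs never need to leave the building.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Hypothetical ID format; these are NOT real customer IDs.
FAKE_EXAMPLES = ["UK-2048-QX", "UK-7731-BD", "UK-0912-LM"]

prompt = (
    "Here are some example user IDs:\n"
    + "\n".join(FAKE_EXAMPLES)
    + "\nGenerate 20 new user IDs in the same format, one per line. "
      "They must be invented and must not repeat the examples."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

synthetic_ids = response.choices[0].message.content.splitlines()
print(synthetic_ids)
```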

According to Oksana, “It’s a tricky question, because it’s very new technology. And obviously, it has a lot of capacity, but to use it to the full capacity, the same as everything, it needs data. The question is where the data is going afterwards.”

9 – There are two front doors to ChatGPT

One very interesting point that came up is that ChatGPT is available both from OpenAI (the creators of the model) and Microsoft (one of OpenAI’s primary investors).

What this means is that, while there may be uncertainty about where OpenAI stores data, Microsoft Azure can store data within the EU. That should make it GDPR compliant.

And also, to put it simply, many brands already trust Microsoft. According to Somnath, “with the contracts you have with Microsoft, there are guardrails put in place in terms of data privacy.”

10 – It’s an evolving relationship

Can you believe it’s only been 8 months since ChatGPT was released to the public? Since November 2022 it’s cropped up everywhere. Everyone seems to be using it. People who said ‘I’d never trust a voice assistant because they’re always listening’ seemed to suddenly forget their inhibitions when they got chatty with ChatGPT. Interviews are being faked. Even governments have started banning it because it’s so unclear what threat this new tech represents to people’s privacy and data.

This relationship is going to evolve. There will be many more LLMs like ChatGPT. Every interaction we have with them is building or eroding trust.

According to Oksana, the day will come when we trust this new technology. The same happened with Google Pay and Apple Pay: “It is pretty much clear for everyone that they can trust Google, they know that they can trust Apple. The same will surely happen at some point with technology like GPT3, GPT4 and GPT 25 probably in the future. It just takes a bit of time and it will become more structured. I really believe in that.”

The conversational AI industry is a major element in that relationship – we’re helping people to form their early relationships with technologies such as LLMs. There’s a responsibility on our shoulders to ensure it’s done safely, to the benefit of all.

Thanks to Oksana and Somnath for sharing their thoughts during the ethics panel!
