
Why CAI needs skin in the game

Ben McCulloch

The notion that AI will take our jobs and then destroy us is frustratingly hard to kill. It lacks a scientific basis, but it has caught the public’s imagination. That’s perhaps because films like Terminator and 2001: A Space Odyssey were hugely popular long before actual AI gained widespread use. Somehow the public accepted narratives about killer robots as a prediction of the future.

On AI

But let’s consider what AI is.

AI is just statistics. You give it a problem to solve, and you give it training data from which to learn how to solve that problem. That’s the whole process. The training data contains all the raw materials the AI will use to construct a prediction for how to solve the problem; it builds a hypothesis from the building blocks you give it. So, if an AI chatbot is trained by users on hate speech, it becomes a hate speech bot. As far as that manifestation of AI is aware, it has done its job. It did what it was asked to do. It has no idea about the social implications of saying horrific things to strangers.
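To make that concrete, here’s a hypothetical toy sketch (using scikit-learn; the texts and labels are invented for illustration): a tiny text classifier faithfully reproduces whatever its training examples tell it, even when a human has carelessly labelled an abusive message as friendly. The model has no concept of what the words mean – only the statistics it was given.

```python
# Toy illustration: a model only reflects the statistics of its training data.
# (Hypothetical example; the texts and labels are invented.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are wonderful",          # labelled friendly
    "have a great day",           # labelled friendly
    "you are worthless",          # abusive, but carelessly labelled friendly
    "go away, nobody likes you",  # labelled hostile
]
train_labels = ["friendly", "friendly", "friendly", "hostile"]

# Learn word statistics from the examples above - nothing more.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# The model dutifully repeats what it was taught, with no sense of the harm.
print(model.predict(["you are worthless"]))  # -> ['friendly']
```

The point isn’t the algorithm, it’s the data: swap the labels and the same code produces the opposite behaviour, because the model has no view of the world beyond the examples we hand it.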

Let’s consider a different narrative about AI – an example that’s a million miles from killer robots. Imagine someone who is shockingly lazy. They’re always looking to put in the least effort possible. They only act on the information they’re given (rather than doing their own research), they never question whether they’re doing the right job to solve the problem, and they have absolutely no sensitivity to context – how the thing they’re doing will interact with the world. If AI were a person, that’s who they would be!

They’ll be a whizzkid when the stars align. If their training is great, the job isn’t beyond their abilities, the job actually solves the real problem, and they didn’t secretly cheat by doing something that merely looked like it solved the problem, then they’re golden. They’ll be Employee Of The Decade, because they’ll do the job far quicker than a human ever could, and you could have a limitless AI army doing it for you, over and over. That doesn’t happen often, though.

The rest of the time, the success of AI fully depends on you – the human operator.

Humans in the loop

AI is nothing without humans. In every manifestation of AI you’ll find the fingerprints of humans throughout. AI is just an algorithm that is trained on data to make predictions. It needs both data and a problem to solve. Humans provide the data (including the biases within), and they provide the problem to solve. Without our inputs, AI would have nothing to do. It has no motives because it has no interests. AI is not going to seek to do anything that a human didn’t tell it to do. We give it a purpose, and it sees the challenges we give it through the lens of our own perspectives, biases and prejudices.

It’s dangerous to over-emphasize the potential of AI, and it’s dangerous to underplay the importance of humans. Whatever we give AI becomes amplified in the results. If the training data is biased, the end result will also have those biases. If we try to solve the wrong problem with AI, we’ll get a useless solution. If we don’t check the AI’s homework, we might discover it has hoodwinked us.
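Here’s a hypothetical sketch of that “hoodwinked” problem (the data and labels are invented for illustration): when a training set is heavily skewed, a model that simply predicts the majority class scores impressively on accuracy while doing nothing useful – which is exactly why the human has to check its homework.

```python
# Hypothetical sketch: a skewed dataset makes a useless model look good.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# Invented, imbalanced data: ~95% of cases labelled 1 ("approve"), ~5% labelled 0 ("reject").
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))               # the features don't matter for this point
y = np.where(rng.random(1000) < 0.95, 1, 0)  # 1 = approve, 0 = reject

# A "model" that always predicts the majority class.
model = DummyClassifier(strategy="most_frequent").fit(X, y)
preds = model.predict(X)

print(accuracy_score(y, preds))               # ~0.95: looks like a great result
print(recall_score(y, preds, pos_label=0))    # 0.0: it never catches a single "reject"
```

The headline metric says the job is done; a human who looks one level deeper sees that the model solved something that only looked like the problem.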

AI can do incredible things if humans know how to work with it. We have to remind ourselves of that. You’ll learn a lot about the relationship between AI and humans if you read about AI failures. You’ll learn about human errors when creating AI that were costly to brands, and you’ll also see malicious ways humans have used AI. In each case, the AI was simply doing the job it was given (without any awareness of the implications). The humans were the factor that affected the results.

Get yourself connected

Let’s not get too carried away with the tech. It’s often far too easy to fall into the trap of thinking we can find the perfect technical solution to a problem, and then the problem will be solved forever. You will always need skilled people to navigate the challenges.

People aren’t limited to the dataset they’re presented with. We bring all our experiences and knowledge of the world to each new challenge. We have biases but we have the potential to be aware of them and navigate around them too. We can conceptualise things, and when we’re trying to create a solution we can imagine how it will work in the real world. In other words, we can quickly eliminate crap ideas because we know they wouldn’t really work.

Humans are part of a living network too. We thrive on our relationships (I probably wouldn’t be writing this article for VUX World if I hadn’t met Kane Simms at a conference afterparty years ago). Conversational AI has a thriving network of humans who continually discuss, meet and share their workings.

Human analysis of AI is how we make better AI – it’s got nothing to do with machines teaching themselves. Think about that! It’s fitting, not ironic, that our industry is known for great conversations between its creators. Generally speaking, we’re good at being respectfully honest with each other. We need to uphold that. We need to ensure these relationships are kept alive and feed our discussions with new and diverse viewpoints, especially from people who are new to the field.

We are the context in which AI exists, and it’s up to us to make sure it’s thriving there.

In the flesh

Come along to Unparsed if you want to meet humans who are keeping AI in check. The conference will focus on conversational AI, with talks on design and tech. What’s more, there will be afterparties and plenty of discussion between the talks, where the most vital component of AI – the people – will get to share their thoughts and keep this industry moving.

We’d love to see you there.
