In the last few months, I’ve spoken to a handful of companies that are stalled on moving forward with their AI plans, or even on putting those plans together, because they have to write an AI policy first. And, from what I’ve seen, these policies have the potential to prevent innovation and inhibit growth. If your organisation is working on its AI policy, make sure you include scope for experimentation and learning.
(As an aside, when I say AI, I’m mostly referring to generative AI. These policies mostly seem to omit traditional machine learning systems, for some reason.)
How your policy can backfire
The people working on your AI policy, of course, are acting in what they view as the best interest of the organisation. However, depending on how the policy turns out, it can actually have the opposite effect.
One large European enterprise I spoke with over the summer has a blanket ban on any kind of gen AI use, internally and customer-facing, including behind-closed-doors access to gen AI tools for experimentation. The same is true for a large American enterprise I spoke with recently. The reason? Legal and Risk don’t trust the technology and don’t want anything bad to happen as a result of its use.
That is completely understandable, but this knee-jerk reaction will hinder innovation and may well lead to a loss of competitive advantage if you can’t even experiment. McKinsey estimates that generative AI could add $2.6-4.4 trillion to the economy across just 63 identified use cases. Rigid policies like these make sure you won’t see any of that benefit any time soon.
I’m not saying that everyone needs to deploy generative AI right now, but a blanket ban stifles the possibility of even investigating what it might be used for in future.
Why are AI policies so heavy-handed?
Generative AI has spooked many people across legal, risk and compliance, and IT, due to one main thing: hallucinations. Close behind: data and security.
Nobody wants to risk giving their users wrong information. Understandably. But things change. Technology improves. Mitigating hallucinations is one of those things that’s improving. Is it perfect? No. Has it changed since 2022? Absolutely. Is it good enough for some low-risk use cases? Probably. Heavy-handed policies opt you out of tracking these improvements and growing with the technology.
Nobody wants to send their proprietary data to a big tech company for it to be ingested into its models and regurgitated to users on the front end. That’s also understandable. But there are ways of guarding against that, and they’re improving too. There are ways today to leverage both large and small language models while protecting your data. That wasn’t nearly as true 18 months ago. Rigid policies make sure you’ll never find out.
What your AI policy should look like
Unlike typical policy documents, which state what you can and can’t do, I recommend that you make your AI policy a living document. We’re far too early, and things are moving far too quickly, to rule anything out indefinitely.
It should highlight the risks you see with AI and the things you’re not going to do today. But it should state these as assumptions, not facts. It should also state what you’d need to see in order to validate or invalidate those assumptions, change your mind and update the policy. What you’ll then have is space and grounds for experimentation and learning.
Rather than blocking innovation, you’ll give teams a licence to generate hypotheses, to experiment, to prove you wrong, to innovate and to learn. They don’t have to deploy anything to the public at scale, but they do need to be able to get their hands dirty and play with this stuff. That’s the only way you’re going to develop the maturity and skills you need to leverage this technology when the time is right.
If you then hold quarterly review sessions where teams can bring their learnings and educate you on what’s changed, you can make changes, move with the times and keep on top of developments. Here, you’ll be in a position to leverage the technology and benefit at the right time, rather than waiting for rumours of where the tech’s at, or for your competitors to go first.
Then, your policy will be a helpful resource for guiding your organisation’s AI strategy, rather than a blocker full of heavy-handed rules based on assumptions and paranoia.
You need to be able to innovate and experiment, track technology advancements, and assess the readiness and impact for your business. Your policy should facilitate this, not hinder it.
I know many of you are working on these policies today, and I’d happily offer my time for free to cast an eye over them if you’d like. Just send them over to me.