
What happens after you’ve designed and built your AI Assistant?

By Ben McCulloch

Illustration: two people standing in front of a large display showing charts and analytics.

Before you built anything, you needed intensive research: to decide on the use case (the thing it helps people do), to understand the people who will use it, and to understand the needs of the business paying for it.

But you knew that already, right? Let’s say you made all the right moves before you started building (click here if not).

So what happens after that is continual tuning and improvement while the assistant is live. Forever. This is not ‘set and forget’. Those who act on assumptions without validating them will fail.

One of the absolute best people to ask about this is Dr. Joan Palmiter Bajorek, CEO of Clarity AI.

She’ll always ask, “What are we measuring? What is the research question? Just because we can build the system, does it really move the needle? Does that really transform the user experience? And if it does, and there’s ROI, we should implement that!”

She gave me more great insights into improving assistants during her Conversations2 interview.

Here are some more insights I think you should hear.

First you must capture the data

One of the most striking things about assistants is that they need data to thrive, and yet many organisations aren’t collecting data.

“I’ve worked on other projects before OneReach.ai where they’re like, ‘we didn’t even collect that.’ Oh no! I’m like, let’s start tomorrow by collecting those data points, and looking at them on different timescales.”
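If you’re starting from zero, even a minimal event log gets you moving. Here’s a rough sketch in Python with pandas of rolling raw conversation events up into daily and weekly views; the file name and columns (timestamp, session_id, handed_off) are illustrative assumptions, not something from Joan’s interview.

```python
import pandas as pd

# Load a hypothetical export of conversation events.
# Assumed columns: timestamp, session_id, handed_off (bool) — these
# field names are illustrative, not prescribed by the article.
events = pd.read_csv("conversation_events.csv", parse_dates=["timestamp"])
events = events.set_index("timestamp").sort_index()

# Roll the raw events up per day and per week, so the same data can be
# read on different timescales.
daily = events.resample("D").agg({"session_id": "nunique", "handed_off": "mean"})
daily.columns = ["sessions", "handoff_rate"]

weekly = events.resample("W").agg({"session_id": "nunique", "handed_off": "mean"})
weekly.columns = ["sessions", "handoff_rate"]

print(daily.tail())
print(weekly.tail())
```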

It’s startling. How can you possibly make the right incremental improvements if you don’t know what’s going on in your assistant? You need data. You need analytics tools that present that data to the various people on your team who should constantly dig into the results to understand what’s going on. That’s the only way to know what to improve.

The quantitative data shows you the lay of the land, so you can find which mountain or hill needs to be tackled. Then you get qualitative – read transcripts and interview users. That gives you the view from the ground to really understand how the issues are occurring.
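As a rough illustration of that quant-then-qual loop, here’s a sketch (Python with pandas again) that ranks intents by fallback rate and then pulls a handful of transcripts from the worst performer to read. The columns (intent, fell_back, transcript) and file name are my assumptions for the example, not from the article.

```python
import pandas as pd

# Hypothetical per-turn log with columns: intent, fell_back (bool), transcript (str).
turns = pd.read_csv("turns.csv")

# Quantitative: the lay of the land — which intents fail most often?
by_intent = (
    turns.groupby("intent")["fell_back"]
    .agg(["mean", "count"])
    .rename(columns={"mean": "fallback_rate", "count": "turns"})
    .sort_values("fallback_rate", ascending=False)
)
print(by_intent.head(10))

# Qualitative: the view from the ground — read real conversations
# from the worst-performing intent before deciding what to change.
worst_intent = by_intent.index[0]
sample = turns[turns["intent"] == worst_intent].head(5)
for text in sample["transcript"]:
    print("---")
    print(text)
```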

Regular small steps are better than giant leaps

Don’t over-react when the data starts coming in. Don’t decide to tear everything up just because the analytics look bad. Most conversational AI systems take time to truly shine after release.

As Joan says, it’s better to take your time and ensure you understand the problem before you act on it.

“There’s completely different protocols about how sensitive and how fast some companies want to move. We have one client that’s like, ‘oh, we see a problem. Let’s talk about that in the next meeting.’ There’s a trust that the system is stable enough. There’s another client we have who’s so nervous and reactive. As soon as we see a problem they’re like, ‘what’s the fastest we can fix that problem?’ Frankly, I really prefer the first approach; the system is stable, we absolutely can optimise it.”

Little steps are faster

The MVP (minimum viable product) is key. Take small steps forward, but make sure you’re taking them in the right direction.

You analyse the data, make a choice and then act. After you’ve done that, analyse the results. Did it get better or worse? That knowledge should make each new step more certain.
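One lightweight way to answer “did it get better or worse?” is to compare a single metric before and after the change went live. A minimal sketch in Python, where the resolved flag, file name, and release date are hypothetical assumptions for illustration:

```python
import pandas as pd

# Hypothetical session log with columns: started_at, resolved (bool).
sessions = pd.read_csv("sessions.csv", parse_dates=["started_at"])

# Hypothetical date the change shipped.
release = pd.Timestamp("2023-06-01")

before = sessions[sessions["started_at"] < release]["resolved"]
after = sessions[sessions["started_at"] >= release]["resolved"]

# Compare resolution rates; a real analysis would also check sample
# sizes and statistical significance before declaring a win.
print(f"before: {before.mean():.1%} resolved over {len(before)} sessions")
print(f"after:  {after.mean():.1%} resolved over {len(after)} sessions")
```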

As Joan says, you’re always aiming to move the needle in the right direction.

“But really, when we can move the needle, when the user experience and the ROI are directly connected, you know, companies are really eager to support users in a completely different way.”

Thanks to Dr Joan Palmiter Bajorek for these insights in her Conversations2 interview!
