
Your bot’s not an expert, it’s a toddler, and that’s OK

By Kane Simms

Stanford University found that chatbots and voicebots that are positioned as toddlers fare better than those positioned as experts.

Every time we create any kind of voicebot of any description, it’s always an expert, isn’t it?

“Welcome to such and such bot. I’m the expert in this subject matter. You can ask me anything you like about this subject and I’ll help you like a human would do.”

Stanford University found in a study that users are more tolerant and actually rate bots better if you describe them as a toddler, not an expert.

If any of you have ever built any sort of conversational agent before, then you’ll know that it’s quite challenging to build a complex bot: something so knowledgeable about so many subject areas that it can actually answer your queries like a human would. It is a challenge.

It’s not insurmountable, especially if you can integrate with some kind of knowledge base. Your conversational experience might not be human-like, but you might at least be able to point the user in the right direction and give them the right kind of content.
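As a rough illustration of that ‘point them in the right direction’ approach, here’s a minimal sketch of a knowledge-base lookup behind a bot’s responses. The articles and the naive word-overlap matching are made up for illustration; a real system would use proper search or embeddings.

```python
# Minimal sketch: rather than answering like a human, match the user's query
# against a small knowledge base and point them at the most relevant article.
# The articles and the naive word-overlap scoring are illustrative only.
from typing import Optional

KNOWLEDGE_BASE = [
    {"title": "How to reset your password", "url": "/help/password-reset"},
    {"title": "Updating your billing details", "url": "/help/billing"},
    {"title": "Cancelling your subscription", "url": "/help/cancel"},
]

def best_article(query: str) -> Optional[dict]:
    """Return the article whose title shares the most words with the query."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(article["title"].lower().split())), article)
        for article in KNOWLEDGE_BASE
    ]
    score, article = max(scored, key=lambda pair: pair[0])
    return article if score > 0 else None

def respond(query: str) -> str:
    article = best_article(query)
    if article:
        # Not a human-like answer, but it points the user the right way.
        return f"This might help: {article['title']} ({article['url']})"
    return "I couldn't find anything on that. Could you try rephrasing?"

print(respond("I need to reset my password"))
```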

But more often than not, when we’re designing conversations, we’re not designing truly sophisticated, human-like conversations.

However, as humans, we know what a conversation is like, and we can’t help but bring that mental model into our conversational experiences with technology. So our expectations are absolutely huge.

And so what happens when you call your bot an expert is that you set potentially unrealistic expectations.

People expect to have the same kind of conversation that they’re able to have with a human. They expect your bot to be able to answer any question they have and to handle a conversation like a human would. That’s inevitably going to lead to some unsatisfied customers, because a conversation with a bot isn’t really like a true conversation with an actual human. Bots aren’t as smart as humans are, yet.

And so this study at Stanford found that if you introduce your bot and tell the user that it’s a child or a toddler, that it’s very basic and always learning, and that the user is actually going to help it learn throughout the course of the interaction, users not only have more tolerance, but they also rate the experience as better.

Now that’s hard for us because we want to create sophisticated conversations. We want to create human-like conversations. We want to create compelling, high quality customer experiences.

But are we setting users up to fail when doing that?

Why don’t we just introduce our bot as a basic toddler, set the parameters and constraints it can operate within, and let the user train and teach it? Give the user a chance to help it get better, to help make it work.
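To make that concrete, here’s a minimal sketch of what that framing might look like in practice, assuming a hypothetical rule-based bot. The skills, messages and handler names are all illustrative, not from the Stanford study or any particular framework.

```python
# Minimal sketch of "toddler framing" in a bot's greeting and fallback.
# All names, skills and copy are illustrative, not from the Stanford study.

GREETING = (
    "Hi! I'm a brand new assistant and I'm still learning. "
    "Right now I can help with order tracking and returns. "
    "If I get something wrong, telling me helps me improve."
)

# Humble fallback: admit the limit and restate the constraints,
# rather than failing as a supposed "expert".
FALLBACK = (
    "I don't know how to help with that yet, but I'm still learning, "
    "and questions like yours help me get better. "
    "Right now, I can help with order tracking and returns."
)

# A deliberately narrow skill set: the constraints the bot operates within.
SKILLS = {
    "track": "Sure, let's track your order. What's your order number?",
    "return": "No problem. Which item would you like to return?",
}

def respond(utterance: str) -> str:
    """Reply from the skill set, falling back to expectation-setting copy."""
    text = utterance.lower()
    for trigger, reply in SKILLS.items():
        if trigger in text:
            return reply
    return FALLBACK

if __name__ == "__main__":
    print(GREETING)
    print(respond("Can you track my order?"))
    print(respond("What's the meaning of life?"))
```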

Thoughts from the community

“Setting the toddler tone adds realism to the function of the bot. We don’t really want to chat with a person, we want accurate answers to our questions, which we know a computer ‘should’ be able to locate quickly.”

Sheryl Coggins, Co-Founder, Ask My Buddy

“Every BOT has to start as a toddler… Start narrowing down use cases to automate and gradually enhance the capabilities. It is important for the bot to keep learning and maturing as time passes like any toddler would grow.”

Jayesh Nambiar, Founder at SlashRTC

“Our team developed the chatbot for the government of Estonia, and in the press release they introduced the VA like this: ‘Suve the chatbot can still be compared to a young puppy who is yet to learn all the tricks, but dozens of volunteers are working to improve it’. As you said, not so easy to digest when your team works hard to create a sophisticated conversational experience. Thanks to you Kane Simms, it makes more sense to me now.”

Marcos Daniel Martinez, Co-Founder, Wize AI

“Spot on. When AnnA was in a Pepper robot (Pepper’s voice is very childlike), users were MUCH more forgiving than normal. Most are unlikely to be aggressive, rude or even mean to a bot that says: “I’m still a young bot and I’m learning…””

Jason Gilbert, Lead Conversation Designer, CoCo Hub

“An excellent point! During UX testing with our voice bot, people immediately reacted with disappointment to hear, “I’m sorry, but I’m unable to help you with that, but I can ….” even when they knew they were asking it to do something they were aware it couldn’t do. The moment the dialogue changed to “This is a bit embarrassing, but I’m still very new, and I’ve requested a training session with my supervisor to see if I can help you with ….. in the future. Right now, I can …..” the feedback became more humorous and upbeat.”

Justin Randall, Chief Innovation Officer, Comwave

“Managing expectations… Great tip, thanks for the video Kane.”

Andi Munsie, ‘The “Alexa, Where Is My Period” 28 Day Challenge’

“The labels “expert” and “toddler” are vague for assistants. Both labels also indirectly blame the user for not meeting expectations and make the assistant less accountable. For an expert bot, you may not have asked the question correctly, and with the toddler, it may not know the answer. We shouldn’t shift responsibility or remove the anticipated likelihood of a well designed conversational experience. I believe for young bots, examples or stating what they can do adds more clarification than underselling their value with an associated noun. Another word that probably underwhelms is “smart”. How often do we use this word to introduce ourselves?”

Austin Bedford, UX Designer

“My only concern is when we might carry that over to assuming that we should interact with a bot as though it’s a toddler. Not everyone has this same mental model, but I see a bot as a medium between humans. We are communicating through a bot with each other. When we talk to toddlers, we use a reduced language, but we also assume generally less cognitive ability and world experience. For our users interacting with us and our organizations through bots, I’m expecting (well, at least HOPING) they think we have similar cognitive abilities. We just currently need to use a more reduced language (on both sides). As such, I wonder whether the mental model should be more like two adults who are conversing with each other in a second language neither knows to an expert level yet?”

Richard Warzecha, Conversational UX Designer
