
Why we need a no-code multimodal orchestration tool


During my chat with Jason F. Gilbert, he expressed frustration at how hard it is to orchestrate robotic movements with voice, sound effects, lights and other modes of expression.

This matters, and not just for Jason who is the Lead Character Designer at Intuition Robotics, where he works on ElliQ – ‘the sidekick for healthier, happier aging.’

There are many more designers working in robotics, and the same challenge faces conversation designers who work on digital humans – those characters also use multiple modalities to express themselves.

How can we design characters that use all their modes of expression to convey meaning? That’s what humans do. Words are only one component of how we communicate – we use much more to express ourselves.

No tools for the job

Here’s what Jason said (edited for brevity):

“There’s not a single platform for designing multimodality experiences on robots. I’ve talked a lot about this. I’ve asked a lot of people about this – people from different robotics companies. No one has this. Ideally, we would have some kind of Voiceflow, or some kind of no-code tool, where you can just go ‘okay, this is where the lights come in’. And now when [ElliQ] says this line, she also has this sound effect, this gesture and this thing, and that would be amazing. But you don’t have that [tool] right now.”

That’s a huge challenge for designers like Jason. Conversation designers should be able to orchestrate every facet of a bot’s expressive repertoire. That’s how we communicate! We don’t speak in words with a little garnish of something else on top – sometimes the words are the garnish, and the rest carries the meaning.

To design a bot’s expressive language, we surely have to consider not only what’s natural for a human to understand, but also the possibilities for expression a bot has that humans don’t (such as lights, or the buzzes, whirs and clicks a robot’s motors make as they move).

Where are the solutions?

It’s also a huge opportunity for anyone who wants to design such a tool. It would need to allow designers to orchestrate everything a bot can do together, so that we can combine different modalities that let bots express themselves in various ways.
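To make that concrete, here’s a rough sketch of what such a tool might produce under the hood: a single bot turn described as synchronised cues across speech, gesture, lights and sound. Everything here is hypothetical – these aren’t the types of any real platform, just one way the idea could be modelled.

```typescript
// Hypothetical sketch of the multimodal "cue sheet" a no-code tool might
// generate behind the scenes. None of these names belong to a real product;
// they only illustrate orchestrating every modality in one place.

type Modality = "speech" | "light" | "gesture" | "sound" | "screen";

interface Cue {
  modality: Modality;
  // When the cue starts, in milliseconds relative to the start of the turn.
  startMs: number;
  // How long the cue lasts; speech duration would normally come from TTS.
  durationMs: number;
  // Modality-specific payload, e.g. SSML, a light pattern name, a gesture id.
  payload: string;
}

interface BotTurn {
  id: string;
  intent: string; // what the turn is meant to convey
  cues: Cue[];    // everything the bot does, described together
}

// Example: a reassuring "yes, it's time for your medicine" turn where the
// words, gesture, lights and sound effect all carry the same meaning.
const medicineReminder: BotTurn = {
  id: "medicine-reminder-confirm",
  intent: "confirm",
  cues: [
    { modality: "speech",  startMs: 0,   durationMs: 2500, payload: "<speak>Yes, it's time for your medicine.</speak>" },
    { modality: "gesture", startMs: 0,   durationMs: 1200, payload: "nod" },
    { modality: "light",   startMs: 200, durationMs: 2000, payload: "soft-green-pulse" },
    { modality: "sound",   startMs: 0,   durationMs: 600,  payload: "gentle-chime" },
  ],
};
```

A designer wouldn’t write this by hand, of course – the point of a no-code tool is that this structure sits behind a visual timeline where you drag the light cue to start a beat after the line begins.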

Our industry needs this. Bots and digital humans will be incredibly dull conversation partners if their facial expressions and body language express nothing. Without a tool like this, the other modalities can easily become window dressing that says nothing about what the bot means.

It’s even worse if the modalities contradict each other. Imagine a user asks whether it’s time to take their daily medicine, and the bot nods its head (rather than shaking it) while saying “no”. That would confuse users, and in a case like this it could be dangerous.
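A tool that holds all the modalities in one structure could even catch that kind of contradiction before it ships. Here’s a minimal, hypothetical sketch of a lint-style check over the cue sheet above – again, just an illustration, not a real API – that flags a turn whose gesture disagrees with its intent.

```typescript
// Minimal sketch of a consistency check over the hypothetical BotTurn
// structure above. It flags turns where a gesture contradicts the intent,
// e.g. a "deny" turn that includes a nod.

const contradictoryGestures: Record<string, string[]> = {
  confirm: ["shake-head"], // a confirmation shouldn't shake the head
  deny: ["nod"],           // a denial shouldn't nod
};

function findContradictions(turn: BotTurn): string[] {
  const banned = contradictoryGestures[turn.intent] ?? [];
  return turn.cues
    .filter(cue => cue.modality === "gesture" && banned.includes(cue.payload))
    .map(cue => `Turn "${turn.id}": gesture "${cue.payload}" contradicts intent "${turn.intent}"`);
}

// Example: a "no" answer that mistakenly nods gets flagged during design.
const badTurn: BotTurn = {
  id: "medicine-reminder-deny",
  intent: "deny",
  cues: [
    { modality: "speech",  startMs: 0, durationMs: 2000, payload: "<speak>No, not yet.</speak>" },
    { modality: "gesture", startMs: 0, durationMs: 1200, payload: "nod" },
  ],
};

console.log(findContradictions(badTurn));
// -> [ 'Turn "medicine-reminder-deny": gesture "nod" contradicts intent "deny"' ]
```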

How do other industries do it?

It’s so easy to fall into the trap of thinking every challenge in conversational AI is brand new. That’s not always the case.

Animated films and videogames have workflows where every facet of a character is considered and brought to life, with varying results.

Check out this video on the making of Rango. Whereas actors will usually just be brought in to replace temporary voices for animated films, on Rango the actors were filmed together on a motion-capture stage acting out every scene. Those performances were the source materials for the animation, so each actor’s body movements, facial expressions and voice were captured before being applied to their CGI character. As you can see, it brings the characters to life and they’re so expressive!

Compare Rango to Fireman Sam. Both are CGI, but watching Fireman Sam is like watching wooden marionettes. Their body language is often redundant. Their facial expressions say very little. The characters’ potential for expression with their bodies and faces hasn’t been exploited at all.

Why can’t we make bots and digital humans that are as expressive as a character from Rango?

What’s suitable for our industry?

Of course, most of us aren’t making entertainment products. Our bots often have important roles to play. They have to empathise. They have to build trust. They have to sell things. They have to represent a brand and its values.

You could say in those cases the stakes are higher than with entertainment. When someone watches a film they don’t like, they can ask for a refund or moan on social media, and then the story is over. On the other hand, if someone has an underwhelming conversation with an AI assistant, then they might never talk to it again or stop dealing with that brand.

For a companion bot such as ElliQ, trust is paramount. The user and bot communicate with each other, and a relationship grows from those exchanges.

So, where’s the tool to help conversation designers orchestrate a bot’s multimodal expressions? We’re not animators and we don’t have mo-cap studios or actors. We shouldn’t have to learn every trick a CGI animator knows to do this, and yet it’s our job to create excellent communicators.

Someone get on it! We need this.

Here’s my full interview with Jason – he gives many great insights.

You can also check out Kane’s interviews with Stefan Scherer and Danny Tomsett for more on designing for robots and digital humans.


This article was written by Benjamin McCulloch. Ben is a freelance conversation designer and an expert in audio production. He has a decade of experience crafting natural sounding dialogue: recording, editing and directing voice talent in the studio. Some of his work includes dialogue editing for Philips’ ‘Breathless Choir’ series of commercials, a Cannes Pharma Grand-Prix winner; leading teams in localizing voices for Fortune 100 clients like Microsoft, as well as sound design and music composition for video games and film.
