The future is multimodal

By Kane Simms


A good digital assistant will take context into consideration when providing a user experience.

That context can relate to the device you’re using, to the environment you’re in, or to how much time and attention you have available at any given moment.

For example, if I’m in the kitchen washing up, I might have some time, but you might not have my attention, so the experience might need to differ from when I’m sitting in the front room watching TV, where I have both time and attention, or when I’m out for a run wearing headphones and I have neither. In the headphone example, maybe your interactions need to be short, sharp and transient. In the living room example, maybe you lean more on visuals. And in the kitchen, maybe you go audio-first and emphasise earcons and similar cues to create a richer audible experience.

Those are just high-level examples, and it’s difficult enough to create one conversation that’s intuitive, natural and easy to use.

Now think about doing that for all of these different devices, and not just for one third-party app that you create. If you’re the designers behind Google Assistant, it exists on over a billion devices, in over 90 countries and 30 different languages.

How do you create conversations that adapt not only to the devices you create as Google, but also to any number of devices that third-party manufacturers could create by putting Google Assistant in their own hardware?

That is a very complex, very big task, but it has to be someone’s task, and that someone is Daniel Padgett, Head of Conversation Design at Google.

He and his team work on creating consistent conversations across modalities for Google Assistant, and we had the opportunity to interview Daniel and chat about multimodal design for Google Assistant on the VUX World podcast this week.

We talked to Daniel about how you go about creating genuinely multimodal conversations that change depending on the user’s device and context, and where the future of multimodality is going from Google’s perspective.
