Google’s Head of Conversation Design, Daniel Padgett, shares how his team approaches multimodal design across all Google Assistant-enabled devices.
Multimodal design for Google Assistant
We first spoke about multimodal design with Jan König of Jovo on one of the very first episodes of the VUX World podcast. Back then, Jan described Jovo’s vision for a multimodal future, where the best interface is the closest interface you have to hand, whether that’s your watch, your headphones, your speaker or your phone, and where the experience you have with your assistant depends on the device you’re using. Context should be carried across devices and modalities so that your experience remains personalised, yet tuned to the device you’re using.
In 2018, this was merely a vision. Google Assistant existed on Android and in a smart speaker, and almost all design was confined to the audible conversation.
Since then, Google Assistant has exploded. It’s on over 1 billion devices of all shapes and sizes. Yes, it still runs on Android and on Google’s line of Nest smart speakers. But it’s also now on iOS, Nest Hub smart displays, car head units, headphones, smart home objects, watches and TVs, all in over 30 languages. And it’s expanding into new environments with new languages seemingly every couple of months.
Jan’s vision has been brought to life by Google.
How, then, does Google make sure that the experience of using Google Assistant is consistent across device types? How does a screen change the dynamics of the interaction? How does the context of someone being outside wearing headphones impact design choices? And how should the experience differ and grow over time?
Then there’s the fact that Google doesn’t control where Google Assistant lives. Any manufacturer can put Google Assistant into any device and potentially create new contextual environments and new multimodal dynamics. How do you manage that?
Daniel Padgett, Head of Conversation Design at Google, joins us on the show this week to explain.