For those of you who’ve listened to the podcast, you’ll know that I’m a complete advocate for sonic branding and sound design. Episodes with Joel Beckerman, Eric Seay and Ben McCulloch dive deep into the topic.
And I vlogged recently about the voice branding conundrum and the challenge of having a consistent brand voice that you can use across all channels, not just voice.
However, speaking with Jon Bloom, Senior Conversation Designer at Google, on the show last week got me thinking.
Jon spoke about how Google Assistant already has a persona that you can leverage in your actions.
Leveraging trust in the platforms
Think about it: people are building habits and trust in their digital assistants. People are speaking to Google Assistant every day and investing in its persona.
Brands can leverage the trust that users have, and are building, in their voice assistant by piggybacking on the persona of the assistant.
Adam Cheyer, co-founder of Siri, Viv Labs, and VP R&D at Samsung Mobile, spoke at Project Voice in January about how users want one assistant. Not a million.
Creating applications that use the persona of the assistant, rather than your brand’s, helps keep in line with this ‘one assistant’ experience.
Changing persona can have negative effects
Changing the voice and persona of your voice applications to something different from the standard persona of the assistant can, in some cases, have a negative effect.
Adva Levin, founder of Pretzel Labs, joined us on the podcast recently to share how to create a brand persona for voice applications. We discussed the value and merit in using voice actors to record dialogue.
The audio quality of recorded dialogue is obviously much better than text-to-speech, and Voicebot found that recorded dialogue drives better call-to-action retention than text-to-speech does.
Even so, when Adva swapped out the standard Alexa voice for recorded dialogue in one of her skills, users were caught off guard, asking ‘what’s happened to Alexa?’
It depends on the experience
I’m still fairly bullish on sonic branding and sound design, yet I’m also convinced that utilising the trust that users already have in their voice assistant and keeping the persona of your app in line with that could be a good idea.
Perhaps it depends on the experience. If you’re creating an interactive story, game or content-based experience, then perhaps working on a character is worth it.
However, for service-based voice experiences, such as restaurant reservations, taxi bookings and the like, don’t users care more about getting the job done than about the persona or brand doing it for them?
Peter Nann, Senior Consultant at Cognigy, thinks so. He said “Users don’t care about your branded voice on a daily basis, they just care about getting shit done.”
And “I also don’t want my assistant to behave as a mere receptionist, directing my call to the best ‘expert’. I want to talk to my assistant, who is the expert at everything, and who talks to my bank for me.”
The need for research
All of this is entirely speculative. The truth is, this really needs research: research to figure out whether users want one assistant to act on their behalf, with all interactions reflecting the persona of the assistant, or whether they expect each brand to have its own unique persona and branded presence on their assistant.
To collaborate on this research, please reach out.
Spotify is releasing a voice assistant into its app and the result will be an example of voice and screens working together to make user journeys more streamlined.
Jane Manchun Wong found this screenshot in the Spotify app that reads:
“Hey Spotify. If enabled, Spotify will listen for Hey Spotify when the app is open and on your screen.”
Spotify is working on “Hey Spotify” voice activation pic.twitter.com/PqZI01WZre
— Jane Manchun Wong (@wongmjane) March 4, 2020
This is where I see huge opportunities for almost every organisation that has any kind of internet or app presence: using voice to streamline customer journeys and make experiences frictionless.
When you’re playing a song in the Spotify app, switching to a different playlist or a specific song requires quite a bit of tapping.
Your starting position is usually a screen showing the artist hero image and the play/pause button.
From here, you’d have to:
- minimise the currently playing track
- hit the search bar
- type a search (between 3 and, say, 15 taps, depending on who you’re searching for)
- browse the results (more swiping)
- tap a new playlist or track
That’s 5 screens and quite a lot of tapping and swiping.
The Spotify in-app voice assistant will eradicate all of that by letting you just say ‘Hey Spotify, play the Blackbyrds’.
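To make the contrast concrete, here’s a minimal sketch of how a ‘play X’ utterance could collapse that five-screen journey into a single lookup. Everything here is hypothetical (the function name, the catalogue structure, the URIs) and is not Spotify’s actual API, just an illustration of the shortcut a voice command takes:

```python
def handle_voice_command(utterance: str, catalogue: dict) -> str:
    """Collapse the minimise-search-type-browse-tap journey into one step.

    Hypothetical sketch, not Spotify's real API: a 'play X' utterance
    jumps straight from intent to playback, skipping the five screens
    a tap-driven journey would walk through.
    """
    utterance = utterance.lower().removeprefix("hey spotify,").strip()
    if utterance.startswith("play "):
        query = utterance[len("play "):]
        # One lookup replaces: minimise, search bar, typing, browsing, tapping
        for title, uri in catalogue.items():
            if query in title.lower():
                return f"Now playing {title} ({uri})"
    return "Sorry, I didn't catch that."
```

With a catalogue entry for the Blackbyrds, saying ‘Hey Spotify, play the Blackbyrds’ resolves to playback in one step, which is the whole point: the user need and the journey are unchanged, but the tapping in the middle disappears.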
Using voice to streamline a user journey
In this case, voice isn’t the whole deal like with Spotify on Alexa. Voice is simply filling a gap in order to streamline one part of a user journey. It’s just making one interaction simpler.
It’s the same application of voice that we see in TVs and on set-top boxes like SkyQ, TiVo or Comcast.
Rather than using the remote to tap your way through the apps or randomly browse a list hoping to find something interesting, you can say “Find me a funny film” and you’re halfway there.
The user need is the same, the journey is also the same, but a chunk of it has been removed thanks to a VUI.
Following the footsteps of mobile
Although I don’t like to compare voice to mobile, there is a parallel here.
When mobile was in its infancy, before it became the main device for accessing the internet, before we built the internal triggers that now have us addicted to our devices and picking them up 100 times a day, right when that behaviour was being born, we were talking about ‘micro moments’.
People used to use their phone, not constantly throughout the entire day, but sparingly in certain moments: waiting for the train, sitting on a bus, maybe on the toilet, before a meeting.
And we wouldn’t spend 20 minutes at a time on it like we do now; we’d spend 2 or 3.
At that time, brands were clamouring to make connections with users during those micro moments.
Our usage of voice interfaces seems to be travelling the same way as our usage of mobile did, starting with micro moments: little times throughout the day where a voice interaction makes sense and makes a journey more efficient. Not to replace mobile, but to enhance it.
This is something for brands and organisations to consider: when thinking about your customer experience and customer journey across your touchpoints, can voice play just a small role in making the overall journey simpler by filling in those little micro moments?
Jonathan Myers and Dave Grossman, founders of Earplay, join us to share how they create world-leading interactive stories.
This week, we’re joined by VAICE co-founder, Sina Kahen to discuss the importance of emotional intelligence and how you can design EQ into your voice experiences using the 6 ‘First date’ principles.
This week, we’re finding out how content creators can have their podcasts and YouTube content indexed and searchable on voice, with Bryan Colligan of Alpha Voice.
This week, we take a look at the similarities between VUI design for IVR and VUI design for voice assistants. We also explain what VUI tuning is and why it’s important, whilst giving you some tips on how you can tune your voice user interface. We also discuss PinDrop and voice first security.
Today we’re taking a close look at the Voysis platform and discussing transitioning from GUI to VUI design with VP of Design, Brian Colcord.
We’ve covered plenty of voice first design and development on this podcast. Well, that’s what the podcast is, so we’re bound to! Most of what we’ve discussed has largely been voice assistant or smart speaker-focused. We haven’t covered a huge amount of voice first application in the browser and on mobile, until now.
You’ll have noticed the little mic symbol popping up on a number of websites lately. It’s in the Google search bar, it’s on websites such as EchoSim, and Spotify is trialling it too. When you press that mic symbol, it enables the mic on whatever device you’re using and lets you speak your search term.
Next time you see that mic, you could be looking at the entry point to Voysis.
On a lot of websites, that mic may well just feed your words into the website’s standard search tool. With Voysis, its engine performs the search for you using its voice tech stack.
That means that you can perform more elaborate searches that most search engines would struggle with. For example:
“Show me Nike Air Max trainers, size 8, in black, under $150”
Most search engines would freak out at this, but not Voysis. That’s what it does.
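As a rough illustration of why that query is hard, here’s a toy slot-filling sketch. Real voice search stacks like Voysis use trained natural language understanding models rather than regexes, and the function name and filter keys below are my own invention, but it shows the kind of structure a voice engine has to pull out of one utterance:

```python
import re

def parse_shopping_query(utterance: str) -> dict:
    """Pull structured filters out of a free-form shopping utterance.

    Toy illustration only: production systems use trained NLU models,
    not hand-written regexes.
    """
    filters = {}

    # Price ceiling, e.g. "under $150"
    price = re.search(r"under \$?(\d+)", utterance, re.I)
    if price:
        filters["max_price"] = int(price.group(1))

    # Size, e.g. "size 8"
    size = re.search(r"size (\d+)", utterance, re.I)
    if size:
        filters["size"] = int(size.group(1))

    # Colour: naive lookup against a small vocabulary
    for colour in ("black", "white", "red", "blue"):
        if re.search(rf"\b{colour}\b", utterance, re.I):
            filters["colour"] = colour
            break

    return filters

print(parse_shopping_query(
    "Show me Nike Air Max trainers, size 8, in black, under $150"
))
# → {'max_price': 150, 'size': 8, 'colour': 'black'}
```

A keyword-matching site search treats that sentence as one bag of words; a voice engine has to turn it into constraints it can actually filter a catalogue by.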
Of course, it’s more than an ecommerce search tool, as we’ll find out during this episode.
In this episode
We discuss how approaches to new technology seem to wrongly follow a reincarnation route: turning print into web by using the same principles that govern print; turning online into mobile by using the same principles that govern the web; then taking the practices and principles of GUI and transferring them to VUI. We touch on why moving your app to voice is the wrong approach.
We also discuss:
- Voysis – what it is and what it does
- Getting sophisticated with searches
- Designing purely for voice vs multi-modal
- The challenge of ecommerce with a zero UI
- The nuance between the GUI assistant and voice-only assistants
- How multi-modal voice experiences can help the shopping experience
- Making the transition from GUI to VUI
- The similarities between moving from web to mobile and from mobile to voice – (when moving to mobile, you had to think about gestures and smaller screens)
- Error states and points of delight
- The difference between designing for voice and designing for a screen
- Testing for voice
- Understanding voice first ergonomics
Brian Colcord, VP of Design at Voysis, is a world-leading designer, a cool, calm and collected speaker, and a passionate sneakerhead.
After designing the early versions of the JoinMe brand markings and UI, he was recruited by LogMeIn and went on to be one of the first designers to work on the Apple Watch prior to its release.
Brian has made the transition from GUI to VUI design and shares with us his passion for voice, how he made the transition, what he learned and how you can do it too.
Voysis is a Dublin-based voice technology company that believes voice interactions can be as natural as human ones and is working intently to give brands the capability to have natural language interactions with customers.
Check out the Voysis website
Follow Voysis on Twitter
Read the Voysis blog
Join Brian on LinkedIn
Follow Brian on Twitter
Listen to the AI in industry podcast with Voysis CEO, Peter Cahill
Read Brian’s post, You’re already a voice designer, you just don’t know it yet
Where to listen
Today, we’re getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it than one of the most experienced developers on the voice scene, Jo ‘the Oracle’ Jaquinta.
Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day-to-day duties and the skillsets that are important to succeed in designing voice user interfaces.