VUX design

The difference between chatbot and voice search refinements


What’s the difference between how people use chatbots and search bars versus how they use voice user interfaces, and what does that mean for how you design interactions for each?

One of the big differences between designing for a voice user interface and designing for a chat user interface, and one of the most striking differences between how people use chat and text-based interfaces (including search boxes) compared to voice, is all to do with search refinements.

Say you use natural language search on a retailer website and you search for something like “I’m looking for men’s summertime clothes”, “I’m looking for something to wear this summer”, “I’m looking for something to wear on my holiday”, or any kind of natural language query like that.

If you don’t find anything off the back of that search, your refinement will usually shorten the phrase and make it more keyword-based: “men’s summer clothes”. You refine it down to something shorter because we’ve been trained over decades in how to use search engines and how search engines work.

But if I’m having an actual conversation, if I’m in a shop talking to a sales assistant and I say “I’m looking for some clothes” and they say “What do you mean?”, what I’m likely to do in that situation is also refine my search, refine my phraseology.

In person, though, that refinement is likely to be a hell of a lot longer. So instead of just saying “men’s summer clothes”, I’m likely to say something like: “Well, I’m going on holiday in a couple of weeks’ time, you know, it’s supposed to be really hot weather. I’m looking for some shorts and t-shirts, that kind of stuff.”

So the utterance there is incredibly long because I’m adding a whole load more context to the discussion. I’m saying that we’re going on holiday: that’s context. I’m saying it’s going to be hot weather, which implies I’m looking for summertime clothing. I give examples by mentioning shorts and t-shirts, and I don’t need to say ‘men’s’ because it’s implied by who’s actually having the conversation.

And so not only is there additional information underneath the utterance, there’s also a hell of a lot more information in the utterance itself.

We’ve been trained over years, over lifetimes, of having conversations that if someone doesn’t understand you, you elaborate: you add more context, more information, to help them understand.

In the voice context, imagine you’re using a shopping application or a shopping voice user interface and it asks you a question like “Do you want to know more about the red t-shirts or the blue t-shirts?”

With voice, you might say “Both”. Right, the utterance starts out narrow and short, but if the system doesn’t understand you and it says, “I’m sorry, I didn’t understand that. Do you want red or blue?”, you elaborate, because you’ve been trained in conversation to add more information so that the other person can understand you.

And so instead of saying “both” again, you’ll say “I need both the red and the blue” or “I want to know more about both the red and the blue”, and your utterance becomes longer.

And so that’s one of the real things to pay attention to when you’re designing voice user interfaces:

1) be clear about the way that you phrase the question and anticipate those kinds of nuanced responses
2) be prepared, when you do have to repair a conversation, for the utterances you get in response to sometimes be a little bit longer and contain a little bit more information (see the sketch below).
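
To make that second point concrete, here’s a minimal sketch, in Python, of how the red/blue t-shirt exchange above might be handled so that it accepts both the terse first answer (“Both”) and the longer, elaborated reply you tend to get after a repair prompt. The function names and the simple keyword matching are illustrative assumptions, not taken from any particular voice platform or NLU engine.

```python
import re

# Illustrative only: a real voice app would lean on its platform's NLU rather
# than a regular expression, but the design point is the same.
COLOUR_PATTERN = re.compile(r"\b(red|blue|both)\b", re.IGNORECASE)


def interpret_colour_choice(utterance: str) -> set:
    """Work out which colours the user asked about, whether the reply is a
    single word ("Both") or an elaborated sentence given after a repair prompt
    ("I want to know more about both the red and the blue")."""
    matches = {m.lower() for m in COLOUR_PATTERN.findall(utterance)}
    if "both" in matches:
        return {"red", "blue"}
    return matches


def repair_prompt() -> str:
    # When repairing, restate the options explicitly so the user knows what
    # the system can handle, and expect the next utterance to be longer.
    return "Sorry, I didn't catch that. Would you like the red t-shirts, the blue ones, or both?"


if __name__ == "__main__":
    for reply in ["Both",
                  "I need both the red and the blue",
                  "I want to know more about both the red and the blue",
                  "Tell me about the blue ones, please"]:
        print(reply, "->", interpret_colour_choice(reply) or "no match, re-prompt")
```

The detail of the matching doesn’t matter; the point is that whatever grammar you accept after a repair prompt should be at least as permissive as the one you accepted first time around, because that second utterance will often be longer, not shorter.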

Of course, it can work the other way around too. Sometimes people will start with a long search phrase, realise the system isn’t quite functioning properly and doesn’t understand them, and refine it to something a little bit shorter. It isn’t always one pattern or the other; sometimes it’s the inverse.

Conversational ear worms


What is the conversational equivalent of an ear worm?

An ear worm is a song that you just cannot get out of your head. It doesn’t matter how hard you try, it just sticks in there.

If any of you have got kids then you’ll know exactly what it’s like to wake up at five o’clock in the morning, busting for the loo, and you just cannot get that Peppa Pig song out of your head!

Musicians and music writers all over the world strive to create ear worms because if you can create an ear worm, then that’s job done!

My latest ear worm, and I don’t see any reason why you should be immune to this, is Thomas the Tank Engine.

So I was thinking about that, and I wondered: what’s the conversational equivalent of an ear worm?

We’ve all had conversations that we remember, some of us have had conversations that might have even been life-changing.

Does the same logic tie into conversations that we have with our voice assistants?

I remember the first time I asked Google Assistant for a football score and it played the sound of a crowd cheering in the background. I still remember that today. It’s one of the best interactions I’ve had on Google Assistant.

And so we have the tools to create memorable experiences through a combination of conversation design and sound design and it doesn’t matter whether you’re a boring old insurance company or whether you’re a cutting-edge media outfit.

We all have access to the same tools and we all have the potential to create memorable and meaningful conversations.

So what’s the most memorable conversation you’ve had with your voice assistant, or the most memorable conversation you’ve had at all, and why?

Think conversation design is complex? You ain’t seen nothing yet


If you think conversation design is complex, you ain’t seen nothing yet.

The voice design sprint with Maaike Coppens


Maaike Coppens returns to share how you can go from zero to hero in one voice design sprint: from nothing at the beginning to a validated use case and prototype at the end, with fun in the middle.

Voice at VaynerMedia, situational design and sound with Claire Mitchell


VaynerMedia are one of the leading global digital agencies. Director of Innovation at VaynerSmart, Claire Mitchell, joins us to share how VaynerMedia approach voice, as well as some insights on situational design and sound design.

International VUX design best practice workshop


If you’re looking to create voice applications for Alexa and Google Assistant, check out this run-through of VUX design best practice from across the globe.

These insights were taken from over two years and one hundred podcast recordings with voice AI and conversational AI industry thought leaders and experts, as well as two years’ worth of designing and developing voice applications ourselves.

We cover:

  • Why VUX design is similar to service design
  • Stage theory and the component parts of VUX design
  • Conversational design definition and mental model
  • Dialogue design techniques and how to build trust with dialogue
  • Sonic branding and sound design
  • Persona design and psychology

Whether you’re just starting out trying to build your first Alexa skill or Google action, or whether you’re a seasoned pro VUI designer, there’ll be insights in here that you can take into your work.

This workshop was recorded live at Project Voice, Chattanooga, Tennessee in January 2020.

Persona design and voice actors with Adva Levin


Adva Levin joins us to share why you should design personas for your voice apps, how to create them, and how to work with voice acting talent.

A voice design workshop blueprint for Alexa skill building


Voice design can be a challenge for those who’ve never done it. Designing an Alexa skill is completely different to designing a website or app.

In this video, we run through a blueprint for a voice design workshop that you can use with your team or clients to find a use case, then design, prototype and test a voice app.

Emotional intelligence with Sina Kahen



This week, we’re joined by VAICE co-founder Sina Kahen to discuss the importance of emotional intelligence and how you can design EQ into your voice experiences using the 6 ‘First date’ principles.

VUI design best practice from user testing with 120 brands, with Abhishek Suthan and Dylan Zwick



Pulse Labs founders Abhishek Suthan and Dylan Zwick share their advice on VUI design best practice that they’ve learned from conducting voice first usability testing with over 120 brands.


Where to listen

Apple podcasts

Spotify

YouTube

CastBox

Spreaker

TuneIn

Breaker

Stitcher

PlayerFM

iHeartRadio


The search for VUI design best practice

In web design, there are standards: common design patterns and best practice that you’ll find on most websites and apps.

The burger menu, call-to-action buttons, a search bar at the top of the page: these have all been tried and tested and are par for the course on most websites.

In voice, that best practice is still to be worked out. And today’s guests have begun to uncover it.

Pulse Labs is a voice first usability testing company. They conduct global remote user research by testing voice experiences for brands. Think of it almost like usertesting.com, but specifically for voice.

After working with over 120 brands, the founders, Abhishek Suthan and Dylan Zwick, have stumbled upon some of the most common mistakes that designers and developers make in their Google Assistant Actions and Alexa Skills.

Through design iterations and further testing, they’ve worked out what some of that best practice looks like.

In this episode

Over the course of this episode, we hear from Abhishek and Dylan about some of the most common mistakes designers make when it comes to voice user experience design.

We discuss how these issues can be fixed, as well as further best practice when designing for voice, including:

  • How to architect your voice app and design flat menus
  • How to handle errors and recover from failure
  • Framing experiences and handling expectations
  • When to apply confirmations and when to make assumptions
  • And a whole host more

This episode is one to listen to again and again. No doubt the standards will change as and when the tech advances and usage grows, but for now, this is probably the best start there is in defining best practice in voice.

Links

Visit the Pulse Labs website

Email Dylan Zwick

Follow Pulse Labs on Twitter

Follow Dylan on Twitter

Follow Pulse Labs on Facebook

Follow Pulse Labs on LinkedIn