Flash Briefing

Using website search data to find a use case for voice



One of the common problems people have when getting started in voice is finding a use case. What should your voice app do? It needs to be useful. It needs to serve a purpose. So how can you find a place to start?

We had a great episode with Ben Sauer on finding a use case for voice, and here’s another tip you can use to home in on a place to start.

If you’re running analytics on your website, make sure you’re tracking site searches: the searches people make in your website’s search bar.

You can review what people are searching for to find the common questions they’re struggling to find answers to, or the common transactions they’re having difficulty completing.

Then you can use that information as a starting point for your voice app. 
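As a rough illustration, here’s a minimal sketch of that review step. It assumes a hypothetical CSV export of your site-search data with a “query” column (the exact export format will depend on your analytics tool), and simply tallies the most frequent searches:

```python
# Minimal sketch: tally the most common site-search queries from an
# exported CSV. Assumes a hypothetical export with a "query" column;
# adjust the filename and column name to match your analytics tool.
import csv
from collections import Counter

counts = Counter()
with open("site_search_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["query"].strip().lower()] += 1

# The most frequent queries are candidate use cases for a voice app.
for query, count in counts.most_common(20):
    print(f"{count:5d}  {query}")
```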

It’s quick, it’s free and it’s genuine user insight based on real customer struggles. 

The most noble startup ever?



Kane 0:03
Happy Monday to those of you in the UK on bank holiday. And for those of you that aren’t in the UK, happy Monday anyway.

I want to tell you about a company today, a very special company called Voiceitt. It’s probably one of the most noble companies I’ve come across in recent times. Voiceitt is working on solving a very important problem. There’s all this talk about voice systems being more accessible than a screen interface. However, there are millions of people all over the world with non-standard speech or speech impairments. These are people who may have had a stroke, may be living with cerebral palsy, or may have any other condition that means they don’t have what you would call standard speech.

Voiceitt is working on an advanced speech-to-text solution for people who have non-standard speech or speech impairments – it gives them their voice back. Voiceitt is part of the Amazon Alexa accelerator program, which is ongoing right now, and is based out of Tel Aviv in Israel. Here’s a little example of Voiceitt in action:

Kane 1:26
There you go. Absolutely fantastic.

Today on the VUX World podcast, we’re speaking to Sara Smalley, who is one of the co-founders of Voiceitt and VP of strategy over there. It’s based out of Tel Aviv, Israel, as I said, and it’s part of the Amazon Alexa accelerator program. I couldn’t quite get out of Sara whether or not that means Voiceitt could become part of Alexa in future, though it seems like it would be a sensible idea.

Check it out. We get into real detail about Voiceitt. We discuss the accelerator program, we talk about some of the challenges Voiceitt is facing and how the idea came about, and we get into some detail on the whole accessibility situation in the voice space.

Listen to the VUX World and Voiceitt podcast.

Make voice core to what you do, not an add-on



Online, you’ve been able to get away with doing the bare minimum. For years, hotels just had a website with pictures; you had to call to book a room.

Same for restaurants. Websites used to just show you the place and the menu. To book a table, you needed to call.

In some restaurants, that’s still the case.

The mobile revolution forced some companies to open up their systems. You needed to go a step beyond the shop window and let people actually do stuff. You had to let people transact. That meant creating APIs.

But not everyone did that. Not everyone opened up their APIs. Those that haven’t got APIs or that have old, legacy line-of-business systems provided by incumbent suppliers will have a hard time realising the true value of voice.

Voice has the potential to create huge efficiencies and streamline countless processes, but you need to make it core to what you do and integrate it with the rest of your line of business systems. Will Hall discusses this ‘systems thinking’ approach in detail on the podcast.
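As a rough sketch of what that integration can look like – assuming a hypothetical REST booking API and a generic webhook-style fulfilment, not any specific platform’s SDK – a voice request might be passed straight into the line-of-business system like this:

```python
# Minimal sketch: a webhook-style fulfilment handler that passes a voice
# booking request straight into a (hypothetical) line-of-business API,
# rather than leaving voice as a disconnected add-on.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
BOOKING_API = "https://example.com/api/bookings"  # hypothetical endpoint


@app.route("/fulfilment", methods=["POST"])
def fulfilment():
    # Slots extracted by the voice platform from the user's utterance.
    slots = request.get_json().get("slots", {})
    booking = {
        "name": slots.get("name"),
        "date": slots.get("date"),
        "party_size": slots.get("party_size"),
    }

    # Kick off the real process in the back-office system immediately,
    # just as a well-integrated web form would.
    resp = requests.post(BOOKING_API, json=booking, timeout=10)
    if resp.ok:
        speech = f"Your table for {booking['party_size']} is booked."
    else:
        speech = "Sorry, I couldn't complete the booking right now."

    return jsonify({"speech": speech})
```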

Just as a web form causes hassle if it doesn’t have validation, or if it doesn’t shoot the request straight into your line-of-business system to kick off a process, voice will cause just as much hassle for you if it’s an add-on.

You’ll never have enough time to focus on it. You’ll complain that it isn’t working and you’ll either abandon the idea or you’ll keep it ticking over, never truly gaining any benefit from it.

Voice will force you to organise your data like never before. Those who haven’t even thought about what data they have and what state it’s in will have a lot to do.

Is Facebook getting a Voice Assistant?


I first came across this on VoiceBot. Jane Manchun Wong has found some hidden code inside Facebook Messenger that suggests Facebook is about to launch a voice assistant called Aloha.

She also found hints of audio messaging within direct messages in Instagram.

Is it really an assistant?

For me, this doesn’t actually look so much like an assistant. It’s more of a demonstration of a speech-to-text engine. And audio messaging in Instagram is interesting, though LinkedIn has already rolled that out.

The bigger picture

Even if it’s not evidence of an assistant, and if it is simply a voice interface, it’s certainly a step in the right direction and will help users get more comfortable speaking to technology. That’s good for us all, right?

Cognilytica Voice Assistant Benchmark 1.0


Today, we’re discussing the Cognilytica Voice Assistant Benchmark 1.0 and its findings on the usefulness and capability of smart speakers.

The folks at Cognilytica conducted a study where they asked Google Assistant, Alexa, Siri and Cortana 100 different questions in 10 categories in an effort to understand the AI capability of the top voice assistants in the market.

What they found, broadly speaking, was a tad underwhelming.

None of the assistants fared too well

Alexa came out on top, successfully answering 25 out of 100 questions and Google Assistant came second with 19. Siri answered 13 and Cortana 10.

The real question is, what does this mean?

Well, if you take a closer look at the kind of questions that were asked, it’s difficult to say that they were helpful. They weren’t typically the kind of questions you’d ask a voice assistant and expect a response to.

Things like “Does frustrating people make them happy?” and “If I break something into two parts, how many parts are there?” aren’t necessarily common questions that you’d expect a voice assistant to answer.

Granted, they would test whether assistants can grasp the concept of the question. If they can grasp the concept, then perhaps they have the potential to handle more sophisticated queries.

What the study did well was to start out with simple questions on Understanding Concepts, then work through more complex questions in areas like Common Sense and Emotional IQ.

The trend, broadly speaking, was that most of the voice assistants were OK with the basic stuff, but flagged when they came up against the more complex questions.

Cortana actually failed to answer one of the Calibration questions: “what’s 10 + 10?”

Slightly worrying for an enterprise assistant!

Google gave the most rambling answers and didn’t answer many questions directly. This is probably due to Google using featured snippets and answer boxes from search engine results pages to answer most queries. Its answers are only as good as the text it scrapes from the top-ranked website for that search.

It’s not a comparison

This benchmark wasn’t intended to be a comparison between the top voice assistants on the market, though it’s hard not to do that when shown the data.

Whether the questions asked are the right set to really qualify the capability of a voice assistant is debatable, but it’s an interesting study nonetheless, and it’s worth checking out the podcast episode where they run through it in a bit more detail.