Kane Simms and Dustin Coates are joined by Elaine Lee, Principal Product Designer at Twilio, to discuss the ins and outs of Twilio’s Autopilot bot builder and how you can build trust with users through dialogue design.
Why do all skills start with ‘Welcome to xyz’? Is an ‘assistant’ the right mental model for voice experiences? Mark Webster of Adobe XD joins us to tackle some of the biggest challenges in voice and discusses how design can play a role in solving them.
Invocable announced today that it’ll be closing its doors, just 5 months after pivoting from Storyline and putting itself behind a paywall.
VoiceFlow are working with Invocable to offer a migration service for users wanting to port their existing Invocable skills to the VoiceFlow platform.
That’s the news, but the question is:
Why is Invocable closing?
If you don’t have time to read the full article, then Vasili summarises it as:
- The market for a tool that creates voice applications depends on the success of voice applications themselves, and that success isn’t there yet.
- A voice app works well as an integration — a very short, concise request that is correctly recognized and processed. “Play the latest album from Eminem” is a good example. But there’s nothing to design here; all these applications are custom integrations, sometimes made by vertical players (like the DoorDash skill for ordering).
- There are voice apps that need to be designed, but NLP and NLU quality are not good enough now to support their growth. They’re like IVRs from the 90s, but on Alexa.
There are a couple of other reasons why I’m not that surprised by this news.
What went wrong?
Aside from what Invocable report, there are three other reasons why I think Invocable struggled.
- Premature monetisation
- Target market pivot
- Tree-based design
Firstly, without significant funding, it’s extremely difficult to sustain a startup. Invocable landed $770k of funding in July 2018, back when it was Storyline, but that’s not a huge amount considering they were powering about 10% of the Alexa skill store at the time.
Its core target market at the time was hobbyists. Bedroom skill builders creating and publishing Alexa skills in Storyline. Some of them, like Kids Court, won competitions and cashed in on developer rewards.
It’s understandable then, that Storyline would want to cash in on their product. If it’s making money for the people using it, then why shouldn’t it make money for the founders?
However, what I suspect the team found is that hobbyists are less likely to pay $60 per month for software. Far less likely than full-time designers and agencies.
So the company pivoted into providing prototyping tools for designers.
The problem I think the team might have found is that there just aren’t enough full-time VUI designers or agencies out there willing to pay that kind of price for that kind of tool. Yet.
Maybe it’s too early.
Target market shift
The second thing that might not have helped, in hindsight, is that pivoting the company might have alienated some of their core customer base.
I have no doubts that the tool was used heavily by designers for prototyping purposes. I was one of them. But I wouldn’t have classed myself as a core user. I didn’t actually use it that much. Just in workshops and here and there for ideation.
I wasn’t someone who published a skill, nurtured a skill, updated it and had success with it.
I’m not sure of the numbers, but I’m sure there would have been a split between people publishing skills through the platform and people logging in regularly, but not hitting publish. Furthermore, I’m sure there would also be a difference between those who publish one-time skills and those who’re creating things so good that they’re getting developer rewards for it.
I’m not saying that things would definitely have turned out differently, and I’m sure the guys knew what they were doing at the time, I just wonder whether they might have picked the wrong group to home in on.
The last thing that hindered Invocable, and many other prototyping tools for that matter, is the tree-based nature of its design. It forces you to think about your voice app as a decision tree.
With a voice experience, a user should be able to ask for whatever they want, whenever they want it and have the app respond to them. With a decision tree style design, you’re only ever going to be able to provide the answer that matches your specified next steps.
I’ve written about the Virgin Trains skill and the perils of decision tree style design recently in our 4 ways to take your voice strategy to the next level article.
Tree-based designs are fine for games and interactive stories where the assistant is in control and leading the interaction, but in situations where the user is in the driving seat, tree structures don’t work as well.
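To make that distinction concrete, here’s a minimal JavaScript sketch (hypothetical prompts and intent names, not Invocable’s actual model) of why a tree can only answer what its branches anticipate, while an intent map can respond from any state:

```javascript
// Tree-based: the answer depends on where you are in the tree.
// Anything outside the expected branches is a dead end.
const tree = {
  start: {
    prompt: "Do you want train times or ticket prices?",
    next: { "train times": "times", "ticket prices": "prices" }
  },
  times: { prompt: "Which station?", next: {} },
  prices: { prompt: "Which route?", next: {} }
};

function treeRespond(state, utterance) {
  const nextState = tree[state].next[utterance];
  return nextState ? tree[nextState].prompt : "Sorry, I didn't get that.";
}

// Intent-based: any request can be handled from any point
// in the conversation, with a sensible fallback.
const intents = {
  GetTimes: () => "Which station?",
  GetPrices: () => "Which route?",
  Help: () => "You can ask for train times or ticket prices."
};

function intentRespond(intentName) {
  const handler = intents[intentName];
  return handler ? handler() : intents.Help();
}
```

Ask the tree for ticket prices once you’re already down the “times” branch and it fails; the intent map answers the same request wherever you are.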
I would have thought that this would have been one of the first things Invocable would have fixed after going behind the paywall. But to my knowledge, it didn’t. Nor did it update much else, aside from multi-modal support.
Or am I wrong?
I’m not writing this with any motivation other than trying to understand how the top voice design tool of 2018 has ended up folding. And these are just my thoughts on it. Maybe others can learn from it, or set me straight if I’ve missed something.
I’m more than aware that I could be completely wrong. It’s entirely possible that Vasili and Maksim have figured out the eventual end game for all of the prototyping tools out there. Maybe they’re all destined to become interactive story or game design tools.
Or perhaps the guys are just ahead of the game and they’ve kept the tool backed-up until NLP and NLU advances to the point where they feel it’ll be useful again.
Time will tell.
For now, though, I certainly need a prototyping tool.
This week, Dustin and I catch up with John Kelvie, CEO and founder of Bespoken, and learn all about the three types of testing that can help you create and sustain great voice experiences.
- Unit testing: how to test your code locally without having to deploy into the cloud and test through your smart speaker or phone. This can save developers a whole load of time and effort in the development phase.
- End to end testing: how to automate testing of utterances and intents to make sure you’re returning the correct response to the various utterances that can be fed through your skill or action. This saves the QA folks time as you no longer need to fire up your skill or action and physically test every possible utterance.
- Continuous testing: making sure that you keep on top of the ever-changing AI operating systems and that your skill or action is always operating as intended.
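As a rough illustration of the unit testing idea, here’s a minimal JavaScript sketch: a hypothetical Alexa-style handler invoked directly with a fake request envelope, using plain Node rather than Bespoken’s actual test framework:

```javascript
// Hypothetical Alexa-style intent handler, testable locally
// without deploying to the cloud or speaking to a device.
const LaunchRequestHandler = {
  canHandle(input) {
    return input.requestEnvelope.request.type === "LaunchRequest";
  },
  handle(input) {
    return { outputSpeech: "Welcome! Ask me for today's tip." };
  }
};

// Build a fake request envelope and invoke the handler directly —
// no smart speaker or cloud deployment required.
function makeInput(type) {
  return { requestEnvelope: { request: { type } } };
}

const input = makeInput("LaunchRequest");
const response = LaunchRequestHandler.canHandle(input)
  ? LaunchRequestHandler.handle(input)
  : null;
console.log(response.outputSpeech);
```

The same pattern scales up: feed in envelopes for each intent and assert on the responses, and the whole interaction model can be checked in seconds on a laptop.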
We also discuss the convergence of usability testing and technical testing and how they can play together, as well as hear John’s take on the future of voice.
Where to listen
This week, we’re finding out how content creators can have their podcasts and YouTube content indexed and searchable on voice, with Bryan Colligan of Alpha Voice.
This week, we’re getting deep into voice analytics and will help you learn more about how you can understand the performance of your voice first experience.
One of the biggest benefits that technology has given us is the ability to understand. To understand whether our latest PPC campaign had an impact on sales. To understand whether our new website increased our leads. To understand whether our pricing tweak made a difference on click through rates. To understand whether our foray into Facebook is sending more traffic. To understand whether our customers are satisfied.
Tools such as Google Analytics have been providing this kind of value to website owners for years. Tracking where your users come from (Google, Facebook etc), what they do when they arrive and whether they convert are the cornerstones of understanding website performance.
What about voice analytics?
With the introduction of new mediums such as conversational chatbots and voice first applications on platforms such as Alexa and Google Assistant, how do you understand the performance of these things?
Can you apply the same rules as the web? Can you even access the same data? Are there new metrics that matter more? And how can you use all of this to understand and improve the performance and use of your product?
Well, that’s what you’re about to find out.
In this episode
We’re speaking to Dashbot.io CEO Arte Merritt all about the conversational analytics platform and how you can understand whether your conversational experience is working for your users.
We discuss the kind of metrics Dashbot provide including:
- No. of users
- Repeat users
- Time per session
- Sentiment analysis
- Message funnels
- Intent funnels
- Top exit messages
- AI performance
- Behaviour flow
- Conversation flow
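To give a flavour of how a couple of these metrics fall out of raw event logs, here’s a toy JavaScript sketch (made-up data, not Dashbot’s actual implementation) computing repeat users and average messages per session:

```javascript
// Toy event log: one entry per message, tagged with user and session.
const events = [
  { user: "a", session: "s1" }, { user: "a", session: "s1" },
  { user: "a", session: "s2" }, { user: "a", session: "s2" },
  { user: "b", session: "s3" }, { user: "b", session: "s3" }
];

// Count distinct sessions per user; a "repeat user" has more than one.
const sessionsByUser = {};
for (const e of events) {
  if (!sessionsByUser[e.user]) sessionsByUser[e.user] = new Set();
  sessionsByUser[e.user].add(e.session);
}
const repeatUsers = Object.values(sessionsByUser)
  .filter(s => s.size > 1).length;

// Average messages per session.
const perSession = {};
for (const e of events) {
  perSession[e.session] = (perSession[e.session] || 0) + 1;
}
const counts = Object.values(perSession);
const avgMessages = counts.reduce((a, b) => a + b, 0) / counts.length;

console.log(repeatUsers, avgMessages);
```

Metrics like sentiment, funnels and AI performance need much more machinery, of course, but they all start from event streams shaped roughly like this.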
Arte tells us some case studies of how the tool has been used to understand and then improve conversational experiences.
We discuss some of the challenges with conversational analytics and how they relate to the voice first space and we hear about where voice analytics are heading in the future.
Arte Merritt has worked in mobile and analytics for 20 years. He built an analytics platform, which he sold to Nokia, before turning his attention to a gap in the market he spotted when he realised that Slack didn’t have any analytics. Dashbot was born, and it’s been serving conversational designers ever since, helping them understand and improve their chatbots and voice applications. Since its creation, Dashbot has analysed 32 billion messages and counting!
Where to listen
- iTunes/Apple podcasts
- Any other podcast player you use or ask Any Pod to play V.U.X. World on Alexa
This week, we’re digging into how you can create an Alexa Skill using BotTalk and we give you a template for running a voice first discovery workshop, with SmartHaus Technologies CEO and BotTalk co-founder, Andrey Esaulov.
Today we’re taking a close look at the Voysis platform and discussing transitioning from GUI to VUI design with VP of Design, Brian Colcord.
We’ve covered plenty of voice first design and development on this podcast. Well, that’s what the podcast is, so we’re bound to! Most of what we’ve discussed has largely been voice assistant or smart speaker-focused. We haven’t covered a huge amount of voice first applications in the browser and on mobile, until now.
You’ll have noticed the little mic symbol popping up on a number of websites lately. It’s in the Google search bar, it’s on websites such as EchoSim, and Spotify is trialling it too. When you press that mic symbol, it enables the mic on whatever device you’re using and lets you speak your search term.
Next time you see that mic, you could be looking at the entry point to Voysis.
On a lot of websites, that search may well just use the website’s standard search tool to perform the search. With Voysis, its engine will perform the search for you using its voice tech stack.
That means that you can perform more elaborate searches that most search engines would struggle with. For example:
“Show me Nike Air Max trainers, size 8, in black, under $150”
Most search engines would freak out at this, but not Voysis. That’s what it does.
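As a toy illustration of the idea (nothing like Voysis’s actual engine, which uses a full voice tech stack rather than pattern matching), here’s a JavaScript sketch turning that utterance into structured filters a product search could actually execute:

```javascript
// Hypothetical sketch: extract structured filters from a
// natural-language shopping query. A real system would use
// trained NLU models, not regexes and word lists.
function parseQuery(text) {
  const filters = {};
  const size = text.match(/size (\d+)/i);
  if (size) filters.size = Number(size[1]);
  const price = text.match(/under \$(\d+)/i);
  if (price) filters.maxPrice = Number(price[1]);
  const colours = ["black", "white", "red", "blue"];
  const colour = colours.find(c => text.toLowerCase().includes(c));
  if (colour) filters.colour = colour;
  return filters;
}

const q = parseQuery(
  "Show me Nike Air Max trainers, size 8, in black, under $150"
);
console.log(JSON.stringify(q));
```

Once the query is in this structured form, it maps cleanly onto the faceted filters an ecommerce catalogue already exposes — which is why a keyword-matching search box chokes on the sentence while a voice-first engine doesn’t.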
Of course, it’s more than an ecommerce search tool, as we’ll find out during this episode.
In this episode
We discuss how approaches to new technology seem to wrongly follow a reincarnation route. Turning print into web by using the same principles that govern print. Turning online into mobile by using the same principles that govern the web. Then taking the practices and principles of GUI and transferring that to VUI. We touch on why moving your app to voice is the wrong approach.
We also discuss:
- Voysis – what it is and what it does
- Getting sophisticated with searches
- Designing purely for voice vs multi modal
- The challenge of ecommerce with a zero UI
- The nuance between the GUI assistant and voice only assistants
- How multi modal voice experiences can help the shopping experience
- Making the transition from GUI to VUI
- The similarities between moving from web to mobile and from mobile to voice – (when moving to mobile, you had to think about gestures and smaller screens)
- Error states and points of delight
- The difference between designing for voice and designing for a screen
- Testing for voice
- Understanding voice first ergonomics
Brian Colcord, VP of Design at Voysis, is a world-leading designer, cool, calm and collected speaker and passionate sneaker head.
After designing the early versions of the JoinMe brand markings and UI, he was recruited by LogMeIn and went on to be one of the first designers to work on the Apple Watch prior to its release.
Brian has made the transition from GUI to VUI design and shares with us his passion for voice, how he made the transition, what he learned and how you can do it too.
Voysis is a Dublin-based voice technology company that believes voice interactions can be as natural as human ones, and it is working intently to give brands the capability to have natural language interactions with customers.
Check out the Voysis website
Follow Voysis on Twitter
Read the Voysis blog
Join Brian on LinkedIn
Follow Brian on Twitter
Listen to the AI in industry podcast with Voysis CEO, Peter Cahill
Read Brian’s post, You’re already a voice designer, you just don’t know it yet
Where to listen
This week, we’re speaking to Storyline founder, Vasili Shynkarenka all about how you can create an Alexa Skill without coding.
Find out all about the Jovo framework that lets you create Alexa Skills and Google Assistant apps at the same time, using the same code!