Voice first games are one of the most popular Amazon Alexa skill categories. So what type of voice games are available? And how do you create them? We speak to game developer and reviewer, Florian Hollandt, to find out.
Today, we’re following the story of the inspirational Bob Stolzberg of VoiceXP, and giving you some deep insights into how you can turn Alexa for Business into a business.
Today, we’re discussing the findings of Martin Porcheron’s study, ‘Voice interfaces in everyday life’. We uncover insights into how people actually use Amazon Alexa in the home. We find unique user behaviour, new technology challenges and understand what it all means for voice UX designers, developers and brands.
Voice interfaces in everyday life
Imagine if you could eavesdrop on someone’s house and listen to how they interact with their Amazon Echo. Imagine, whenever someone said “Alexa”, you were there. Imagine being able to hear everything that was said for an entire minute before the word “Alexa” was uttered, and then stick around for a whole 60 seconds after the interaction with Alexa was over.
Well, that’s exactly what today’s guest and his associates did, and his findings offer some unique lessons for VUX designers, developers and brands that’ll help you create more natural voice user experiences that work.
In this episode, we’re discussing:
- How people use digital assistants in public
- The background of Voice interfaces in everyday life
- The challenge of what you call your Alexa skill
- The issue of recall
- How Amazon can improve skill usage
- The inherent problem of discoverability in voice
- How Echo use is finely integrated into other activities
- The implications of treating an Echo as a single user device
- The challenge of speech recognition in the ‘hurly burly’ of modern life
- How people collaboratively attempt to solve interaction problems
- What is ‘political’ control and how does it apply to voice first devices?
- Pranking people’s Alexa and the effect on future Amazon advertising
- Designing for device control
- Why these devices aren’t actually conversational
- The importance of responses
Key takeaways for designers and developers
- Give your skill a name that’s easy to recall
- Make your responses succinct, so they fit within a busy and crowded environment
- Make sure your responses are a resource for further action – how will the user do the next thing?
- Consider designing for multiple users
- Don’t use long intros and tutorials, get straight to the point
- Don’t design for a conversation, design to get things done
Martin Porcheron is a Research Associate in the Mixed Reality Lab at the University of Nottingham and has a PhD in Ubiquitous Computing, a sub-set of Computer Science. Martin has conducted several studies in the field of human-computer interaction, including looking at how people make use of mobile phones in conversations i.e. how people use something like Siri mid-conversation and how those interactions unfold.
Martin’s angle isn’t to look at these things as critical or problematic, but to approach them as an opportunity to learn about how people currently make use of technology. He believes this enables us to make more informed design decisions.
The study we discuss today has won many plaudits including Best Paper Award at the CHI 2018 conference.
- Read the Voice interfaces in everyday life study
- Follow Martin on Twitter
- Read Martin’s blog post on the study
- Read Martin’s colleague, Stuart Reeves’ post on the study on Medium
- Visit Martin’s website
Today, we’re getting deep into the biggest challenges facing designers and developers on the Alexa platform: being discovered and making money. And who better to take us through it than one of the most experienced developers on the voice scene, Jo ‘the Oracle’ Jaquinta.
Today, we’re getting into detail about what it’s like to be a full-time VUI designer. We’re discussing the details of the role, the day-to-day duties and the skillsets that are important to succeed in designing voice user interfaces.
We’re talking to ex-Googlers, Konstantin Samoylov and Adam Banks, about their findings from conducting research on voice assistants at Google and their approach to building world-leading UX labs.
This episode is a whirlwind of insights, practical advice and engaging anecdotes that cover the width and breadth of user research and user behaviour in the voice first and voice assistant space. It’s littered with examples of user behaviour found when researching voice at Google and peppered with guidance on how to create world-class user research spaces.
Some of the things we discuss include:
- Findings from countless voice assistant studies at Google
- Real user behaviour in the on-boarding process
- User trust of voice assistants
- What people expect from voice assistants
- User mental models when using voice assistants
- The difference between replicating your app and designing for voice
- The difference between a voice assistant and a voice interface
- The difference between user expectations and reality
- How voice assistant responses can shape people’s expectations of the assistant’s full functionality
- What makes a good UX lab
- How to design a user research space
- How voice will disrupt and challenge organisational structure
- Is there a place for advertising on voice assistants?
- Mistakes people make when seeking a voice presence (Hint: starting with ‘let’s create an Alexa Skill’ rather than ‘how will people interact with our brand via voice?’)
- The importance (or lack thereof) of speed in voice user interfaces
- How to fit voice user research into a design sprint
Plus, for those of you watching on YouTube, we have a tour of the UX Lab in a Box!
Konstantin Samoylov and Adam Banks are world-leading user researchers and research lab creators, and founders of user research consultancy firm, UX Study.
The duo left Google in 2016 after pioneering studies in virtual assistants and voice, as well as designing and creating over 50 user research labs across the globe, and managing the entirety of Google’s global user research spaces.
While working as researchers and lab builders at Google, and showing companies around their research spaces, Konstantin and Adam were frequently asked whether they could recommend a company to build a similar lab. Upon realising that no such company existed, they set about creating it!
UX Study designs and builds research and design spaces for companies, provides research consultancy services and training, as well as hires and sells its signature product, UX Lab in a Box.
UX Lab in a Box
The Lab in a Box (http://ux-study.com/products/lab-in-a-box/) is an audio and video recording, mixing and broadcasting unit designed specifically to help user researchers conduct reliable, consistent and speedy studies.
It converts any space into a user research lab in minutes and helps researchers focus on the most important aspect of their role – research!
It was born after the duo, in true researcher style, conducted user research on user researchers and found that 30% of a researcher’s time is spent fiddling with cables, setting up studies, editing video and generally faffing around doing things that aren’t research!
Konstantin Samoylov is an award-winning user researcher. He has nearly 20 years’ experience in the field and has conducted over 1000 user research studies.
He was part of the team that pioneered voice at Google and was the first researcher to focus on voice dialogues and actions. By the time he left, just 2 years ago, most of the studies into user behaviour on voice assistants at Google were conducted by him.
It’s likely that Adam Banks has more experience in creating user research spaces than anyone else on the planet. He designed, built and managed all of Google’s user research labs globally including the newly-opened ‘Userplex’ in San Francisco.
He’s created over 50 research and design spaces across the globe for Google, and also has vast experience in conducting user research himself.
We’re getting into detail on the voice first ecosystem; the opportunities, challenges and future, with curator of the Hearing Voices newsletter, Matt Hartman.
In this episode, we’re discussing:
- All about Betaworks
- A strategic vision of the voice first scene
- Changing user behaviour
- On-demand interfaces
- Friction and psychological friction
- How context influences your design interface
- The 2 types of companies that’ll get built on voice platforms
- Differences between GUI and VUI design
- Voice camp
- The Wiffy Alexa Skill
- Lessons learned building your first Alexa Skill
- Text message on-boarding
- Challenges in the voice space
Our Guest, Matt Hartman
Matt Hartman has been with Betaworks for the past 4 years and handles the investment side of the company. Matt spends his days with his ear to the ground, meeting company founders and entrepreneurs, searching for the next big investment opportunities.
Paying attention to trends in user behaviour and searching for the next new wave of technology that will change the way people communicate has led Matt and Betaworks to focus on the voice first space.
Matt has developed immense knowledge of, and passion for, voice, and is a true visionary. He understands the current state of play in the voice first space and is a genuine design thinker, bringing a unique perspective on the voice scene: the voice first ecosystem, voice strategy, user behaviour trends, challenges and the future of the industry.
Matt curates the Hearing Voices newsletter to share his reading with the rest of the voice space and created the Wiffy Alexa Skill, which lets you ask Alexa for the Wifi password. It’s one of the few Skills that receives the fabled Alexa Developer Reward.
Betaworks is a startup platform that builds products like bit.ly, Chartbeat and GIPHY. It invests in companies like Tumblr, Kickstarter and Medium and has recently turned its attention to audio and voice first platforms such as Anchor, Breaker and Gimlet.
As part of voice camp in 2017, Betaworks invested in a host of voice first companies including Jovo, who featured on episode 5 of the VUX World podcast, as well as Spoken Layer, Shine and John Done, which conversational AI guru, Jeff Smith (episode 4), was involved in.
This week, we’re joined by the Mycroft AI team, and we’re getting deep into designing and developing on the open source alternative to Amazon Alexa and Google Assistant.
This week, we’re speaking to Storyline founder, Vasili Shynkarenka, all about how you can create an Alexa Skill without coding.
Find out all about the Jovo framework that lets you create Alexa Skills and Google Assistant apps at the same time, using the same code!