A look at the Apple and Google partnership that could enable tracking of coronavirus cases, and why you should think about voice early in your projects.
Apple and Google are helping to fight coronavirus by working together on a framework that will help track your contact with potential coronavirus… I first wanted to say "victims", but that sounds a bit harsh. People with coronavirus who have symptoms.
It’s unclear exactly how this will look, but essentially they’ve published some documentation that shows what will happen.
So every person's phone will be giving off Bluetooth signals, and every phone will have a unique identifying number that apps can use to log whenever one phone picks up a signal from another phone.
If you've been in contact with someone, your phone will log all of those numbers. Then, if anyone ever reports through their phone that they have symptoms or have been diagnosed with coronavirus, their app can go back and alert any other phone, via any other app that has their number stored, that you've been in contact.
They're not building the app. They're building the framework to allow apps to be built, so I don't know whether there will be a whole lot of people building different kinds of apps, or whether they'll leave it to the likes of the NHS to create their own app that everyone will use.
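To make the flow concrete, here is a much-simplified sketch of the contact-logging idea described above. This is not the real Apple/Google framework, which uses rotating, cryptographically derived identifiers and on-device matching; the `Phone` class, fixed IDs, and `report_symptoms` function here are illustrative assumptions only.

```python
# Simplified illustration of the contact-logging flow, NOT the real
# Apple/Google Exposure Notification protocol.

class Phone:
    def __init__(self, phone_id):
        self.phone_id = phone_id   # unique identifier broadcast over Bluetooth
        self.seen_ids = set()      # identifiers of phones this one has been near

    def receive_beacon(self, other_id):
        """Log a nearby phone's identifier when its signal is picked up."""
        self.seen_ids.add(other_id)

def report_symptoms(reporter, all_phones):
    """When someone reports symptoms, find every phone that logged their ID."""
    alerted = []
    for phone in all_phones:
        if reporter.phone_id in phone.seen_ids:
            alerted.append(phone.phone_id)  # in reality: push an exposure alert
    return alerted

# Usage: Bob passes Alice in the street, then Alice reports symptoms.
alice, bob, carol = Phone("A"), Phone("B"), Phone("C")
bob.receive_beacon(alice.phone_id)
print(report_symptoms(alice, [alice, bob, carol]))  # ['B'] — only Bob is alerted
```

The key property the real framework shares with this toy version is that matching happens against locally stored identifiers, so only phones that actually logged contact get alerted.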
It's at this stage, in the creation process, that you need to be thinking about what a voice user interface might be able to do.
For this kind of app, the issue is that you need to tell the app every single day that you're fine. There are apps out there right now that let you do that: you just log in and say "I'm fine today."
If you don't get into the routine of doing that, then you might not get into the routine of actually telling it that you do have coronavirus, and half the problem with half of these apps is that people never get around to opening them.
And even if people do download them and open them once, getting them to come back is a challenge.
Now, yes, there is more of a need for people to return here. However, there's still inherent friction in the mobile device: people need to remember to get out the phone, open up the app, and report a symptom. And usually, with the apps you use, you get a signal back. With Facebook or Instagram, the signal you get back is discovering something new, whereas with this, the transaction is one where you just tell it that you're fine, or that you have symptoms.
You’re not going to get anything back from it. And so this is where a voice interface will be really helpful.
So, for example, if the NHS added a voice interface to the app (a Siri shortcut, an in-app action for Google Assistant, or an Alexa skill), then you would be able to work that into your routine and just say "tell the NHS I'm fine" to Google Assistant or Alexa, and that would link back to the app.
That way, you don't need to get into the mental model of remembering to go and use an app that gives you nothing back in return. You just say "Alexa, tell the NHS I'm fine" or "Alexa, tell the NHS I've got symptoms", and if the skill uses account linking, it will be able to alert everyone who's been in contact with you, or with your phone.
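The shape of that voice interaction could be sketched like this. To be clear, this is not the real NHS app or the Alexa Skills Kit; the `handle_utterance` and `report_status` functions are hypothetical stand-ins showing how a spoken phrase might map to a status report, with account linking tying the voice request back to the user's app account.

```python
# Hypothetical sketch of the "tell the NHS I'm fine" voice flow.
# In a real Alexa skill, intent matching and account linking would be
# handled by the skill platform; this just shows the basic mapping.

def report_status(status):
    """Pretend to send the user's daily status to the (hypothetical) app."""
    if status == "symptomatic":
        # With account linking, this is where contacts could be alerted.
        return "Thanks, I've told the NHS you have symptoms."
    return "Thanks, I've told the NHS you're fine today."

def handle_utterance(utterance):
    """Map a spoken phrase to a status report."""
    text = utterance.lower()
    if "symptom" in text:
        return report_status("symptomatic")
    if "fine" in text:
        return report_status("fine")
    return "Sorry, say 'I'm fine' or 'I've got symptoms'."

print(handle_utterance("Alexa, tell the NHS I'm fine"))
```

The point is how short the interaction becomes: one sentence spoken in passing, versus finding, opening, and navigating an app that gives you nothing back.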
And so thinking about how to incorporate voice early on in a project means you can start to understand the friction in the product you're about to develop, and then use voice as a solution: a way of reducing that friction and improving the overall experience.
If you like these blogs, you can subscribe. I'll put the link below, and if you want to check out the rest of them, just look for the vuxworld hashtag.