Radar sensing and voice match in the HomePod mini: does Apple hold the keys to ambient computing?
People often talk about ambient computing and the dream of ambient computing: computing that's just sitting there ambiently in the background, waiting at your beck and call.
Nine times out of ten, that’s going to have a voice interface as part of it.
You can think of smart speakers as being a form of ambient computing.
The challenge with true ambient computing is that, for it to be truly ambient and accessible from anywhere by anybody, it needs some way of authenticating who's who.
If you look at Google Assistant with Voice Match, or at Amazon Alexa with voice profiles, both use the sound of your voice to authenticate that you are who you say you are.
The challenge is that voice on its own is only a single factor of authentication. If I go to your house and use your Amazon Echo, my voice profile can authenticate me to transact on my Amazon account through your device, and the only thing standing between anyone and that account is the sound of a voice.
Ironically, even though people bash Apple for being a little slow on the uptake of voice, letting go of the early-adopter lead it gained when it introduced Siri in 2011, and not keeping pace with Amazon and Google, it could be Apple that holds the key to ambient computing. That key is the iPhone.
If you look at the HomePod mini, it uses a patent Apple had approved back in March; I mentioned it the other day, and I put up a video in March that explains that patent.
Essentially, you could walk up to any HomePod mini in any room, and it uses a combination of radar sensing the proximity of your phone to the speaker plus matching your own personal voice. Ask for your calendar and it shows you your calendar; ask to message your contacts and it sends messages to your contacts. Then, if your wife, your child, or anyone else wants to do the same thing, and their iPhone is close by and their voice is activating it, it grants access to their content based on what's on their phone and on their voice.
This has huge potential. Think about that.
In theory, it's not too much of a step for any HomePod mini to recognize the presence of any iPhone, match that with the voice of that iPhone's user, and then authenticate access to voice-first services based on two-factor authentication: the presence of a phone and the sound of your voice.
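To make that two-factor idea concrete, here is a minimal sketch of the decision logic. Everything in it is an illustrative assumption, not Apple's actual implementation: the thresholds, the `AuthSignal` fields, and the `authenticate` function are hypothetical names standing in for "phone nearby" and "voice matches".

```python
from dataclasses import dataclass

@dataclass
class AuthSignal:
    user_id: str
    phone_distance_m: float   # sensed proximity of this user's phone to the speaker
    voice_match_score: float  # 0.0-1.0 similarity to this user's voice profile

# Assumed thresholds, purely illustrative:
PROXIMITY_THRESHOLD_M = 2.0   # phone must be within 2 m of the speaker
VOICE_THRESHOLD = 0.85        # minimum voice-match confidence

def authenticate(signal: AuthSignal) -> bool:
    """Grant access only when BOTH factors pass: phone presence AND voice match."""
    phone_present = signal.phone_distance_m <= PROXIMITY_THRESHOLD_M
    voice_matches = signal.voice_match_score >= VOICE_THRESHOLD
    return phone_present and voice_matches

# Two people speaking near the same speaker get independent decisions:
alice = AuthSignal("alice", phone_distance_m=1.2, voice_match_score=0.93)
bob = AuthSignal("bob", phone_distance_m=8.0, voice_match_score=0.91)

print(authenticate(alice))  # True  -> show Alice her calendar
print(authenticate(bob))    # False -> Bob's phone isn't nearby; deny
```

The point of the sketch is the `and`: a matching voice with no phone nearby fails, and a nearby phone with the wrong voice fails, which is exactly what makes this stronger than voice-only profiles on today's Echo or Nest speakers.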
And so if Apple were to release that capability to other hardware manufacturers, or even branch out itself into enabling access to Siri from more places, then authenticated ambient computing becomes very real indeed.
And perhaps not too far away, depending on whether Apple puts its foot on the gas or decides to keep everything to itself.
You can probably guess which way they might go with it, but here's hoping.