A new study by Uswitch has found that voice assistants struggle with Irish accents… and Welsh, and Scottish.
Try adding the North East of England to that as well.
And it’s no wonder really, because accents produce different noises. Voice assistants aren’t trying to understand language; they’re trying to decode noises.
Every accent produces different sounds. You’ve probably heard the old joke that if you say ‘beer can’ in an English accent, it sounds like someone with a Jamaican accent saying ‘bacon’.
But it’s the same with every accent: each one pronounces words in its own way.
One of the best ways to impersonate another accent is not to concentrate on the words themselves, but to reproduce the noises that the accent produces.
For me, it’s ‘top’. If I’m trying to get to the top, I say ‘top’; an Irish person might pronounce it ‘tap’, and a Scottish person might say ‘taupe’.
One word, three totally different pronunciations: top, tap, taupe. (Of course, that’s based on my interpretation and how I pronounce things with my Northern accent. An Irish or Scottish person will likely hear my pronunciation as the odd one out. It’s all relative.)
And that’s why it’s hard for speech recognition systems to understand different accents – because each accent makes different noises and sounds, and the system has to recognise the noise, not the language.
And that’s just one word. Consider how many words exist, and how many different ways different accents pronounce them, and you’ve got a genuinely hard job producing a speech model that works for a particular accent.
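To make the idea concrete, here’s a minimal sketch (nothing like a real voice assistant, which works on audio and probability, not text lookup): a toy pronunciation lexicon that maps accent-specific renderings of a sound back to the same underlying word. The spellings and the function name are purely illustrative, not real phonetic transcriptions.

```python
# Toy sketch of the "decode the noise, not the word" idea.
# Every name and variant spelling here is hypothetical/illustrative.

PRONUNCIATION_LEXICON = {
    # one word, several accent-specific renderings of its sound
    "top": ["top", "tap", "taupe"],
}

def recognise(heard: str):
    """Return the word whose known accent variants include the heard sound."""
    for word, variants in PRONUNCIATION_LEXICON.items():
        if heard in variants:
            return word
    return None  # a variant the lexicon never learned -> recognition fails

print(recognise("tap"))    # matches, because that variant was included
print(recognise("tup"))    # fails: an accent the system wasn't built for
```

The point of the sketch is the failure case: the system only succeeds for the accent variants someone thought to include, which is exactly why under-represented accents get misheard.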
Improvements are needed, no doubt, and I don’t doubt people are working on them – but that’s why it does what it does right now.