Are voice assistants more of a hindrance than a help?
Voice assistants utilise voice recognition, language processing algorithms and voice synthesis and are a key component of smart technology. In 2020, 4.2 billion digital voice assistants were being used in devices globally. It is predicted that by 2024, this number will reach 8.4 billion units.
Users describing their experiences of voice assistants becoming a nuisance have sparked conversation on LinkedIn. Here, Electronic Specifier examines whether voice assistants are more of a hindrance than a help.
Misinterpreting instructions and inclusivity
Perhaps one of the biggest reasons voice assistants are called a nuisance is that they sometimes perform the wrong task or misinterpret instructions. To interpret speech, voice assistants convert voice commands into text, then compare that text against recognisable words in a database.
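To illustrate why this matching step can go wrong, here is a minimal sketch (not any vendor's actual implementation) of mapping a transcribed phrase to the closest command in a small, hypothetical vocabulary using Python's standard library:

```python
import difflib

# Hypothetical command vocabulary a device might recognise
COMMANDS = ["play music", "set a timer", "turn on the lights", "what's the weather"]

def interpret(transcript):
    """Match transcribed speech to the closest known command, if any."""
    matches = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(interpret("play some music"))   # close enough to "play music"
print(interpret("pay music"))         # a mis-transcription can still match
print(interpret("order a pizza"))     # outside the vocabulary: no match
```

The cutoff is the crux: set it too low and the assistant guesses wrongly and performs the wrong task; set it too high and unusual pronunciations or speech patterns are never matched at all.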
The result can be as trivial as Siri playing the wrong song when asked for a specific one, or as serious as people with disabilities or speech impediments never being understood by a device at all.
Companies were quick to get voice assistants into every home, and in this rush, some believe they failed to consider whether they were setting expectations too high. Companies have dubbed many voice assistants as ‘human helpers’, and so this is what consumers expect.
Srini Pagidyala, Co-Founder of Aigo.ai said: “Voice assistants, text assistants, conversational AI systems and chatbots without a brain, are all doomed to fail, whether they are used by millions of consumers or hundreds of enterprises. However much funding they’ve raised or spent, they are still not suitable for conversations.”
Companies have begun to re-engineer existing products to better understand those with speech impediments or disabilities. Apple has collected over 28,000 audio clips from people who stutter, hoping to improve Siri’s voice recognition systems. Also, in their efforts to be more inclusive, Microsoft has put $25m toward inclusive technology and Google has worked with speech engineers and speech language pathologists to train its existing software to recognise diverse speech patterns.
Bret Kinsella, Founder and CEO of Voicebot.ai said: “Voice assistants work tremendously well for a number of use cases that involve simple requests. They also work well for a handful of complex requests. I’ve yet to come across dire consequences from voice assistant use.
“Voicebot data shows more than 50% of smart speaker owners employ them daily. They must be working out for some people.”
Privacy concerns
Many have privacy concerns regarding voice assistants. Research from Accenture UK states that 40% of voice assistant users are concerned about who can listen and how their data is used.
Perhaps this panic was induced by Samsung, who made headlines in 2015 when they warned customers against discussing personal information in front of its voice-controlled smart TV.
Voice assistants are ‘always listening’ at a local level, meaning they don’t actually transmit any information until they hear the trigger word, for example, ‘Ok Google’ or ‘Hey Siri’.
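This gating behaviour can be sketched in a few lines. The example below is purely illustrative (not a real assistant API): a stream of locally transcribed chunks is discarded until a trigger phrase is heard, and only the command that follows would leave the device.

```python
# Illustrative wake-word gating: audio stays on the device until a
# trigger phrase is detected; only the following command is passed on.
TRIGGER_PHRASES = ("ok google", "hey siri", "alexa")

def process_locally(transcript_chunks):
    """Yield only the speech that follows a trigger phrase; discard the rest."""
    armed = False
    for chunk in transcript_chunks:
        if not armed:
            # Everything heard here is dropped locally, never transmitted
            if any(t in chunk.lower() for t in TRIGGER_PHRASES):
                armed = True   # wake word heard: capture the next utterance
        else:
            yield chunk        # this is what would be sent to the cloud
            armed = False      # one command per wake word in this sketch

stream = ["some private chat", "hey siri", "set a timer", "more private chat"]
print(list(process_locally(stream)))  # only "set a timer" leaves the device
```

In a real device the wake-word detector runs on raw audio with a small on-device model rather than on text, but the privacy principle is the same: nothing before the trigger is transmitted.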
The burden of protecting consumer privacy falls primarily on the company providing the software; however, developers of products and services that interact with that software share some responsibility.
Getting smarter
The likes of Siri, Alexa and Google Assistant have become smarter over the last few years thanks to advancements in machine learning.
For technology to continue to get smarter it must make mistakes. A huge challenge with smart devices is that they will sometimes make the wrong decisions, but making mistakes allows them to learn and adapt to the user.
As AI becomes more advanced and voice technology becomes more accepted, we can expect to see further integration, natural conversations and more complex task flows.
Expectations
Amy Stapleton, CEO of Chatables said: “Voice assistants have been great at helping lots of people, including older adults who can make a call just using their voice.
“The technology isn’t quite there yet to really understand what we want, much less talk to us in a way that’s not annoying. I think the technology and our understanding about how to use it will improve a lot in the next few years.”
Why are humans happy to wait for a website to buffer without losing patience, yet frustrated immediately if a smart speaker fails to perform a task or provide an answer perfectly?
Are our expectations simply too high? Why do we expect so much of voice technology?