By Brittany VanDerBill
Technology brings increased convenience and other benefits to users. Voice recognition technology adds the ease of operating smartphones and other devices by voice.
But this convenience could come at the cost of privacy, experts tell Digital Privacy News.
“Voice recognition technology is actually quite advanced and capable of a lot of privacy violations that people might not even think of,” said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in California.
Green’s views mirror the results of a study last year by the University of Michigan and the National Science Foundation, which found that voice recognition technology could be exploited with lasers and audio signals to hack into and control smart devices.
The technology is “capable of a lot of privacy violations that people might not even think of.”
— Brian Green, Santa Clara University.
In 2017, researchers employed inaudible sounds to start FaceTime on an iPhone.
Reports also have found that hackers and cybercriminals hid inaudible commands within online videos, directing devices to expose passwords and other sensitive data.
“At this time, the technology is not sophisticated enough to differentiate in all cases a human voice from other sounds,” Nicholas Davis, director of information security governance, risk and compliance for the University of Wisconsin (UW), told Digital Privacy News.
Many Other Questions
The technology may not be mature enough to differentiate the human voice from other sounds, but experts say other issues abound.
“There are a few privacy concerns that are either specific to voice recognition technology or amplified by it,” said Jessica Vitak, associate professor at the University of Maryland’s College of Information Studies.
Regarding smart devices, in particular, Vitak noted: “There is some ambiguity regarding when these devices are listening, how much data they collect and store, and how that data could be used.”
Smartphones and other electronics using voice recognition may even record sounds without the user’s knowledge.
“There is some ambiguity regarding when (smart devices) are listening … .”
— Jessica Vitak, University of Maryland.
In fact, many smartphone apps include user agreements that grant providers access to the microphone, according to a September report by the University of Alabama at Birmingham.
If that is the case, can this data be exploited?
“It is unethical to use the complex language of license agreements in order to surreptitiously collect huge volumes of very personal data,” Santa Clara’s Green told Digital Privacy News. “This data can then be used to attempt to manipulate people, whether commercially, politically or otherwise.”
Posing as Telemarketers
In some cases, hackers have posed as telemarketers to obtain unauthorized voice recordings.
Skilled hackers even could potentially exploit an automated cell phone system to steal bank or other personal financial details, the PhishLabs blog reported last month.
“Numerous social-engineering techniques exist for a malicious actor to harvest a person’s voiceprint,” Davis said, “including calling them, with the intent to record their voice and use it later.”
“Numerous social-engineering techniques exist for a malicious actor to harvest a person’s voiceprint.”
— Nicholas Davis, University of Wisconsin.
Beyond these schemes, additional questions on the privacy of voice recognition technology and its potential abuses need to be addressed, experts tell Digital Privacy News.
For instance, could hackers eventually access sensitive information protected by two-factor authentication systems, producing huge privacy breaches within companies?
Not ‘Leading Choice’ of Hackers
This scenario, UW’s Davis said, is not yet in play.
Voice recognition technology is not currently the “leading choice” for biometric authentication systems, he noted, because of cost and reliability issues.
Regardless, voice recognition technology still has its share of privacy questions.
“Every audio recording ever made, as long as it can be analyzed by voice recognition technology, can be combed for information,” Green said, “and it could be recorded now and repeatedly analyzed for years into the future.”
Davis added: “Many potential samples of people’s voices exist and could be exploited in a voiceprint-theft scheme.”
Brittany VanDerBill is a Minnesota writer.
Here’s how to guard against the unauthorized use of voice recognition technology:
- Consider minimizing or eliminating voice assistants and smart devices in your home.
- Be cautious of allowing a device’s voice assistant to access private or sensitive information, such as medical history.
- If you receive a call from an unknown number, let it go to voicemail to prevent your voice from being captured unknowingly.
- Mail loan payments instead of speaking payment information into an automated system.
- Never leave your smartphone or computer unattended while in public, especially without locking the device.
- Review which smartphone apps have access to your microphone, and revoke that permission from any app that does not need it.
— Brittany VanDerBill
Sources (external links):
- Association for Computing Machinery: DolphinAttack: Inaudible Voice Commands
- LightCommands: Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems
- Entrepreneur: The Latest Thing You Need to Worry About Cybercriminals Hacking? Your Voice.
- University of Alabama at Birmingham: Shh…Your devices may be listening to you
- PhishLabs: Two Romanian Threat Actors Extradited to US After $18M Fraud Scheme
- Cipher: Biometric Hacking