Are the digital voice assistants from Google, Apple and Amazon safe for their users' privacy?

A series of privacy lapses in recent months has raised new concerns about the future of digital voice assistants, a growing market considered by some to be the next frontier of computing.

Recent incidents involving devices from tech giants Google, Apple and Amazon show that, despite strong growth in the market for smart devices and speakers, more work is needed to guarantee that consumers' data is protected when they use this technology.

Apple had external contractors analyze conversations recorded by Siri

Apple said this week it was suspending its “Siri grading” program, in which people listen to conversation fragments to improve voice recognition technology, after the London newspaper The Guardian reported that contractors were listening to confidential medical information, criminal dealings and even sexual encounters.

“We are committed to offering a great Siri experience while protecting user privacy,” Apple said in a statement, adding that it would allow consumers to opt in to the program in a future software update.

Experts believe that even today human intervention is necessary to improve these systems.

What about Google?

Google, meanwhile, said it would pause human listening to and transcription of Google Assistant conversations in the European Union (EU), following a privacy investigation in Germany.

For its part, Amazon, which has acknowledged using human reviewers to improve the artificial intelligence that powers its Alexa devices, recently announced a new feature that makes it easy to erase all recorded information.

Recent cases may give consumers the impression that someone is “listening” to their conversations, even though that is rarely the case.

“From a technological perspective it is not surprising that these companies use humans to transcribe this data, because the machines are not good enough to understand everything”, said Florian Schaub, a University of Michigan professor specializing in human-computer interaction who has done research on digital assistants.

“The problem is that people do not expect it and it is not communicated to them transparently”, he said.

Carolina Milanesi, a technology analyst at Creative Strategies, agrees that human beings are needed to improve technology.

“People have a somewhat unrealistic expectation that these assistants will improve by magic, that the machine could learn and get better by itself, but we are still in the early days of AI (artificial intelligence), and human intervention remains important”, she said.

According to the research firm eMarketer, about 112 million people – a third of the United States population – will use a voice assistant on some device at least once a month, many of them powered by AI, for searches, listening to music, or getting news and information.

A Microsoft survey of consumers in five countries found that 80% were satisfied with their experience with digital assistants. But 41% of respondents said they had concerns about privacy, trust and listening.

Unfounded fears?

Some of the fears surrounding smart speakers are based on false assumptions, analysts said.

The devices do not record or transmit information until they are “activated” with a keyword or phrase such as “Hey Siri” or “Alexa”.

But, “there is always a risk of false activation”, Schaub said.

“Users have to trust the device and the device manufacturer that the microphone only processes audio locally until the wake word is heard.”
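As a rough illustration of the activation model Schaub describes, the sketch below shows audio being examined only on the device until a wake phrase is detected, after which a request is sent off the device. It is a minimal conceptual sketch in Python: the function names are made up, and real assistants use on-device acoustic models rather than the simple text matching shown here.

```python
# Conceptual sketch of wake-word gating: audio is inspected locally,
# and nothing leaves the device until a wake phrase is detected.
# Function names and the detection rule are illustrative assumptions,
# not any vendor's actual implementation.

WAKE_PHRASES = ("hey siri", "alexa", "ok google")


def detect_wake_phrase(audio_chunk: str) -> bool:
    """Stand-in for an on-device wake-word model (here: a simple text match)."""
    return any(phrase in audio_chunk.lower() for phrase in WAKE_PHRASES)


def send_to_cloud(audio_chunk: str) -> None:
    """Placeholder for uploading audio to the assistant's servers."""
    print(f"uploading for transcription: {audio_chunk!r}")


def process_stream(chunks) -> None:
    listening = False
    for chunk in chunks:
        if not listening:
            # Local-only processing: the chunk is inspected and then discarded.
            listening = detect_wake_phrase(chunk)
        else:
            # Only after activation is audio transmitted off the device.
            send_to_cloud(chunk)
            listening = False  # handle a single request, for simplicity


if __name__ == "__main__":
    # A "false activation" would be a chunk that merely sounds like the
    # wake phrase, which is the risk Schaub describes above.
    process_stream(["background chatter", "hey siri", "what is the weather"])
```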

Ryan Calo, co-director of the Technology Policy Laboratory at the University of Washington, said that although the devices are not constantly listening, there is still concern about possible access to conversations.

“If employees are listening to things they should not have access to, that is a real red flag; it is bad practice”, Calo said.

