I recently attended a holiday potluck hosted by a tech-junkie friend who had decked out his one-bedroom apartment with smart speakers, smart lights, and small, infrared motion sensors that looked disconcertingly like cameras. Towards the end of the party, after one of the guests disappeared into the back of the apartment, another decided to play a prank. “Hey Google, turn off bathroom lights,” he said quietly into a nearby sensor. A few seconds later we heard an exaggerated shriek.
“Hey Google, turn lights red,” my friend joined in, grinning. The room was bathed in a creepy, blood-red glow.
In-person interference—by mischievous houseguests, enterprising parrots, or curious kids experimenting with a parent’s Prime account—might be the most common security problem with smart speakers like the Amazon Echo and Google Home. But pranks are just the tip of the iceberg when it comes to the security of the internet-linked microphones now present in 1 in 4 US households. And even as smart speakers become ubiquitous, privacy is the No. 1 concern for shoppers who are still holding out.
Most people’s first question about smart speakers is “Is this listening to me right now?” The short answer: yes.
Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, a digital privacy advocacy group, doesn’t fault consumers who might be skeptical about turning on a smart speaker in their house. Most people’s first question, he explains, is “Is this listening to me right now?” The short answer: yes. Company policies for Amazon and Google—which together dominate an estimated 84 percent of the smart-speaker market—spell out pretty clearly that their devices are always tuned in. But they’re not keeping track of every single thing you say. Every few seconds, the speakers delete their memory of the sounds around them, unless they detect now-familiar “wake words” like “Alexa” or “OK Google” and begin streaming a recording of your voice to company servers. That’s where Amazon Alexa and Google Assistant analyze the audio and identify commands, such as “set an alarm for 7:30,” “snooze my alarm,” and “snooze my alarm again.” The recordings are stored online indefinitely, where users can review and delete them if they wish.
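The rolling-buffer behavior described above can be sketched as a toy model in a few lines of Python. This is not the actual device firmware: the wake words, the buffer size, and the use of text strings to stand in for audio chunks are all illustrative assumptions.

```python
from collections import deque

WAKE_WORDS = {"alexa", "ok google"}  # illustrative wake words
BUFFER_SECONDS = 3                   # assumed size of the rolling buffer


class WakeWordBuffer:
    """Toy model of a smart speaker's short-term memory.

    Audio arrives one chunk at a time (here, a string standing in for a
    few seconds of sound). Chunks older than BUFFER_SECONDS simply fall
    off the buffer and are forgotten, unless a wake word is heard, at
    which point the device starts sending everything that follows to
    the server.
    """

    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_SECONDS)  # old chunks drop off
        self.streaming = False
        self.uploaded = []  # what would actually reach company servers

    def hear(self, chunk):
        if self.streaming:
            self.uploaded.append(chunk)
        elif chunk.lower() in WAKE_WORDS:
            self.streaming = True       # wake word detected: start recording
            self.uploaded.append(chunk)
        else:
            self.buffer.append(chunk)   # forgotten after BUFFER_SECONDS


speaker = WakeWordBuffer()
for sound in ["small talk", "more chatter", "alexa", "set an alarm for 7:30"]:
    speaker.hear(sound)

print(speaker.uploaded)      # only audio from the wake word onward
print(list(speaker.buffer))  # pre-wake chatter, soon discarded
```

The key design point the model captures: nothing said before the wake word leaves the device; everything said after it does, until the interaction ends.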
It’s not a bad setup for making sure conscientious smart-speaker owners are aware of what, exactly, the companies are hearing. But that’s only if the system is working as designed. Wake-word technology is still imperfect, meaning sensitive conversations might still be picked up by built-in mics. In 2017, a tech journalist who received an early version of the Google Home Mini discovered a hardware problem that led some devices to make thousands of unwanted recordings. In May, an Oregon couple learned that a private conversation had mistakenly triggered their Amazon Echo, which recorded an audio clip of their discussion and sent it to someone on their contact list; in December, the company accidentally sent a German user 1,700 recordings from a stranger’s house. A study by cybersecurity company Symantec, meanwhile, found that phrases like “Hey Bobo” and “Okay Hodor” could accidentally trigger Google devices. And if your name sounds like “Alexa,” forget about it.
So for new smart-speaker owners, it’s safe to assume a mix of intentional interactions and accidental recordings will make it to Amazon and Google’s servers. Once your voice is in a company’s possession, though, what happens to it?
After your commands have been analyzed and your smart speaker responds, you might forget about the interaction, but the companies don’t. Amazon says voice recordings are used to “improve your experience and our services” and “make recommendations to you based on your requests”; Google says it uses data collected by Google Home to make its services “faster, smarter, more relevant, and more useful” and to provide “better and more personalized suggestions and answers.”
Asking Alexa for the score in a Golden State Warriors game won’t produce advertisements for basketball tickets next time you go online. But if you ask your Google Home, it might.
The legalese might sound similar, but in practice, this is where Amazon and Google diverge. According to Google spokesman Jake Jones, the company can use transcripts of what you say to your Google Home to “deliver more useful ads on other platforms.” That means the content of recordings made by your Google Home can become part of Google’s already ultra-specific profile of you as a consumer—demographics, preferences, interests—that companies pay Google to target with so-called “personalized” ads.
Amazon is collecting similar data but on a much smaller scale. Using Alexa to buy an item on Amazon or play an artist on Amazon Music, for instance, will create records that the company uses for targeted ads—the same as if you took those actions on a computer. But other Alexa interactions are not analyzed for use in advertising. “There is no keyword extraction happening whatsoever from a voice recording, ever,” spokesperson Leigh Nakanishi tells me.
In other words, asking Alexa for the score in a Golden State Warriors game won’t produce advertisements for basketball tickets next time you go online. But if you ask your Google Home, it might. The difference makes sense given that advertising accounts for a whopping 86 percent of revenue for Google’s parent company, Alphabet; Amazon’s ad revenues, though smaller, are also growing rapidly.
It’s thanks to this Big Data economy that companies have achieved such a frightening level of specificity in their ability to market to us, even without convincing us to put microphones in our homes. Some online ads are so narrowly tailored, people are convinced companies must be listening through their smartphones—“because why else would an ad for those pants just show up in my Facebook feed?!” (Security researchers and former Facebook employees generally agree that Facebook apps are not listening through your microphone in order to deliver ads. First, it would take too much processing power; second, companies already know practically everything about you thanks to location data and your browsing history.)
If you’re worried about unwanted advertisements suddenly playing on your new smart speaker, you can rest easy for now. Both Amazon and Google have so far limited the ads they allow to play on smart speakers to a handful of third-party apps, though they have reportedly considered more subtle types of promotion. A CNBC report last year suggested that Amazon was planning to let companies pay to have Alexa promote their products. (The company disputes this.) And some advertisers have found ways to jump the gun; last year, a Burger King commercial prompted Google Homes to read aloud from the Whopper burger’s Wikipedia page. It’s not hard to imagine malicious internet ads attempting something similar, instructing speakers to set alarms or place orders while users are out of the room.
“If you go into somebody’s home and they have one of these devices, it’s not really socially acceptable to say, ‘I’m actually going to turn around because I don’t want to be in a house with that.’”
Fundamentally, Hoffman-Andrews sees the privacy problems with smart speakers as an issue of consent, both for the devices’ owners and the people around them. Have your roommates agreed to their voices being recorded and accessible later on your Amazon account? What about guests? The issue gets even more complicated for children. “If you go into somebody’s home and they have one of these devices, it’s not really socially acceptable to say, ‘I’m actually going to turn around because I don’t want to be in a house with that,’” Hoffman-Andrews says. “So you kind of have to submit to what you may see as surveilling you.”
He also worries that surveillance could become more literal. If smart speakers could be made to activate without a “wake word” at all, he theorizes, they could be used by law enforcement or intelligence agencies to keep tabs on targets like activists, journalists, or people under criminal investigation. (For what it’s worth, both Google and Amazon say no such thing has ever happened with their smart speakers. “We believe the design of the device precludes that, and we would fight any request to change our technology for this purpose,” an Amazon spokesperson said; Jones said Google has “not developed such capability and has no intention of doing so in the future.”)
Of course, corporate assurances may not mean much to skeptical consumers after a year of scandals over tech companies’ handling of user data (Facebook, Facebook, Facebook, and also Google). Meanwhile, the stakes of potential smart-speaker security breaches are getting higher, thanks to the introduction of built-in cameras on the Amazon Echo Show, Facebook’s Portal, and similar devices.
Ultimately, the choice to keep a smart speaker around comes down to what you’re getting out of the product. For some people with physical disabilities or intellectual differences, smart speakers can make household tasks easier or provide an engaging presence in daily life. For tech junkies like my friend, the sheer joy of commanding a smart home network might be enough. For Hoffman-Andrews, though, the benefits of a speaker don’t outweigh the costs. He bought a couple of products for testing, but he admits he couldn’t actually bring himself to set them up. Being able to ask a speaker to dim the lights or play a weather forecast just didn’t seem like a good enough tradeoff for giving companies access to his home.
“Is it normal to have cameras and microphones pointed at you and your guests? Currently the answer is mostly no,” he says. “These devices aim to change the answer to yes.”