From a distance of more than 300 feet and through a glass window, a laser beam can trick a voice-controlled virtual assistant like Siri, Alexa, or Google Assistant into behaving as if it had registered an audio command, researchers from the University of Michigan and the University of Electro-Communications in Tokyo have demonstrated.
The researchers discovered a vulnerability, which they call “Light Commands,” in the microphones of these systems. They have also proposed hardware and software fixes, and they are working with Google, Apple, and Amazon to put them in place. Daniel Genkin, an assistant professor of computer science and engineering at the University of Michigan, said:
“We’ve shown that hijacking voice assistants only requires line-of-sight rather than being near the device.
“The risks associated with these attacks range from benign to frightening depending on how much a user has tied to their assistant.
“In the worst cases, this could mean dangerous access to homes, e-commerce accounts, credit cards, and even any connected medical devices the user has linked to their assistant.”
The team showed that Light Commands could enable an attacker to remotely inject inaudible and invisible commands into smart speakers, tablets, and phones in order to:
- Unlock a smart lock-protected front door
- Open a connected garage door
- Shop on e-commerce websites at the target’s expense
- Locate, unlock, and start a vehicle that’s connected to a target’s account
Just five milliwatts of laser power, the equivalent of a laser pointer, was enough to obtain full control over many popular Alexa and Google smart home devices, while about 60 milliwatts was sufficient for phones and tablets.
To document the vulnerability, the researchers aimed and focused their light commands with a telescope, a telephoto lens, and a tripod. They tested 17 different devices representing a range of the most popular assistants. Kevin Fu, associate professor of computer science and engineering at U-M, said:
“There is a semantic gap between what the sensors in these devices are advertised to do and what they actually sense, leading to security risks.
“In Light Commands, we show how a microphone can unwittingly listen to light as if it were sound.”
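The core trick, as described in the Light Commands research, is amplitude modulation: the laser’s intensity is varied in step with the waveform of a spoken command, and the MEMS microphone’s diaphragm responds to the fluctuating light as if it were fluctuating sound pressure. The sketch below illustrates the signal-processing idea only, not the researchers’ actual tooling; the 440 Hz tone stands in for a recorded voice command, and the bias and depth constants are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz, a typical rate for voice-assistant audio

# Stand-in for a recorded voice command; in the real attack this would
# be the waveform of a spoken phrase such as "open the garage door".
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE       # one second of samples
command = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # illustrative audio signal

# Amplitude modulation: bias the laser so its intensity never drops to
# zero, then let the audio waveform swing the intensity around the bias.
BIAS = 1.0   # normalized DC operating point of the laser driver (assumed)
DEPTH = 0.8  # modulation depth, 0..1 (assumed)

laser_intensity = BIAS * (1.0 + DEPTH * command / np.max(np.abs(command)))

# A MEMS microphone aimed at by this beam transduces the intensity
# fluctuations as if they were sound, so the assistant "hears" the command.
print(laser_intensity[:5])
```

Because the information rides entirely on the light’s intensity envelope, the injected command is inaudible to anyone nearby, and with an infrared laser it can be invisible as well.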
Users can take some measures to protect themselves from Light Commands. Sara Rampazzi, a postdoctoral researcher in computer science and engineering at U-M, said:
“One suggestion is to simply avoid putting smart speakers near windows, or otherwise attacker-visible places.
“While this is not always possible, it will certainly make the attacker’s window of opportunity smaller. Another option is to turn on user personalization, which will require the attacker to match some features of the owner’s voice in order to successfully inject the command.”
Provided by: University of Michigan