A Laser Pointer Could Hack Your Voice-Controlled Virtual Assistant

From a distance of more than 300 feet and through a glass window, a laser beam can trick a voice-controlled virtual assistant like Siri, Alexa, or Google Assistant into behaving as if it registered an audio command. (Image: via pixabay / CC0 1.0)

From a distance of more than 300 feet and through a glass window, a laser beam can trick a voice-controlled virtual assistant like Siri, Alexa, or Google Assistant into behaving as if it registered an audio command, researchers from the University of Michigan and University of Electro-Communications in Tokyo have demonstrated.

The researchers discovered a vulnerability in the microphones of these systems, which they call “Light Commands.” They have also proposed hardware and software fixes and are working with Google, Apple, and Amazon to put them in place. Daniel Genkin, an assistant professor of computer science and engineering at the University of Michigan, said:

The team showed that Light Commands could enable an attacker to remotely inject inaudible and invisible commands into smart speakers, tablets, and phones in order to:

  • Unlock a smart lock-protected front door
  • Open a connected garage door
  • Shop on e-commerce websites at the target’s expense
  • Locate, unlock, and start a vehicle that’s connected to a target’s account

Just 5 milliwatts of laser power — the equivalent of a laser pointer — was enough to obtain full control over many popular Alexa and Google smart home devices, while about 60 milliwatts was sufficient for phones and tablets.


Just 5 milliwatts of laser power — the equivalent of a laser pointer — was enough to obtain full control over many popular smart home devices. (Image: Screenshot / YouTube)

To document the vulnerability, the researchers aimed and focused their light commands with a telescope, a telephoto lens, and a tripod. They tested 17 different devices representing a range of the most popular assistants. Kevin Fu, associate professor of computer science and engineering at U-M, said:

Users can take some measures to protect themselves from Light Commands. Sara Rampazzi, a postdoctoral researcher in computer science and engineering at U-M, said:

Provided by: University of Michigan [Note: Materials may be edited for content and length.]

