Using Smart Assistants? Attackers Can Silently Control Siri, Alexa and Other Voice Assistants

Sep 7, 2017

Cybercriminals can issue potentially harmful instructions to popular voice assistants like Siri, Cortana, Alexa, and Google Assistant. Researchers have revealed that the most popular smart assistants can be manipulated into responding to commands that their human owners cannot hear. The attack requires as little as $3 worth of hardware, enabling criminals to launch attacks remotely.

Simple design flaw puts AI assistants like Siri at risk of remote hacks

Security researchers from Zhejiang University have discovered a way to activate voice recognition systems without speaking an audible word. Their so-called DolphinAttack works against a range of hardware running all the popular voice assistants. The proof of concept shows how an attacker could exploit inaudible voice commands to perform a number of operations, including initiating a FaceTime call, switching the phone to airplane mode, manipulating the navigation system in an Audi, and browsing malicious sites.


“An adversary can upload an audio or video clip in which the voice commands are embedded in a website, eg, YouTube. When the audio or video is played by the victims’ devices, the surrounding voice-controllable systems such as Google Home assistant, Alexa, and mobile phones may be triggered unconsciously,” the researchers wrote.

The attack works by issuing commands to AI assistants at ultrasonic frequencies that device microphones pick up but humans cannot hear. The attack hardware is also extremely cheap, costing just $3 for an ultrasonic transducer and a low-cost amplifier.
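To make the idea concrete, here is a minimal sketch in Python (using numpy) of the core signal-processing trick the researchers describe: a recorded voice command is amplitude-modulated onto an ultrasonic carrier. The specific sample rate, carrier frequency, and function names below are illustrative assumptions, not taken from the paper's code; the point is that microphone hardware is slightly nonlinear and demodulates the AM signal back into the audible band, so the assistant "hears" a command that humans cannot.

```python
import numpy as np

def modulate_ultrasonic(baseband, fs=192_000, carrier_hz=30_000, depth=1.0):
    """Amplitude-modulate a baseband (voice) signal onto an ultrasonic carrier.

    All energy in the result sits near carrier_hz (e.g. 30 kHz), above the
    ~20 kHz limit of human hearing, yet nonlinearity in a microphone's
    amplifier can recover the original baseband command from it.
    """
    t = np.arange(len(baseband)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: (1 + m * x(t)) * cos(2*pi*fc*t), with x normalized to [-1, 1]
    x = baseband / (np.abs(baseband).max() + 1e-12)
    return (1.0 + depth * x) * carrier

# Toy baseband: a 1 kHz tone standing in for a recorded voice command
fs = 192_000
t = np.arange(fs // 10) / fs                 # 100 ms of samples
voice = np.sin(2 * np.pi * 1_000 * t)
signal = modulate_ultrasonic(voice, fs=fs)
```

Played through an ultrasonic transducer, a signal shaped like this is silent to bystanders, which is what makes the $3 rig effective.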

Criminals can silently whisper commands, hijacking AI assistants like Siri and Alexa and forcing them to open malicious websites or even manipulate smart home products such as your door locks and automobiles.

DolphinAttack could inject covert voice commands into seven state-of-the-art speech recognition systems (e.g., Siri, Alexa) to activate their always-on listening and carry out various attacks, including activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile.

The attack works on all major platforms, including iOS and Android, putting all the major phones and devices at risk. The researchers have suggested that manufacturers should not allow microphones to respond to sounds at frequencies above 20 kHz. The researchers added that criminals can “achieve the following sneaky attacks purely by a sequence of inaudible voice commands:”

  • Visiting a malicious website – launching a drive-by-download attack or exploiting a device with 0-day vulnerabilities.
  • Spying – an adversary can make the victim device initiate outgoing video/phone calls, thereby getting access to the image/sound of the device’s surroundings.
  • Injecting fake information – an adversary may instruct the victim device to send fake text messages and emails, publish fake online posts, add fake events to a calendar, etc.
  • Denial of service – an adversary may inject commands to turn on airplane mode, disconnecting all wireless communications.
  • Concealing attacks – the screen display and voice feedback may expose the attacks. The adversary may decrease the odds by dimming the screen and lowering the volume.
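The mitigation the researchers suggest, ignoring anything above the roughly 20 kHz limit of human hearing, amounts to low-pass filtering the microphone signal before it reaches the speech recognizer. A minimal sketch of that idea, using a windowed-sinc FIR filter in numpy (the filter parameters here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def lowpass_fir(signal, fs, cutoff_hz=20_000, num_taps=101):
    """Windowed-sinc low-pass FIR filter.

    Discards energy above cutoff_hz (here, the ~20 kHz human hearing limit),
    so an ultrasonic carrier never reaches the speech recognizer.
    """
    # Ideal low-pass impulse response, centered for linear phase
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff_hz / fs * np.sinc(2 * cutoff_hz / fs * n)
    h *= np.hamming(num_taps)   # window to tame ripple
    h /= h.sum()                # unity gain at DC
    return np.convolve(signal, h, mode="same")
```

Fed a 30 kHz ultrasonic tone, this filter attenuates it heavily while passing a 1 kHz voice-band tone almost untouched, which is exactly the asymmetry a defense against inaudible commands needs.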

The researchers have also published a proof-of-concept video of the attack.
