
Researchers warn of growing voice AI vulnerabilities

Google AI vs. Siri. Credit: The Verge
Sandy Nunez | 12 May, 2018, 07:47

Researchers in China and the United States have discovered that Apple's Siri, Amazon's Alexa, and Google's Assistant can be controlled by hidden commands undetectable to the human ear, The New York Times reports. Unlike earlier, more conspicuous attacks, this one could victimize even the most careful users, because the subliminal commands are inaudible.

These commands can be hidden in white noise played over loudspeakers or YouTube videos, as students from the University of California, Berkeley, and Georgetown University demonstrated two years ago.
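For illustration only: the published attacks rely on carefully optimized perturbations rather than simple mixing, but a naive Python sketch of the underlying idea, scaling a recorded command far below a bed of white noise, might look like this (the file names are placeholders):

    import numpy as np
    from scipy.io import wavfile

    # Load a recorded voice command (placeholder file) and normalize it.
    rate, command = wavfile.read("command.wav")
    command = command.astype(np.float64)
    command /= np.max(np.abs(command))

    # Generate a white-noise bed and mix the command in at a low level.
    noise = np.random.normal(0.0, 1.0, len(command))
    mixed = 0.9 * noise + 0.1 * command

    # Rescale to avoid clipping and write a 16-bit WAV file.
    mixed /= np.max(np.abs(mixed))
    wavfile.write("masked.wav", rate, (mixed * 32767).astype(np.int16))

A human listener would hear mostly static; what the research adds is the optimization that makes a recognizer reliably pick the command out anyway.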

Researchers say that scammers could use this technology to ask your device to shop online, wire money or unlock doors. It's probably only a matter of time before these types of attacks trickle out into the wild: if university students are working on this sort of thing, it's likely that bad actors are doing the same.

An Amazon spokesperson described the company's efforts to keep its line of voice-activated Echo smart speakers secure, which they say include "disallowing third party application installation on the device, rigorous security reviews, secure software development requirements and encryption of communication between Echo, the Alexa App and Amazon servers".

Now the technology is racing even further ahead of the law.


Testing against Mozilla's open source DeepSpeech voice recognition implementation, Carlini and Wagner achieved a 100 percent success rate without having to resort to large amounts of distortion, a hallmark of past attempts at creating audio attacks. More recently, researchers at Berkeley said they could hide commands in music recordings and spoken text. For now, the scope of the so-called DolphinAttack, an ultrasonic technique, remains limited.
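Their paper frames the attack as an optimization problem: repeatedly nudge the waveform until the model transcribes a chosen phrase, while a penalty keeps the change small. The following PyTorch-style sketch shows the shape of that loop; the model, the target encoding and all parameter values are placeholders, not the authors' code or DeepSpeech's actual API:

    import torch
    import torch.nn.functional as F

    def adversarial_audio(model, audio, target_ids,
                          steps=1000, lr=1e-3, c=0.05, eps=0.1):
        """Find a small delta so that model(audio + delta) decodes target_ids."""
        delta = torch.zeros_like(audio, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)

        for _ in range(steps):
            logits = model(audio + delta)            # (time, batch=1, classes)
            log_probs = F.log_softmax(logits, dim=-1)
            ctc = F.ctc_loss(
                log_probs, target_ids,
                input_lengths=torch.tensor([logits.shape[0]]),
                target_lengths=torch.tensor([len(target_ids)]),
            )
            loss = ctc + c * delta.pow(2).mean()     # transcription + distortion
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)              # keep the perturbation quiet

        return (audio + delta).detach()

The constants c and eps trade off how reliably the model mishears the audio against how audible the change is; what the researchers reported is that the distortion can be pushed below what listeners notice.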

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 7.5 metres (about 25 feet) away. Meanwhile, Google just this week announced more native control over kitchen appliances and a strikingly human-sounding AI that can make calls on your behalf to set up appointments. For now, an attacker still needs a direct line to the device: the commands cannot penetrate walls.
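The ultrasonic attacks work on a different principle. As the DolphinAttack research describes it, the audible command is amplitude-modulated onto an ultrasonic carrier, and the nonlinearity of the target device's own microphone demodulates it back into the voice band. A rough Python sketch of generating such a signal follows; the carrier frequency, sample rate and input file are illustrative values, not the researchers' exact parameters:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import resample

    CARRIER_HZ = 30_000   # above the ~20 kHz limit of human hearing
    OUT_RATE = 192_000    # high sample rate needed to represent the carrier

    # Load and normalize the baseband voice command (placeholder file).
    rate, voice = wavfile.read("command.wav")
    voice = voice.astype(np.float64) / np.max(np.abs(voice))

    # Upsample the command to the output rate.
    n_out = int(len(voice) * OUT_RATE / rate)
    baseband = resample(voice, n_out)

    # Standard amplitude modulation: carrier scaled by (1 + m(t)). A
    # microphone's quadratic nonlinearity recovers m(t) in the audible band.
    t = np.arange(n_out) / OUT_RATE
    am = (1.0 + 0.8 * baseband) * np.cos(2 * np.pi * CARRIER_HZ * t)

    am /= np.max(np.abs(am))
    wavfile.write("ultrasonic.wav", OUT_RATE, (am * 32767).astype(np.int16))

Ordinary speakers generally can't reproduce a 30 kHz tone cleanly, hence the specialized gear described below.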

The Berkeley group also embedded the command in music files, including a four-second clip from Verdi's Requiem.

The attack hardware consisted of an external battery, an amplifier, and an ultrasonic transducer. One of the Georgetown researchers wrote one of the first papers on audio attacks, which he titled "Cocaine Noodles" because devices interpreted the phrase "cocaine noodles" as "OK, Google".

"We want to demonstrate that it's possible and then hope that other people will say, OK, this is possible, now let's try and fix it," Nicholas Carlini, who co-led the study, told the Times.