SSL and internet security news

Voice Recognition

Auto Added by WPeMatico

Sending Inaudible Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online, simply with music playing over the radio.
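One of the demonstrated techniques works by amplitude-modulating a voice command onto an ultrasonic carrier: the transmitted signal sits entirely above the range of human hearing, but nonlinearity in a microphone's hardware demodulates it back into the audible band. Here is a minimal NumPy sketch of that idea; the carrier frequency, sample rate, and the squaring used to model the microphone's nonlinearity are all illustrative assumptions, and a pure tone stands in for a recorded command.

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent ultrasound
CARRIER_HZ = 25_000  # above the ~20 kHz limit of human hearing (illustrative)
DURATION = 0.5       # seconds

t = np.arange(int(FS * DURATION)) / FS

# Stand-in for a recorded voice command (a 400 Hz tone).
baseband = np.sin(2 * np.pi * 400 * t)

# Classic AM: shift the baseband up around the ultrasonic carrier.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
inaudible = (1 + baseband) * carrier

# Model the microphone's nonlinear response as squaring; expanding
# ((1 + b) * c)^2 yields a low-frequency copy of the baseband b.
demodulated = inaudible ** 2

def band_energy(x, lo, hi):
    """Total spectral energy of x between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / FS)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

# The transmitted signal carries essentially no energy in the audible band...
audible = band_energy(inaudible, 20, 20_000)
ultrasonic = band_energy(inaudible, 20_000, FS / 2)
# ...but after the nonlinearity, the 400 Hz "command" reappears.
recovered = band_energy(demodulated, 300, 500)
```

A real attack would replace the tone with speech and account for speaker and microphone frequency responses, but the modulate-then-demodulate structure is the core of it.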

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.


Stealing Voice Prints

This article feels like hyperbole:

The scam has arrived in Australia after being used in the United States and Britain.

The scammer may ask several times “can you hear me?”, to which people would usually reply “yes.”

The scammer is then believed to record the “yes” response and end the call.

That recording of the victim’s voice can then be used to authorise payments or charges in the victim’s name through voice recognition.

Are there really banking systems that use voice recognition of the word “yes” to authenticate? I have never heard of that.


Forging Voice

LyreBird is a system that can accurately reproduce someone’s voice, given a large number of voice samples. It’s pretty good — listen to the demo here — and will only get better over time.

The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.

I don’t think we’re ready for this. We use people’s voices to authenticate them all the time, in all sorts of different ways.
