New technique protects consumers from voice spoofing attacks

Researchers from CSIRO’s Data61 have developed a new technique to protect consumers from voice spoofing attacks.

Fraudsters can record a person’s voice commands to voice assistants like Amazon Alexa or Google Assistant and replay them to impersonate that individual. They can also stitch samples together to mimic a person’s voice in order to spoof, or trick, third parties.

Detecting when hackers are attempting to spoof a system

The new solution, called Void (Voice liveness detection), can be embedded in a smartphone or voice assistant software. It detects when hackers attempt to spoof a system by identifying the differences in spectral power between a live human voice and a voice replayed through a speaker.
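
To make the idea concrete, here is a minimal sketch of one spectral-power feature such a detector might compute, assuming a standard scipy pipeline. The band edge, feature and filename are illustrative stand-ins, not the published Void feature set.

```python
# Illustrative sketch only: a toy spectral-power "liveness" feature.
# The 1 kHz band edge and the feature itself are assumptions for
# demonstration, not the actual Void features.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def low_frequency_power_ratio(wav_path, band_hz=1000):
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    freqs, _, sxx = spectrogram(audio.astype(np.float64), fs=rate)
    low_power = sxx[freqs <= band_hz].sum()
    return low_power / sxx.sum()            # fraction of power below band_hz

# Loudspeaker replays distribute spectral energy differently from live
# speech, so a ratio like this can serve as one input to a classifier.
ratio = low_frequency_power_ratio("command.wav")  # hypothetical file
print(f"fraction of spectral power below 1 kHz: {ratio:.3f}")
```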

Consumers use voice assistants to shop online, make phone calls, send messages, control smart home appliances and access banking services.

Muhammad Ejaz Ahmed, Cybersecurity Research Scientist at CSIRO’s Data61, said privacy preserving technologies are becoming increasingly important in enhancing consumer privacy and security as voice technologies become part of daily life.

“Voice spoofing attacks can be used to make purchases using a victim’s credit card details, control Internet of Things connected devices like smart appliances and give hackers unsolicited access to personal consumer data such as financial information, home addresses and more,” Mr Ahmed said.

“Although voice spoofing is known as one of the easiest attacks to perform as it simply involves a recording of the victim’s voice, it is incredibly difficult to detect because the recorded voice has similar characteristics to the victim’s live voice. Void is a game-changing technology that allows for more efficient and accurate detection, helping to prevent people’s voice commands from being misused.”

Relying on insights from spectrograms

Unlike existing voice spoofing detection techniques, which typically use deep learning models, Void was designed using insights from spectrograms (visual representations of the spectrum of frequencies of a signal as it varies with time) to detect the ‘liveness’ of a voice.

This technique provides a highly accurate outcome, detecting attacks eight times faster than deep learning methods while using 153 times less memory, making it a viable, lightweight solution that could be incorporated into smart devices.
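
As a rough illustration of why a feature-based approach is lightweight, the sketch below trains a small support-vector classifier on hand-crafted spectral feature vectors. The features and labels are hypothetical placeholders (the actual Void classifier and features are specified in the researchers’ paper); the point is that such a model occupies kilobytes rather than the megabytes of weights a deep network needs.

```python
# Minimal sketch: a lightweight classifier over hand-crafted spectral
# features, standing in for a Void-style detector. The training data
# below is random placeholder data, not real voice features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical feature vectors, e.g. (low-frequency power ratio,
# spectral decay slope, peak count) per utterance.
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)            # 0 = live voice, 1 = replay

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

# Inference is a handful of dot products: no GPU or large weight file
# is needed, which is what makes such models suitable for on-device use.
print(model.predict(rng.normal(size=(1, 3))))
```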

Void has been tested using datasets from Samsung and from the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) challenges, achieving accuracies of 99 per cent and 94 per cent respectively.

Research estimates that by 2023, as many as 275 million voice assistant devices will be used to control homes across the globe — a growth of 1000 per cent since 2018.

How to protect data when using voice assistants

Dr Adnene Guabtni, Senior Research Scientist at CSIRO’s Data61, shares tips for consumers on how to protect their data when using voice assistants:

  • Always change your voice assistant settings to only activate the assistant using a physical action, such as pressing a button.
  • On mobile devices, make sure the voice assistant can only activate when the device is unlocked.
  • Turn off all home voice assistants before you leave your house, to reduce the risk of successful voice spoofing while you are out of the house.
  • Voice spoofing requires hackers to get samples of your voice. Make sure you regularly delete any voice data that Google, Apple or Amazon store.
  • Try to limit the use of voice assistants to commands that do not involve online purchases or authorizations – hackers or people around you might record you issuing payment commands and replay them at a later stage.

Researchers use ultrasound waves vibrating through tables to access cellphones

Ultrasonic waves don’t make a sound, but they can still activate Siri on your cellphone and have it make calls, take images or read the contents of a text to a stranger. All without the phone owner’s knowledge.

Attacks on cell phones aren’t new, and researchers have previously shown that ultrasonic waves can be used to deliver a single command through the air.

However, research from Washington University in St. Louis expands the scope of the vulnerability that ultrasonic waves pose to cellphone security. These waves, the researchers found, can propagate through many solid surfaces to activate voice recognition systems and – with the addition of some cheap hardware – the person initiating the attack can also hear the phone’s response.

“We want to raise awareness of such a threat,” said Ning Zhang, assistant professor of computer science and engineering at the McKelvey School of Engineering. “I want everybody in the public to know this.”

Zhang and his co-authors were able to send “voice” commands to cellphones as they sat inconspicuously on a table, next to the owner. With the addition of a stealthily placed microphone, the researchers were able to communicate back and forth with the phone, ultimately controlling it from afar.

Ultrasonic waves are sound waves in a frequency that is higher than humans can hear. Cellphone microphones, however, can and do record these higher frequencies. “If you know how to play with the signals, you can get the phone such that when it interprets the incoming sound waves, it will think that you are saying a command,” Zhang said.
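
Prior work on inaudible commands (such as the DolphinAttack research) describes the general trick as amplitude-modulating an audible voice command onto an ultrasonic carrier, which the microphone’s nonlinear response then demodulates back into the audible band. The sketch below illustrates that general principle only; the carrier frequency, modulation depth and filenames are assumptions, and the waveforms used in this paper may differ.

```python
# Illustrative sketch: amplitude-modulating a voice command onto an
# ultrasonic carrier, the general technique behind inaudible-command
# attacks. Carrier and depth are assumptions for demonstration.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # above human hearing, within many mics' response
RATE = 96_000         # sample rate high enough to represent the carrier

rate_in, command = wavfile.read("command.wav")   # hypothetical recording
command = command.astype(np.float64)
command /= np.abs(command).max()                 # normalize to [-1, 1]

# Naive resampling to the carrier-capable rate (a real pipeline would
# use scipy.signal.resample or similar).
t = np.arange(int(len(command) * RATE / rate_in)) / RATE
baseband = np.interp(t, np.arange(len(command)) / rate_in, command)

# Standard AM: the microphone's nonlinear response effectively squares
# the incoming signal, recreating the baseband command for the phone's
# voice recognition system while remaining inaudible to humans.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + 0.8 * baseband) * carrier

wavfile.write("ultrasonic_payload.wav", RATE,
              (modulated / np.abs(modulated).max() * 32767).astype(np.int16))
```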

To test the ability of ultrasonic waves to transmit these “commands” through solid surfaces, the research team set up a host of experiments that included a phone on a table.

Attached to the bottom of the table were a microphone and a piezoelectric transducer (PZT), which converts electricity into ultrasonic waves. On the other side of the table from the phone, hidden from the phone’s user, was a waveform generator producing the correct signals.

The team ran two tests, one to retrieve an SMS (text) passcode and another to make a fraudulent call. The first test relied on the common virtual assistant command “read my messages” and on the use of two-factor authentication, in which a passcode is sent to a user’s phone – from a bank, for instance – to verify the user’s identity.

The attacker first told the virtual assistant to turn the volume down to Level 3. At this volume, the victim did not notice their phone’s responses in an office setting with a moderate noise level.

Then, when a simulated message from a bank arrived, the attack device sent the “read my messages” command to the phone. The response was audible to the microphone under the table, but not to the victim.

In the second test, the attack device sent the message “call Sam with speakerphone,” initiating a call. Using the microphone under the table, the attacker was able to carry on a conversation with “Sam.”

The team tested 17 different phone models, including popular iPhone, Galaxy and Moto models. All but two were vulnerable to ultrasonic wave attacks.

Ultrasonic waves made it through metal, glass and wood

They also tested different table surfaces and phone configurations.

“We did it on metal. We did it on glass. We did it on wood,” Zhang said. They tried placing the phone in different positions and changing the orientation of the microphone. They placed objects on the table in an attempt to dampen the strength of the waves. “It still worked,” he said. The attack succeeded even at distances as far as 30 feet.

Ultrasonic wave attacks also worked on plastic tables, but not as reliably.

Phone cases only slightly affected the attack success rates. Placing water on the table, potentially to absorb the waves, had no effect. Moreover, an attack wave could simultaneously affect more than one phone.

Zhang said the success of the “surfing attack,” as it’s called in the paper, highlights the less-often discussed link between the cyber and the physical. Often, media outlets report on ways in which our devices are affecting the world we live in: Are our cellphones ruining our eyesight? Do headphones or earbuds damage our ears? Who is to blame if a self-driving car causes an accident?

“I feel like not enough attention is being given to the physics of our computing systems,” he said. “This is going to be one of the keys in understanding attacks that propagate between these two worlds.”

The team suggested some defense mechanisms that could protect against such an attack. One idea would be the development of phone software that analyzes the received signal to discriminate between ultrasonic waves and genuine human voices, Zhang said. Changing the layout of mobile phones, such as the placement of the microphone, to dampen or suppress ultrasound waves could also stop a surfing attack.
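
Here is a minimal sketch of what one such software check could look like, assuming the defense flags recordings with unusual energy near the top of the microphone’s frequency range. The guard band and threshold are illustrative guesses, not the researchers’ specification.

```python
# Minimal sketch of one possible software defense: flag recordings whose
# spectrum carries unusual energy near the top of the microphone's range,
# a possible artifact of ultrasonic injection. Band and threshold are
# illustrative assumptions only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def looks_injected(wav_path, guard_band_hz=18_000, threshold=0.05):
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    freqs, psd = welch(audio.astype(np.float64), fs=rate, nperseg=4096)
    high = psd[freqs >= guard_band_hz].sum()
    return high / psd.sum() > threshold     # True = suspicious recording

print(looks_injected("assistant_capture.wav"))  # hypothetical capture
```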

But Zhang said there’s a simple way to keep a phone safe from ultrasonic waves: the interlayer-based defense, which uses a soft, woven fabric to increase the “impedance mismatch” between the table and the phone.

In other words, put the phone on a tablecloth.

Fraud rates increasing as criminals become more sophisticated

Fraud rates have been skyrocketing, with 90 voice channel attacks occurring every minute in the U.S., Pindrop reveals.

Key findings:

  • Voice fraud continues to serve as a major threat, with rates climbing more than 350 percent from 2014 to 2018.
  • The 2018 fraud rate is 1 in 685, remaining at the top of a five-year peak.
  • Insurance voice fraud has increased by 248 percent as fraudsters chase policies that exceed $500,000.
  • In 2018, 446 million …
