Hearing aid technology has been developed that scans your facial movements and uses artificial intelligence (AI) to figure out what’s being said.
Developed by engineers at the University of Glasgow, the system is even able to read the lips of people wearing a mask.
The team trained algorithms with data collected by scanning people’s faces with radar and Wi-Fi signals while they spoke.
This allowed the system to correctly interpret speech up to 95 percent of the time with unmasked lips and up to 83 percent of the time with a mask.
When integrated into hearing aids, it could help deaf and hard of hearing people focus on sounds more easily in noisy environments.
“About five percent of the world’s population — approximately 430 million people — have some type of hearing impairment,” said lead author Dr. Qammer Abbasi.
“Hearing aids have brought transformative benefits to many hearing-impaired people.
WHAT IS “MACHINE LEARNING”?
Machine learning algorithms use statistics to find patterns in huge amounts of data – like numbers, words, images or clicks.
Machine learning supports many of the services we use today – including recommendation systems like those on Netflix, YouTube and Spotify; search engines like Google; social media feeds such as Facebook and Twitter; and voice assistants like Siri and Alexa.
In all of these cases, each platform collects as much data about you as possible — what genres you enjoy watching, what links you click, what statuses you respond to — and uses machine learning to make an educated guess about what you might want next.
Or, in the case of a voice assistant, what words go best with the sounds coming out of your mouth.
Source: MIT Technology Review
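The pattern-finding idea described in the box can be illustrated with a toy example. This is a minimal sketch, not any platform's real system: a nearest-centroid classifier "learns" by averaging labelled examples, then predicts by picking the closest average. All of the data and labels here are invented.

```python
# Nearest-centroid classifier: learn the average feature vector of
# each class, then predict by choosing the nearest average.

def train(samples):
    """samples: dict mapping label -> list of feature vectors."""
    centroids = {}
    for label, vectors in samples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def predict(centroids, vector):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vector))

# Invented "viewing habit" features: [hours of drama, hours of sport]
training = {
    "drama fan": [[5.0, 0.5], [4.0, 1.0]],
    "sport fan": [[0.5, 6.0], [1.0, 4.5]],
}
model = train(training)
print(predict(model, [4.5, 0.8]))  # a new viewer closest to the drama centroid
```

Real recommendation systems use far larger datasets and more sophisticated models, but the principle — find the pattern in past data, apply it to new data — is the same.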
“A new generation of technology that collects a wide range of data to amplify and improve sound amplification could be another important step towards improving the quality of life for people with hearing impairments.
“With this research, we have shown that radio frequency signals can be used to accurately read vowels on people’s lips, even when their mouth is covered.”
Current hearing aids support hearing-impaired people by amplifying all the ambient noise around them.
While this is helpful, it can be difficult for users to focus on particular sounds in noisy settings, such as during a conversation with one person at a party.
To overcome this, “smart” hearing aids have been developed that collect lip-reading data using a camera used alongside traditional audio amplification.
However, collecting video footage of people without their express consent raises concerns about the privacy of the individual.
The cameras are also unable to read the lips of people wearing face coverings for religious, cultural or health reasons.
In their study, published today in Nature Communications, the researchers outline a face-scanning technique that could work as an alternative to the camera.
They first set out to train machine learning and deep learning algorithms on how to recognize lip and mouth movements associated with each vowel.
To do this, male and female volunteers were asked to repeat the five vowels A, E, I, O and U – without a mask and with a surgical mask.
As they did so, and while their lips were at rest, their faces were scanned with radio frequency signals from a radar sensor and a Wi-Fi transmitter.
This produced 3,600 data samples, which were used to teach the algorithms to read the vowel formations of masked and unmasked speakers.
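The study's actual models (machine learning and deep learning networks trained on real radar and Wi-Fi scans) are far more involved, but the training-and-testing loop can be sketched in miniature. In this toy version, invented feature vectors stand in for the radio frequency scans, and a simple nearest-neighbour rule stands in for the deep learning models; the vowel labels match those in the study.

```python
import random

# Toy stand-in for the study's setup: each "scan" is a short feature
# vector (invented numbers, NOT real radar/Wi-Fi data), labelled with
# the vowel being spoken. A 1-nearest-neighbour rule classifies a new
# scan by finding the closest training scan.

random.seed(0)
VOWELS = ["A", "E", "I", "O", "U"]

def fake_scan(vowel):
    # Each vowel gets its own noisy cluster of feature values.
    centre = VOWELS.index(vowel)
    return [centre + random.gauss(0, 0.1) for _ in range(4)]

train_set = [(fake_scan(v), v) for v in VOWELS for _ in range(20)]

def classify(scan):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train_set, key=lambda item: dist(item[0], scan))[1]

# Evaluate accuracy on freshly generated "scans", mirroring how the
# study reports percent-correct figures for each signal type.
test_set = [(fake_scan(v), v) for v in VOWELS for _ in range(10)]
correct = sum(classify(s) == v for s, v in test_set)
print(f"accuracy: {correct}/{len(test_set)}")
```

On this artificially clean data the toy classifier scores perfectly; the study's 80–95 percent figures reflect the much harder problem of real faces, masks and noisy radio signals.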
The learning algorithms correctly interpreted the Wi-Fi data 95 percent of the time for unmasked lips and 80 percent of the time for masked lips.
Meanwhile, the radar data was correctly interpreted 91 percent of the time without a mask and 83 percent of the time with a mask.
“Given the ubiquity and affordability of Wi-Fi technologies, the results are very encouraging, suggesting that this technique has value both as a standalone technology and as a component in future multimodal hearing aids,” said Dr. Abbasi.
Because this system protects privacy by collecting only radio frequency data, with no video footage, it is hoped it could be built into smart hearing aids in the future.
£400 XRAI smart glasses convert audio to closed captions so deaf people can ‘SEE’ conversations
A new pair of smart glasses has been launched for the deaf and hard of hearing.
Dubbed XRAI Glass, the glasses use augmented reality to convert audio into captions that are instantly projected in front of the wearer’s eyes.
This software converts audio into a subtitled version of the conversation, which then appears on the glasses’ screen.
Thanks to speech recognition capabilities, the glasses can even recognize who is speaking and, according to XRAI Glass, will soon be able to translate languages, voice tones, accents and pitches.
Read more here