Study: Modified headphones translate sign language via Doppler


Proposed tech measures echo created by hand movements; could help boost communication between deaf and hearing populations

By Melvin Bankhead III

Release Date: September 7, 2021


BUFFALO, N.Y. — A University at Buffalo-led research team has modified noise-cancelling headphones, enabling the common electronic device to “see” and translate American Sign Language (ASL) when paired with a smartphone.

Reported in the journal “Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,” the headphone-based system uses Doppler technology to sense tiny fluctuations, or echoes, in acoustic soundwaves that are created by the hands of someone signing.

Dubbed SonicASL, the system proved 93.8% effective in tests performed indoors and outdoors on 42 words, including “love,” “space,” and “camera.” Under the same conditions, on 30 simple sentences (for example, “Nice to meet you”), SonicASL was 90.6% effective.

“SonicASL is an exciting proof-of-concept that could eventually help greatly improve communication between deaf and hearing populations,” says corresponding author Zhanpeng Jin, PhD, associate professor in the Department of Computer Science and Engineering at UB.

Before such technology is commercially available, much work must be done, he stressed. For example, SonicASL’s vocabulary must be greatly expanded. Also, the system must be able to read facial expressions, a major component of ASL.

The study will be presented at the ACM Conference on Pervasive and Ubiquitous Computing (UbiComp), taking place Sept. 21-26.

For the deaf, communication barriers persist

Worldwide, according to the World Federation of the Deaf, there are about 72 million deaf people using more than 300 different sign languages.

Although the United Nations recognizes that sign languages are equal in importance to the spoken word, that view is not yet a reality in many nations. People who are deaf or hard of hearing still experience multiple communications barriers.

Traditionally, communication between deaf ASL users and hearing people who do not know the language takes place either in the presence of an ASL interpreter or through a camera setup.

A frequent concern over the use of cameras, according to Jin, is whether the video recordings could be misused. And while the use of ASL interpreters is becoming more common, there is no guarantee that one will be available when needed.

SonicASL aims to address these issues, especially in casual situations where no advance planning or setup is possible, Jin says.

Illustration showing the modified headphones and a smartphone screen showing the signed words.

The illustration on the left shows the modifications made to the headphones. The right shows what a user sees on their smartphone. Credit: University at Buffalo.

Modify headphones with speaker, add app

Most noise-cancelling headphones rely on an outward-facing microphone that picks up environmental noise. The headphones then produce an anti-sound, a soundwave with the same amplitude as the surrounding noise but an inverted phase, which cancels the external noise.
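The anti-sound principle can be sketched in a few lines of Python. This is an idealized model of destructive interference, not the headphones' actual signal processing, and the 200 Hz hum is an invented stand-in for ambient noise:

```python
import numpy as np

# Idealized active noise cancellation: measure the ambient noise, then play
# the same waveform with inverted phase so the two cancel when superimposed.
fs = 44_100                       # sample rate in Hz (a common audio rate)
t = np.arange(fs) / fs            # one second of samples
noise = 0.5 * np.sin(2 * np.pi * 200 * t)   # pretend ambient hum at 200 Hz

anti_sound = -noise               # equal amplitude, inverted phase
residual = noise + anti_sound     # what the listener's ear would receive

print(np.max(np.abs(residual)))   # 0.0 -- perfect cancellation in this ideal model
```

Real headphones only approximate this, since the anti-sound must be generated with near-zero delay, but the inverted-phase idea is the same.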

“We added an additional speaker next to the outward-facing microphone. We wanted to see if the modified headphone could sense moving objects, similar to radar,” says co-lead author Yincheng Jin (no relation), a PhD candidate in Jin’s lab.

The speaker and microphone do indeed pick up hand movements. The information is relayed through the SonicASL cellphone app, which contains an algorithm the team created to identify the words and sentences. The app then translates the signs and speaks to the hearing person via the earphones.
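The Doppler principle underlying this sensing step can be illustrated with a short sketch: the added speaker emits a tone, a hand moving toward the microphone reflects it at a slightly higher frequency, and the size of the shift reveals the motion. All parameters here (sample rate, tone frequency, hand speed) are illustrative assumptions, not the values used in SonicASL:

```python
import numpy as np

# A moving reflector shifts an emitted tone by the two-way Doppler factor
# (c + v) / (c - v). Detecting that shift in the echo recovers the motion.
fs = 96_000          # sample rate (Hz), assumed
f0 = 20_000          # emitted tone (Hz), near-inaudible, assumed
c = 343.0            # speed of sound in air (m/s)
v = 1.0              # hand moving toward the mic at 1 m/s (assumed)

f_echo = f0 * (c + v) / (c - v)   # frequency of the reflected tone

t = np.arange(fs) / fs            # one second of samples
echo = np.sin(2 * np.pi * f_echo * t)

# Estimate the echo's frequency from the FFT peak, then invert the formula.
spectrum = np.abs(np.fft.rfft(echo))
f_est = np.argmax(spectrum) * fs / len(echo)
v_est = c * (f_est - f0) / (f_est + f0)
print(round(v_est, 2))            # recovers ~1.0 m/s
```

SonicASL's actual algorithm classifies whole words and sentences from such echo patterns rather than estimating a single velocity, but the shift-detection idea is the physical starting point.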

“We tested SonicASL under different environments, including office, apartment, corridor and sidewalk locations,” says co-lead author Yang Gao, PhD, who completed the research in Jin’s lab before becoming a postdoctoral scholar at Northwestern University. “Although it has seen a slight decrease in accuracy as overall environmental noises increase, the overall accuracy is still quite good, because the majority of the environmental noises do not overlap or interfere with the frequency range required by SonicASL.”
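Gao's point, that everyday noise mostly falls outside the frequencies the system listens to, can be illustrated numerically. The 18–22 kHz sensing band below is an assumption for illustration only; the study's actual band may differ:

```python
import numpy as np

fs = 96_000                       # sample rate (Hz), assumed
t = np.arange(fs) / fs            # one second of samples

# Low-frequency tones standing in for speech and everyday environmental noise,
# plus a weak near-ultrasonic echo standing in for the sensing signal.
speech_like_noise = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2_000 * t)
probe_echo = 0.1 * np.sin(2 * np.pi * 20_000 * t)

def band_energy(x, lo=18_000, hi=22_000):
    """Power carried by x between lo and hi Hz (illustrative sensing band)."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

noise_energy_in_band = band_energy(speech_like_noise)
probe_energy_in_band = band_energy(probe_echo)

# The noise contributes essentially nothing inside the sensing band, so
# band-limiting the microphone signal isolates the echo.
print(noise_energy_in_band < 1e-6 * probe_energy_in_band)  # True
```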

The core SonicASL algorithm can be implemented and deployed on any smartphone, he says.

Illustration shows the acoustic soundwaves created by signing the phrase "I need help."

SonicASL can be adapted for other sign languages

Unlike systems that put the responsibility for “bridging” the communications gap on the deaf, SonicASL flips the script, encouraging the hearing population to make the effort.

An added benefit of SonicASL’s flexibility is that it can be adapted for languages other than ASL, Jin says.

“Different sign languages have diverse features, with their own rules for pronunciation, word formation and word order,” he says. “For example, the same gesture may represent different sign language words in different countries. However, the key functionality of SonicASL is to recognize various hand gestures representing words and sentences in sign languages, which are generic and universal. Although our current technology focuses on ASL, with proper training of the algorithmic model, it can be easily adapted to other sign languages.”

The next steps, says Jin, will be expanding the sign vocabulary that can be recognized and differentiated by SonicASL as well as working to incorporate the ability to read facial expressions.

“The proposed SonicASL aims to develop a user-friendly, convenient and easy-to-use headset-style system to promote and facilitate communication between the deaf and hearing populations,” says Jin.

Additional authors of the study represent the University of Southampton, United Kingdom, and the University of Washington. Henry Adler, PhD, research scientist at the Center for Hearing and Deafness at the University at Buffalo, also contributed.

Media Contact Information

Media Relations (University Communications)
330 Crofts Hall (North Campus)
Buffalo, NY 14260-7015
Tel: 716-645-6969
ub-news@buffalo.edu