Research News

Modified headphones translate sign language using Doppler technology

The headphone-based system uses Doppler technology to sense tiny fluctuations, or echoes, in acoustic soundwaves that are created by the hands of someone signing. Photo: University at Buffalo.

By MELVIN BANKHEAD III

Published September 9, 2021

“SonicASL is an exciting proof of concept that could eventually help greatly improve communication between deaf and hearing populations.”
Zhanpeng Jin, associate professor
Department of Computer Science and Engineering

A UB-led research team has modified noise-cancelling headphones to enable the common electronic device to “see” and translate American Sign Language (ASL) when paired with a smartphone.

Reported in the journal “Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,” the headphone-based system uses Doppler technology to sense tiny fluctuations, or echoes, in acoustic soundwaves that are created by the hands of someone signing.

Dubbed SonicASL, the system proved 93.8% effective in tests performed indoors and outdoors involving 42 words. Word examples include “love,” “space” and “camera.” Under the same conditions involving 30 simple sentences — for example, “Nice to meet you.” — SonicASL was 90.6% effective.

“SonicASL is an exciting proof of concept that could eventually help greatly improve communication between deaf and hearing populations,” says corresponding author Zhanpeng Jin, associate professor in the Department of Computer Science and Engineering at UB.

But before such technology is commercially available, much work must be done, he stresses. For example, SonicASL’s vocabulary must be greatly expanded. Also, the system must be able to read facial expressions, a major component of ASL.

The study will be presented at the ACM Conference on Pervasive and Ubiquitous Computing (UbiComp), taking place Sept. 21-26.

Communication barriers persist

These are the acoustic soundwaves created by signing the phrase "I need help."

Worldwide, according to the World Federation of the Deaf, there are about 72 million deaf people using more than 300 different sign languages.

Although the United Nations recognizes that sign languages are equal in importance to the spoken word, that view is not yet a reality in many nations. People who are deaf or hard of hearing still experience multiple communication barriers.

Traditionally, communication between deaf ASL users and hearing people who do not know the language takes place either in the presence of an ASL interpreter or through a camera setup.

A frequent concern regarding the use of cameras, according to Jin, is that the video recordings could be misused. And while the use of ASL interpreters is becoming more common, there is no guarantee that one will be available when needed.

SonicASL aims to address these issues, especially in casual circumstances without pre-arranged planning and setup, Jin says.

The illustration on the left shows the modifications made to the headphones. The right shows what a user sees on their smartphone. Photo: University at Buffalo.

Modify headphones with speaker, add app

Most noise-cancelling headphones rely on an outward-facing microphone that picks up environmental noise. The headphones then produce an anti-sound — a soundwave with the same amplitude but with an inverted phase of the surrounding noise — to cancel the external noise.
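The anti-sound idea can be sketched in a few lines: a wave with the same amplitude but inverted phase sums with the original to silence. This is a minimal illustration of the cancellation principle described above, not the headphones' actual processing; the sample rate and noise frequency are arbitrary choices.

```python
import numpy as np

# Illustrative active-noise-cancellation sketch: the "anti-sound"
# wave has the same amplitude as the noise but inverted phase,
# so the two sum to zero. All values below are arbitrary.
fs = 1000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s of samples
noise = 0.5 * np.sin(2 * np.pi * 60 * t)  # example 60 Hz noise
anti_sound = -noise                        # inverted phase

residual = noise + anti_sound
print(np.max(np.abs(residual)))  # 0.0: the waves cancel exactly
```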

“We added an additional speaker next to the outward-facing microphone. We wanted to see if the modified headphone could sense moving objects, similar to radar,” says co-lead author Yincheng Jin (no relation), a PhD candidate in Jin’s lab.

The speaker and microphone do indeed pick up hand movements. The information is relayed through the SonicASL cellphone app, which contains an algorithm the team created to identify the words and sentences. The app then translates the signs and speaks the translation to the hearing person through the headphones.
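The underlying physics is the Doppler effect: sound reflected off a moving hand returns with a small frequency shift proportional to the hand's speed. The back-of-the-envelope sketch below uses the standard two-way Doppler approximation; the tone frequency and hand speed are illustrative assumptions, not values from the study.

```python
# Two-way Doppler shift for sound bouncing off a moving hand.
# The 20 kHz tone and 0.5 m/s hand speed are assumed values
# for illustration, not parameters of SonicASL itself.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def doppler_shift(tone_hz, hand_speed_mps):
    """Approximate frequency shift for a reflector moving toward
    a co-located speaker/microphone pair (two-way path)."""
    return 2 * hand_speed_mps * tone_hz / SPEED_OF_SOUND

shift = doppler_shift(20_000, 0.5)
print(f"{shift:.1f} Hz")  # ≈ 58.3 Hz
```

Even slow hand motion therefore produces a measurable shift, which is what allows the modified headphones to "see" signing.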

“We tested SonicASL under different environments, including office, apartment, corridor and sidewalk locations,” says co-lead author Yang Gao, who completed the research in Jin’s lab before becoming a postdoctoral scholar at Northwestern University. “Although it has seen a slight decrease in accuracy as overall environmental noises increase, the overall accuracy is still quite good because the majority of the environmental noises do not overlap or interfere with the frequency range required by SonicASL.”
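Gao's point about noise robustness can be illustrated in the frequency domain: everyday noise sits in a low band, while a near-ultrasonic sensing tone sits far above it, so a simple spectral mask separates the two. All frequencies and amplitudes below are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

# Sketch of frequency-domain separation: low-frequency "environmental
# noise" does not overlap an assumed near-ultrasonic sensing tone,
# so masking the spectrum outside the sensing band recovers the tone.
fs = 48_000
t = np.arange(0, 0.5, 1 / fs)
noise = np.sin(2 * np.pi * 300 * t)             # e.g. speech/traffic band
sensing = 0.1 * np.sin(2 * np.pi * 20_000 * t)  # assumed sensing tone
mixed = noise + sensing

spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
band = (freqs > 19_000) & (freqs < 21_000)      # keep only the sensing band
spectrum[~band] = 0
recovered = np.fft.irfft(spectrum, n=len(mixed))

# The recovered signal closely matches the sensing tone alone:
err = np.max(np.abs(recovered - sensing))
print(err < 0.01)  # True
```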

The core SonicASL algorithm can be implemented and deployed on any smartphone, he says.

SonicASL adaptable for other sign languages

Unlike systems that put the responsibility for “bridging” the communication gap on the deaf, SonicASL flips the script, encouraging the hearing population to make the effort.

An added benefit of SonicASL’s flexibility is that it can be adapted for languages other than ASL, Jin says.

“Different sign languages have diverse features, with their own rules for pronunciation, word formation and word order,” he says. “For example, the same gesture may represent different sign language words in different countries. However, the key functionality of SonicASL is to recognize various hand gestures representing words and sentences in sign languages, which are generic and universal. Although our current technology focuses on ASL, with proper training of the algorithmic model, it can be easily adapted to other sign languages.”

The next steps, says Jin, will be expanding the sign vocabulary that can be recognized and differentiated by SonicASL, as well as working to incorporate the ability to read facial expressions.

“The proposed SonicASL aims to develop a user-friendly, convenient and easy-to-use headset-style system to promote and facilitate communication between the deaf and hearing populations,” says Jin.

Additional authors of the study represent the University of Southampton, United Kingdom, and the University of Washington. Henry Adler, research scientist at UB’s Center for Hearing and Deafness, also contributed.