By Allie Arp, Coordinator of Communications, Grainger College of Engineering

CSL’s Systems and Networking Research Group (SyNRG) is defining a new sub-area of mobile technology that they call “earable computing.” The team believes that earphones will be the next significant milestone in wearable devices, and that new hardware, software, and apps will all run on this platform, according to an article published on the Grainger College of Engineering website.

Earable computing timeline, according to SyNRG.

“The leap from today’s earphones to ‘earables’ would mimic the transformation that we have seen from basic phones to smartphones,” said Romit Roy Choudhury, professor in electrical and computer engineering (ECE). “Today’s smartphones are hardly a calling device anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”

Instead, the group believes tomorrow’s earphones will continuously sense human behavior, run acoustic augmented reality, have Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.

The research questions that underlie earable computing draw from a wide range of fields, including sensing, signal processing, embedded systems, communications, and machine learning. The SyNRG team is at the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.

Zhijian Yang

Computer science PhD student Zhijian Yang and other members of the SyNRG group, including his fellow students Yu-Lin Wei and Liz Li, are leading the way. They have published a series of papers in this area, starting with one on hollow noise cancellation published at ACM SIGCOMM 2018. Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom), each addressing a different aspect of earables research: facial motion sensing, acoustic augmented reality, and voice localization for earphones.

In “Ear-AR: Indoor Acoustic Augmented Reality on Earphones,” the group looks at how smart earphone sensors can track human movement, and, depending on the user’s location, play 3D sounds in the ear.

“If you want to find a store in a mall,” said Yang, “the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me.’ In your ears, the sound would appear to come from the direction in which you should walk, as if it’s a voice escort.”
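To make the “voice escort” idea concrete, here is a minimal sketch of how such a directional cue could be rendered: compute the bearing from the user to the target, then apply an interaural time and level difference so the clip seems to arrive from that side. This is only an illustration under assumed positions, head geometry, and a crude ITD/ILD panning model; it is not the Ear-AR paper’s actual rendering pipeline.

```python
# Minimal sketch (not the Ear-AR pipeline): render a mono clip so it
# appears to come from the direction of a target, using a crude
# interaural time difference (ITD) and level difference (ILD).
# Positions, head geometry, and gains below are illustrative assumptions.
import numpy as np

FS = 44_100             # sample rate (Hz)
HEAD_RADIUS = 0.0875    # approximate head radius (m)
SPEED_OF_SOUND = 343.0  # m/s

def azimuth_to_target(user_xy, heading_rad, target_xy):
    """Angle of the target relative to the user's facing direction (radians)."""
    dx, dy = target_xy[0] - user_xy[0], target_xy[1] - user_xy[1]
    return np.arctan2(dy, dx) - heading_rad

def spatialize(mono, azimuth):
    """Pan a mono clip with a simple ITD/ILD so it seems to arrive from `azimuth`."""
    itd = HEAD_RADIUS * np.sin(azimuth) / SPEED_OF_SOUND   # seconds
    delay = int(round(abs(itd) * FS))                      # samples
    delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    near, far = 1.0, 0.6                                   # crude level difference
    if azimuth >= 0:   # positive azimuth: target toward the left ear
        left, right = near * mono, far * delayed
    else:
        left, right = far * delayed, near * mono
    return np.stack([left, right], axis=1)                 # stereo frames

# Example: a 0.5 s tone standing in for the "follow me" clip, rendered for
# a store 3 m ahead and 4 m to the left of the user.
t = np.arange(int(0.5 * FS)) / FS
tone = 0.3 * np.sin(2 * np.pi * 440 * t)
stereo = spatialize(tone, azimuth_to_target((0.0, 0.0), 0.0, (3.0, 4.0)))
```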

The second paper, “EarSense: Earphones as a Teeth Activity Sensor,” looks at how earphones could sense facial and in-mouth activities such as teeth movements and taps, enabling a hands-free modality of communication to smartphones. Moreover, various medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team is planning to look into analyzing facial muscle movements and emotions with earphone sensors.
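As a rough illustration of how tap-like teeth activity might be picked out of an earphone’s vibration signal, the sketch below band-pass filters the signal and flags short bursts of energy above an adaptive threshold. The sampling rate, filter band, threshold, and refractory time are assumptions made for demonstration, not EarSense’s actual parameters or algorithm.

```python
# Minimal sketch (not EarSense's method): flag tooth-tap-like events as
# short bursts of band-limited energy in a vibration signal.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2_000  # assumed sensor sampling rate (Hz)

def detect_taps(signal, low=100.0, high=600.0, thresh=8.0, refractory=0.2):
    """Return sample indices where tap-like bursts begin."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    envelope = np.abs(filtfilt(b, a, signal))
    level = thresh * np.median(envelope)       # adaptive threshold
    taps, last = [], -np.inf
    for i in np.flatnonzero(envelope > level):
        if i - last > refractory * FS:         # skip samples inside one burst
            taps.append(int(i))
            last = i
    return taps

# Example: background noise with two synthetic "taps" at 1.0 s and 2.5 s.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(4 * FS)
for t0 in (1.0, 2.5):
    i0 = int(t0 * FS)
    x[i0:i0 + 40] += 0.5 * np.sin(2 * np.pi * 300.0 * np.arange(40) / FS)
print(detect_taps(x))   # expect indices near 2000 and 5000
```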

The third publication, “Voice Localization Using Nearby Wall Reflections,” investigates the use of algorithms to detect the direction of a sound. This means that if Alice and Bob are having a conversation, Bob’s earphones would be able to tune into the direction Alice’s voice is coming from.
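The paper’s contribution is localizing a voice with the help of nearby wall reflections; as background, the sketch below shows the textbook two-microphone version of the problem, estimating direction of arrival from the time difference between the ears via cross-correlation. The microphone spacing and far-field model are illustrative assumptions, not the paper’s setup.

```python
# Minimal, textbook sketch of two-microphone direction-of-arrival (DoA)
# estimation from the time difference of arrival (TDOA), found by
# cross-correlation. The MobiCom paper additionally exploits nearby wall
# reflections, which is not reproduced here.
import numpy as np

FS = 16_000             # sample rate (Hz)
MIC_SPACING = 0.18      # assumed ear-to-ear microphone distance (m)
SPEED_OF_SOUND = 343.0  # m/s

def estimate_azimuth(left, right):
    """Estimate source azimuth (radians), positive toward the left ear."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # +lag: sound reached the left ear first
    tdoa = lag / FS                           # t_right - t_left (seconds)
    sin_az = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(sin_az)

# Example: a source whose sound reaches the left ear 2 samples earlier.
rng = np.random.default_rng(1)
src = rng.standard_normal(4000)
left = src
right = np.concatenate([np.zeros(2), src[:-2]])   # delayed copy at the right ear
print(np.degrees(estimate_azimuth(left, right)))  # roughly 14 degrees to the left
```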

Yu-Lin Wei

“We’ve been working on mobile sensing and computing for 10 years,” said Wei. “We have a lot of experience to define this emerging landscape of earable computing.”

Haitham AlHassanieh, assistant professor in ECE, is also involved in this research. The team has been funded by both NSF and NIH, as well as companies like Nokia and Google. See more at the group’s Earable Computing website.

Original Papers: Yang Z, Wei Y-L, Shen S, Choudhury RR. Ear-AR: Indoor acoustic augmented reality on earphones. Paper presented at: MobiCom 2020: The 26th Annual International Conference on Mobile Computing and Networking; September 21-25, 2020; London, United Kingdom.

Prakash J, Yang Z, Wei Y-L, Hassanieh H, Choudhury RR. EarSense: Earphones as a teeth activity sensor. Paper presented at: MobiCom 2020: The 26th Annual International Conference on Mobile Computing and Networking; September 21-25, 2020; London, United Kingdom.

Shen S, Chen D, Wei Y-L, Yang Z, Choudhury RR. Voice localization using nearby wall reflections. Paper presented at: MobiCom 2020: The 26th Annual International Conference on Mobile Computing and Networking; September 21-25, 2020; London, United Kingdom.

Source: Grainger College of Engineering, MobiCom 2020

Images: Grainger College of Engineering