AI has far to go before solving deafness, but along the way are opportunities to ‘reshape hearing healthcare’

Unlike many chronic health issues, permanent hearing problems tend to stem not from molecular or cellular pathologies but from aberrations in brain networks.

For this reason, AI technologies likely can go only so far toward improving on hearing aids and cochlear implants.

However, experts in the field from the U.S. and U.K. expect fertile ground to open for exploration in clinical as well as research arenas.

Here’s neuroengineer Nicolas Lesica of the Ear Institute at University College London, otolaryngologist Fan-Gang Zeng of the Center for Hearing Research at UC-Irvine and colleagues in a commentary published Oct. 18 in Nature Machine Intelligence:

“We envision a future in which the natural links between machine hearing and biological hearing are leveraged to provide effective hearing healthcare across the world and enable progress in hearing’s most complex research challenges.”

After briefing readers on the human auditory system and outlining AI’s potential to augment it, the authors lay out steps that AI developers and hearing specialists might collaboratively pursue en route to designing true artificial auditory systems.

Part of the challenge will be tackling three critical aspects of hearing that artificial auditory systems will need to incorporate, as follows:

1. Temporal processing. Recent research has suggested the existence of a single dedicated neural circuit handling the perception of sounds and related auditory information. But the brain’s millisecond-scale processing of such inputs “seems to rely on a complex interplay between distributed networks in different brain areas,” the authors explain.

One recent project involved training artificial neural networks to perform a variety of auditory tasks involving temporal intervals. The researchers found these networks “exhibited a number of phenomena that have been observed in the brain,” Zeng and co-authors report.

“Further work along these lines is needed to go beyond the analysis of time intervals to tasks involving the processing of complex temporal patterns that are typical of natural sounds,” they write.
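
The commentary itself contains no code, but the flavor of such an experiment is easy to sketch. The minimal, hypothetical PyTorch example below trains a small recurrent network to judge whether the silent gap between two clicks is short or long, a toy stand-in for the interval-timing tasks described above; every name and parameter is illustrative rather than drawn from the studies the authors cite.

```python
# Toy sketch (not from the commentary): train a small recurrent network
# to classify the silent interval between two clicks as short or long.
import torch
import torch.nn as nn

T = 100  # time steps per trial

def make_batch(n=64):
    """Each trial: a click at t=10, then a second click after a variable gap."""
    x = torch.zeros(n, T, 1)
    gaps = torch.randint(5, 60, (n,))          # gap length in time steps
    x[:, 10, 0] = 1.0                          # first click
    x[torch.arange(n), 10 + gaps, 0] = 1.0     # second click
    y = (gaps > 30).long()                     # label: 1 = "long" interval
    return x, y

class IntervalNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)

    def forward(self, x):
        _, h = self.rnn(x)                     # h: final hidden state
        return self.readout(h.squeeze(0))

net = IntervalNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    x, y = make_batch()
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    x, y = make_batch(n=256)
    acc = (net(x).argmax(dim=1) == y).float().mean()
print(f"toy interval-discrimination accuracy: {acc:.2f}")
```

In real studies along these lines, the interesting part comes after training: researchers probe the network’s hidden units for timing-related activity and compare it against recordings from actual neurons.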

2. Multimodal processing. To realistically replicate nature’s auditory system, artificial neural networks “must ultimately integrate other sensorimotor modalities with the flexibility to perform a wide range of different tasks just as the brain does,” the authors state. That’s because normal hearing only begins with the ears. For the brain to make sense of incoming sounds, it has to immediately begin integrating auditory information with signals from other senses and the motor system.

“Explicit attempts to model multimodal properties in isolation are unlikely to be useful (beyond providing a compact description of the phenomena),” Zeng and colleagues write. “But if networks with appropriate features are trained on a wide variety of tasks, multimodal flexibility will emerge, just as it has in the brain.”
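
As a rough illustration of what such architectural flexibility might look like, the hypothetical sketch below fuses audio and visual features into one shared representation and trains separate heads for different tasks. The task names and dimensions are invented for the example, not taken from the commentary.

```python
# Toy sketch (illustrative only): one shared trunk fuses audio and visual
# features; separate heads handle different tasks, in the spirit of
# "train one network on many tasks and let multimodal flexibility emerge."
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, audio_dim=64, video_dim=32, hidden=128):
        super().__init__()
        self.audio_enc = nn.Linear(audio_dim, hidden)   # encode each modality
        self.video_enc = nn.Linear(video_dim, hidden)
        self.trunk = nn.Sequential(
            nn.ReLU(), nn.Linear(hidden, hidden), nn.ReLU()
        )
        # Hypothetical task heads sharing one representation:
        self.heads = nn.ModuleDict({
            "speaker_id":   nn.Linear(hidden, 10),  # who is talking?
            "phoneme":      nn.Linear(hidden, 40),  # which speech sound?
            "localization": nn.Linear(hidden, 2),   # where did it come from?
        })

    def forward(self, audio, video, task):
        z = self.trunk(self.audio_enc(audio) + self.video_enc(video))
        return self.heads[task](z)

net = MultimodalNet()
audio, video = torch.randn(8, 64), torch.randn(8, 32)
print(net(audio, video, task="phoneme").shape)  # torch.Size([8, 40])
```

The design point to notice is that nothing in the shared trunk is task-specific; any multimodal structure must emerge from the demands of the tasks themselves, which is exactly the emergence the authors anticipate.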

3. Plasticity. Recent research has hypothesized that the effectiveness of cochlear implants depends on how well the technology enables neuroplasticity. The better it is at this, the more it helps the brain adapt to an entirely new way of receiving auditory signals: electrical stimulation of the auditory nerve rather than sound arriving through the ear canal, Zeng and co-authors note.

There has been no shortage of training strategies designed to encourage this kind of plasticity, the authors point out, but no approach has fully succeeded.

Artificial networks that accurately model auditory plasticity after hearing restoration “would allow a systematic exploration of different training strategies to determine the conditions under which each is optimal,” Zeng and co-authors comment.

“Of course, there is no guarantee that training strategies that are optimal for the artificial system will prove useful for human users,” they add. “But the likelihood of successful translation will be increased if the key features of the artificial and biological systems are closely matched.”
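
What might a systematic exploration of training strategies look like in practice? A purely hypothetical toy version is sketched below: the same small network is trained under two rehabilitation-style schedules on inputs degraded to mimic an implant’s coarse signal, and the resulting accuracies are compared. None of this reflects the authors’ models; it only illustrates the experimental logic.

```python
# Toy sketch (not the authors' method): compare two hypothetical
# rehabilitation-style schedules for a network learning to classify
# inputs degraded the way an implant coarsens sound. All parameters
# are invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
prototypes = torch.randn(10, 20)   # 10 idealized "sound" classes

def make_batch(degradation, n=128):
    y = torch.randint(0, 10, (n,))
    x = prototypes[y] + degradation * torch.randn(n, 20)  # noisier = harder
    return x, y

def train(schedule, steps=400):
    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(steps):
        x, y = make_batch(degradation=schedule(step / steps))
        loss = nn.functional.cross_entropy(net(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                       # test at full degradation
        x, y = make_batch(degradation=2.0, n=1000)
        return (net(x).argmax(dim=1) == y).float().mean().item()

acc_abrupt = train(lambda t: 2.0)            # fully degraded from day one
acc_ramped = train(lambda t: 0.5 + 1.5 * t)  # degradation ramps up gradually
print(f"abrupt: {acc_abrupt:.2f}  ramped: {acc_ramped:.2f}")
```

In a serious version, the degradation model and the network would be matched to cochlear-implant signal processing and the auditory pathway; the toy only shows why a simulated listener makes comparing strategies cheap.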

Zeng et al. conclude:

“Ongoing collaboration between AI researchers and hearing researchers would create a win-win situation for both communities and also help to ensure that new technologies are well matched to the needs of users. … This is not the first call for the AI and hearing communities to come together, but, given the immense opportunities created by recent developments, we are hopeful that it will be the last.”

The commentary is available in full for free.

Dave Pearson

Dave Pearson has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
