The 2nd Virtual Conference on Computational Audiology (VCCA2021) will take place on June 25, 2021. According to the conference organizers, the scientific program will combine “keynotes as well as featured and invited talks with scientific contributions to highlight the wide range of world-class research and hot topics in computational audiology.” Two special sessions will be held to showcase and discuss applications of Big Data and to exchange knowledge for addressing the global burden of hearing loss.

Related article: Deep Neural Networks in Hearing Devices

Please click here for the full conference program. For program highlights, including special session videos, please click here.

The program will be organized into three main blocks, to allow for participation from different time zones. Click here to register for free.

Keynote speakers, topics, and bios:
Professor Brian CJ Moore
Emeritus Professor of Auditory Perception
Dept. of Experimental Psychology,
University of Cambridge
Topic: Time-efficient hearing tests and their use in the fitting of hearing aids
Brian’s research focuses on hearing and hearing loss, especially the perceptual analysis of complex sounds. He has played a central role in the development of models of masking and of loudness. He has made contributions to the design of hearing aids, especially amplitude compression systems. He also led the development of a method for fitting wide bandwidth hearing aids. Recently he has contributed to the development of efficient diagnostic tests of hearing. He is a Fellow of the Royal Society, the Academy of Medical Sciences, the Acoustical Society of America, and the Audio Engineering Society.
Professor Josh McDermott
Associate Professor, Department of Brain and Cognitive Sciences, MIT
Faculty Member, Program in Speech and Hearing Bioscience and Technology
Topic: New Models of Human Hearing via Deep Learning
Josh is a perceptual scientist studying sound, hearing, and music. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. He is particularly interested in using the gap between human and machine competence both to better understand biological hearing and to design better algorithms for analyzing sound.
Professor Mounya Elhilali
Professor and Charles Renn Faculty Scholar
Dept of Electrical and Computer Engineering
Dept of Psychology and Brain Sciences
Johns Hopkins University
Topic: Auditory salience
Mounya’s research examines sound processing by humans and machines in noisy soundscapes and investigates reverse-engineering the intelligent processing of sound by brain networks, with applications to speech and audio technologies and medical systems. Her work examines the neural and computational underpinnings of auditory scene analysis and the role of attention and context in guiding perception and behavior.
Professor Nicholas Lesica
Professor of Neuroengineering and Wellcome Trust Senior Research Fellow,
Ear Institute, University College London
Topic: Harnessing the power of AI to combat the global burden of hearing loss: Opportunities and challenges
Nick’s research is focused on the study of hearing and hearing loss from the perspective of the neural code (the activity patterns that carry information about sound along the auditory pathway). He uses large-scale electrophysiology in animal models to study how hearing loss distorts the neural code and to develop new ideas for how assistive devices might correct these distortions.
Featured talks and topics:
Dr Simone Graetzer
Research Fellow
University of Salford
Clarity: Machine learning challenges for improving hearing aid processing of speech in noise
Dr Maartje Hendrikse
Marie Curie Fellow
Erasmus MC Rotterdam
Virtual audiovisual environments for hearing aid evaluation (and fitting)
Dr Niels Pontoppidan
Research Area Manager
Eriksholm Research Centre
Learning from audiological data collected in the lab and the real world
Dr Raul Sanchez-Lopez
Post-Doc at DTU
Interacoustics Research Unit
Hearing deficits and auditory profiling: Data-driven approaches towards personalized audiology
Dr Josef Schlittenlacher
Lecturer
University of Manchester
Machine learning for models of auditory perception

Special sessions:

Big data, data sharing, and data pooling across countries in audiology
Chaired by Professor Waldo Nogueira, with presentations by:

Interactive discussions on Addressing the Global Burden of Hearing Loss:
Chaired by Dr Saima Rajasingam and Dr Alan Archer-Boyd

  • Hearing diagnostics and services of the future – Ensuring wide and equitable access to hearing healthcare
  • Hearing devices of the future – Overcoming barriers of stigma, logistics, costs, and efficacy

Source: VCCA2021

Images: VCCA2021