I develop brain-controlled selective hearing systems that decode a listener’s focus from neural signals to enhance the attended speaker in noisy, multi-talker environments. My work brings together auditory attention decoding, invasive brain recordings, signal processing, and machine learning to enable real-time, perception-aligned audio processing.
My Ph.D. research goal is to create listener-centered auditory technologies that adapt to the user's intention, thereby advancing cognitively informed hearing systems.
The projects listed below are from my Ph.D. work. For earlier projects from my undergraduate years, which focused more on biosignal processing and communication systems, please see this page for demos.
Vishal Choudhari, Maximilian Nentwich, Sarah Johnson, Jose L. Herrero, Stephan Bickel, Ashesh D. Mehta, Daniel Friedman, Adeen Flinker, Edward F. Chang, Nima Mesgarani
Preprint, under revision at Nature Neuroscience
Vishal Choudhari, Cong Han, Stephan Bickel, Ashesh D. Mehta, Catherine Schevon, Guy M. McKhann, Nima Mesgarani
🏆 Third Place Winner at 2024 International BCI Competition
Advanced Science 2023
Xilin Jiang, Junkai Wu, Vishal Choudhari, Nima Mesgarani
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2025)
🏆 Best Paper Award Winner at WASPAA 2025
[arXiv]
Corentin Puffay, Gavin Mischler, Vishal Choudhari, Jonas Vanthornhout, Stephan Bickel, Ashesh D. Mehta, Catherine A. Schevon, Guy M. McKhann, Hugo Van hamme, Tom Francart, Nima Mesgarani
Preprint, in review
[bioRxiv]