• | Ceolini, E., Kiselev, I., Liu, S-C. (2020). Evaluating multi-channel multi-device speech separation algorithms in the wild: a hardware-software solution. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[bibtex] [pdf] |
• | Fuglsang, Søren A.; Märcher-Rørsted, Jonatan; Dau, Torsten; Hjortkjær, Jens (2020). Effects of Sensorineural Hearing Loss on Cortical Synchronization to Competing Speech during Selective Attention. Journal of Neuroscience, 40(12), 2562-2572.
[bibtex] [pdf] [doi] |
• | Di Liberto, G.M. (2019). Studiare il cervello con un audio-libro: un approccio moderno per le neuroscienze del linguaggio [Studying the brain with an audiobook: a modern approach to the neuroscience of language]. Sapere, 1, 28-33.
[bibtex] [doi] |
• | Di Liberto, Giovanni M.; Pelofi, Claire; Bianco, Roberta; Patel, Prachi; Mehta, Ashesh D.; Herrero, Jose L.; de Cheveigné, Alain; Shamma, Shihab; Mesgarani, Nima (2019). Cortical encoding of melodic expectations in human temporal cortex. bioRxiv.
[bibtex] [doi] |
• | Di Liberto, G.M., Pelofi, C., Shamma, S., de Cheveigné, A. (2019). Musical expertise enhances the cortical tracking of the acoustic envelope during naturalistic music listening. Acoustical Science and Technology (Japan), in press.
[bibtex] |
• | Di Liberto, G.M. (2019). Investigating the cortical tracking of melodic expectation during naturalistic music listening. ARO Midwinter meeting (abstract).
[bibtex] |
• | Di Liberto, G.M., Wong, D.D.E., Melnik, G.A., de Cheveigné, A. (2019). Low-frequency cortical responses to natural speech reflect probabilistic phonotactics. NeuroImage, 196, 237-247.
[bibtex] [doi] |
• | de Cheveigné, Alain (2019). ZapLine: a simple and effective method to remove power line artifacts. NeuroImage, 207, 116356.
[bibtex] [doi] |
• | de Cheveigné, A., Nelken, I. (2019). Filters: when, why, and how (not) to use them. Neuron, 102, 280-293.
[bibtex] |
• | de Cheveigné, Alain; Di Liberto, Giovanni M.; Arzounian, Dorothée; Wong, Daniel; Hjortkjaer, Jens; Asp Fuglsang, Soren; Parra, Lucas C. (2019). Multiway Canonical Correlation Analysis of Brain Signals. NeuroImage, 186, 728-740.
[bibtex] [doi] |
• | Luo, Y., Ceolini, E., Han, C., Liu, S-C., Mesgarani, N. (2019). FaSNet: Low-latency adaptive beamforming for multi-microphone audio processing. IEEE Automatic Speech Recognition and Understanding (ASRU) Workshop.
[bibtex] |
• | Ceolini, E., Liu, S-C. (2019). Combining deep neural networks and beamforming for real-time multi-channel speech enhancement using a wireless acoustic sensor network. IEEE International Workshop on Machine Learning for Speech Processing (MLSP 2019).
[bibtex] |
• | Bury, G., Zhao, S., Milne, A., Chait, M. (2019). Pupil dilation as an objective measure of effort to sustain attention in young and older listeners. ARO Midwinter meeting (abstract).
[bibtex] |
• | Alickovic, Emina; Lunner, Thomas; Gustafsson, Fredrik; Ljung, Lennart (2019). A Tutorial on Auditory Attention Identification Methods. Frontiers in Neuroscience.
[bibtex] [doi] |
• | Zhao, Sijia; Wai Yum, Nga; Benjamin, Lucas; Benhamou, Elia; Yoneya, Makoto; Furukawa, Shigeto; Dick, Fred; Slaney, Malcolm; Chait, Maria (2019). Rapid ocular responses are modulated by bottom-up driven auditory salience. Journal of Neuroscience.
[bibtex] [pdf] [doi] |
• | Wong, Daniel D.E.; Di Liberto, Giovanni M.; de Cheveigné, Alain (2019). Accurate Modeling of Brain Responses to Speech. bioRxiv.
[bibtex] [pdf] [doi] |
• | Molloy, K., Lavie, N., Chait, M. (2019). Auditory Figure-Ground Segregation is Impaired by High Visual Load. Journal of Neuroscience, 35, 16046-16054.
[bibtex] [pdf] [doi] |
• | Gao, G., Braun, S., Kiselev, I., Anumula, J., Delbruck, T., Liu, S-C. (2019). Live demonstration: Real-time spoken digit recognition using the DeltaRNN accelerator. IEEE International Symposium on Circuits and Systems.
[bibtex] |
• | Gao, G., Braun, S., Kiselev, I., Anumula, J., Delbruck, T., Liu, S-C. (2019). Real-time speech recognition for IoT purpose using a delta recurrent neural network accelerator. IEEE International Symposium on Circuits and Systems.
[bibtex] |
• | Wong, Daniel D.E.; Hjortkjær, Jens; Ceolini, Enea; Nielsen, Søren Vørnle; Rotger-Griful, Sergi; Fuglsang, Søren; Chait, Maria; Lunner, Thomas; Dau, Torsten; Liu, Shih-Chii; de Cheveigné, Alain (2018). A closed-loop platform for real-time attention control of simultaneous sound streams. ARO Midwinter meeting (abstract).
[bibtex] |
• | Favre-Felix, A., Hietkamp, R., Graversen, C., Dau, T., Lunner, T. (2018). Steering of audio input in hearing aids by eye gaze through electrooculogram. ARO Midwinter meeting (abstract).
[bibtex] |
• | Di Liberto, G.M., Wong, D.D.E., Melnik, G.A., de Cheveigné, A. (2018). Cortical responses to natural speech reflect probabilistic phonotactics. Attention to Sound workshop, Chicheley Hall, UK.
[bibtex] |
• | de Cheveigné, Alain; Di Liberto, Giovanni M.; Arzounian, Dorothée; Wong, Daniel; Hjortkjaer, Jens; Asp Fuglsang, Soren; Parra, Lucas C. (2018). Multiway Canonical Correlation Analysis of Brain Signals. bioRxiv.
[bibtex] [pdf] [doi] |
• | Ceolini, Enea; Anumula, Jithendar; Huber, Adrian; Kiselev, Ilya; Liu, Shih-Chii (2018). Speaker Activity Detection and Minimum Variance Beamforming for Source Separation. Interspeech 2018.
[bibtex] |
• | Braun, S., Neil, D., Anumula, J., Ceolini, E., Liu, S-C. (2018). Multi-channel Attention for End-to-End Speech Recognition. Interspeech 2018.
[bibtex] |
• | Wong, Daniel D.E.; Fuglsang, Søren A.; Hjortkjær, Jens; Ceolini, Enea; Slaney, Malcolm; de Cheveigné, Alain (2018). A Comparison of Temporal Response Function Estimation Methods for Auditory Attention Decoding. bioRxiv.
[bibtex] [pdf] [doi] |
• | Wong, D.D.E., Fuglsang, S., Hjortkjær, J., Di Liberto, G.M., de Cheveigné, A. (2018). Classifying attended talker from EEG using artificial neural networks. ARO Midwinter meeting (abstract).
[bibtex] |
• | Märcher-Rørsted, Jonatan; Fuglsang, Søren; Wong, Daniel; Dau, Torsten; de Cheveigné, Alain; Hjortkjær, Jens (2018). Closed-loop BCI Control of Auditory Feedback Using Selective Auditory Attention (abstract). ICHON 2018.
[bibtex] [pdf] |
• | Jean, H., Pressnitzer, D., Di Liberto, G.M. (2018). Fast decoding of auditory steady-state responses in an informational masking paradigm. CuttingEEG workshop, Paris.
[bibtex] |
• | Jagiello, R., Pomper, U., Yoneya, M., Zhao, S., Chait, M. (2018). Rapid Brain Responses to Familiar vs. Unfamiliar Music - an EEG and Pupillometry Study. bioRxiv.
[bibtex] [pdf] |
• | Hjortkjær, J., Märcher-Rørsted, J., Fuglsang, S.A., Dau, T. (2018). Cortical oscillations and entrainment in speech processing during working memory load. European Journal of Neuroscience.
[bibtex] [doi] |
• | Hjortkjær, J. (2018). Cognitive Control of a Hearing Aid. Workshop on Auditory Machine Learning, DTU.
[bibtex] |
• | Fuglsang, S. (2018). Cognitive Control of a Hearing Aid -- COCOHA. Audiologisk Årsmøde (Annual meeting of the Danish Audiology Society, 2018).
[bibtex] |
• | Di Liberto, Giovanni M.; Wong, Daniel; Melnik, Gerda Ana; de Cheveigné, Alain (2018). Cortical responses to natural speech reflect probabilistic phonotactics. bioRxiv.
[bibtex] [pdf] [doi] |
• | Dau, T. (2018). From data-driven auditory profiling to scene-aware signal processing in hearing aids. Deutsche Gesellschaft für Audiologie (DGA 2018).
[bibtex] |
• | de Cheveigné, A., Wong, D.D.E., Di Liberto, G.M., Hjortkjær, J., Slaney, M., Lalor, E. (2018). Decoding the auditory brain with canonical component analysis. NeuroImage, 172, 206-216.
[bibtex] [pdf] [doi] |
• | de Cheveigné, A., Arzounian, D. (2018). Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data. NeuroImage, 172, 903-912.
[bibtex] [pdf] [doi] |
• | Alickovic, E., Lunner, T., Gustafsson, F., Ljung, L. (2018). Auditory Attention Identification Methods: A Review. Frontiers in Neuroscience.
[bibtex] |
• | Alickovic, E., Lunner, T., Gustafsson, F. (2018). A Correlation-Based Learning Approach to Determining Listening Attention from EEG Signals. EMBC '18: 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[bibtex] |
• | Alickovic, E., Lunner, T., Graversen, C., Gustafsson, F. (2018). A sparse estimation approach to modeling listening attention from EEG signals. IEEE Transactions on Biomedical Engineering.
[bibtex] |
• | Wong, Daniel D.E.; Fuglsang, Søren A.; Hjortkjær, Jens; Ceolini, Enea; Slaney, Malcolm; de Cheveigné, Alain (2018). A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding. Frontiers in Neuroscience, 12, 531.
[bibtex] [pdf] [doi] |
• | Pomper, U., Chait, M. (2017). The impact of visual gaze direction on auditory object tracking. Scientific Reports, 7(1), 4640.
[bibtex] [doi] |
• | Kiselev, I., Ceolini, E., Wong, D., de Cheveigné, A., Liu, S.-C. (2017). WHISPER: Wirelessly Synchronized Distributed Audio Sensor Platform. IEEE Workshop on Practical Issues in Building Sensor Network Applications.
[bibtex] |
• | Fuglsang, S.A., Dau, T., Hjortkjaer, J. (2017). Noise-robust cortical tracking of attended speech in real-world acoustic scenes. NeuroImage, 156, 435-444.
[bibtex] [pdf] |
• | Fiedler, L., Wöstmann, M., Graversen, C., Brandmeyer, A., Lunner, T., Obleser, J. (2017). Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech. J. Neural Eng., 14(3), 036020.
[bibtex] [doi] |
• | Favre-Felix, A., Hietkamp, R.K., Graversen, C., Dau, T., Lunner, T. (2017). Steering of audio input in hearing aids by eye gaze through electrooculography. Trends in Hearing.
[bibtex] |
• | Favre-Felix, A., Graversen, C., Dau, T., Lunner, T. (2017). Real-time estimation of eye gaze by in-ear electrodes. Conf Proc IEEE Eng Med Biol Soc.
[bibtex] [doi] |
• | Hjortkjær, Jens; Kassuba, Tanja; Madsen, Kristoffer H; Skov, Martin; Siebner, Hartwig R (2017). Task-Modulated Cortical Representations of Natural Sound Source Categories. Cerebral Cortex, 1-12.
[bibtex] [pdf] [doi] |
• | de Cheveigné, A., Wong, D., Di Liberto, G., Hjortkjaer, J., Slaney, M., Lalor, E. (2017). Decoding the auditory brain with canonical correlation analysis. bioRxiv.
[bibtex] [pdf] |
• | de Cheveigné, A., Arzounian, D. (2017). Robust detrending, rereferencing, outlier detection, and inpainting for multichannel data. bioRxiv.
[bibtex] [pdf] |
• | Ceolini, Enea; Liu, S-C. (2017). Impact of low precision deep regression networks on single-channel source separation. IEEE International Conference on Acoustics, Speech and Signal Processing.
[bibtex] |
• | Andrillon, T.; Pressnitzer, D.; Léger, D.; Kouider, S. (2017). Formation and suppression of acoustic memories in human sleep. Nature Communications, 8, 179.
[bibtex] [doi] |
• | Pelofi, C., de Gardelle, V., Egré, P., Pressnitzer, D. (2017). Interindividual variability in auditory scene analysis revealed by confidence judgements. Philosophical Transactions of the Royal Society B: Biological Sciences.
[bibtex] [doi] |
• | Hjortkjær, J., Märcher-Rørsted, J. (2017). Digit speech: A test material for measuring ongoing speech perception. International Symposium on Auditory and Audiological Research (ISAAR 2017).
[bibtex] |
• | Hjortkjær, J., Dau, T. (2017). Neural tracking of attended talkers in natural environments. SFB Workshop: The Active Auditory System, Physikzentrum Bad Honnef.
[bibtex] |
• | Hjortkjaer, J. (2017). Dynamics of cortical oscillations during an auditory N-back task. International Symposium on Auditory and Audiological Research (ISAAR 2017).
[bibtex] |
• | Hjortkjær, J. (2017). Single-trial EEG measures of attention to speech in a multi-talker scenario. Danish Acoustical Society Meeting (DAS, 2015).
[bibtex] |
• | Fuglsang, S.A. (2017). Deciphering attentional modulations of single-trial EEG in real-world acoustic scenes. 13th Congress of the European Federation of Audiology Societies (EFAS 2017).
[bibtex] |
• | Fuglsang, S., Hjortkjær, J., Dau, T. (2017). Neural reconstructions of speech in reverberant multi-talker environments. Association for Research in Otolaryngology 40th Midwinter Meeting (ARO 2017).
[bibtex] |
• | Dau, T., de Cheveigné, A. (2017). Towards a cognitively controlled hearing aid. Acoustical Society of America Meeting (ASA 2017).
[bibtex] |
• | Lin, I-Fan; Agus, Trevor R.; Suied, Clara; Pressnitzer, Daniel; Yamada, Takashi; Komine, Yoko; Kato, Nobumasa; Kashino, Makio (2016). Fast response to human voices in autism. Scientific Reports, 6, 26336.
[bibtex] [pdf] [doi] |
• | Wong, Daniel D.E.; Pomper, Ulrich; Alickovic, Emina; Hjortkjær, Jens; Slaney, Malcolm; Shamma, Shihab; de Cheveigné, Alain (2016). Decoding Speech Sound Source Direction from Electroencephalography Data. ARO Midwinter meeting (abstract).
[bibtex] |
• | de Cheveigné, A. (2016). Sparse Time Artifact Removal. Journal of Neuroscience Methods, 262, 14-20.
[bibtex] [pdf] [doi] |
• | Bates, D.I.R., Pomper, U., Chait, M. (2016). Attentive object tracking in busy scenes. ARO Midwinter meeting (abstract).
[bibtex] |
• | Yang, Minhao; Chien, Chen-Han; Delbruck, Tobi; Liu, Shih-Chii (2016). A 0.5V 55μW 64×2-Channel Binaural Silicon Cochlea for Event-Driven Stereo-Audio Sensing. IEEE ISSCC conference.
[bibtex] |
• | Neil, D., Purghart, M., Liu, S-C. (2016). Effective sensor fusion with event-based sensors and deep network architectures. IEEE International Symposium on Circuits and Systems, May 24--27, Montreal, Canada.
[bibtex] |
• | Kiselev, I., Neil, D., Liu, S-C. (2016). Event-driven deep neural network hardware system for sensor fusion (live demonstration). IEEE International Symposium on Circuits and Systems, May 24--27, Montreal, Canada.
[bibtex] |
• | Kiselev, I., Neil, D., Liu, S-C. (2016). Event-driven deep neural network hardware system for sensor fusion. IEEE International Symposium on Circuits and Systems, May 24--27, Montreal, Canada.
[bibtex] |
• | Hjortkjaer, J. (2016). EEG decoding of continuous speech in realistic acoustic scenes. 8th Speech in Noise Workshop (SpiN 2016).
[bibtex] |
• | Hjortkjaer, J. (2016). Noise-robust neural tracking of attended talkers in real-world acoustic scenarios. Arches Meeting, Zurich.
[bibtex] |
• | Hjortkjær, J., Dau, T. (2016). Low-frequency neural oscillations in auditory stream segregation. Association for Research in Otolaryngology 39th Midwinter Meeting (ARO 2016).
[bibtex] |
• | Fiedler, L., Obleser, J., Lunner, T., Graversen, C. (2016). Ear-EEG allows extraction of neural responses in challenging listening scenarios - A future technology for hearing aids? EMBC'16: 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[bibtex] |
• | Riday, Cosimo; Bhargava, Saurabh; Hahnloser, Richard H.R.; Liu, Shih-Chii (2016). Monaural Source Separation Using a Random Forest Classifier. Interspeech 2016.
[bibtex] |
• | Zai, Anja; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii (2015). Reconstruction of audio waveforms from spike trains of artificial cochlea models. Frontiers in Neuroscience, 9.
[bibtex] [pdf] [doi] |
• | Santurette, S., Hjortkjær, J. (2015). From ear to brain (Keynote). Fifth Nordic Conference: Hearing - Cognition - Communication.
[bibtex] |
• | Hjortkjær, J., Dau, T. (2015). Single-trial EEG measures of attention to speech in a multi-talker scenario (best poster award). 7th Speech in Noise Workshop (SpiN 2015).
[bibtex] |
• | Hjortkjær, J. (2015). EEG stimulus reconstruction and attention decoding. Scientific Workshop of the IcanHear Initial Training Network.
[bibtex] |
• | de Cheveigné, A., Arzounian, D. (2015). Scanning for oscillations. Journal of Neural Engineering, 12, 066020.
[bibtex] [pdf] [doi] |