Temporal recurrence as a general mechanism to explain neural responses in the auditory system

Bibliographic details
Published in: bioRxiv (Feb 4, 2025)
Lead author: Ulysse Rançon
Other authors: Masquelier, Timothée; Cottereau, Benoit R
Publisher: Cold Spring Harbor Laboratory Press
Access online: Citation/Abstract
Full Text - PDF
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3153304477
003 UK-CbPIL
022 |a 2692-8205 
024 7 |a 10.1101/2025.01.08.631909  |2 doi 
035 |a 3153304477 
045 0 |b d20250204 
100 1 |a Ulysse Rançon 
245 1 |a Temporal recurrence as a general mechanism to explain neural responses in the auditory system 
260 |b Cold Spring Harbor Laboratory Press  |c Feb 4, 2025 
513 |a Working Paper 
520 3 |a Computational models of neural processing in the auditory cortex usually ignore that neurons have an internal memory: they characterize their responses through simple convolutions with a finite temporal window of arbitrary duration. To circumvent this limitation, we propose a new, simple, and fully recurrent neural network (RNN) architecture that incorporates cutting-edge computational blocks from the deep learning community and constitutes the first attempt to model auditory responses with deep RNNs. We evaluated the ability of this approach to fit neural responses from 8 publicly available datasets, spanning 3 animal species and 6 auditory brain areas, representing the largest compilation of its kind. Our recurrent models significantly outperform previous methods, as well as a new Transformer-based architecture of our own design, on this task, suggesting that temporal recurrence is key to explaining auditory responses. Finally, we developed a novel interpretation technique to reverse-engineer any pretrained model, regardless of its stateful or stateless nature. Largely inspired by work on explainable artificial intelligence (xAI), our method suggests that auditory neurons have a much longer memory (several seconds) than indicated by current STRF techniques. Together, these results strongly motivate the use of deep RNNs within computational models of sensory neurons, as protean building blocks capable of assuming any function.
Competing Interest Statement: The authors have declared no competing interest.
Footnotes: * Updated reports of performance on AA1 datasets after a correction in the data preprocessing methods. * https://github.com/urancon/deepSTRF 
653 |a Sensory neurons 
653 |a Brain architecture 
653 |a Somatosensory cortex 
653 |a Hearing 
653 |a Cortex (auditory) 
653 |a Artificial intelligence 
653 |a Computational neuroscience 
653 |a Information processing 
653 |a Auditory discrimination learning 
653 |a Auditory system 
653 |a Deep learning 
653 |a Temporal lobe 
653 |a Neural networks 
700 1 |a Masquelier, Timothée 
700 1 |a Cottereau, Benoit R 
773 0 |t bioRxiv  |g (Feb 4, 2025) 
786 0 |d ProQuest  |t Biological Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3153304477/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3153304477/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u https://www.biorxiv.org/content/10.1101/2025.01.08.631909v2