PAM: Predictive attention mechanism for neural decoding of visual perception

Bibliographic Details
Published in: bioRxiv (Feb 8, 2025)
Main Author: Dado, Thirza
Other Authors: Le, Lynn; Van Gerven, Marcel; Güçlütürk, Yağmur; Güçlü, Umut
Published:
Cold Spring Harbor Laboratory Press
Subjects:
Online Access: Citation/Abstract
Full Text - PDF
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3165216348
003 UK-CbPIL
022 |a 2692-8205 
024 7 |a 10.1101/2024.06.04.596589  |2 doi 
035 |a 3165216348 
045 0 |b d20250208 
100 1 |a Dado, Thirza 
245 1 |a PAM: Predictive attention mechanism for neural decoding of visual perception 
260 |b Cold Spring Harbor Laboratory Press  |c Feb 8, 2025 
513 |a Working Paper 
520 3 |a In neural decoding, reconstruction seeks to create a literal image from information in brain activity, typically achieved by mapping neural responses to a latent representation of a generative model. A key challenge in this process is understanding how information is processed across visual areas to effectively integrate their neural signals. This requires an attention mechanism that selectively focuses on neural inputs based on their relevance to the task of reconstruction --- something conventional attention models, which capture only input-input relationships, cannot achieve. To address this, we introduce predictive attention mechanisms (PAMs), a novel approach that learns task-driven "output queries" during training to focus on the neural responses most relevant for predicting the latents underlying perceived images, effectively allocating attention across brain areas. We validate PAM with two datasets: (i) B2G, which contains GAN-synthesized images, their original latents and multi-unit activity data; (ii) Shen-19, which includes real photographs, their inverted latents and functional magnetic resonance imaging data. Beyond achieving state-of-the-art reconstructions, PAM offers a key interpretative advantage through the availability of (i) attention weights, revealing how the model's focus was distributed across visual areas for the task of latent prediction, and (ii) values, capturing the stimulus information decoded from each area. Competing Interest Statement: The authors have declared no competing interest. Footnotes: * This revision improves the clarity of the explanation of PAM. The results themselves remain unchanged. 
653 |a Attention task 
653 |a Image processing 
653 |a Visual perception 
653 |a Unit activity 
653 |a Magnetic resonance imaging 
653 |a Information processing 
653 |a Functional magnetic resonance imaging 
653 |a Attention 
653 |a Neuroimaging 
653 |a Neural coding 
700 1 |a Le, Lynn 
700 1 |a Van Gerven, Marcel 
700 1 |a Güçlütürk, Yağmur 
700 1 |a Güçlü, Umut 
773 0 |t bioRxiv  |g (Feb 8, 2025) 
786 0 |d ProQuest  |t Biological Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3165216348/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3165216348/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u https://www.biorxiv.org/content/10.1101/2024.06.04.596589v2
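The abstract's central distinction is that PAM's queries are learned parameters ("output queries") rather than projections of the input, so attention is driven by the latent-prediction task. The following is a minimal NumPy sketch of that idea only, not the authors' implementation; all shapes, weight matrices, and the random data are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8         # embedding dimension (illustrative)
n_resp = 5    # neural-response tokens, e.g. one per visual area (hypothetical)
n_latent = 3  # latent dimensions to predict (hypothetical)

# Neural responses embedded as key/value tokens (random stand-in data).
X = rng.standard_normal((n_resp, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

# Core idea: queries are trainable parameters, not functions of the input,
# so the attention pattern reflects what the prediction task needs.
Q = rng.standard_normal((n_latent, d))  # would be learned during training

K, V = X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d)
# Row-wise softmax: each latent's attention over response tokens sums to 1.
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
latents = weights @ V  # predicted latent representation
```

In this sketch, `weights` plays the interpretative role the abstract describes: each row shows how one predicted latent distributes its focus across the response tokens, while `V` carries the stimulus information decoded from each of them.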