Sparse Representations in Artificial and Biological Neural Networks

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Bricken, Trenton
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Neurosciences; Artificial intelligence
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3216755033
003 UK-CbPIL
020 |a 9798280715219 
035 |a 3216755033 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Bricken, Trenton 
245 1 |a Sparse Representations in Artificial and Biological Neural Networks 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a This thesis explores how sparsity, the idea that only a small fraction of neurons are active at any time, is a common thread connecting biological brains and artificial intelligence. By combining theory, experiments, and real-world applications, we show how sparsity is a key ingredient underlying core cognitive abilities like attention, memory, and learning. We start by uncovering a surprising link between the "attention" mechanism powering recent artificial intelligence (AI) breakthroughs and a classic theory of human memory called Sparse Distributed Memory (SDM). This suggests that brains and AI may leverage similar computational tricks. Taking inspiration from the brain's cerebellum, we then use SDM to improve an AI's ability to learn continuously without forgetting previous knowledge. This showcases sparsity's ability to enable more flexible learning. We also find that simply adding noise during training pushes AI to use sparse representations, causing it to develop more brain-like properties. This provides clues about why sparsity emerges in the brain while offering an easy way to encourage it in AI. Finally, we use sparsity to peek inside the black box of large language models like ChatGPT and Claude. By pulling apart the tangled web of information these models use to think, we make progress towards more transparent and controllable AI. Together, these findings paint sparsity as a unifying principle for intelligent systems, be they made of biological neurons or silicon chips. By connecting the dots between neuroscience and AI, this thesis advances our understanding of intelligence while charting a course towards more capable and interpretable AI systems. 
653 |a Computer science 
653 |a Neurosciences 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3216755033/abstract/embedded/Y2VX53961LHR7RE6?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3216755033/fulltextPDF/embedded/Y2VX53961LHR7RE6?source=fedsrch