CALM: Continual Associative Learning Model via Sparse Distributed Memory

Bibliographic Details
Published in: Technologies vol. 13, no. 12 (2025), p. 587-612
Main Author: Nechesov Andrey
Other Authors: Ruponen Janne
Published: MDPI AG
Subjects: Approximation; Neurons; Principles; Embedded systems; Associative memory; Artificial intelligence; Distance learning; Distributed memory; Benchmarks; Episodic memory; Reasoning
Available Online: Citation/Abstract; Full Text; Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3286356925
003 UK-CbPIL
022 |a 2227-7080 
024 7 |a 10.3390/technologies13120587  |2 doi 
035 |a 3286356925 
045 2 |b d20250101  |b d20251231 
084 |a 231637  |2 nlm 
100 1 |a Nechesov Andrey 
245 1 |a CALM: Continual Associative Learning Model via Sparse Distributed Memory 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Sparse Distributed Memory (SDM) provides a biologically inspired mechanism for associative and online learning. Transformer architectures, despite exceptional inference performance, remain static and vulnerable to catastrophic forgetting. This work introduces the Continual Associative Learning Model (CALM), a conceptual framework that defines the theoretical basis and integration logic for a cognitive model that seeks to achieve continual, lifelong adaptation without retraining by combining an SDM system with lightweight dual-transformer modules. The architecture couples an always-online associative memory for episodic storage (System 1) with a pair of asynchronous transformers that consolidate experience in the background, enabling uninterrupted reasoning and gradual model evolution (System 2). The framework remains compatible with standard transformer benchmarks, establishing a shared evaluation basis for both reasoning accuracy and continual learning stability. Preliminary experiments using the SDMPreMark benchmark evaluate algorithmic behavior across multiple synthetic sets, confirming a critical radius-threshold phenomenon in SDM recall. These results provide a deterministic characterization of SDM dynamics at the component level, preceding model-level integration with transformer-based semantic tasks. The CALM framework provides a reproducible foundation for studying continual memory and associative learning in hybrid transformer architectures, although future work should involve experiments with non-synthetic, high-load data to confirm scalable behavior under high interference. 
653 |a Approximation 
653 |a Neurons 
653 |a Principles 
653 |a Embedded systems 
653 |a Associative memory 
653 |a Artificial intelligence 
653 |a Distance learning 
653 |a Distributed memory 
653 |a Benchmarks 
653 |a Episodic memory 
653 |a Reasoning 
700 1 |a Ruponen Janne 
773 0 |t Technologies  |g vol. 13, no. 12 (2025), p. 587-612 
786 0 |d ProQuest  |t Materials Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3286356925/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3286356925/fulltext/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3286356925/fulltextPDF/embedded/J7RWLIQ9I3C9JK51?source=fedsrch
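
The "critical radius-threshold phenomenon in SDM recall" mentioned in the abstract refers to the behavior of Kanerva-style Sparse Distributed Memory, in which a cue activates every hard location within a fixed Hamming radius and recall is a majority vote over those locations' counters. The following minimal Python sketch illustrates that effect under assumed parameters; it is not the paper's SDMPreMark benchmark, and the word size, location count, and radii swept here are all hypothetical choices made for the demonstration.

    import numpy as np

    # Minimal Kanerva-style SDM sketch. All sizes and radii below are
    # illustrative assumptions, not parameters from the CALM paper.
    class SDM:
        def __init__(self, n_bits=256, n_locations=2000, radius=112, seed=0):
            rng = np.random.default_rng(seed)
            # Fixed random binary addresses of the hard locations.
            self.addresses = rng.integers(0, 2, size=(n_locations, n_bits),
                                          dtype=np.int8)
            # Integer counters accumulate bipolar (+1/-1) writes.
            self.counters = np.zeros((n_locations, n_bits), dtype=np.int32)
            self.radius = radius

        def _active(self, addr):
            # Activate every hard location within Hamming distance `radius`.
            return np.sum(self.addresses != addr, axis=1) <= self.radius

        def write(self, addr, data):
            # Add the bipolar pattern to every active location's counters.
            self.counters[self._active(addr)] += 2 * data.astype(np.int32) - 1

        def read(self, addr):
            # Majority vote over the counters of the active locations.
            sums = self.counters[self._active(addr)].sum(axis=0)
            return (sums > 0).astype(np.int8)

    rng = np.random.default_rng(1)
    patterns = rng.integers(0, 2, size=(20, 256), dtype=np.int8)
    for radius in (90, 104, 112, 120, 136):
        sdm = SDM(radius=radius)
        for p in patterns:
            sdm.write(p, p)  # autoassociative storage: address = content
        errs = np.mean([np.sum(sdm.read(p) != p) for p in patterns])
        print(f"radius={radius}: mean bit errors per 256-bit pattern = {errs:.1f}")

In this toy setting, recall collapses once the radius drops below the point where a cue stops activating any hard location, stays near-perfect in a middle band, and degrades again at large radii where the stored patterns share most locations and crosstalk dominates; that interference regime is the kind of high-load behavior the abstract flags for future non-synthetic experiments.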