Efficient Context-Preserving Encoding and Decoding of Compositional Structures Using Sparse Binary Representations

Bibliographic Details
Published in: Information vol. 16, no. 5 (2025), p. 343
First author: Malits Roman
Other authors: Mendelson Avi
Publisher: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3211985916
003 UK-CbPIL
022 |a 2078-2489 
024 7 |a 10.3390/info16050343  |2 doi 
035 |a 3211985916 
045 2 |b d20250101  |b d20251231 
084 |a 231474  |2 nlm 
100 1 |a Malits Roman 
245 1 |a Efficient Context-Preserving Encoding and Decoding of Compositional Structures Using Sparse Binary Representations 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Despite their unprecedented success, artificial neural networks suffer from extreme opacity and struggle to learn general knowledge from limited experience. Some argue that the key to overcoming these limitations is to combine the principles of continuity and compositionality efficiently. While it is unknown how the brain encodes and decodes information in a way that enables both rapid responses and complex processing, there is evidence that the neocortex employs sparse distributed representations for this task, and this remains an active area of research. This work addresses one of the challenges in this field: encoding and decoding nested compositional structures, which are essential for representing complex real-world concepts. One established algorithm in this field is context-dependent thinning (CDT). A distinguishing feature of CDT relative to other methods is that the CDT-encoded vector remains similar to each component input and to combinations of similar inputs. In this work, we propose a novel encoding method, termed CPSE, based on the ideas behind CDT, as well as a novel decoding method, termed CPSD, based on triadic memory. The proposed algorithms extend CDT by allowing both encoding and decoding of information, including the composition order, and make it possible to optimize the amount of compute and memory needed to achieve the desired encoding/decoding performance. 
653 |a Sparsity 
653 |a Deep learning 
653 |a Memory 
653 |a Context 
653 |a Artificial neural networks 
653 |a Neural networks 
653 |a Cerebral cortex 
653 |a Algorithms 
653 |a Encoding-Decoding 
653 |a Methods 
653 |a Codes 
653 |a Representations 
653 |a Roles 
700 1 |a Mendelson Avi 
773 0 |t Information  |g vol. 16, no. 5 (2025), p. 343 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3211985916/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3211985916/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3211985916/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
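The abstract above refers to context-dependent thinning (CDT), the sparse binary binding operation that the paper's CPSE encoder builds on. As a minimal illustration of the general CDT idea (not the paper's CPSE/CPSD algorithms), the sketch below OR-superposes sparse binary item vectors and then thins the result by intersecting it with permuted copies of itself, so the output stays similar to every component while remaining sparse. The dimensionality `N`, per-item density `M`, and permutation count are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper):
N = 10_000   # vector length
M = 100      # active bits per item (~1% density)

rng = np.random.default_rng(42)

def random_sparse(rng, n=N, m=M):
    """Random sparse binary vector with m of n bits set."""
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=m, replace=False)] = True
    return v

def cdt(components, num_perms=10, seed=0):
    """Context-dependent thinning (generic sketch): OR-superpose the
    components, then keep only those active bits that also appear in
    at least one randomly permuted copy of the superposition.
    More permutations -> denser output."""
    z = np.logical_or.reduce(components)    # superposition of all items
    perm_rng = np.random.default_rng(seed)  # fixed permutation set
    mask = np.zeros_like(z)
    for _ in range(num_perms):
        mask |= z[perm_rng.permutation(z.size)]  # permuted copy of z
    return z & mask                              # thinned result

a, b, c = (random_sparse(rng) for _ in range(3))
bound = cdt([a, b, c])

# Every surviving bit comes from the superposition, so the thinned
# vector overlaps each component, and its density is tunable via
# num_perms rather than growing with the number of components.
```

Because the output bits are a subset of the superposition's bits, which bits survive depends on *all* components jointly, which is what makes the encoding context-dependent rather than a plain union.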