Efficient Context-Preserving Encoding and Decoding of Compositional Structures Using Sparse Binary Representations

Bibliographic Details
Published in: Information, vol. 16, no. 5 (2025), p. 343
Main Author: Malits, Roman
Other Authors: Mendelson, Avi
Published: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: Despite their unprecedented success, artificial neural networks suffer from extreme opacity and from a weakness in learning general knowledge from limited experience. Some argue that the key to overcoming these limitations is to efficiently combine the principles of continuity and compositionality. While it is unknown how the brain encodes and decodes information in a way that enables both rapid responses and complex processing, there is evidence that the neocortex employs sparse distributed representations for this task, and this remains an active area of research. This work addresses one of the challenges in this field: encoding and decoding nested compositional structures, which are essential for representing complex real-world concepts. One of the algorithms in this field is context-dependent thinning (CDT). A distinguishing feature of CDT relative to other methods is that the CDT-encoded vector remains similar to each component input and to combinations of similar inputs. In this work, we propose a novel encoding method termed CPSE, based on CDT ideas, and a novel decoding method termed CPSD, based on triadic memory. The proposed algorithms extend CDT by allowing both encoding and decoding of information, including the composition order. In addition, they allow the amount of compute and memory to be tuned to achieve the desired encoding/decoding performance.
ISSN: 2078-2489
DOI: 10.3390/info16050343
Source: Advanced Technologies & Aerospace Database
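
Note on the abstract's key technique: the abstract builds on context-dependent thinning (CDT). As a rough, non-authoritative illustration of the general CDT idea (not the article's CPSE/CPSD algorithms), the Python sketch below superposes sparse binary component vectors with an elementwise OR and then thins the superposition by intersecting it with the OR of several randomly permuted copies of itself. The vector length, density, and number of permutations are illustrative assumptions rather than values taken from the paper.

import numpy as np

def random_sparse_vector(n, density, rng):
    # Random binary vector of length n with roughly n*density active bits.
    v = np.zeros(n, dtype=np.uint8)
    v[rng.choice(n, size=int(n * density), replace=False)] = 1
    return v

def cdt_encode(components, n_permutations, rng):
    # Context-dependent thinning (additive variant): superpose the
    # components with elementwise OR, then keep only the bits that also
    # appear in the OR of several randomly permuted copies of the
    # superposition. The number of permutations controls the density
    # of the result.
    z = np.zeros_like(components[0])
    for comp in components:
        z |= comp                           # superposition
    mask = np.zeros_like(z)
    for _ in range(n_permutations):
        mask |= z[rng.permutation(len(z))]  # OR of permuted copies
    return z & mask                         # thinning

rng = np.random.default_rng(0)
a, b, c = (random_sparse_vector(10_000, 0.01, rng) for _ in range(3))

# About 13 permutations keeps the result's density near that of a single
# component for these (illustrative) parameters.
encoded = cdt_encode([a, b, c], n_permutations=13, rng=rng)

for name, v in (("a", a), ("b", b), ("c", c)):
    overlap = int((encoded & v).sum())
    print(f"overlap with {name}: {overlap} of {int(v.sum())} active bits")

In this sketch the thinned vector retains an appreciable fraction of each component's active bits while its density stays close to that of a single component, which is the similarity-preserving property the abstract highlights.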