Emergence of Abstractions: Concept Encoding and Decoding Mechanism for In-Context Learning in Transformers

Bibliographic Details
Published in: arXiv.org (Dec 18, 2024)
Main author: Han, Seungwook
Other authors: Song, Jinyeop; Gore, Jeff; Agrawal, Pulkit
Published: Cornell University Library, arXiv.org
Online access: Citation/Abstract; Full text outside of ProQuest
Description
Abstract: Humans distill complex experiences into fundamental abstractions that enable rapid learning and adaptation. Similarly, autoregressive transformers exhibit adaptive learning through in-context learning (ICL), which raises the question of how. In this paper, we propose a concept encoding-decoding mechanism to explain ICL by studying how transformers form and use internal abstractions in their representations. On synthetic ICL tasks, we analyze the training dynamics of a small transformer and report the coupled emergence of concept encoding and decoding. As the model learns to encode different latent concepts (e.g., "finding the first noun in a sentence") into distinct, separable representations, it concurrently builds conditional decoding algorithms and improves its ICL performance. We validate the existence of this mechanism across pretrained models of varying scales (Gemma-2 2B/9B/27B, Llama-3.1 8B/70B). Further, through mechanistic interventions and controlled finetuning, we demonstrate that the quality of concept encoding is causally related to, and predictive of, ICL performance. Our empirical insights shed light on the success and failure modes of large language models via their representations.
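
To make the abstract's key quantity concrete, the sketch below shows one common way a "quality of concept encoding" score can be operationalized: fit a linear probe that recovers the latent concept label from hidden representations, and read probe accuracy as a measure of how separably the concepts are encoded. This is a minimal illustration, not the authors' code: the hidden states are simulated with NumPy (the paper extracts them from real models such as Gemma-2 and Llama-3.1), and the dimensions, cluster model, and variable names are assumptions made for the example.

# Illustrative sketch (assumed setup, not the paper's implementation):
# measure "concept encoding" quality as the linear decodability of the
# latent concept from hidden representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_concepts, n_prompts, d_model = 4, 200, 64  # hypothetical sizes

# Simulated hidden states: one cluster per latent concept (e.g., "find the
# first noun in a sentence"). In practice these vectors would come from an
# intermediate transformer layer, one per in-context prompt.
concept_means = rng.normal(scale=3.0, size=(n_concepts, d_model))
labels = np.repeat(np.arange(n_concepts), n_prompts)
hidden = concept_means[labels] + rng.normal(size=(labels.size, d_model))

# Cross-validated probe accuracy: high accuracy means the concepts occupy
# distinct, separable regions of representation space, the property the
# paper reports to be predictive of ICL performance.
probe = LogisticRegression(max_iter=1000)
acc = cross_val_score(probe, hidden, labels, cv=5).mean()
print(f"concept decodability (probe accuracy): {acc:.3f}")

Under this setup, shrinking the distance between concept means (or raising the noise) degrades probe accuracy, which mirrors the paper's claim that weaker concept encoding should coincide with weaker ICL performance.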
ISSN:2331-8422
Source: Engineering Database