Where Did Your Model Learn That? Label-free Influence for Self-supervised Learning

Bibliographic Details
Published in: arXiv.org (Dec 22, 2024)
Main Author: Harilal, Nidhin
Other Authors: Rege, Amit Kiran; Bafghi, Reza Akbarian; Raissi, Maziar; Monteleoni, Claire
Published: Cornell University Library, arXiv.org
Online Access: Citation/Abstract; full text outside of ProQuest
Description
Abstract: Self-supervised learning (SSL) has revolutionized learning from large-scale unlabeled datasets, yet the intrinsic relationship between pretraining data and the learned representations remains poorly understood. Traditional supervised learning benefits from gradient-based data attribution tools like influence functions, which measure the contribution of an individual data point to model predictions. However, existing definitions of influence rely on labels, making them unsuitable for SSL settings. We address this gap by introducing Influence-SSL, a novel, label-free approach for defining influence functions tailored to SSL. Our method harnesses the stability of learned representations against data augmentations to identify training examples that help explain model predictions. We provide both theoretical foundations and empirical evidence to show the utility of Influence-SSL in analyzing pre-trained SSL models. Our analysis reveals notable differences in how SSL models respond to influential data compared to supervised models. Finally, we validate the effectiveness of Influence-SSL through applications in duplicate detection, outlier identification, and fairness analysis. Code is available at: https://github.com/cryptonymous9/Influence-SSL
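The paper's exact definition of Influence-SSL lives in the linked repository; the following is only a minimal sketch of the core idea the abstract describes (scoring how stable a model's representation of an example is across data augmentations). The `embed` and `augment` callables are hypothetical stand-ins for a pretrained SSL encoder and an augmentation pipeline, not the authors' API.

```python
import numpy as np

def augmentation_stability(embed, x, augment, n_views=4):
    """Average pairwise cosine similarity between embeddings of augmented views.

    A score near 1.0 means the representation of `x` is highly stable under
    augmentation; lower scores flag examples whose representations shift, the
    kind of signal the abstract says Influence-SSL builds on. `embed` (input ->
    vector) and `augment` (input -> random view) are hypothetical interfaces.
    """
    views = np.stack([embed(augment(x)) for _ in range(n_views)])
    views = views / np.linalg.norm(views, axis=1, keepdims=True)  # unit-normalize rows
    sims = views @ views.T                       # pairwise cosine similarities
    off_diag = sims[~np.eye(n_views, dtype=bool)]  # drop self-similarities (all 1.0)
    return float(off_diag.mean())
```

With no augmentation the score is exactly 1.0; injecting noise into the views pushes it below 1.0, giving a simple ranking signal over training examples.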
ISSN:2331-8422
Source: Engineering Database