LVP-CLIP: Revisiting CLIP for Continual Learning with Label Vector Pool

Bibliographic Details
Published in: arXiv.org (Dec 8, 2024), p. n/a
Main Author: Ma, Yue
Other Authors: Ren, Huantao; Wang, Boyu; Jin, Jingang; Velipasalar, Senem; Qiu, Qinru
Published: Cornell University Library, arXiv.org
Description
Abstract: Continual learning aims to update a model so that it can sequentially learn new tasks without forgetting previously acquired knowledge. Recent continual learning approaches often leverage the vision-language model CLIP for its high-dimensional feature space and cross-modality feature matching. Traditional CLIP-based classification methods identify the most similar text label for a test image by comparing their embeddings. However, these methods are sensitive to the quality of the text phrases and are less effective for classes lacking meaningful text labels. In this work, we rethink CLIP-based continual learning and introduce the concept of the Label Vector Pool (LVP). LVP replaces text labels with training images as similarity references, eliminating the need for ideal text descriptions. We present three variations of LVP and evaluate their performance on class- and domain-incremental learning tasks. Leveraging CLIP's high-dimensional feature space, LVP learning algorithms are task-order invariant. New knowledge does not modify old knowledge, so forgetting is minimal. Different tasks can be learned independently and in parallel with low computational and memory demands. Experimental results show that the proposed LVP-based methods outperform the current state-of-the-art baseline by a significant margin of 40.7%.
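
Illustrative sketch: The core idea described in the abstract, building a pool of reference vectors from training-image embeddings and classifying a test image by its nearest reference instead of a text label, can be outlined in a few lines. The sketch below is not the authors' implementation; it assumes OpenAI's public clip package and simplifies the pool to one mean-embedding prototype per class (label_vector_pool, learn_task, and classify are illustrative names, not identifiers from the paper).

# Minimal sketch of the Label Vector Pool (LVP) idea, under the assumptions above.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

label_vector_pool = {}  # class name -> reference embedding in CLIP's feature space

@torch.no_grad()
def embed(image_paths):
    # Encode a batch of images and L2-normalize the features.
    batch = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)
    feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

def learn_task(task_data):
    # Add one task's classes to the pool. Existing entries are never modified,
    # which is why learning is task-order invariant and forgetting stays minimal.
    for class_name, image_paths in task_data.items():
        label_vector_pool[class_name] = embed(image_paths).mean(dim=0)

@torch.no_grad()
def classify(image_path):
    # Match a test image against pooled reference vectors instead of text labels.
    query = embed([image_path])[0]
    names = list(label_vector_pool.keys())
    refs = torch.stack([label_vector_pool[n] for n in names])
    refs = refs / refs.norm(dim=-1, keepdim=True)
    return names[int((refs @ query).argmax())]

Because each class contributes its own reference vector and no shared parameters are updated, tasks could be learned independently or in parallel in this sketch, mirroring the abstract's claim of low computational and memory demands.
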
ISSN: 2331-8422
Source: Engineering Database