LVP-CLIP: Revisiting CLIP for Continual Learning with Label Vector Pool

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 8, 2024), p. n/a
Main author: Ma, Yue
Other authors: Ren, Huantao, Wang, Boyu, Jin, Jingang, Velipasalar, Senem, Qiu, Qinru
Publisher:
Cornell University Library, arXiv.org
Subjects: Memory tasks; Algorithms; Labels; Image acquisition; Image quality; Machine learning; Cognitive tasks
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3142732272
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3142732272 
045 0 |b d20241208 
100 1 |a Ma, Yue 
245 1 |a LVP-CLIP: Revisiting CLIP for Continual Learning with Label Vector Pool 
260 |b Cornell University Library, arXiv.org  |c Dec 8, 2024 
513 |a Working Paper 
520 3 |a Continual learning aims to update a model so that it can sequentially learn new tasks without forgetting previously acquired knowledge. Recent continual learning approaches often leverage the vision-language model CLIP for its high-dimensional feature space and cross-modality feature matching. Traditional CLIP-based classification methods identify the most similar text label for a test image by comparing their embeddings. However, these methods are sensitive to the quality of text phrases and are less effective for classes lacking meaningful text labels. In this work, we rethink CLIP-based continual learning and introduce the concept of the Label Vector Pool (LVP). LVP replaces text labels with training images as similarity references, eliminating the need for ideal text descriptions. We present three variations of LVP and evaluate their performance on class- and domain-incremental learning tasks. Leveraging CLIP's high-dimensional feature space, LVP learning algorithms are task-order invariant. New knowledge does not modify old knowledge; hence, forgetting is minimal. Different tasks can be learned independently and in parallel with low computational and memory demands. Experimental results show that the proposed LVP-based methods outperform the current state-of-the-art baseline by a significant margin of 40.7%. 
653 |a Memory tasks 
653 |a Algorithms 
653 |a Labels 
653 |a Image acquisition 
653 |a Image quality 
653 |a Machine learning 
653 |a Cognitive tasks 
700 1 |a Ren, Huantao 
700 1 |a Wang, Boyu 
700 1 |a Jin, Jingang 
700 1 |a Velipasalar, Senem 
700 1 |a Qiu, Qinru 
773 0 |t arXiv.org  |g (Dec 8, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3142732272/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.05840
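
The abstract (field 520) describes classifying a test image by comparing its CLIP image embedding against stored embeddings of training images, the "label vector pool", rather than against text-label embeddings. The following is a minimal illustrative sketch of that idea, not the authors' exact method: it assumes the OpenAI clip package, and the per-class mean of normalized image embeddings used as the reference vector is an assumption (the paper presents three LVP variants, detailed in the full text at the arXiv link above).

```python
# Sketch of the label-vector-pool idea from the abstract: class references are
# CLIP *image* embeddings of training samples rather than text-label embeddings.
# Function names and the mean-pooling choice are illustrative assumptions.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


@torch.no_grad()
def build_label_vector_pool(images_by_class):
    """images_by_class: dict mapping class name -> list of PIL training images."""
    pool = {}
    for name, images in images_by_class.items():
        batch = torch.stack([preprocess(im) for im in images]).to(device)
        feats = model.encode_image(batch).float()
        feats = feats / feats.norm(dim=-1, keepdim=True)
        ref = feats.mean(dim=0)              # one reference vector per class
        pool[name] = ref / ref.norm()        # keep it unit-length
    return pool


@torch.no_grad()
def classify(image, pool):
    """Return the class whose pooled reference is most similar to the test image."""
    q = model.encode_image(preprocess(image).unsqueeze(0).to(device)).float()
    q = q / q.norm(dim=-1, keepdim=True)
    names = list(pool.keys())
    refs = torch.stack([pool[n] for n in names])   # (num_classes, dim)
    sims = (q @ refs.T).squeeze(0)                 # cosine similarities
    return names[sims.argmax().item()]
```

In this sketch, learning a new task only appends entries to the pool and never modifies existing ones, which illustrates the abstract's claims of task-order invariance and minimal forgetting.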