BayesAdapter: enhanced uncertainty estimation in CLIP few-shot adaptation
| Published in: | arXiv.org (Dec 12, 2024), p. n/a |
|---|---|
| Main author: | Morales-Álvarez, Pablo |
| Other authors: | Christodoulidis, Stergios; Vakalopoulou, Maria; Piantanida, Pablo; Dolz, Jose |
| Publisher: | Cornell University Library, arXiv.org |
| Online access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| LEADER | 00000nab a2200000uu 4500 | ||
|---|---|---|---|
| 001 | 3145272708 | ||
| 003 | UK-CbPIL | ||
| 022 | |a 2331-8422 | ||
| 035 | |a 3145272708 | ||
| 045 | 0 | |b d20241212 | |
| 100 | 1 | |a Morales-Álvarez, Pablo | |
| 245 | 1 | |a BayesAdapter: enhanced uncertainty estimation in CLIP few-shot adaptation | |
| 260 | |b Cornell University Library, arXiv.org |c Dec 12, 2024 | ||
| 513 | |a Working Paper | ||
| 520 | 3 | |a The emergence of large pre-trained vision-language models (VLMs) represents a paradigm shift in machine learning, with unprecedented results in a broad span of visual recognition tasks. CLIP, one of the most popular VLMs, has exhibited remarkable zero-shot and transfer learning capabilities in classification. To transfer CLIP to downstream tasks, adapters constitute a parameter-efficient approach that avoids backpropagation through the large model (unlike related prompt learning methods). However, CLIP adapters have been developed to target discriminative performance, and the quality of their uncertainty estimates has been overlooked. In this work we show that the discriminative performance of state-of-the-art CLIP adapters does not always correlate with their uncertainty estimation capabilities, which are essential for safe deployment in real-world scenarios. We also demonstrate that one such adapter is obtained through MAP inference from a more general probabilistic framework. Based on this observation we introduce BayesAdapter, which leverages Bayesian inference to estimate a full probability distribution instead of a single point, better capturing the variability inherent in the parameter space. In a comprehensive empirical evaluation we show that our approach obtains high quality uncertainty estimates in the predictions, standing out in calibration and selective classification. Our code is publicly available at: https://github.com/pablomorales92/BayesAdapter. | |
| 653 | |a Estimates | ||
| 653 | |a Visual tasks | ||
| 653 | |a Visual discrimination | ||
| 653 | |a Classification | ||
| 653 | |a Parameter estimation | ||
| 653 | |a Bayesian analysis | ||
| 653 | |a Probabilistic inference | ||
| 653 | |a Machine learning | ||
| 653 | |a Parameter uncertainty | ||
| 653 | |a Statistical analysis | ||
| 653 | |a Statistical inference | ||
| 653 | |a Adapters | ||
| 653 | |a Back propagation | ||
| 700 | 1 | |a Christodoulidis, Stergios | |
| 700 | 1 | |a Vakalopoulou, Maria | |
| 700 | 1 | |a Piantanida, Pablo | |
| 700 | 1 | |a Dolz, Jose | |
| 773 | 0 | |t arXiv.org |g (Dec 12, 2024), p. n/a | |
| 786 | 0 | |d ProQuest |t Engineering Database | |
| 856 | 4 | 1 | |3 Citation/Abstract |u https://www.proquest.com/docview/3145272708/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch |
| 856 | 4 | 0 | |3 Full text outside of ProQuest |u http://arxiv.org/abs/2412.09718 |
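The abstract's central contrast (field 520) — a MAP point estimate of the adapter weights versus a Bayesian posterior over them, marginalized at prediction time — can be sketched in a small NumPy toy. This is an illustrative sketch only, not the authors' implementation: the feature dimension, the isotropic Gaussian posterior, and the sample count are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical setup: 512-d CLIP image embeddings, 3 classes, batch of 4.
D, C = 512, 3
feats = rng.normal(size=(4, D))
W_map = rng.normal(size=(D, C)) * 0.05  # MAP point estimate of adapter weights

# MAP adapter: a single weight matrix gives a single set of class probabilities.
p_map = softmax(feats @ W_map)

# Bayesian adapter (toy): an assumed isotropic Gaussian posterior around W_map.
# The predictive distribution is a Monte Carlo average over weight samples.
sigma, S = 0.05, 100
p_bayes = np.mean(
    [softmax(feats @ (W_map + sigma * rng.normal(size=W_map.shape)))
     for _ in range(S)],
    axis=0,
)

# Averaging over the posterior tends to soften over-confident predictions,
# which is the mechanism behind the calibration gains the abstract reports.
print(p_map.max(axis=1))    # per-example MAP confidence
print(p_bayes.max(axis=1))  # per-example MC-averaged confidence
```

The sketch only shows the shape of the idea: both adapters map frozen CLIP features to class probabilities, but the Bayesian one averages over a distribution of weights rather than committing to a single point.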