PAR: Prompt-Aware Token Reduction Method for Efficient Large Multimodal Models

Bibliographic details
Published in: arXiv.org (Dec 2, 2024), p. n/a
Main author: Liu, Yingen
Other authors: Wu, Fan, Li, Ruihui, Tang, Zhuo, Li, Kenli
Publisher:
Cornell University Library, arXiv.org
Electronic access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: Multimodal large language models (MLLMs) demonstrate strong performance across visual tasks, but their efficiency is hindered by significant computational and memory demands from processing long contexts in multimodal inputs. To address this, we introduce PAR (Prompt-Aware Token Reduction), a novel and plug-and-play approach that reduces visual tokens efficiently without compromising model performance. Unlike previous methods that rely heavily on attention mechanisms and overlook cross-modal interactions, PAR uses a prompt-aware strategy to adaptively identify and cluster essential visual tokens. PAR categorizes visual context redundancy into two types: external and internal. External redundancy is minimized through semantic retrieval, while internal redundancy is addressed using a token routing mechanism. This method substantially reduces computational load without requiring additional training or complex architectural modifications. Experimental results demonstrate that across various visual question answering tasks, PAR reduces FLOPs by 83% with a compression ratio of 89%, while retaining 97% of baseline accuracy. The adaptive design of PAR achieves a 2x token reduction ratio compared to prior approaches, enabling a better balance between performance and efficiency.
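The "semantic retrieval" step the abstract describes (dropping externally redundant visual tokens that are irrelevant to the text prompt) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function names, the cosine-similarity scoring, and the keep ratio are all assumptions; the 0.11 default merely echoes the ~89% compression ratio reported above.

```python
# Hypothetical sketch of prompt-aware visual token reduction.
# Assumption: visual tokens and the pooled prompt live in a shared
# embedding space, so prompt relevance can be scored by cosine similarity.
import math
import random

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def reduce_visual_tokens(visual_tokens, prompt_embedding, keep_ratio=0.11):
    """Keep the top keep_ratio fraction of visual tokens by prompt relevance.

    keep_ratio=0.11 is an illustrative default matching the ~89%
    compression ratio quoted in the abstract.
    """
    k = max(1, int(len(visual_tokens) * keep_ratio))
    ranked = sorted(
        range(len(visual_tokens)),
        key=lambda i: cosine(visual_tokens[i], prompt_embedding),
        reverse=True,
    )
    keep_idx = sorted(ranked[:k])  # preserve the original token order
    return [visual_tokens[i] for i in keep_idx], keep_idx

# Toy usage: random embeddings stand in for vision-encoder outputs.
random.seed(0)
tokens = [[random.gauss(0, 1) for _ in range(16)] for _ in range(100)]
prompt = [random.gauss(0, 1) for _ in range(16)]
kept, idx = reduce_visual_tokens(tokens, prompt)
print(len(kept))  # 100 visual tokens compressed to 11
```

The paper's internal-redundancy handling (token routing among the retained tokens) is a separate step not shown here.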
ISSN:2331-8422
Source: Engineering Database