Video Token Sparsification for Efficient Multimodal LLMs in Autonomous Driving

Bibliographic Details
Published in: arXiv.org (Sep 16, 2024), p. n/a
Main Author: Ma, Yunsheng
Other Authors: Abdelraouf, Amr; Gupta, Rohit; Wang, Ziran; Han, Kyungtae
Published: Cornell University Library, arXiv.org
Subjects: Onboard equipment; Large language models; Frames (data processing); Scene analysis; Cognition & reasoning; Redundancy; Inference
Online Access: Citation/Abstract; Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3106537834
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3106537834 
045 0 |b d20240916 
100 1 |a Ma, Yunsheng 
245 1 |a Video Token Sparsification for Efficient Multimodal LLMs in Autonomous Driving 
260 |b Cornell University Library, arXiv.org  |c Sep 16, 2024 
513 |a Working Paper 
520 3 |a Multimodal large language models (MLLMs) have demonstrated remarkable potential for enhancing scene understanding in autonomous driving systems through powerful logical reasoning capabilities. However, the deployment of these models faces significant challenges due to their substantial parameter sizes and computational demands, which often exceed the constraints of onboard computation. One major limitation arises from the large number of visual tokens required to capture fine-grained and long-context visual information, leading to increased latency and memory consumption. To address this issue, we propose Video Token Sparsification (VTS), a novel approach that leverages the inherent redundancy in consecutive video frames to significantly reduce the total number of visual tokens while preserving the most salient information. VTS employs a lightweight CNN-based proposal model to adaptively identify key frames and prune less informative tokens, effectively mitigating hallucinations and increasing inference throughput without compromising performance. We conduct comprehensive experiments on the DRAMA and LingoQA benchmarks, demonstrating the effectiveness of VTS in achieving up to a 33% improvement in inference throughput and a 28% reduction in memory usage compared to the baseline.
653 |a Onboard equipment 
653 |a Large language models 
653 |a Frames (data processing) 
653 |a Scene analysis 
653 |a Cognition & reasoning 
653 |a Redundancy 
653 |a Inference 
700 1 |a Abdelraouf, Amr 
700 1 |a Gupta, Rohit 
700 1 |a Wang, Ziran 
700 1 |a Han, Kyungtae 
773 0 |t arXiv.org  |g (Sep 16, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3106537834/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2409.11182
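
Note: the 520 abstract above describes the mechanism only in prose: a lightweight CNN-based proposal model scores visual tokens, and tokens that are redundant across consecutive frames are pruned. For orientation, the following is a minimal PyTorch sketch of that general idea, not the authors' method; the TokenScorer architecture, the frame-difference damping heuristic, and the keep_ratio parameter are all illustrative assumptions (see http://arxiv.org/abs/2409.11182 for the actual approach).

import torch
import torch.nn as nn

class TokenScorer(nn.Module):
    # Tiny CNN assigning a saliency score in (0, 1) to each spatial token
    # of a frame. Hypothetical stand-in for the paper's proposal model.
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one logit per token location
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, C, H, W) frame features -> (T, H*W) token scores
        return self.net(feats).flatten(1).sigmoid()

def sparsify_video_tokens(tokens, feats, scorer, keep_ratio=0.25):
    # tokens: (T, N, D) visual tokens per frame, with N = H*W
    # feats:  (T, C, H, W) frame features used for scoring
    # Returns the top keep_ratio fraction of tokens across the whole clip.
    T, N, D = tokens.shape
    scores = scorer(feats)  # (T, N), values in (0, 1)
    if T > 1:
        # Damp scores of tokens that barely change between consecutive
        # frames so temporally redundant tokens are pruned first (an
        # assumed criterion, not necessarily the one used by VTS).
        diff = (feats[1:] - feats[:-1]).abs().mean(dim=1).flatten(1)  # (T-1, N)
        damp = diff / (diff.amax(dim=1, keepdim=True) + 1e-6)
        scores = torch.cat([scores[:1], scores[1:] * damp], dim=0)
    flat = scores.reshape(-1)                    # (T*N,)
    k = max(1, int(keep_ratio * flat.numel()))
    keep = flat.topk(k).indices                  # indices into the flat clip
    return tokens.reshape(-1, D)[keep], keep

if __name__ == "__main__":
    T, C, H, W, D = 8, 64, 14, 14, 256
    feats = torch.randn(T, C, H, W)
    tokens = torch.randn(T, H * W, D)
    kept, idx = sparsify_video_tokens(tokens, feats, TokenScorer(C))
    print(kept.shape)  # torch.Size([392, 256]): 25% of 8*196 tokens kept

Ranking all T*N tokens jointly, rather than a fixed quota per frame, lets the token budget concentrate on a few frames, which loosely mirrors the abstract's claim that key frames are identified adaptively.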