VIoTGPT: Learning to Schedule Vision Tools in LLMs towards Intelligent Video Internet of Things

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 22, 2024), p. n/a
Main author: Zhong, Yaoyao
Other authors: Qi, Mengshi, Wang, Rui, Qiu, Yuhan, Zhang, Yang, Ma, Huadong
Published:
Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: Video Internet of Things (VIoT) has shown its full potential in collecting an unprecedented volume of video data. Scheduling the domain-specific perceiving models and analyzing the collected videos uniformly, efficiently, and above all intelligently to accomplish complicated tasks remains challenging. To address this challenge, we build VIoTGPT, a framework based on LLMs that correctly interacts with humans, queries knowledge videos, and invokes vision models to analyze multimedia data collaboratively. To support VIoTGPT and related future work, we meticulously crafted the VIoT-Tool dataset, comprising a training set and a benchmark that cover 11 representative vision models across three categories, based on semi-automatic annotations. To guide the LLM to act as an intelligent agent toward intelligent VIoT, we apply ReAct instruction tuning on VIoT-Tool to learn tool-use capability. Quantitative and qualitative experiments and analyses demonstrate the effectiveness of VIoTGPT. We believe VIoTGPT contributes to improving human-centered experiences in VIoT applications. The project website is https://github.com/zhongyy/VIoTGPT.
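The abstract describes an agent loop in which an instruction-tuned LLM alternates ReAct-style reasoning steps with calls to domain-specific vision tools. The sketch below illustrates that pattern only in spirit: the tool names, the `query_llm` stub, and the exact Thought/Action/Observation format are illustrative assumptions, not VIoTGPT's actual interface.

```python
# Minimal sketch of a ReAct-style tool-scheduling loop in the spirit of
# VIoTGPT. Everything here (query_llm, the tool registry, the trace format)
# is an illustrative assumption, not the paper's actual code or API.
import re
from typing import Callable, Dict

# Hypothetical registry of domain-specific vision tools; the paper's
# VIoT-Tool benchmark covers 11 such models across three categories.
TOOLS: Dict[str, Callable[[str], str]] = {
    "face_recognition": lambda arg: f"[face IDs found in {arg}]",
    "gait_recognition": lambda arg: f"[gait matches found in {arg}]",
    "crowd_counting":   lambda arg: f"[estimated crowd count for {arg}]",
}

def query_llm(prompt: str) -> str:
    """Placeholder for the instruction-tuned LLM; a real system would call
    the fine-tuned model here. This stub picks one tool, then stops."""
    if "Observation:" in prompt:
        return "Thought: I have the result.\nFinal Answer: see observation above."
    return ("Thought: I should analyze the video.\n"
            "Action: crowd_counting\nAction Input: camera_07.mp4")

def react_loop(question: str, max_steps: int = 5) -> str:
    """Alternate LLM reasoning with tool calls until a final answer."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = query_llm(prompt)
        prompt += reply + "\n"
        match = re.search(r"Action: (\w+)\nAction Input: (.+)", reply)
        if not match:  # no tool requested: the reply carries the answer
            return reply.split("Final Answer:")[-1].strip()
        tool, arg = match.group(1), match.group(2).strip()
        observation = TOOLS[tool](arg) if tool in TOOLS else f"unknown tool {tool}"
        prompt += f"Observation: {observation}\n"
    return "no answer within step budget"

print(react_loop("How many people are visible on camera 7?"))
```

In the paper's setting, deciding which tool to invoke at each step is precisely what ReAct instruction tuning on VIoT-Tool teaches the LLM; the stub above hard-codes one choice only so the loop runs end to end.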
ISSN:2331-8422
Source: Engineering Database