VIoTGPT: Learning to Schedule Vision Tools in LLMs towards Intelligent Video Internet of Things

Bibliographic details
Published in: arXiv.org (Dec 22, 2024)
Lead author: Zhong, Yaoyao
Other authors: Qi, Mengshi; Wang, Rui; Qiu, Yuhan; Zhang, Yang; Ma, Huadong
Publisher: Cornell University Library, arXiv.org
Electronic access: Citation/Abstract (full text outside of ProQuest)
Description
Abstract: Video Internet of Things (VIoT) has shown great potential for collecting an unprecedented volume of video data. Scheduling the domain-specific perceiving models and analyzing the collected videos uniformly, efficiently, and, above all, intelligently to accomplish complicated tasks is challenging. To address this challenge, we build VIoTGPT, a framework based on LLMs that correctly interacts with humans, queries knowledge videos, and invokes vision models to analyze multimedia data collaboratively. To support VIoTGPT and related future work, we meticulously crafted the VIoT-Tool dataset, comprising a training set and a benchmark covering 11 representative vision models across three categories, built on semi-automatic annotations. To guide the LLM to act as an intelligent agent for intelligent VIoT, we apply the ReAct instruction tuning method on VIoT-Tool to learn the tool capability. Quantitative and qualitative experiments and analyses demonstrate the effectiveness of VIoTGPT. We believe VIoTGPT contributes to improving human-centered experiences in VIoT applications. The project website is https://github.com/zhongyy/VIoTGPT.
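The record contains no code, but the abstract's ReAct-based tool scheduling can be made concrete. The following is a minimal, hypothetical Python sketch of a Thought/Action/Observation loop of the kind the abstract describes; the tool names, the VISION_TOOLS registry, react_agent, and the stub LLM are all illustrative assumptions and do not come from the paper or its repository.

    import re
    from typing import Callable, Dict

    # Hypothetical tool registry standing in for the 11 vision models in the
    # VIoT-Tool benchmark; names and canned outputs are illustrative only.
    VISION_TOOLS: Dict[str, Callable[[str], str]] = {
        "crowd_counting": lambda video: "count: 137",
        "action_recognition": lambda video: "action: running",
    }

    def react_agent(llm: Callable[[str], str], question: str, video: str,
                    max_steps: int = 5) -> str:
        """Iterate Thought -> Action -> Observation until a Final Answer appears."""
        transcript = f"Question: {question}\nVideo: {video}\n"
        for _ in range(max_steps):
            step = llm(transcript)  # the instruction-tuned LLM proposes the next step
            transcript += step + "\n"
            final = re.search(r"Final Answer:\s*(.+)", step)
            if final:
                return final.group(1).strip()
            action = re.search(r"Action:\s*(\w+)\[(.+?)\]", step)
            if action and action.group(1) in VISION_TOOLS:
                # Invoke the scheduled vision tool and feed its output back.
                observation = VISION_TOOLS[action.group(1)](action.group(2))
                transcript += f"Observation: {observation}\n"
        return "no answer within the step budget"

    # Canned LLM stub so the sketch runs end to end without a real model.
    def _stub_llm(transcript: str) -> str:
        if "Observation:" not in transcript:
            return "Thought: count the crowd.\nAction: crowd_counting[cam_03.mp4]"
        return "Final Answer: roughly 137 people are visible."

    print(react_agent(_stub_llm, "How many people are at the plaza?", "cam_03.mp4"))

Under this reading, the contribution the abstract emphasizes is that ReAct instruction tuning on VIoT-Tool teaches the LLM which tool to name in the Action step; the surrounding loop is generic ReAct plumbing.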
ISSN:2331-8422
Database: Engineering Database