VIoTGPT: Learning to Schedule Vision Tools in LLMs towards Intelligent Video Internet of Things

Bibliographic details
Published in: arXiv.org (Dec 22, 2024)
Lead author: Zhong, Yaoyao
Other authors: Qi, Mengshi; Wang, Rui; Qiu, Yuhan; Zhang, Yang; Ma, Huadong
Publisher: Cornell University Library, arXiv.org
Online access: Citation/Abstract; full text available outside of ProQuest
Description
Abstract: The Video Internet of Things (VIoT) has shown great potential for collecting an unprecedented volume of video data. Scheduling domain-specific perceiving models and analyzing the collected videos uniformly, efficiently, and, above all, intelligently to accomplish complicated tasks remains challenging. To address this challenge, we build VIoTGPT, a framework based on LLMs that correctly interacts with humans, queries knowledge videos, and invokes vision models to analyze multimedia data collaboratively. To support VIoTGPT and related future work, we meticulously crafted the VIoT-Tool dataset, comprising a training set and a benchmark covering 11 representative vision models across three categories, based on semi-automatic annotations. To guide the LLM to act as an intelligent agent towards intelligent VIoT, we resort to the ReAct instruction-tuning method based on VIoT-Tool to learn tool capability. Quantitative and qualitative experiments and analyses demonstrate the effectiveness of VIoTGPT. We believe VIoTGPT contributes to improving human-centered experiences in VIoT applications. The project website is https://github.com/zhongyy/VIoTGPT.
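The mechanism the abstract describes is a ReAct-style agent loop: the LLM interleaves Thought, Action (a vision-tool call), and Observation steps until it can answer the user's query. Below is a minimal, self-contained Python sketch of such a loop, offered only as an illustration of the technique; the tool names (crowd_counting, vehicle_detection), the llm() stub, and the exact Thought/Action/Observation text format are assumptions for this sketch, not the actual VIoTGPT API or the VIoT-Tool prompt format.

```python
import re
from typing import Callable, Dict

# Registry of vision "tools": name -> callable. In the paper's setting these
# would wrap domain-specific perceiving models; the two below are placeholders.
TOOLS: Dict[str, Callable[[str], str]] = {
    "crowd_counting": lambda video: f"estimated 42 people in {video}",
    "vehicle_detection": lambda video: f"detected 3 vehicles in {video}",
}

def llm(prompt: str) -> str:
    """Stand-in for a ReAct-instruction-tuned LLM. A real system would call the
    tuned model; here we hard-code one Thought/Action step, then a final answer
    once an Observation is present in the prompt."""
    if "Observation:" not in prompt:
        return ("Thought: I need a people count from the camera feed.\n"
                "Action: crowd_counting\n"
                "Action Input: cam01.mp4")
    return "Thought: I have the count.\nFinal Answer: about 42 people."

def react_loop(question: str, max_steps: int = 5) -> str:
    """Iterate Thought -> Action -> Observation until the model emits a final answer."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)
        prompt += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        action = re.search(r"Action: (\w+)", step)
        arg = re.search(r"Action Input: (\S+)", step)
        if action and action.group(1) in TOOLS:
            # Execute the scheduled tool and feed its result back as an Observation.
            observation = TOOLS[action.group(1)](arg.group(1) if arg else "")
            prompt += f"Observation: {observation}\n"
    return "no answer within step budget"

print(react_loop("How many people are near camera cam01?"))
```

In VIoTGPT's setting, the llm() stub would presumably be replaced by the ReAct-tuned model and the registry populated with the 11 vision tools covered by VIoT-Tool; the loop structure itself is the generic ReAct pattern, not a detail confirmed by the abstract.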
ISSN:2331-8422
Source: Engineering Database