A Vanilla Multi-Task Framework for Dense Visual Prediction Solution to 1st VCL Challenge -- Multi-Task Robustness Track

Bibliographic Details
Published in: arXiv.org (Feb 27, 2024)
Main author: Chen, Zehui
Other authors: Wang, Qiuchen; Li, Zhenyu; Liu, Jiaming; Zhang, Shanghang; Zhao, Feng
Published: Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract; full text outside of ProQuest
Description
Abstract: In this report, we present our solution to the multi-task robustness track of the 1st Visual Continual Learning (VCL) Challenge at the ICCV 2023 Workshop. We propose a vanilla framework named UniNet that seamlessly combines several visual perception algorithms into a single multi-task model. Specifically, we choose DETR3D, Mask2Former, and BinsFormer for the 3D object detection, instance segmentation, and depth estimation tasks, respectively. The final submission is a single model with an InternImage-L backbone, and it achieves a 49.6 overall score (29.5 Det mAP, 80.3 mTPS, 46.4 Seg mAP, and 7.93 silog) on the SHIFT validation set. In addition, we share some interesting observations from our experiments that may facilitate the development of multi-task learning in dense visual prediction.
ISSN: 2331-8422
Source: Engineering Database
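
The abstract describes a shared-backbone design: one backbone (InternImage-L in the paper) feeds three task-specific heads (DETR3D, Mask2Former, BinsFormer). Below is a minimal PyTorch sketch of that pattern only; ToyBackbone, UniNetSketch, and the 1x1-convolution heads are illustrative stand-ins invented for this example, not the paper's actual modules.

    # Minimal sketch of a shared-backbone multi-task model (PyTorch).
    # ASSUMPTION: a toy CNN stands in for InternImage-L, and 1x1 convs
    # stand in for the DETR3D / Mask2Former / BinsFormer heads.
    import torch
    import torch.nn as nn

    class ToyBackbone(nn.Module):
        """Stand-in for the shared backbone: returns one feature map."""
        def __init__(self, out_channels: int = 64):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, out_channels, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.stem(x)

    class UniNetSketch(nn.Module):
        """One backbone shared by three task-specific heads."""
        def __init__(self, channels: int = 64, num_classes: int = 10):
            super().__init__()
            self.backbone = ToyBackbone(channels)
            # Placeholder heads for 3D detection, instance
            # segmentation, and depth estimation, respectively.
            self.det_head = nn.Conv2d(channels, num_classes, 1)
            self.seg_head = nn.Conv2d(channels, num_classes, 1)
            self.depth_head = nn.Conv2d(channels, 1, 1)

        def forward(self, x: torch.Tensor) -> dict:
            # Features are computed once and reused by every head,
            # which is the point of the multi-task setup.
            feats = self.backbone(x)
            return {
                "det": self.det_head(feats),
                "seg": self.seg_head(feats),
                "depth": self.depth_head(feats),
            }

    if __name__ == "__main__":
        model = UniNetSketch()
        outputs = model(torch.randn(1, 3, 128, 128))
        for task, tensor in outputs.items():
            print(task, tuple(tensor.shape))

Running the sketch prints one output shape per task from a single forward pass, illustrating how a single model can serve all three dense prediction tasks at once.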