Learning-based 6 DOF Camera Pose Estimation Using BIM-generated Virtual Scene for Facility Management

Bibliographic Details
Published in: ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 42 (2025), p. 42-49
Main Author: Le, Thai-Hoa
Other Authors: Chang, Ju-Chi; Hsu, Wei-Yi; Lin, Tzu-Yang; Chang, Ting-wei; Lin, Jacob J
Published: IAARC Publications
Subjects: Cameras; Deep learning; Pose estimation; Virtual reality; Localization; Facilities management; Color imagery; Synthetic data
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3240508115
003 UK-CbPIL
035 |a 3240508115 
045 2 |b d20250101  |b d20251231 
084 |a 180234  |2 nlm 
100 1 |a Le, Thai-Hoa  |u Department of Civil Engineering, National Taiwan University, Taiwan 
245 1 |a Learning-based 6 DOF Camera Pose Estimation Using BIM-generated Virtual Scene for Facility Management 
260 |b IAARC Publications  |c 2025 
513 |a Journal Article 
520 3 |a Image-based indoor localization is a promising approach to enhancing facility management efficiency. However, ensuring localization accuracy and improving data accessibility remain key challenges. Therefore, this research aims to automatically localize images captured during facility inspections by matching the camera's viewpoint with a corresponding viewpoint in a Building Information Modeling (BIM)-based simulated environment. In this paper, we present a framework that generates photorealistic synthetic images and trains a deep learning model for camera pose estimation. Synthetic datasets are generated in a simulation environment, allowing precise control over scene parameters, camera positions, and lighting conditions. This allows the creation of diverse and realistic training data tailored to specific facility environments. The deep learning model takes RGB images, semantic segmentation maps, and corresponding camera poses as inputs to predict six-degree-of-freedom (6DOF) camera poses, including position and orientation. Experimental results demonstrate that the proposed approach can enable indoor image localization with an average translation error of 5.8 meters and a rotation error of 69.05 degrees. 
653 |a Cameras 
653 |a Deep learning 
653 |a Pose estimation 
653 |a Virtual reality 
653 |a Localization 
653 |a Facilities management 
653 |a Color imagery 
653 |a Synthetic data 
700 1 |a Chang, Ju-Chi  |u Department of Civil Engineering, National Taiwan University, Taiwan 
700 1 |a Hsu, Wei-Yi  |u Department of Civil Engineering, National Taiwan University, Taiwan 
700 1 |a Lin, Tzu-Yang  |u Lab for Service Robot Systems, Delta Research Center, Taiwan 
700 1 |a Chang, Ting-wei  |u Lab for Service Robot Systems, Delta Research Center, Taiwan 
700 1 |a Lin, Jacob J 
773 0 |t ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction  |g vol. 42 (2025), p. 42-49 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3240508115/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3240508115/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch
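
Note on the abstract: the record above describes a model that regresses a 6DOF camera pose (position and orientation) from an RGB image paired with a semantic segmentation map, trained on BIM-generated synthetic data. The paper's actual architecture is not given in this record; what follows is a minimal illustrative sketch, in PyTorch, of one common way such a regressor is built (a PoseNet-style network with a quaternion orientation head). Every name and design choice here, including the ResNet-18 backbone, the 4-channel RGB-plus-segmentation input, and the loss weighting beta, is an assumption for illustration, not the authors' method.

# A minimal sketch of a PoseNet-style 6DOF pose regressor, assuming an
# RGB image concatenated with a one-channel segmentation map as input.
# All design choices below are illustrative, not the paper's architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class PoseRegressor(nn.Module):
    """Regress a 6DOF camera pose (xyz translation + unit quaternion)
    from a 4-channel input (3 RGB channels + 1 segmentation channel)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Widen the first conv layer to accept 4 input channels.
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        self.fc_xyz = nn.Linear(512, 3)   # translation head
        self.fc_quat = nn.Linear(512, 4)  # orientation head (quaternion)

    def forward(self, x):
        feat = self.backbone(x)
        xyz = self.fc_xyz(feat)
        quat = self.fc_quat(feat)
        # Keep the predicted quaternion on the unit sphere.
        quat = quat / quat.norm(dim=1, keepdim=True).clamp(min=1e-8)
        return xyz, quat

def pose_loss(pred_xyz, pred_quat, gt_xyz, gt_quat, beta=100.0):
    """PoseNet-style weighted loss: translation error plus a scaled
    orientation term; beta balances the two error magnitudes."""
    t_err = torch.norm(pred_xyz - gt_xyz, dim=1).mean()
    r_err = torch.norm(pred_quat - gt_quat, dim=1).mean()
    return t_err + beta * r_err

# Usage on a dummy batch of two 4-channel 224x224 inputs.
model = PoseRegressor()
xyz, quat = model(torch.randn(2, 4, 224, 224))
print(xyz.shape, quat.shape)  # torch.Size([2, 3]) torch.Size([2, 4])

The weighted loss follows the widely used PoseNet formulation, where beta trades off translation against rotation accuracy during training; the paper itself may use a different loss or network, which this record does not specify.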