FFMT: Unsupervised RGB-D Point Cloud Registration via Fusion Feature Matching with Transformer

Bibliographic Details
Published in: Applied Sciences vol. 15, no. 5 (2025), p. 2472
Main Author: Qiu, Jiacun
Other Authors: Han, Zhenqi; Liu, Lizhaung; Zhang, Jialu
Published: MDPI AG
Description
Abstract: Point cloud registration is a fundamental problem in computer vision and 3D computing, aiming to align point cloud data from different sensors or viewpoints into a unified coordinate system. In recent years, the rapid development of RGB-D sensor technology has greatly facilitated the acquisition of RGB-D data. Previous unsupervised point cloud registration methods based on RGB-D data have often overemphasized the matching of local features while overlooking the potential value of global information, limiting improvements in registration performance. To address this issue, this paper proposes a self-attention-based global information attention module, which learns the global context of fused RGB-D features and effectively integrates global information into each individual feature. Furthermore, this paper introduces alternating self-attention and cross-attention layers, enabling the final feature fusion to achieve a broader global receptive field and thereby facilitating more precise matching relationships. We conduct extensive experiments on the ScanNet and 3DMatch datasets, and the results show that, compared to previous state-of-the-art methods, our approach reduces the average rotation error by 26.9% and 32% on ScanNet and 3DMatch, respectively. Our method also achieves state-of-the-art performance on other key metrics.
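The alternating self- and cross-attention scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it omits learned query/key/value projections, multi-head splitting, and normalization layers, and the function names (`attention`, `alternating_attention`) and layer count are illustrative assumptions. It only shows the core idea that each point feature first attends within its own cloud (self-attention) and then to the other cloud (cross-attention), so features accumulate global context from both clouds.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def alternating_attention(feat_src, feat_tgt, num_layers=2):
    """Alternate self-attention (within each cloud) and cross-attention
    (between the two clouds), with residual connections, so every feature
    gains a global receptive field over both point clouds."""
    x, y = feat_src, feat_tgt
    for _ in range(num_layers):
        # self-attention: each point attends to all points of its own cloud
        x = x + attention(x, x, x)
        y = y + attention(y, y, y)
        # cross-attention: each point attends to all points of the other cloud
        x, y = x + attention(x, y, y), y + attention(y, x, x)
    return x, y

# toy usage: two clouds with 5 and 7 points, 8-dim features
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 8))
tgt = rng.normal(size=(7, 8))
out_src, out_tgt = alternating_attention(src, tgt)
```

The per-cloud feature shapes are preserved throughout, so the refined features can feed directly into a downstream matching step.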
ISSN:2076-3417
DOI:10.3390/app15052472
Source: Publicly Available Content Database