Personalized FedM2former: An Innovative Approach Towards Federated Multi-Modal 3D Object Detection for Autonomous Driving

Bibliographic Details
Published in: Processes, vol. 13, no. 2 (2025), p. 449
Main Author: Zhao, Liang
Other Authors: Li, Xuan; Jia, Xin; Fu, Lulu
Publisher: MDPI AG
Description
Abstract: With the swift evolution of artificial intelligence in the automotive sector, autonomous driving has emerged as a pivotal research frontier for automotive manufacturers. Environmental perception, the cornerstone of autonomous driving, demands innovative solutions to the challenges posed by data sensitivity during vehicle operation. To this end, federated learning (FL) is a promising paradigm, balancing data privacy preservation with performance optimization for perception tasks. In this paper, we pioneer the integration of FL into 3D object detection, presenting personalized FedM2former, a novel multi-modal framework tailored for autonomous driving. The framework aims to raise the accuracy and robustness of 3D object detection while mitigating concerns over data sensitivity. Recognizing the heterogeneity inherent in user data, we introduce a personalization strategy that applies stochastic gradient descent optimization prior to local training, ensuring the global model’s adaptability and generalization across diverse user vehicles. Furthermore, to address the sparsity of point cloud data, we redesign the attention layer of our detection model: the proposed balanced window attention mechanism processes point cloud and image data in parallel within each window, significantly improving model efficiency and performance. Extensive experiments on benchmark datasets, including nuScenes, ONCE, and Waymo, demonstrate the efficacy of our approach. Notably, we achieve state-of-the-art results: 71.2% test mAP and 73.6% NDS on nuScenes, 67.14% test mAP on ONCE, and 83.9% test mAP and 81.8% test mAPH on Waymo. These outcomes underscore the ability of our method to improve object detection performance and speed while safeguarding privacy and data security, positioning personalized FedM2former as a significant advancement in the autonomous driving landscape.
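The abstract describes a federated round in which each client first personalizes the received global model with a few stochastic gradient descent steps on its own data before regular local training, and the server then averages the client models. The paper's actual architecture and update rule are not given in this record; the following is a minimal sketch of that round structure only, using a toy linear least-squares model, NumPy, and FedAvg-style averaging. All function names (`personalize`, `fed_round`, `local_grad`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_grad(w, X, y):
    # Gradient of mean-squared error for a linear model: (2/n) * X^T (Xw - y).
    n = len(y)
    return (2.0 / n) * X.T @ (X @ w - y)

def sgd_step(w, grad, lr):
    # One plain gradient-descent update.
    return w - lr * grad

def personalize(global_w, X, y, lr=0.01, steps=5):
    # A few SGD steps on the client's OWN data before local training,
    # adapting the shared global model to this client's distribution
    # (illustrative stand-in for the paper's personalization strategy).
    w = global_w.copy()
    for _ in range(steps):
        w = sgd_step(w, local_grad(w, X, y), lr)
    return w

def fed_round(global_w, clients, lr=0.01, local_steps=20):
    # One federated round: personalize, train locally, then average
    # the client models (FedAvg-style aggregation, assumed here).
    updates = []
    for X, y in clients:
        w = personalize(global_w, X, y, lr)
        for _ in range(local_steps):
            w = sgd_step(w, local_grad(w, X, y), lr)
        updates.append(w)
    return np.mean(updates, axis=0)
```

In this sketch the personalization phase is just extra local SGD from the global weights; in a personalized FL system it could instead update only a client-specific subset of parameters, which is one common design choice.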
ISSN:2227-9717
DOI:10.3390/pr13020449
Source: Materials Science Database