Hierarchical Feature Fusion and Enhanced Attention Mechanism for Robust GAN-Generated Image Detection

Bibliographic Details
Published in: Mathematics vol. 13, no. 9 (2025), p. 1372
Main Author: Zhang, Weinan
Other Authors: Cui, Sanshuai; Zhang, Qi; Chen, Biwei; Zeng, Hui; Zhong, Qi
Published: MDPI AG
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF
Description
Abstract: In recent years, with the rapid advancement of deep learning technologies such as generative adversarial networks (GANs), deepfake technology has become increasingly sophisticated. As a result, the generated fake images are becoming more difficult to visually distinguish from real ones. Existing deepfake detection methods primarily rely on training models with specific datasets. However, these models often suffer from limited generalization when processing images of unknown origin or across domains, leading to a significant decrease in detection accuracy. To address this issue, this paper proposes a deepfake image detection network based on feature aggregation and enhancement. The key innovation of the proposed method lies in the integration of two modules: the Feature Aggregation Module (FAM) and the Attention Enhancement Module (AEM). The FAM effectively aggregates both deep semantic information and shallow detail features through a multi-scale feature-fusion mechanism, overcoming the limitations of traditional methods that rely on single-level features. Meanwhile, the AEM enhances the network's ability to capture subtle forgery traces by incorporating attention mechanisms and filtering techniques, significantly boosting the model's efficiency in processing complex information. The experimental results demonstrate that the proposed method achieves significant improvements across all evaluation metrics. Specifically, on the StarGAN dataset, the model attained outstanding performance, with accuracy (Acc) and average precision (AP) both reaching 100%. In cross-dataset testing, the proposed method exhibited strong generalization ability, raising the overall average accuracy to 87.0% and average precision to 92.8%, representing improvements of 5.2% and 6.7%, respectively, compared to existing state-of-the-art methods.
These results show that the proposed method can not only achieve optimal performance on data with the same distribution, but also demonstrate strong generalization ability in cross-domain detection tasks.
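The abstract's two modules can be illustrated schematically. The sketch below is a minimal, hypothetical illustration only (the paper's actual architecture, layer shapes, and attention design are not given in this record): multi-scale fusion is shown as upsampling a deep feature map and concatenating it with a shallow one, and attention-based enhancement as a simple channel reweighting via global average pooling and a softmax. All function and variable names are assumptions for illustration.

```python
import numpy as np

def fuse_multiscale(deep, shallow):
    """Hypothetical FAM-style fusion sketch: upsample the deep semantic
    map to the shallow map's spatial size, then concatenate along the
    channel axis. Shapes: deep (C1, H/2, W/2), shallow (C2, H, W)."""
    up = deep.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbor 2x upsample
    return np.concatenate([up, shallow], axis=0)   # -> (C1 + C2, H, W)

def channel_attention(feat):
    """Hypothetical AEM-style reweighting sketch: squeeze each channel
    with global average pooling, normalize with a softmax, and rescale
    the feature map so informative channels are emphasized."""
    scores = feat.mean(axis=(1, 2))                     # (C,) channel descriptors
    weights = np.exp(scores) / np.exp(scores).sum()     # softmax over channels
    return feat * weights[:, None, None]                # broadcast per channel

# Toy example: an 8-channel deep map at half resolution fused with a
# 4-channel shallow map, then reweighted by channel attention.
deep = np.random.rand(8, 16, 16)
shallow = np.random.rand(4, 32, 32)
fused = channel_attention(fuse_multiscale(deep, shallow))
print(fused.shape)  # (12, 32, 32)
```

The fused map carries both resolutions' information in one tensor, which is the general idea behind combining deep semantic and shallow detail features rather than relying on a single feature level.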
ISSN:2227-7390
DOI:10.3390/math13091372
Source: Engineering Database