Hierarchical Feature Fusion and Enhanced Attention Mechanism for Robust GAN-Generated Image Detection

Saved in:
Bibliographic data
Published in: Mathematics vol. 13, no. 9 (2025), p. 1372
Main author: Zhang, Weinan
Other authors: Cui, Sanshuai; Zhang, Qi; Chen, Biwei; Zeng, Hui; Zhong, Qi
Published: MDPI AG
Subjects:
Online access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3203209798
003 UK-CbPIL
022 |a 2227-7390 
024 7 |a 10.3390/math13091372  |2 doi 
035 |a 3203209798 
045 2 |b d20250101  |b d20251231 
084 |a 231533  |2 nlm 
100 1 |a Zhang, Weinan  |u Faculty of Data Science, City University of Macau, Macau SAR, China; d23091110387@cityu.edu.mo (W.Z.); qizhang@cityu.edu.mo (Q.Z.); qizhong@cityu.edu.mo (Q.Z.) 
245 1 |a Hierarchical Feature Fusion and Enhanced Attention Mechanism for Robust GAN-Generated Image Detection 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a In recent years, with the rapid advancement of deep learning technologies such as generative adversarial networks (GANs), deepfake technology has become increasingly sophisticated. As a result, the generated fake images are becoming more difficult to visually distinguish from real ones. Existing deepfake detection methods primarily rely on training models with specific datasets. However, these models often suffer from limited generalization when processing images of unknown origin or across domains, leading to a significant decrease in detection accuracy. To address this issue, this paper proposes a deepfake image-detection network based on feature aggregation and enhancement. The key innovation of the proposed method lies in the integration of two modules: the Feature Aggregation Module (FAM) and the Attention Enhancement Module (AEM). The FAM effectively aggregates both deep semantic information and shallow detail features through a multi-scale feature-fusion mechanism, overcoming the limitations of traditional methods that rely on a single-level feature. Meanwhile, the AEM enhances the network’s ability to capture subtle forgery traces by incorporating attention mechanisms and filtering techniques, significantly boosting the model’s efficiency in processing complex information. The experimental results demonstrate that the proposed method achieves significant improvements across all evaluation metrics. Specifically, on the StarGAN dataset, the model attained outstanding performance, with accuracy (Acc) and average precision (AP) both reaching 100%. In cross-dataset testing, the proposed method exhibited strong generalization ability, raising the overall average accuracy to 87.0% and average precision to 92.8%, representing improvements of 5.2% and 6.7%, respectively, compared to existing state-of-the-art methods. These results show that the proposed method can not only achieve optimal performance on data with the same distribution, but also demonstrate strong generalization ability in cross-domain detection tasks. 
653 |a Accuracy 
653 |a Forgery 
653 |a Image manipulation 
653 |a Datasets 
653 |a Deepfake 
653 |a Image detection 
653 |a Deception 
653 |a Generative adversarial networks 
653 |a Decomposition 
653 |a Attention 
653 |a Diffusion models 
653 |a Modules 
653 |a Image 
700 1 |a Cui, Sanshuai  |u Faculty of Data Science, City University of Macau, Macau SAR, China; d23091110387@cityu.edu.mo (W.Z.); qizhang@cityu.edu.mo (Q.Z.); qizhong@cityu.edu.mo (Q.Z.) 
700 1 |a Zhang, Qi  |u Faculty of Data Science, City University of Macau, Macau SAR, China; d23091110387@cityu.edu.mo (W.Z.); qizhang@cityu.edu.mo (Q.Z.); qizhong@cityu.edu.mo (Q.Z.) 
700 1 |a Chen, Biwei  |u Belt and Road School, Beijing Normal University at Zhuhai, Zhuhai 519088, China; bchen@bnu.edu.cn 
700 1 |a Zeng, Hui  |u School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China 
700 1 |a Zhong, Qi  |u Faculty of Data Science, City University of Macau, Macau SAR, China; d23091110387@cityu.edu.mo (W.Z.); qizhang@cityu.edu.mo (Q.Z.); qizhong@cityu.edu.mo (Q.Z.) 
773 0 |t Mathematics  |g vol. 13, no. 9 (2025), p. 1372 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3203209798/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3203209798/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3203209798/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
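
The abstract (field 520) describes two components: a Feature Aggregation Module (FAM) that fuses deep semantic and shallow detail feature maps at multiple scales, and an Attention Enhancement Module (AEM) that reweights the fused features to highlight subtle forgery traces. The paper's actual architectures are not given in this record; the following is only an illustrative sketch of those two ideas, with all function names, the nearest-neighbour upsampling, and the softmax channel weighting being assumptions rather than the authors' design:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def feature_aggregation(shallow, deep):
    """FAM-style aggregation (sketch): bring the coarse, semantic map
    up to the shallow map's resolution and concatenate along the
    channel axis, so both detail and semantics survive the fusion."""
    deep_up = upsample2x(deep)
    return np.concatenate([shallow, deep_up], axis=0)

def attention_enhancement(fused):
    """AEM-style channel attention (sketch): weight each channel by a
    softmax over its global-average activation, amplifying channels
    with stronger responses (e.g. potential forgery traces)."""
    gap = fused.mean(axis=(1, 2))          # (C,) global average pool
    w = np.exp(gap - gap.max())
    w = w / w.sum()                        # softmax channel weights
    return fused * w[:, None, None]

rng = np.random.default_rng(0)
shallow = rng.standard_normal((16, 8, 8))  # fine detail features
deep = rng.standard_normal((32, 4, 4))     # coarse semantic features

fused = feature_aggregation(shallow, deep)
enhanced = attention_enhancement(fused)
print(fused.shape)     # (48, 8, 8)
print(enhanced.shape)  # (48, 8, 8)
```

In a real detector these operations would act on convolutional feature maps from several backbone stages, with learned projection and attention weights instead of the fixed pooling-and-softmax used here.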