Swin Transformer for Robust Differentiation of Real and Synthetic Images: Intra- and Inter-Dataset Analysis

Bibliographic Details
Published in: arXiv.org (Sep 7, 2024), p. n/a
Main Author: Mehta, Preetu
Other Authors: Sagar, Aman; Kumari, Suchi
Published: Cornell University Library, arXiv.org
Subjects: Differentiation; Image classification; Digital imaging; Computer-generated imagery; Datasets; Performance evaluation; Digital computers; Color imagery; Transformers; Robustness; Synthetic data
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3102583557
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3102583557 
045 0 |b d20240907 
100 1 |a Mehta, Preetu 
245 1 |a Swin Transformer for Robust Differentiation of Real and Synthetic Images: Intra- and Inter-Dataset Analysis 
260 |b Cornell University Library, arXiv.org  |c Sep 7, 2024 
513 |a Working Paper 
520 3 |a Purpose: This study aims to address the growing challenge of distinguishing computer-generated imagery (CGI) from authentic digital images in the RGB color space. Given the limitations of existing classification methods in handling the complexity and variability of CGI, this research proposes a Swin Transformer-based model for accurate differentiation between natural and synthetic images. Methods: The proposed model leverages the Swin Transformer's hierarchical architecture to capture local and global features crucial for distinguishing CGI from natural images. The model's performance was evaluated through intra-dataset and inter-dataset testing across three distinct datasets: CiFAKE, JSSSTU, and Columbia. The datasets were tested individually (D1, D2, D3) and in combination (D1+D2+D3) to assess the model's robustness and domain generalization capabilities. Results: The Swin Transformer-based model demonstrated high accuracy, consistently achieving a range of 97-99% across all datasets and testing scenarios. These results confirm the model's effectiveness in detecting CGI, showcasing its robustness and reliability in both intra-dataset and inter-dataset evaluations. Conclusion: The findings of this study highlight the Swin Transformer model's potential as an advanced tool for digital image forensics, particularly in distinguishing CGI from natural images. The model's strong performance across multiple datasets indicates its capability for domain generalization, making it a valuable asset in scenarios requiring precise and reliable image classification. 
653 |a Differentiation 
653 |a Image classification 
653 |a Digital imaging 
653 |a Computer-generated imagery 
653 |a Datasets 
653 |a Performance evaluation 
653 |a Digital computers 
653 |a Color imagery 
653 |a Transformers 
653 |a Robustness 
653 |a Synthetic data 
700 1 |a Sagar, Aman 
700 1 |a Kumari, Suchi 
773 0 |t arXiv.org  |g (Sep 7, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3102583557/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2409.04734
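
Illustrative sketch: The abstract describes fine-tuning a Swin Transformer as a binary classifier (natural vs. CGI) and evaluating it both within and across the CiFAKE, JSSSTU, and Columbia datasets. The Python code below is a minimal, hypothetical sketch of such a setup using PyTorch, torchvision, and the timm library; this record does not include the authors' code, so the backbone variant (swin_tiny_patch4_window7_224), image size, optimizer, epoch count, and folder layout are all assumptions made for illustration.

# Minimal sketch (not the authors' code): fine-tune a pretrained Swin Transformer
# for binary real-vs-CGI classification, as described in the abstract.
# Assumes timm, torch, and torchvision are installed and that each dataset
# (e.g. CiFAKE) is laid out as <root>/<split>/{real,cgi}/ image folders.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Swin expects fixed-size RGB inputs; 224x224 matches the chosen backbone variant.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset roots; D1/D2/D3 in the abstract correspond to CiFAKE, JSSSTU, Columbia.
train_set = datasets.ImageFolder("data/cifake/train", transform=tfm)
test_set = datasets.ImageFolder("data/columbia/test", transform=tfm)  # inter-dataset test split
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
test_loader = DataLoader(test_set, batch_size=32)

# Pretrained hierarchical Swin backbone with a fresh 2-way classification head.
model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=2).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def run_epoch(loader, train=True):
    """Run one pass over the loader; returns accuracy. Updates weights only when train=True."""
    model.train(train)
    correct, total = 0, 0
    with torch.set_grad_enabled(train):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            if train:
                loss = criterion(logits, labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return correct / total

for epoch in range(5):  # epoch count is an assumption
    train_acc = run_epoch(train_loader, train=True)
    test_acc = run_epoch(test_loader, train=False)
    print(f"epoch {epoch}: train acc {train_acc:.3f}, inter-dataset test acc {test_acc:.3f}")

Intra-dataset evaluation would point both loaders at splits of the same dataset; inter-dataset evaluation, as sketched above, trains on one dataset (e.g. CiFAKE) and tests on another (e.g. Columbia) to probe the domain generalization the abstract reports.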