Trans-cVAE-GAN: Transformer-Based cVAE-GAN for High-Fidelity EEG Signal Generation

Bibliographic Details
Published in: Bioengineering vol. 12, no. 10 (2025), p. 1028-1068
Main author: Yao, Yiduo
Other authors: Wang, Xiao; Hao, Xudong; Sun, Hongyu; Dong, Ruixin; Li, Yansheng
Published:
MDPI AG
Subjects:
Online access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3265830712
003 UK-CbPIL
022 |a 2306-5354 
024 7 |a 10.3390/bioengineering12101028  |2 doi 
035 |a 3265830712 
045 2 |b d20250101  |b d20251231 
100 1 |a Yao, Yiduo 
245 1 |a Trans-cVAE-GAN: Transformer-Based cVAE-GAN for High-Fidelity EEG Signal Generation 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Electroencephalography (EEG) signal generation remains a challenging task due to its non-stationarity, multi-scale oscillations, and strong spatiotemporal coupling. Conventional generative models, including VAEs and GAN variants such as DCGAN, WGAN, and WGAN-GP, often yield blurred waveforms, unstable spectral distributions, or lack semantic controllability, limiting their effectiveness in emotion-related applications. To address these challenges, this research proposes a Transformer-based conditional variational autoencoder–generative adversarial network (Trans-cVAE-GAN) that combines Transformer-driven temporal modeling, label-conditioned latent inference, and adversarial learning. A multi-dimensional structural loss further constrains generation by preserving temporal correlation, frequency-domain consistency, and statistical distribution. Experiments on three SEED-family datasets—SEED, SEED-FRA, and SEED-GER—demonstrate high similarity to real EEG, with representative mean ± SD correlations of Pearson ≈ 0.84 ± 0.08/0.74 ± 0.12/0.84 ± 0.07 and Spearman ≈ 0.82 ± 0.07/0.72 ± 0.12/0.83 ± 0.08, together with low spectral divergence (KL ≈ 0.39 ± 0.15/0.41 ± 0.20/0.37 ± 0.18). Comparative analyses show consistent gains over classical GAN baselines, while ablations verify the indispensable roles of the Transformer encoder, label conditioning, and cVAE module. In downstream emotion recognition, augmentation with generated EEG raises accuracy from 86.9% to 91.8% on SEED (with analogous gains on SEED-FRA and SEED-GER), underscoring enhanced generalization and robustness. These results confirm that the proposed approach simultaneously ensures fidelity, stability, and controllability across cohorts, offering a scalable solution for affective computing and brain–computer interface applications. 
653 |a Brain 
653 |a Oscillations 
653 |a Comparative analysis 
653 |a Labels 
653 |a Waveforms 
653 |a Wavelet transforms 
653 |a Affective computing 
653 |a Ablation 
653 |a Biochips 
653 |a Generative adversarial networks 
653 |a Implants 
653 |a Measurement techniques 
653 |a Electroencephalography 
653 |a Computer applications 
653 |a Control stability 
653 |a Machine learning 
653 |a Time series 
653 |a Realism 
653 |a Emotions 
653 |a Conditioning 
653 |a Signal generation 
653 |a Human-computer interface 
653 |a Fourier transforms 
653 |a Emotion recognition 
653 |a EEG 
653 |a Neural networks 
653 |a Design 
653 |a Controllability 
653 |a Semantics 
700 1 |a Wang, Xiao 
700 1 |a Hao, Xudong 
700 1 |a Sun, Hongyu 
700 1 |a Dong, Ruixin 
700 1 |a Li, Yansheng 
773 0 |t Bioengineering  |g vol. 12, no. 10 (2025), p. 1028-1068 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3265830712/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3265830712/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3265830712/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
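
Note on the abstract's "multi-dimensional structural loss" (temporal correlation, frequency-domain consistency, and statistical distribution): the following is a minimal sketch of how such a loss could be composed. It assumes PyTorch tensors of shape (batch, channels, time); the function name structural_loss, the per-term formulations, and the weights w_time/w_freq/w_stat are illustrative assumptions, as this record does not reproduce the paper's exact equations.

    import torch
    import torch.nn.functional as F

    def structural_loss(real, fake, w_time=1.0, w_freq=1.0, w_stat=1.0):
        # real, fake: (batch, channels, time) EEG tensors.
        # 1) Temporal correlation: penalize low Pearson correlation
        #    between real and generated waveforms, per channel.
        rc = real - real.mean(dim=-1, keepdim=True)
        fc = fake - fake.mean(dim=-1, keepdim=True)
        pearson = (rc * fc).sum(dim=-1) / (rc.norm(dim=-1) * fc.norm(dim=-1) + 1e-8)
        loss_time = (1.0 - pearson).mean()

        # 2) Frequency-domain consistency: match log power spectra,
        #    computed with a real FFT along the time axis.
        real_psd = torch.fft.rfft(real, dim=-1).abs().pow(2)
        fake_psd = torch.fft.rfft(fake, dim=-1).abs().pow(2)
        loss_freq = F.mse_loss(torch.log1p(fake_psd), torch.log1p(real_psd))

        # 3) Statistical distribution: match first and second moments
        #    (per-channel mean and standard deviation).
        loss_stat = (F.mse_loss(fake.mean(dim=-1), real.mean(dim=-1))
                     + F.mse_loss(fake.std(dim=-1), real.std(dim=-1)))

        return w_time * loss_time + w_freq * loss_freq + w_stat * loss_stat

    # Example with hypothetical shapes: 62-channel EEG segments of 200 samples.
    real_eeg = torch.randn(8, 62, 200)
    fake_eeg = torch.randn(8, 62, 200)
    print(structural_loss(real_eeg, fake_eeg))

In a full cVAE-GAN objective this term would be added to the adversarial and KL-divergence losses; the log1p compression of the power spectra keeps the frequency term comparable in scale across EEG bands.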