Attention-Driven Time-Domain Convolutional Network for Source Separation of Vocal and Accompaniment
| Published in: | Electronics, vol. 14, no. 20 (2025), p. 3982-4009 |
|---|---|
| Published: | MDPI AG |
| Online access: | Citation/Abstract; Full Text + Graphics; Full Text - PDF |
| Abstract: | Time-domain signal models have been widely applied to single-channel music source separation tasks due to their ability to overcome the limitations of fixed spectral representations and phase information loss. However, the high acoustic similarity and synchronous temporal evolution between vocals and accompaniment make accurate separation challenging for existing time-domain models. These challenges manifest in two main aspects: (1) the lack of a dynamic mechanism to evaluate the contribution of each source during feature fusion, and (2) difficulty in capturing fine-grained temporal details, often resulting in local artifacts in the output. To address these issues, we propose an attention-driven time-domain convolutional network for vocal and accompaniment source separation. Specifically, we design an embedding attention module to perform adaptive source weighting, enabling the network to emphasize components more relevant to the target mask during training. In addition, an efficient convolutional block attention module is developed to enhance local feature extraction. This module integrates an efficient channel attention mechanism based on one-dimensional convolution while preserving spatial attention, thereby improving the ability to learn discriminative features from the target audio. Comprehensive evaluations on public music datasets demonstrate the effectiveness of the proposed model and its significant improvements over existing approaches. |
| ISSN: | 2079-9292 |
| DOI: | 10.3390/electronics14203982 |
| Source: | Advanced Technologies & Aerospace Database |
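
The abstract describes two attention components: an embedding attention module that adaptively weights each source during feature fusion, and an efficient convolutional block attention module pairing ECA-style channel attention (a one-dimensional convolution over globally pooled channel descriptors, with no fully connected layers) with CBAM-style spatial attention. The paper's actual implementation is not reproduced in this record; the sketch below is a minimal PyTorch illustration of how such modules are commonly built for time-domain features of shape (batch, channels, time). The module names, kernel sizes, pooling choices, and the softmax fusion in `EmbeddingAttention` are all assumptions made for illustration.

```python
# Minimal sketch of the two attention modules described in the abstract,
# written for 1-D (time-domain) features of shape (batch, channels, time).
# Names, kernel sizes, and the fusion scheme are illustrative assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn


class ECABlockAttention(nn.Module):
    """Efficient CBAM variant: ECA-style channel attention (1-D conv over
    pooled channel descriptors) followed by spatial attention applied
    along the time axis."""

    def __init__(self, eca_kernel: int = 5, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: 1-D conv across the channel dimension.
        self.eca_conv = nn.Conv1d(1, 1, eca_kernel,
                                  padding=eca_kernel // 2, bias=False)
        # Spatial (temporal) attention over stacked mean/max channel maps.
        self.spatial_conv = nn.Conv1d(2, 1, spatial_kernel,
                                      padding=spatial_kernel // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        # --- channel attention ---
        y = x.mean(dim=-1, keepdim=True)         # (B, C, 1) global avg pool
        y = self.eca_conv(y.transpose(1, 2))     # conv over channels: (B, 1, C)
        w_ch = self.sigmoid(y.transpose(1, 2))   # (B, C, 1)
        x = x * w_ch
        # --- spatial (temporal) attention ---
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)  # (B, 2, T)
        w_sp = self.sigmoid(self.spatial_conv(s))                  # (B, 1, T)
        return x * w_sp


class EmbeddingAttention(nn.Module):
    """One possible reading of 'adaptive source weighting': pool each
    source's feature map into an embedding, score it with a small MLP,
    and softmax the scores across sources so fusion emphasizes the
    source most relevant to the target mask."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (B, S, C, T)
        emb = feats.mean(dim=-1)                    # (B, S, C) per-source embedding
        w = torch.softmax(self.score(emb), dim=1)   # (B, S, 1) source weights
        return (feats * w.unsqueeze(-1)).sum(dim=1)  # fused features (B, C, T)


if __name__ == "__main__":
    x = torch.randn(2, 128, 1000)            # (batch, channels, time)
    print(ECABlockAttention()(x).shape)      # torch.Size([2, 128, 1000])
    f = torch.randn(2, 2, 128, 1000)         # two sources: vocal, accompaniment
    print(EmbeddingAttention(128)(f).shape)  # torch.Size([2, 128, 1000])
```

The ECA-style path keeps the parameter count nearly constant in the channel width (one small 1-D kernel instead of fully connected bottleneck layers), which is the efficiency argument behind replacing CBAM's channel branch while leaving its spatial branch intact.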