Attention-Driven Time-Domain Convolutional Network for Source Separation of Vocal and Accompaniment
| Published: | Electronics vol. 14, no. 20 (2025), p. 3982-4009 |
|---|---|
| Publisher: | MDPI AG |
| Abstract: | Time-domain signal models have been widely applied to single-channel music source separation because they avoid the limitations of fixed spectral representations and the loss of phase information. However, the high acoustic similarity and synchronous temporal evolution of vocals and accompaniment make accurate separation difficult for existing time-domain models. The difficulty shows up in two ways: (1) the lack of a dynamic mechanism to evaluate each source's contribution during feature fusion, and (2) trouble capturing fine-grained temporal detail, which often produces local artifacts in the output. To address these issues, we propose an attention-driven time-domain convolutional network for vocal and accompaniment source separation. Specifically, we design an embedding attention module that performs adaptive source weighting, enabling the network to emphasize the components most relevant to the target mask during training. In addition, an efficient convolutional block attention module is developed to strengthen local feature extraction; it integrates an efficient channel attention mechanism based on one-dimensional convolution while preserving spatial attention, improving the network's ability to learn discriminative features from the target audio. Comprehensive evaluations on public music datasets demonstrate the effectiveness of the proposed model and its significant improvements over existing approaches. (A minimal sketch of the two attention modules follows this record.) |
| ISSN: | 2079-9292 |
| DOI: | 10.3390/electronics14203982 |
| Source: | Advanced Technologies & Aerospace Database |
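
The abstract describes two attention components only at a high level: an embedding attention module that adaptively weights each source's contribution during feature fusion, and an efficient convolutional block attention module that pairs ECA-style channel attention (a one-dimensional convolution over pooled channel statistics) with spatial attention. The PyTorch sketch below illustrates both ideas under stated assumptions: features are taken to be 1-D maps of shape [batch, channels, time] as in typical time-domain separators, "spatial" attention is applied along the time axis, and all class names, kernel sizes, and the softmax source-weighting gate are hypothetical readings of the abstract, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficientChannelAttention(nn.Module):
    """ECA-style channel attention: a 1-D conv over pooled channel statistics."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):  # x: [B, C, T]
        y = x.mean(dim=-1)                            # global average pool over time -> [B, C]
        y = self.conv(y.unsqueeze(1))                 # convolve across channels -> [B, 1, C]
        return x * torch.sigmoid(y).transpose(1, 2)   # per-channel weights, broadcast over T

class TemporalAttention(nn.Module):
    """CBAM-style spatial attention, adapted here to the time axis of 1-D features."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):  # x: [B, C, T]
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)  # [B, 2, T]
        return x * torch.sigmoid(self.conv(stats))    # per-timestep weights, broadcast over C

class EfficientCBAM(nn.Module):
    """Channel attention (ECA) followed by temporal attention, in CBAM's ordering."""
    def __init__(self):
        super().__init__()
        self.channel = EfficientChannelAttention()
        self.temporal = TemporalAttention()

    def forward(self, x):
        return self.temporal(self.channel(x))

class EmbeddingAttention(nn.Module):
    """Hypothetical adaptive source weighting: score each source's features and
    softmax-normalize the scores before fusing (one plausible reading of the abstract)."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, sources):  # sources: [B, S, C, T], S = number of sources
        pooled = sources.mean(dim=-1)                 # [B, S, C]
        w = F.softmax(self.score(pooled), dim=1)      # [B, S, 1] source weights
        return (sources * w.unsqueeze(-1)).sum(dim=1) # fused features -> [B, C, T]

feats = torch.randn(2, 128, 16000)     # e.g. encoded time-domain features
refined = EfficientCBAM()(feats)       # same shape, attention-re-weighted
# Stack two source branches (duplicated here only for shape illustration) and fuse.
fused = EmbeddingAttention(128)(torch.stack([refined, refined], dim=1))
```

A design note on the sketch: replacing CBAM's fully connected channel bottleneck with a single small 1-D convolution, as ECA does, keeps the channel-attention parameter count nearly constant in the channel width, which matches the abstract's emphasis on an "efficient" channel mechanism that still preserves the spatial (here, temporal) branch.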