Speech Separation based on Contrastive Learning and Deep Modularization

Bibliographic details
Published: arXiv.org (Oct 9, 2024), p. n/a
First author: Ochieng, Peter
Publisher: Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2815836564
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2815836564 
045 0 |b d20241009 
100 1 |a Ochieng, Peter 
245 1 |a Speech Separation based on Contrastive Learning and Deep Modularization 
260 |b Cornell University Library, arXiv.org  |c Oct 9, 2024 
513 |a Working Paper 
520 3 |a The current state-of-the-art tools for monaural speech separation rely on supervised learning. This means they must deal with the permutation problem, and they are affected by any mismatch between the number of speakers used in training and in inference. Moreover, their performance relies heavily on the availability of high-quality labelled data. These problems can be effectively addressed by a fully unsupervised speech-separation technique. In this paper, we use contrastive learning to establish representations of frames, then use the learned representations in a downstream deep modularization task. Concretely, we demonstrate experimentally that, in speech separation, different frames of a speaker can be viewed as augmentations of a hidden standard frame of that speaker. The frames of a speaker share enough overlapping prosodic information, which is key in speech separation. Based on this, we implement self-supervised learning that learns to minimize the distance between frames belonging to a given speaker. The learned representations are then used in a downstream deep modularization task to cluster frames by speaker identity. Evaluation of the developed technique on WSJ0-2mix and WSJ0-3mix shows that it attains SI-SNRi and SDRi of 20.8 and 21.0, respectively, on WSJ0-2mix, and 20.7 and 20.7, respectively, on WSJ0-3mix. Its greatest strength is that its performance does not degrade significantly as the number of speakers increases. 
653 |a Linguistics 
653 |a Speech 
653 |a Modularization 
653 |a Self-supervised learning 
653 |a Performance degradation 
653 |a Permutations 
653 |a Frames 
653 |a Separation 
653 |a Representations 
773 0 |t arXiv.org  |g (Oct 9, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2815836564/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2305.10652