Resource-Efficient Fine-Tuning Strategies for Automatic MOS Prediction in Text-to-Speech for Low-Resource Languages
| Published: | arXiv.org (May 30, 2023) |
|---|---|
| Publication Info: | Cornell University Library, arXiv.org |
| Abstract: | We train a MOS prediction model based on wav2vec 2.0 using the open-access datasets BVCC and SOMOS. Our test with neural TTS data in the low-resource language (LRL) West Frisian shows that pre-training on BVCC before fine-tuning on SOMOS leads to the best accuracy for both fine-tuned and zero-shot prediction. Further fine-tuning experiments show that using more than 30 percent of the total data does not lead to significant improvements. In addition, fine-tuning with data from a single listener shows promising system-level accuracy, supporting the viability of one-participant pilot tests. Together, these findings can assist the resource-conscious development of TTS for LRLs by progressing towards better zero-shot MOS prediction and informing the design of listening tests, especially in early-stage evaluation. |
| ISSN: | 2331-8422 |
| Source: | Engineering Database |
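
The abstract describes the method only as "a MOS prediction model based on wav2vec 2.0", fine-tuned first on BVCC and then on SOMOS, so the sketch below is merely an illustration of a common SSL-MOS-style setup: a pretrained wav2vec 2.0 encoder, mean pooling over frame representations, and a linear regression head trained against listener ratings. The checkpoint name `facebook/wav2vec2-base`, the pooling strategy, the L1 loss, and the learning rate are all assumptions for illustration, not the paper's reported configuration.

```python
# Minimal sketch of a wav2vec 2.0-based MOS predictor, assuming the
# common SSL-MOS recipe (encoder + mean pooling + linear head).
# The paper's exact head, pooling, and training details are not given
# in this record; everything below is an illustrative assumption.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class MOSPredictor(nn.Module):
    def __init__(self, ssl_name: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden, 1)  # frame-wise MOS regression

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) at 16 kHz, as wav2vec 2.0 expects
        frames = self.encoder(waveform).last_hidden_state  # (B, T, H)
        # Mean-pool frame scores into one utterance-level MOS estimate.
        return self.head(frames).mean(dim=1).squeeze(-1)


model = MOSPredictor()
loss_fn = nn.L1Loss()  # L1 is a common choice here; the record doesn't say
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# One hypothetical training step on a (waveform, rating) batch. In the
# two-stage recipe the abstract favors, the same loop would first run
# over BVCC batches and then be repeated on SOMOS for fine-tuning.
wav = torch.randn(2, 16000)      # two dummy one-second utterances
mos = torch.tensor([3.5, 4.0])   # dummy listener ratings
optimizer.zero_grad()
loss = loss_fn(model(wav), mos)
loss.backward()
optimizer.step()
```

The same loop also covers the abstract's data-efficiency experiments: fine-tuning on a 30-percent subset, or on ratings from a single listener, only changes which (waveform, rating) pairs are fed in, not the model or the training step.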