Optimizing Large Language Models for Low-Resource Languages: A Case Study on Saudi Dialects

Saved in:
Detailed bibliography
Published in: International Journal of Advanced Computer Science and Applications, vol. 16, no. 3 (2025)
Main author: PDF
Published: Science and Information (SAI) Organization Limited
Topics:
Online access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3192357832
003 UK-CbPIL
022 |a 2158-107X 
022 |a 2156-5570 
024 7 |a 10.14569/IJACSA.2025.0160384  |2 doi 
035 |a 3192357832 
045 2 |b d20250101  |b d20251231 
100 1 |a PDF 
245 1 |a Optimizing Large Language Models for Low-Resource Languages: A Case Study on Saudi Dialects 
260 |b Science and Information (SAI) Organization Limited  |c 2025 
513 |a Journal Article; Case Study 
520 3 |a Large Language Models (LLMs) have revolutionized natural language processing (NLP); however, their effectiveness remains limited for low-resource languages and dialects due to data scarcity. One such underrepresented variety is the Saudi dialect, a widely spoken yet linguistically distinct variant of Arabic. NLP models trained on Modern Standard Arabic (MSA) often struggle with dialectal variations, leading to suboptimal performance in real-world applications. This study aims to enhance LLM performance for the Saudi dialect by leveraging the MADAR dataset, applying data augmentation techniques, and fine-tuning a state-of-the-art LLM. Experimental results demonstrate the model’s effectiveness in Saudi dialect classification, achieving 91% accuracy, with precision, recall, and F1-scores all exceeding 0.90 across different dialectal variations. These findings underscore the potential of LLMs in handling dialectal Arabic and their applicability in tasks such as social media monitoring and automatic translation. Future research can further improve performance by refining fine-tuning strategies, integrating additional linguistic features, and expanding training datasets. Ultimately, this work contributes to democratizing NLP technologies for low-resource languages and dialects, bridging the gap in linguistic inclusivity within AI applications. 
651 4 |a Saudi Arabia 
653 |a Linguistics 
653 |a Datasets 
653 |a Data augmentation 
653 |a Performance enhancement 
653 |a Large language models 
653 |a Natural language processing 
653 |a Effectiveness 
653 |a Language 
653 |a Text categorization 
653 |a Accuracy 
653 |a Computer science 
653 |a Sentiment analysis 
653 |a Social networks 
653 |a Classification 
653 |a Machine translation 
653 |a Phonology 
653 |a Information technology 
653 |a Dialects 
653 |a Social media 
653 |a Case studies 
653 |a Arabic language 
653 |a Language modeling 
653 |a Democratization 
653 |a Scarcity 
653 |a Languages 
773 0 |t International Journal of Advanced Computer Science and Applications  |g vol. 16, no. 3 (2025) 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3192357832/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3192357832/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
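
The abstract (MARC field 520) summarizes the pipeline only at a high level: fine-tune a pretrained model on MADAR-derived Saudi dialect data and evaluate with accuracy, precision, recall, and F1. The sketch below illustrates what such a pipeline can look like, assuming a Hugging Face Transformers sequence-classification setup; the checkpoint name, label count, file names, and hyperparameters are illustrative assumptions and do not reflect the paper's actual configuration.

# Illustrative sketch only: fine-tune a pretrained Arabic encoder for
# dialect classification, assuming Hugging Face Transformers. The checkpoint,
# label count, CSV file names, and hyperparameters are assumptions, not the
# paper's reported setup.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv02"   # assumed Arabic checkpoint
NUM_LABELS = 5                                   # hypothetical dialect classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS)

# Assumes local CSV files with "text" and "label" columns derived from MADAR.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                batched=True)

def compute_metrics(eval_pred):
    # Macro-averaged precision/recall/F1 alongside accuracy, mirroring the
    # metrics named in the abstract.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    p, r, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")
    return {"accuracy": accuracy_score(labels, preds),
            "precision": p, "recall": r, "f1": f1}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="saudi-dialect-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())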