Optimizing Large Language Models for Low-Resource Languages: A Case Study on Saudi Dialects

Published in: International Journal of Advanced Computer Science and Applications, vol. 16, no. 3 (2025)
Main author:
Published: Science and Information (SAI) Organization Limited
Online access: Citation/Abstract; Full Text - PDF
Description
Abstract: Large Language Models (LLMs) have revolutionized natural language processing (NLP); however, their effectiveness remains limited for low-resource languages and dialects due to data scarcity. One such underrepresented variety is the Saudi dialect, a widely spoken yet linguistically distinct variant of Arabic. NLP models trained on Modern Standard Arabic (MSA) often struggle with dialectal variation, leading to suboptimal performance in real-world applications. This study aims to enhance LLM performance for the Saudi dialect by leveraging the MADAR dataset, applying data augmentation techniques, and fine-tuning a state-of-the-art LLM. Experimental results demonstrate the model’s effectiveness in Saudi dialect classification, achieving 91% accuracy, with precision, recall, and F1-scores all exceeding 0.90 across different dialectal variations. These findings underscore the potential of LLMs in handling dialectal Arabic and their applicability in tasks such as social media monitoring and automatic translation. Future research can further improve performance by refining fine-tuning strategies, integrating additional linguistic features, and expanding training datasets. Ultimately, this work contributes to democratizing NLP technologies for low-resource languages and dialects, bridging the gap in linguistic inclusivity within AI applications.
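The abstract outlines a pipeline (MADAR data, augmentation, fine-tuning, dialect classification) without naming the model, toolkit, or data layout. The following minimal sketch shows what such a fine-tuning run could look like using the Hugging Face `transformers` library; the base checkpoint `CAMeL-Lab/bert-base-arabic-camelbert-mix`, the file `madar_saudi.csv`, and its `text`/`dialect` columns are illustrative assumptions, not details reported in the paper.

```python
# Hedged sketch: fine-tuning a pretrained Arabic encoder for Saudi-dialect
# classification. Checkpoint, file name, and column names are assumptions;
# the paper does not specify which model or data format it used.
import pandas as pd
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

MODEL = "CAMeL-Lab/bert-base-arabic-camelbert-mix"  # assumed base model

# Hypothetical CSV export of MADAR rows: one utterance per line with a
# dialect label (e.g., Riyadh, Jeddah, MSA).
df = pd.read_csv("madar_saudi.csv")
labels = sorted(df["dialect"].unique())
label2id = {lab: i for i, lab in enumerate(labels)}
df["label"] = df["dialect"].map(label2id)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
ds = Dataset.from_pandas(df[["text", "label"]]).train_test_split(
    test_size=0.1, seed=0)
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
            batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=len(labels),
    id2label={i: lab for lab, i in label2id.items()}, label2id=label2id)

def compute_metrics(p):
    # Report the same metrics the abstract cites: accuracy plus
    # macro-averaged precision, recall, and F1.
    preds = p.predictions.argmax(-1)
    prec, rec, f1, _ = precision_recall_fscore_support(
        p.label_ids, preds, average="macro")
    return {"accuracy": accuracy_score(p.label_ids, preds),
            "precision": prec, "recall": rec, "f1": f1}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="saudi-dialect-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds["train"], eval_dataset=ds["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())
```

The abstract's augmentation step is not shown; in practice it would expand the training split (for example, with back-translation or synonym substitution) before `Dataset.from_pandas` is called.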
ISSN: 2158-107X (print); 2156-5570 (online)
DOI: 10.14569/IJACSA.2025.0160384
Source: Advanced Technologies & Aerospace Database