SeaLLMs -- Large Language Models for Southeast Asia

Bibliographic Details
Published in: arXiv.org (Jul 1, 2024), p. n/a
Main author: Nguyen, Xuan-Phi
Other authors: Zhang, Wenxuan, Li, Xin, Aljunied, Mahani, Hu, Zhiqiang, Shen, Chenhui, Yew, Ken Chia, Li, Xingxuan, Wang, Jianyu, Tan, Qingyu, Cheng, Liying, Chen, Guanzheng, Deng, Yue, Sen, Yang, Liu, Chaoqun, Zhang, Hang, Bing, Lidong
Published by: Cornell University Library, arXiv.org
Online access: Citation/Abstract; full text available outside of ProQuest
Description
Abstract: Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance, we introduce SeaLLMs, an innovative series of language models that specifically focuses on Southeast Asian (SEA) languages. SeaLLMs are built upon the Llama-2 model and further advanced through continued pre-training with an extended vocabulary, specialized instruction and alignment tuning to better capture the intricacies of regional languages. This allows them to respect and reflect local cultural norms, customs, stylistic preferences, and legal considerations. Our comprehensive evaluation demonstrates that SeaLLM-13b models exhibit superior performance across a wide spectrum of linguistic tasks and assistant-style instruction-following capabilities relative to comparable open-source models. Moreover, they outperform ChatGPT-3.5 in non-Latin languages, such as Thai, Khmer, Lao, and Burmese, by large margins while remaining lightweight and cost-effective to operate.
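The abstract describes continued pre-training of Llama-2 with an extended vocabulary for SEA scripts. The sketch below is a minimal, hedged illustration of what a vocabulary-extension step could look like with the Hugging Face transformers API; the base-model ID and the added tokens are illustrative placeholders, not the authors' actual recipe or token set.

```python
# Illustrative sketch: extend a Llama-2 tokenizer with SEA-script tokens and
# resize the embedding matrix before continued pre-training. All identifiers
# and token choices here are assumptions for demonstration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "meta-llama/Llama-2-13b-hf"  # base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Hypothetical new tokens covering non-Latin SEA scripts (Thai, Khmer, Lao, Burmese).
new_tokens = ["ประเทศไทย", "ភាសាខ្មែរ", "ປະເທດລາວ", "မြန်မာ"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding table so the new token IDs have trainable rows during
# continued pre-training on SEA-language text.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```

In practice, such added embeddings would then be trained on regional-language corpora, followed by the instruction and alignment tuning the abstract mentions.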
ISSN: 2331-8422
Source: Engineering Database