SubData: A Python Library to Collect and Combine Datasets for Evaluating LLM Alignment on Downstream Tasks
| Published in: | arXiv.org (Dec 21, 2024), p. n/a |
|---|---|
| Main author: | |
| Other authors: | |
| Published by: | Cornell University Library, arXiv.org |
| Subjects: | |
| Online access: | Citation/Abstract; Full text outside of ProQuest |
| Abstract: | With the release of ever more capable large language models (LLMs), researchers in NLP and related disciplines have started to explore the usability of LLMs for a wide variety of different annotation tasks. Very recently, much of this attention has shifted to tasks that are subjective in nature. Given that the latest generations of LLMs have digested and encoded extensive knowledge about different human subpopulations and individuals, the hope is that these models can be trained, tuned, or prompted to align with a wide range of different human perspectives. While researchers already evaluate the success of this alignment via surveys and tests, there is a lack of resources to evaluate the alignment on what oftentimes matters most in NLP: the actual downstream tasks. To fill this gap, we present SubData, a Python library that offers researchers working on topics related to subjectivity in annotation tasks a convenient way of collecting, combining, and using a range of suitable datasets. |
| ISSN: | 2331-8422 |
| Source: | Engineering Database |
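
The record above does not specify SubData's actual interface. As an illustration only, and not SubData's API, the sketch below shows the generic collect-harmonize-combine workflow the abstract alludes to, using the Hugging Face `datasets` library with two toy in-memory datasets; all names here (`source_a`, `harmonize_a`, the `target` column) are assumptions made up for the example.

```python
# Illustrative sketch only: NOT SubData's API. It demonstrates the generic
# "collect, harmonize, combine" workflow described in the abstract, using the
# Hugging Face `datasets` library and two small toy datasets with different
# label schemes (stand-ins for real subjective-annotation corpora).
from datasets import Dataset, concatenate_datasets

# Toy "source" datasets with incompatible label schemes.
source_a = Dataset.from_dict({
    "text": ["example post 1", "example post 2"],
    "label": ["hateful", "neutral"],   # scheme A: string labels
})
source_b = Dataset.from_dict({
    "text": ["example post 3", "example post 4"],
    "hate": [1, 0],                    # scheme B: binary integer labels
})

# Map both schemes onto a shared binary `target` column before combining.
def harmonize_a(row):
    return {"text": row["text"], "target": int(row["label"] == "hateful")}

def harmonize_b(row):
    return {"text": row["text"], "target": int(row["hate"])}

combined = concatenate_datasets([
    source_a.map(harmonize_a, remove_columns=source_a.column_names),
    source_b.map(harmonize_b, remove_columns=source_b.column_names),
])

print(combined.to_pandas())  # unified `text` / `target` columns, ready for downstream evaluation
```

Harmonizing each source's label scheme onto a shared target before concatenation is the step a library like SubData would presumably standardize across many datasets; the mapping functions above are only placeholders for that logic.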