Do Language Models Understand the Cognitive Tasks Given to Them? Investigations with the N-Back Paradigm

Bibliographic information
Published in: arXiv.org (Dec 24, 2024)
Main author: Hu, Xiaoyang
Other authors: Lewis, Richard L
Publisher: Cornell University Library, arXiv.org
Online access: Citation/Abstract; full text outside of ProQuest
Description
Abstract: Cognitive tasks originally developed for humans are now increasingly used to study language models. While applying these tasks is often straightforward, interpreting their results can be challenging. In particular, when a model underperforms, it is often unclear whether this results from a limitation in the cognitive ability being tested or a failure to understand the task itself. A recent study argued that GPT-3.5's declining performance on 2-back and 3-back tasks reflects a working-memory capacity limit similar to that of humans. By analyzing a range of open-source language models of varying performance levels on these tasks, we show that the poor performance instead reflects a limitation in task comprehension and task-set maintenance. In addition, we push the best-performing model to higher n values and experiment with alternative prompting strategies, before analyzing model attentions. Our larger aim is to contribute to the ongoing conversation around refining methodologies for the cognitive evaluation of language models.
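For readers unfamiliar with the paradigm the abstract refers to: in an n-back task, stimuli are presented one at a time, and at each step the subject must report whether the current item matches the item n positions earlier. The sketch below is a minimal illustration of how such a trial can be generated and scored for a language model; the function names (make_nback_sequence, score), the letter alphabet, the target rate, and the prompt wording are illustrative assumptions, not the setup used in the paper.

```python
import random

def make_nback_sequence(length, n, target_rate=0.33,
                        alphabet="bcdfghjklm", seed=None):
    """Generate an n-back letter stream plus gold match/non-match labels.

    Position i (for i >= n) is a "match" iff seq[i] == seq[i - n];
    target_rate controls roughly how often matches are planted.
    """
    rng = random.Random(seed)
    seq = [rng.choice(alphabet) for _ in range(n)]  # first n items are unlabeled
    labels = []
    for i in range(n, length):
        if rng.random() < target_rate:
            seq.append(seq[i - n])  # plant a deliberate match
        else:
            # pick any letter other than the one n steps back
            seq.append(rng.choice([c for c in alphabet if c != seq[i - n]]))
        labels.append(seq[i] == seq[i - n])
    return seq, labels

def score(responses, labels):
    """Accuracy of 'm' (match) / '-' (non-match) responses against gold labels."""
    gold = ["m" if match else "-" for match in labels]
    return sum(r == g for r, g in zip(responses, gold)) / len(gold)

# Example: build a 2-back trial and a prompt for a language model.
seq, labels = make_nback_sequence(length=24, n=2, seed=0)
prompt = ("You will see a stream of letters. After each letter, respond 'm' if it "
          "matches the letter 2 positions back, and '-' otherwise.\n" + " ".join(seq))
```

The paper's central methodological point maps onto this setup directly: a model can fail either at the comparison itself (the memory demand) or at consistently following the instruction format over the stream (task comprehension and task-set maintenance), and raw accuracy alone does not distinguish the two.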
ISSN: 2331-8422
Source: Engineering Database