Do Language Models Understand the Cognitive Tasks Given to Them? Investigations with the N-Back Paradigm
| Published in: | arXiv.org (Dec 24, 2024), p. n/a |
|---|---|
| Main Author: | Hu, Xiaoyang |
| Other Authors: | Lewis, Richard L |
| Published: | Cornell University Library, arXiv.org (Dec 24, 2024) |
| Subjects: | Memory tasks; Cognitive tasks |
| Online Access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| Field | Indicators | Subfields |
|---|---|---|
| LEADER | | 00000nab a2200000uu 4500 |
| 001 | | 3149106909 |
| 003 | | UK-CbPIL |
| 022 | | \|a 2331-8422 |
| 035 | | \|a 3149106909 |
| 045 | 0 | \|b d20241224 |
| 100 | 1 | \|a Hu, Xiaoyang |
| 245 | 1 | \|a Do Language Models Understand the Cognitive Tasks Given to Them? Investigations with the N-Back Paradigm |
| 260 | | \|b Cornell University Library, arXiv.org \|c Dec 24, 2024 |
| 513 | | \|a Working Paper |
| 520 | 3 | \|a Cognitive tasks originally developed for humans are now increasingly used to study language models. While applying these tasks is often straightforward, interpreting the results can be challenging. In particular, when a model underperforms, it is often unclear whether this stems from a limitation in the cognitive ability being tested or from a failure to understand the task itself. A recent study argued that GPT-3.5's declining performance on 2-back and 3-back tasks reflects a working-memory capacity limit similar to that of humans. By analyzing a range of open-source language models with varying performance on these tasks, we show that the poor performance instead reflects a limitation in task comprehension and task-set maintenance. In addition, we push the best-performing model to higher n values and experiment with alternative prompting strategies before analyzing model attention patterns. Our larger aim is to contribute to the ongoing conversation around refining methodologies for the cognitive evaluation of language models. (An illustrative sketch of the n-back setup follows this record.) |
| 653 | | \|a Memory tasks |
| 653 | | \|a Cognitive tasks |
| 700 | 1 | \|a Lewis, Richard L |
| 773 | 0 | \|t arXiv.org \|g (Dec 24, 2024), p. n/a |
| 786 | 0 | \|d ProQuest \|t Engineering Database |
| 856 | 4 1 | \|3 Citation/Abstract \|u https://www.proquest.com/docview/3149106909/abstract/embedded/ITVB7CEANHELVZIZ?source=fedsrch |
| 856 | 4 0 | \|3 Full text outside of ProQuest \|u http://arxiv.org/abs/2412.18120 |
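The abstract above (field 520) evaluates language models with the n-back working-memory paradigm, in which the respondent must say whether the current item matches the one presented n positions earlier. As a minimal, hedged sketch only, not code from the paper, the following Python snippet builds a letter n-back sequence and scores match/non-match responses; the function names, prompt wording, and match rate are illustrative assumptions.

```python
import random
import string

def make_nback_sequence(n, length=20, match_rate=0.3, seed=0):
    """Generate a letter sequence for an n-back task (illustrative only).

    After the first n letters, each position is a forced match (repeats the
    letter n steps back) with probability `match_rate`, otherwise a non-match.
    """
    rng = random.Random(seed)
    letters = []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            letters.append(letters[i - n])            # forced match
        else:
            choices = [c for c in string.ascii_uppercase
                       if i < n or c != letters[i - n]]
            letters.append(rng.choice(choices))       # guaranteed non-match
    # Gold labels: 'm' for a match with the letter n steps back, '-' otherwise.
    targets = ["m" if i >= n and letters[i] == letters[i - n] else "-"
               for i in range(length)]
    return letters, targets

def score_responses(targets, responses):
    """Fraction of positions where an 'm'/'-' response matches the gold label."""
    correct = sum(t == r for t, r in zip(targets, responses))
    return correct / len(targets)

if __name__ == "__main__":
    n = 2
    letters, targets = make_nback_sequence(n)
    # A hypothetical prompt; the paper's actual prompts may differ.
    prompt = (
        f"You will see a sequence of letters. For each letter, respond 'm' if it "
        f"matches the letter {n} steps back, otherwise respond '-'.\n"
        + " ".join(letters)
    )
    print(prompt)
    print("gold:", " ".join(targets))
    # Score a made-up response string (here, a perfect responder) for illustration.
    print("accuracy:", score_responses(targets, targets[:]))
```

The forced `match_rate` simply keeps match and non-match trials reasonably balanced so that accuracy is informative; the paper's actual trial construction and scoring may differ.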