CodeComplex: Dataset for Worst-Case Time Complexity Prediction

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 24, 2024), p. n/a
Main Author: Baik, Seung-Yeop
Other Authors: Hahn, Joonghyuk, Kim, Jungin, Jeon, Mingi, Aditi, Han, Yo-Sub, Ko, Sang-Ki
Published:
Cornell University Library, arXiv.org
Subjects: Labeling, Datasets, Python, Labels, Large language models, Constraints, Predictions, Evaluation, Task complexity, Time measurement, Reasoning
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3149111419
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3149111419 
045 0 |b d20241224 
100 1 |a Baik, Seung-Yeop 
245 1 |a CodeComplex: Dataset for Worst-Case Time Complexity Prediction 
260 |b Cornell University Library, arXiv.org  |c Dec 24, 2024 
513 |a Working Paper 
520 3 |a Reasoning is a crucial capability of Large Language Models (LLMs), especially in complex decision-making tasks. One task that demonstrates LLMs' reasoning capability is code time complexity prediction, which involves intricate factors such as the input ranges of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. They do not consider time complexity based on input representation, and they merely evaluate whether predictions fall into the same class, without measuring how close incorrect predictions are to the correct ones. To address these limitations, we introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity. CodeComplex comprises 4,900 Java programs and an equivalent number of Python programs, overcoming language and labeling constraints, each carefully annotated with complexity labels based on input characteristics by a panel of algorithmic experts. Additionally, we propose specialized evaluation metrics for the complexity prediction task, offering a more precise and reliable assessment of LLMs' reasoning capabilities. We release our dataset (https://github.com/sybaik1/CodeComplex-Data) and baseline models (https://github.com/sybaik1/CodeComplex-Models) publicly to encourage the relevant (NLP, SE, and PL) communities to utilize and participate in this research. 
653 |a Labeling 
653 |a Datasets 
653 |a Python 
653 |a Labels 
653 |a Large language models 
653 |a Constraints 
653 |a Predictions 
653 |a Evaluation 
653 |a Task complexity 
653 |a Time measurement 
653 |a Reasoning 
700 1 |a Hahn, Joonghyuk 
700 1 |a Kim, Jungin 
700 1 |a Jeon, Mingi 
700 1 |a Aditi 
700 1 |a Han, Yo-Sub 
700 1 |a Ko, Sang-Ki 
773 0 |t arXiv.org  |g (Dec 24, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3149111419/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2401.08719
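
Note: the abstract above describes two things a code example can make concrete: the prediction task (worst-case complexity shaped by conditional loops) and evaluation metrics that credit predictions by how close they land to the true complexity class. The Python sketch below illustrates both ideas; the toy_target function, the seven-class hierarchy, and the distance-based scoring scheme are illustrative assumptions made here, not necessarily the classes or the metric defined in the CodeComplex paper.

# Illustrative sketch only. The class list and distance-based score are
# assumptions for demonstration, not the paper's actual metric.

# A toy instance of the prediction task: the worst case of this function is
# O(n^2), reached only when the conditional inner loop fires on every iteration.
def toy_target(xs):
    total = 0
    for i, x in enumerate(xs):
        if x % 2 == 0:              # conditional loop: worst case when all x are even
            for y in xs[: i + 1]:
                total += y
    return total

# Complexity classes ordered from slowest-growing to fastest-growing.
COMPLEXITY_CLASSES = [
    "O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n^2)", "O(n^3)", "exponential",
]
RANK = {c: i for i, c in enumerate(COMPLEXITY_CLASSES)}

def hierarchy_score(predicted: str, actual: str) -> float:
    """Score a prediction by its distance from the true class.

    An exact match scores 1.0; each step away in the ordered hierarchy costs a
    fixed penalty, so predicting O(n log n) for an O(n^2) program scores higher
    than predicting O(1). (Hypothetical scheme, shown only to make the idea of
    a closeness-aware metric concrete.)
    """
    distance = abs(RANK[predicted] - RANK[actual])
    return max(0.0, 1.0 - distance / (len(COMPLEXITY_CLASSES) - 1))

if __name__ == "__main__":
    actual = "O(n^2)"                             # true label for toy_target
    print(hierarchy_score("O(n^2)", actual))      # exact match -> 1.0
    print(hierarchy_score("O(n log n)", actual))  # 1 step off  -> ~0.83
    print(hierarchy_score("O(1)", actual))        # 4 steps off -> ~0.33

Plain classification accuracy would score both wrong guesses above identically; a distance-based score reflects that the first is far closer to the truth, which is the gap the abstract says its proposed metrics address.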