Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software Verification and Falsification Approaches

Bibliographic Details
Published in: arXiv.org (Sep 8, 2024), p. n/a
Main Author: Braberman, Víctor A
Other Authors: Bonomo-Braberman, Flavia; Charalambous, Yiannis; Colonna, Juan G; Cordeiro, Lucas C; Rosiane de Freitas
Published:
Cornell University Library, arXiv.org
Subjects: Taxonomy; Program verification (computers); Software engineering; Large language models; Software testing
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3039626598
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3039626598 
045 0 |b d20240908 
100 1 |a Braberman, Víctor A 
245 1 |a Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software Verification and Falsification Approaches 
260 |b Cornell University Library, arXiv.org  |c Sep 8, 2024 
513 |a Working Paper 
520 3 |a Prompting has become one of the main approaches to leverage emergent capabilities of Large Language Models [Brown et al. NeurIPS 2020, Wei et al. TMLR 2022, Wei et al. NeurIPS 2022]. Recently, researchers and practitioners have been "playing" with prompts (e.g., In-Context Learning) to see how to make the most of pre-trained Language Models. By homogeneously dissecting more than a hundred articles, we investigate how the software testing and verification research communities have leveraged LLM capabilities. First, we validate that downstream tasks are adequate to convey a nontrivial modular blueprint of the prompt-based proposals in scope. Moreover, we name and classify the concrete downstream tasks we recover in both validation research papers and solution proposals. To perform classification, mapping, and analysis, we also develop a novel downstream-task taxonomy. The main taxonomy requirement is to highlight commonalities while exhibiting variation points of task types, enabling the pinpointing of emerging patterns across a varied spectrum of Software Engineering problems that encompasses testing, fuzzing, fault localization, vulnerability detection, static analysis, and program verification approaches. Avenues for future research are also discussed, based on conceptual clusters induced by the taxonomy. 
653 |a Taxonomy 
653 |a Program verification (computers) 
653 |a Software engineering 
653 |a Large language models 
653 |a Software testing 
700 1 |a Bonomo-Braberman, Flavia 
700 1 |a Charalambous, Yiannis 
700 1 |a Colonna, Juan G 
700 1 |a Cordeiro, Lucas C 
700 1 |a Rosiane de Freitas 
773 0 |t arXiv.org  |g (Sep 8, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3039626598/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2404.09384