Design choices made by LLM-based test generators prevent them from finding bugs

Bibliographic Details
Published in: arXiv.org (Dec 18, 2024)
Main Author: Noble Saji Mathews
Other Authors: Nagappan, Meiyappan
Published: Cornell University Library, arXiv.org
Description
Summary: There is a growing body of research and commercial tools for automated test case generation using Large Language Models (LLMs). This paper critically examines whether recent LLM-based test generation tools, such as Codium CoverAgent and CoverUp, can effectively find bugs or unintentionally validate faulty code. Since bugs are exposed only by failing test cases, we explore the question: can these tools truly achieve the intended objectives of software testing when their test oracles are designed to pass? Using real human-written buggy code as input, we evaluate these tools, showing how LLM-generated tests can fail to detect bugs and, more alarmingly, how their design can worsen the situation by validating bugs in the generated test suite and rejecting bug-revealing tests. These findings raise important questions about the validity of the design behind LLM-based test generation tools and their impact on software quality and test suite reliability.
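
To illustrate the failure mode the abstract describes, here is a minimal hypothetical Python sketch (not taken from the paper; the median function and both tests are invented for illustration). A generator whose oracle is derived from the code's current output emits a test that asserts the buggy behavior, while a test of the intended behavior would fail and be discarded by a tool that keeps only passing tests.

# Hypothetical illustration, not from the paper: a pass-oriented oracle
# derived from current behavior validates a bug instead of exposing it.

def median(values):
    # Buggy: for even-length input this returns the upper middle element
    # instead of averaging the two middle values.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

# A generator that records the observed output emits a passing test,
# baking the bug into the suite:
def test_median_observed_behavior():
    assert median([1, 2, 3, 4]) == 3  # passes, validating the bug

# A bug-revealing test asserts the intended behavior and fails; a tool
# designed to produce passing tests would reject it:
def test_median_intended_behavior():
    assert median([1, 2, 3, 4]) == 2.5  # fails on the buggy implementation
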
ISSN: 2331-8422
Source: Engineering Database