BLAZE: Cross-Language and Cross-Project Bug Localization via Dynamic Chunking and Hard Example Learning
| Published in: | arXiv.org (Aug 19, 2024), p. n/a |
|---|---|
| Main Author: | Chakraborty, Partha |
| Other Authors: | Alfadel, Mahmoud; Nagappan, Meiyappan |
| Published: | Cornell University Library, arXiv.org |
| Subjects: | Accuracy; Datasets; Python; Source code; Large language models; Deep learning; Localization; Programming languages; Windows (computer programs); Ablation |
| Online Access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| Field | Ind1 | Ind2 | Content |
|---|---|---|---|
| LEADER | | | 00000nab a2200000uu 4500 |
| 001 | | | 3084969526 |
| 003 | | | UK-CbPIL |
| 022 | | | |a 2331-8422 |
| 035 | | | |a 3084969526 |
| 045 | 0 | | |b d20240819 |
| 100 | 1 | | |a Chakraborty, Partha |
| 245 | 1 | | |a BLAZE: Cross-Language and Cross-Project Bug Localization via Dynamic Chunking and Hard Example Learning |
| 260 | | | |b Cornell University Library, arXiv.org |c Aug 19, 2024 |
| 513 | | | |a Working Paper |
| 520 | 3 | | |a Software bugs require developers to exert significant effort to identify and resolve them, often consuming about one-third of their time. Bug localization, the process of pinpointing the exact source code files that need modification, is crucial in reducing this effort. Existing bug localization tools, typically reliant on deep learning techniques, face limitations in cross-project applicability and effectiveness in multi-language environments. Recent advancements with Large Language Models (LLMs) offer detailed representations for bug localization. However, they encounter challenges with limited context windows and mapping accuracy. To address these issues, we propose BLAZE, an approach that employs dynamic chunking and hard example learning. First, BLAZE dynamically segments source code to minimize continuity loss. Then, BLAZE fine-tunes a GPT-based model on challenging bug cases to enhance cross-project and cross-language bug localization. To support the capability of BLAZE, we create the BEETLEBOX dataset, which comprises 26,321 bugs from 29 large and thriving open-source projects across five different programming languages (Java, C++, Python, Go, and JavaScript). Our evaluations of BLAZE on three benchmark datasets (BEETLEBOX, SWE-Bench, and Ye et al.) demonstrate substantial improvements compared to six state-of-the-art baselines. Specifically, BLAZE achieves up to a 120% increase in Top-1 accuracy, 144% in Mean Average Precision (MAP), and 100% in Mean Reciprocal Rank (MRR). An extensive ablation study confirms the contributions of our pipeline components to the overall performance enhancement. |
| 653 | | | |a Accuracy |
| 653 | | | |a Datasets |
| 653 | | | |a Python |
| 653 | | | |a Source code |
| 653 | | | |a Large language models |
| 653 | | | |a Deep learning |
| 653 | | | |a Localization |
| 653 | | | |a Programming languages |
| 653 | | | |a Windows (computer programs) |
| 653 | | | |a Ablation |
| 700 | 1 | | |a Alfadel, Mahmoud |
| 700 | 1 | | |a Nagappan, Meiyappan |
| 773 | 0 | | |t arXiv.org |g (Aug 19, 2024), p. n/a |
| 786 | 0 | | |d ProQuest |t Engineering Database |
| 856 | 4 | 1 | |3 Citation/Abstract |u https://www.proquest.com/docview/3084969526/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch |
| 856 | 4 | 0 | |3 Full text outside of ProQuest |u http://arxiv.org/abs/2407.17631 |
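
The abstract in field 520 above describes two core steps of BLAZE: dynamic chunking of source code so that segments fit within an LLM's limited context window while minimizing continuity loss, and fine-tuning a GPT-based model on hard (challenging) bug cases. As a rough illustration of the chunking idea only, the sketch below splits a source file at blank-line boundaries under a fixed token budget and carries a small overlap between consecutive chunks. It is not the paper's implementation; all names and parameter values (`MAX_TOKENS`, `OVERLAP_LINES`, `chunk_source`) are hypothetical.

```python
# Illustrative sketch only, NOT the BLAZE implementation: split a source file
# into chunks that fit a fixed token budget, cutting at blank-line boundaries
# and repeating a few lines across chunk borders to reduce continuity loss.

MAX_TOKENS = 512      # assumed per-chunk budget for the model's context window
OVERLAP_LINES = 5     # lines repeated at chunk boundaries to preserve context


def rough_token_count(text: str) -> int:
    """Crude whitespace-based token estimate; a real system would use the model tokenizer."""
    return len(text.split())


def chunk_source(source: str) -> list[str]:
    """Split source code into chunks that respect MAX_TOKENS, breaking at blank lines."""
    lines = source.splitlines()
    chunks: list[str] = []
    current: list[str] = []

    for line in lines:
        current.append(line)
        at_boundary = line.strip() == ""  # prefer to cut between blocks, not inside them
        if at_boundary and rough_token_count("\n".join(current)) >= MAX_TOKENS:
            chunks.append("\n".join(current))
            current = current[-OVERLAP_LINES:]  # overlap keeps surrounding context

    # Emit the tail, unless it holds nothing beyond the overlap already emitted.
    if len(current) > (OVERLAP_LINES if chunks else 0):
        chunks.append("\n".join(current))
    return chunks


if __name__ == "__main__":
    with open("example.py", encoding="utf-8") as fh:  # any source file on disk
        for i, chunk in enumerate(chunk_source(fh.read())):
            print(f"chunk {i}: {rough_token_count(chunk)} tokens")
```

Cutting at blank lines and carrying an overlap are generic ways to limit the continuity loss the abstract mentions; BLAZE's actual segmentation strategy and the hard-example fine-tuning step are described in the paper itself (see the arXiv link in field 856).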