Rango: Adaptive Retrieval-Augmented Proving for Automated Software Verification

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 18, 2024), p. n/a
Main Author: Thompson, Kyle
Other Authors: Saavedra, Nuno; Carrott, Pedro; Fisher, Kevin; Sanchez-Stern, Alex; Brun, Yuriy; Ferreira, João F; Lerner, Sorin; First, Emily
Published: Cornell University Library, arXiv.org
Subjects: Program verification (computers); Theorems; Large language models; Automation; Machine learning; Synthesis; Context; Benchmarks; Retrieval
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3147265850
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3147265850 
045 0 |b d20241218 
100 1 |a Thompson, Kyle 
245 1 |a Rango: Adaptive Retrieval-Augmented Proving for Automated Software Verification 
260 |b Cornell University Library, arXiv.org  |c Dec 18, 2024 
513 |a Working Paper 
520 3 |a Formal verification using proof assistants, such as Coq, enables the creation of high-quality software. However, the verification process requires significant expertise and manual effort to write proofs. Recent work has explored automating proof synthesis using machine learning and large language models (LLMs). This work has shown that identifying relevant premises, such as lemmas and definitions, can aid synthesis. We present Rango, a fully automated proof synthesis tool for Coq that automatically identifies relevant premises and also similar proofs from the current project and uses them during synthesis. Rango uses retrieval augmentation at every step of the proof to automatically determine which proofs and premises to include in the context of its fine-tuned LLM. In this way, Rango adapts to the project and to the evolving state of the proof. We create a new dataset, CoqStoq, of 2,226 open-source Coq projects and 196,929 theorems from GitHub, which includes both training data and a curated evaluation benchmark of well-maintained projects. On this benchmark, Rango synthesizes proofs for 32.0% of the theorems, which is 29% more theorems than the prior state-of-the-art tool Tactician. Our evaluation also shows that Rango adding relevant proofs to its context leads to a 47% increase in the number of theorems proven. 
653 |a Program verification (computers) 
653 |a Theorems 
653 |a Large language models 
653 |a Automation 
653 |a Machine learning 
653 |a Synthesis 
653 |a Context 
653 |a Benchmarks 
653 |a Retrieval 
700 1 |a Saavedra, Nuno 
700 1 |a Carrott, Pedro 
700 1 |a Fisher, Kevin 
700 1 |a Sanchez-Stern, Alex 
700 1 |a Brun, Yuriy 
700 1 |a Ferreira, João F 
700 1 |a Lerner, Sorin 
700 1 |a First, Emily 
773 0 |t arXiv.org  |g (Dec 18, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3147265850/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.14063