LLM-Assisted Static Analysis for Detecting Security Vulnerabilities

Saved in:
Bibliographic Details
Published in: arXiv.org (Nov 11, 2024), p. n/a
Author: Li, Ziyang
Other Authors: Dutta, Saikat; Naik, Mayur
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest
Description
Summary: Software is prone to security vulnerabilities. Program analysis tools for detecting them have limited effectiveness in practice due to their reliance on human-labeled specifications. Large language models (LLMs) have shown impressive code generation capabilities, but they cannot perform the complex reasoning over code needed to detect such vulnerabilities, especially since this task requires whole-repository analysis. We propose IRIS, a neuro-symbolic approach that systematically combines LLMs with static analysis to perform whole-repository reasoning for security vulnerability detection. Specifically, IRIS leverages LLMs to infer taint specifications and perform contextual analysis, alleviating the need for human specifications and inspection. For evaluation, we curate a new dataset, CWE-Bench-Java, comprising 120 manually validated security vulnerabilities in real-world Java projects. A state-of-the-art static analysis tool, CodeQL, detects only 27 of these vulnerabilities, whereas IRIS with GPT-4 detects 55 (+28) and improves upon CodeQL's average false discovery rate by 5 percentage points. Furthermore, IRIS identifies 6 previously unknown vulnerabilities that cannot be found by existing tools.
ISSN: 2331-8422
Source: Engineering Database