Phaedrus: Exploring Dynamic Application Behavior with Lightweight Generative Models and Large-Language Models

Saved in:
Detailed bibliography
Published in: arXiv.org (Dec 9, 2024), p. n/a
Main author: Chatterjee, Bodhisatwa
Other authors: Jadhav, Neeraj, Khan, Sharjeel, Pande, Santosh
Published:
Cornell University Library, arXiv.org
Subjects:
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3143052544
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3143052544 
045 0 |b d20241209 
100 1 |a Chatterjee, Bodhisatwa 
245 1 |a Phaedrus: Exploring Dynamic Application Behavior with Lightweight Generative Models and Large-Language Models 
260 |b Cornell University Library, arXiv.org  |c Dec 9, 2024 
513 |a Working Paper 
520 3 |a Application profiling is an indispensable technique for many software development tasks, such as code optimization and memory management, where optimization decisions are tailored to specific program profiles. Unfortunately, modern application codebases exhibit highly variant behavior across different inputs, creating challenges for conventional profiling approaches that rely on a single execution instance. In this paper, we propose \textbf{Phaedrus}, a new \textit{compiler-assisted deep learning framework} designed to predict dynamic program behaviors across varied execution scenarios, specifically focusing on dynamic function call prediction. Traditional profile-guided optimization methods struggle with the input-dependent variability of modern applications, where profiling on different inputs yields divergent application behaviors. To address this, Phaedrus proposes two new approaches: \textit{Application Profile Generalization}, which uses generative models trained on compressed and augmented \textit{Whole Program Path} (WPP) profiles to predict application behavior under unseen inputs, and \textit{Application Behavior Synthesis}, a profile-less approach where Large Language Models (LLMs) directly infer dynamic functions based on source code \& static compiler analysis, bypassing the need for traditional profiling. Our experiments show that \textit{Phaedrus} can achieve up to a \(10^7\times\) reduction in WPP profile sizes, can predict dynamic hot functions that cover up to 85--99\% of the execution time, and delivers an average \textbf{13.46\%} (up to \textbf{65\%}) reduction in application binary size, without profiles. 
653 |a Behavior 
653 |a Memory tasks 
653 |a Source code 
653 |a Compilers 
653 |a Large language models 
653 |a Software development 
653 |a Optimization 
653 |a Memory management 
653 |a Computer programming 
700 1 |a Jadhav, Neeraj 
700 1 |a Khan, Sharjeel 
700 1 |a Pande, Santosh 
773 0 |t arXiv.org  |g (Dec 9, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3143052544/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.06994