Predicting Software Performance with Divide-and-Learn

Bibliographic Details
Published in: arXiv.org (Feb 4, 2024), p. n/a
Main Author: Gong, Jingzhi
Other Authors: Chen, Tao
Published: Cornell University Library, arXiv.org
Subjects: Sparsity; Accuracy; Software; Machine learning; Performance prediction; Configurable programs; Quality assurance; Artificial neural networks; Configurations; Training
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2825307092
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2825307092 
045 0 |b d20240204 
100 1 |a Gong, Jingzhi 
245 1 |a Predicting Software Performance with Divide-and-Learn 
260 |b Cornell University Library, arXiv.org  |c Feb 4, 2024 
513 |a Working Paper 
520 3 |a Predicting the performance of highly configurable software systems is the foundation for performance testing and quality assurance. To that end, recent work has relied on machine/deep learning to model software performance. However, a crucial yet unaddressed challenge is how to cater for the sparsity inherited from the configuration landscape: the influence of configuration options (features) and the distribution of data samples are highly sparse. In this paper, we propose an approach based on the concept of 'divide-and-learn', dubbed DaL. The basic idea is that, to handle sample sparsity, we divide the samples from the configuration landscape into distant divisions, for each of which we build a regularized Deep Neural Network as the local model to deal with feature sparsity. A newly given configuration is then assigned to the model of the right division for the final prediction. Experimental results from eight real-world systems and five sets of training data reveal that, compared with state-of-the-art approaches, DaL performs no worse than the best counterpart on 33 out of 40 cases (26 of which are significantly better), with up to 1.94x improvement in accuracy; requires fewer samples to reach the same or better accuracy; and produces acceptable training overhead. Practically, DaL also considerably improves different global models when they are used as the underlying local models, which further strengthens its flexibility. To promote open science, all the data, code, and supplementary figures of this work can be accessed at our repository: https://github.com/ideas-labo/DaL. 
653 |a Sparsity 
653 |a Accuracy 
653 |a Software 
653 |a Machine learning 
653 |a Performance prediction 
653 |a Configurable programs 
653 |a Quality assurance 
653 |a Artificial neural networks 
653 |a Configurations 
653 |a Training 
700 1 |a Chen, Tao 
773 0 |t arXiv.org  |g (Feb 4, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2825307092/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2306.06651
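
Note: the abstract in field 520 outlines a three-step workflow (divide the configuration samples into divisions, fit a regularized local model per division, route each new configuration to its division's model). The sketch below illustrates that workflow only in spirit; it is not the authors' implementation, which is available at https://github.com/ideas-labo/DaL. Here KMeans is an assumed stand-in for the paper's division step, and scikit-learn's MLPRegressor with an L2 penalty stands in for the regularized Deep Neural Network local models; the class name DivideAndLearnSketch and all parameter values are illustrative assumptions.

# Minimal sketch of a divide-and-learn style predictor (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

class DivideAndLearnSketch:
    def __init__(self, n_divisions=4):
        # Assumed divider: KMeans, not necessarily the paper's division method.
        self.divider = KMeans(n_clusters=n_divisions, n_init=10, random_state=0)
        self.local_models = {}

    def fit(self, X, y):
        # Step 1: divide the training samples into divisions.
        labels = self.divider.fit_predict(X)
        # Step 2: train one regularized local model per division
        # (MLPRegressor with L2 penalty as a stand-in for a regularized DNN).
        for d in np.unique(labels):
            mask = labels == d
            model = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-2,
                                 max_iter=2000, random_state=0)
            model.fit(X[mask], y[mask])
            self.local_models[d] = model
        return self

    def predict(self, X):
        # Step 3: route each new configuration to its division's local model.
        labels = self.divider.predict(X)
        return np.array([self.local_models[d].predict(x.reshape(1, -1))[0]
                         for d, x in zip(labels, X)])

# Purely synthetic usage example (not data from the paper).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10)).astype(float)   # binary configuration options
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=200)
dal = DivideAndLearnSketch(n_divisions=4).fit(X[:150], y[:150])
print(dal.predict(X[150:155]))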