Pre-Training Representations of Binary Code Using Contrastive Learning

Bibliographic Details
Published in: arXiv.org (Dec 13, 2024), p. n/a
Main Author: Zhang, Yifan
Other Authors: Huang, Chen, Zhang, Yueke, Cao, Kevin, Andersen, Scott Thomas, Shao, Huajie, Leach, Kevin, Huang, Yu
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2724046652
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2724046652 
045 0 |b d20241213 
100 1 |a Zhang, Yifan 
245 1 |a Pre-Training Representations of Binary Code Using Contrastive Learning 
260 |b Cornell University Library, arXiv.org  |c Dec 13, 2024 
513 |a Working Paper 
520 3 |a Binary code analysis and comprehension are critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary code lacks semantics and is more difficult for human engineers to understand and analyze. In this paper, we present ContraBin, a contrastive learning technique that integrates source code and comment information along with binaries to create an embedding capable of aiding binary analysis and comprehension tasks. Specifically, we present three components in ContraBin: (1) a primary contrastive learning method for initial pre-training, (2) a simplex interpolation method to integrate source code, comments, and binary code, and (3) an intermediate representation learning algorithm to train a binary code embedding. We further analyze the impact of human-written and synthetic comments on binary code comprehension tasks, revealing a significant performance disparity. While synthetic comments provide substantial benefits, human-written comments are found to introduce noise, even resulting in performance drops compared to using no comments. These findings reshape the narrative around the role of comment types in binary code analysis. We evaluate the effectiveness of ContraBin through four indicative downstream tasks related to binary code: algorithmic functionality classification, function name recovery, code summarization, and reverse engineering. The results show that ContraBin considerably improves performance on all four tasks, measured by accuracy, mean average precision, and BLEU scores as appropriate. ContraBin is the first language representation model to incorporate source code, binary code, and comments into contrastive code representation learning and is intended to contribute to the field of binary code analysis. The dataset used in this study is available for further research. 
653 |a Semantics 
653 |a Source code 
653 |a Binary codes 
653 |a Interpolation 
653 |a Reverse engineering 
653 |a Algorithms 
653 |a Codes 
653 |a Binary system 
653 |a Machine learning 
653 |a Natural language (computers) 
653 |a Software 
653 |a Cybersecurity 
653 |a Representations 
653 |a Training 
700 1 |a Huang, Chen 
700 1 |a Zhang, Yueke 
700 1 |a Cao, Kevin 
700 1 |a Andersen, Scott Thomas 
700 1 |a Shao, Huajie 
700 1 |a Leach, Kevin 
700 1 |a Huang, Yu 
773 0 |t arXiv.org  |g (Dec 13, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2724046652/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2210.05102
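The record above does not include ContraBin's implementation, but the abstract names two concrete mechanisms: a contrastive pre-training objective and simplex interpolation across source code, comment, and binary embeddings. Below is a minimal illustrative sketch of those two ideas in PyTorch. Everything here is an assumption for exposition, not the authors' code: the function names (`info_nce`, `simplex_interpolate`), the InfoNCE-style loss, the Dirichlet sampling, and the temperature value are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Illustrative sketch only: ContraBin's actual architecture, loss, and
# hyperparameters are not given in this record. All names and values
# below are assumptions for exposition.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss: each anchor's positive is the
    same-index row of `positive`; all other rows in the batch act as
    in-batch negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def simplex_interpolate(src: torch.Tensor, cmt: torch.Tensor,
                        bin_: torch.Tensor) -> torch.Tensor:
    """Blend source-code, comment, and binary embeddings with weights
    sampled from the probability simplex (a Dirichlet draw), so training
    sees intermediate representations between the three modalities."""
    w = torch.distributions.Dirichlet(torch.ones(3)).sample().to(src.device)
    return w[0] * src + w[1] * cmt + w[2] * bin_

# Usage: embeddings from three encoders over the same batch of functions.
B, D = 32, 768
src_emb, cmt_emb, bin_emb = (torch.randn(B, D) for _ in range(3))
blended = simplex_interpolate(src_emb, cmt_emb, bin_emb)
loss = info_nce(bin_emb, blended)  # pull binary embeddings toward the blend
loss.backward()
```

The design intuition, as the abstract frames it, is that the interpolated points give the binary encoder a smooth bridge toward the richer semantics of source code and comments, rather than forcing a direct binary-to-source alignment in one step.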