BinSD_dataset
- Submitted by: Lirong Fu
- Last updated: Wed, 11/01/2023 - 10:36
- DOI: 10.21227/38fk-z839
Abstract
AI-powered binary code similarity detection (BinSD), which transforms the intricate comparison of binary code into a distance measurement between neural-network-generated code embeddings, has been widely applied to program analysis. However, due to the diversity of the adopted embedding strategies, evaluation methodologies, running environments, and/or benchmarks, it is difficult to quantitatively understand to what extent the BinSD problem has been solved, especially in real-world applications. Moreover, the lack of an in-depth investigation of the increasingly complex embedding neural networks and the various evaluation methodologies has become a key factor hindering the development of AI-powered BinSD.
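As a rough illustration of this embedding-based formulation, the sketch below (our own simplified example, not the code of any evaluated system) compares two hypothetical function embeddings by cosine similarity; the embedding dimension and match threshold are placeholders.

```python
# Minimal sketch: BinSD reduces binary function comparison to a similarity
# measure between embedding vectors produced by a neural network.
# The embeddings below are random placeholders, not real model outputs.
import numpy as np

def cosine_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two function embeddings."""
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

# Two hypothetical 64-dimensional embeddings of binary functions.
func_x = np.random.rand(64)
func_y = np.random.rand(64)

# Pairs whose similarity exceeds a chosen threshold are reported as matches.
print(f"similarity = {cosine_similarity(func_x, func_y):.4f}")
```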
To fill these research gaps, in this paper, we present a systematic evaluation of state-of-the-art AI-powered BinSD approaches by conducting a comprehensive comparison of BinSD systems on similar function detection and two downstream applications, namely vulnerability search and license violation detection. Building upon this evaluation, we perform the first investigation of embedding neural networks and evaluation methodologies. The experimental results yield several findings that provide valuable insights into the BinSD domain, including: (1) although GNN-based BinSD systems currently achieve the best performance in similar function detection, there is still considerable room for improvement; (2) the capability of AI-powered BinSD approaches varies significantly across downstream applications; (3) existing evaluation methodologies still need substantial adjustment. For instance, the widely adopted ROC and AUC metrics often fail to accurately reflect model performance in practical, real-world use. Based on the extensive experiments and analysis, we further provide several promising future research directions. To facilitate future work in this research area, we will open-source the entire datasets, benchmarks, and implementation details.
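For concreteness, the snippet below is a hedged, hypothetical example of the conventional ROC/AUC-style evaluation discussed above, using scikit-learn on made-up labels and similarity scores; it is not the benchmark code released with this dataset.

```python
# Hypothetical illustration of ROC/AUC evaluation for a BinSD model.
from sklearn.metrics import roc_auc_score, roc_curve

# y_true: 1 if a function pair is truly similar, 0 otherwise (made-up data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# y_score: similarity scores a BinSD model might assign to the same pairs.
y_score = [0.91, 0.40, 0.78, 0.65, 0.55, 0.12, 0.88, 0.30]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
# A high AUC on balanced pairs can still overstate usefulness for downstream
# tasks such as vulnerability search, where the candidate pool is large and
# highly imbalanced.
```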
This dataset is used to train the AI-powered BinSD systems and perform similar function detection. From prior work, we find that function attributes such as the number of basic blocks can greatly influence the performance of BinSD systems. Thus, the key point in building this dataset is to include representative benchmark programs that cover as wide a range of function attributes as possible. Based on our empirical analysis, we first define a set of function attributes (the assembly size, the number of basic blocks, the number of contained callee functions, the architecture, and the optimization level of a binary function). We then construct the dataset from binaries that span these function attributes. Specifically, on top of the binaries released by BINKIT, we select 25 representative open-source programs (including OpenSSL, Busybox, Findutils, etc.), consisting of 34 ELFs, to construct the dataset.
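The sketch below is an illustrative rendering of the attribute schema described above, not the released dataset tooling; the field names, example values, and bucketing are our own assumptions.

```python
# Illustrative sketch of the per-function attributes used to characterize the
# dataset. All names and values here are assumptions for demonstration only.
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class FunctionAttributes:
    assembly_size: int      # number of assembly instructions
    num_basic_blocks: int   # basic blocks in the control-flow graph
    num_callees: int        # callee functions invoked by this function
    architecture: str       # e.g. "x86_64", "arm32", "mips32"
    optimization: str       # compiler optimization level, e.g. "O0".."O3"

# Hypothetical entries; a real dataset would populate these from the selected
# benchmark binaries (OpenSSL, Busybox, Findutils, ...).
functions = [
    FunctionAttributes(120, 14, 3, "x86_64", "O2"),
    FunctionAttributes(35, 4, 1, "arm32", "O0"),
    FunctionAttributes(410, 52, 9, "mips32", "O3"),
]

# Check how well the selected binaries cover the attribute space, e.g. the
# distribution of basic-block-count buckets per architecture.
coverage = Counter((f.architecture, f.num_basic_blocks // 10) for f in functions)
print(coverage)
```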