SpringProd and ApacheProd - executable text-code datasets

Citation Author(s):
Magdalena Kacmajor
Submitted by:
Magdalena Kacmajor
Last updated:
Thu, 11/21/2024 - 19:00
DOI:
10.21227/zhgy-jc38

Abstract 

M. Kacmajor and J.D. Kelleher, "ExTra: Evaluation of Automatically Generated Source Code Using Execution Traces" (submitted to IEEE TSE)

In this paper we propose ExTra, a novel approach to evaluating code quality based on the comparison of execution traces of the generated code and the ground-truth code. ExTra captures the behaviour of the programs implemented with the generated code, taking into account all internal and external dependencies. In contrast to source-code-based metrics, ExTra is semantically meaningful; and in contrast to evaluation approaches that measure the functional correctness of code, ExTra is suitable for evaluating code developed in the context of real-life software systems.

The first contribution of this paper is the design, implementation, and validation of ExTra. The value of ExTra is examined via experiments in which our metric and three source-code-based metrics (BLEU, Levenshtein distance, and CodeBLEU) are applied to two types of automatically generated source code: test code and production code. The results show that the scores produced by the three source-code-based metrics are highly correlated, while ExTra is clearly distinct. The qualitative analysis of the differences reveals a number of examples in which the ExTra scores are semantically more adequate than the scores computed from token comparison. Furthermore, the quantitative analysis of the agreement between the evaluation scores and test verdicts (produced by generated test cases or by test cases applied to the generated code) shows that ExTra is a much better predictor of "failed" verdicts than any of the three text-oriented metrics. On the whole, our results indicate that ExTra provides added value to the process of assessing the quality of generated code, and we recommend it as an evaluation tool complementary to source-code-based methods.

The second contribution of this paper is three new evaluation datasets, which contain executable code extracted from large, active GitHub repositories and can be used for evaluating models' performance with ExTra, or for other tasks that require executable code.

Instructions: 

SpringProd and ApacheProd are two evaluation datasets consisting of production code. They have been designed so that their format matches and extends the format of Concode, the benchmark dataset used within the CodeXGLUE project. As a result, these two datasets can be used straightforwardly to evaluate any model trained on the Concode training set. The datasets comprise short Java methods, each paired with an NL description and the "class environment". The Java code is preprocessed by replacing all local variables and method parameters with normalized names (locs0... locsN and arg0... argN, respectively) and replacing each string literal with a special token; furthermore, every method signature is reduced to its return type, its set of renamed parameters, and a special token, "function", that replaces the original method name. The NL description is taken from the method's Javadoc, and the class environment is represented by all member variable names and their corresponding types, along with the names and return types of all the methods within that class.
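
To make the normalization concrete, the following hypothetical before/after example shows a short Java method and the form it would take after this preprocessing. The method, its identifiers, and the surrounding class are invented for illustration and are not taken from SpringProd, ApacheProd, or Concode.

// Hypothetical illustration of the normalization described above.
import java.util.List;

class NormalizationExample {

    // Original method as it might appear in a repository.
    public int countMatches(String pattern, List<String> lines) {
        int count = 0;
        for (String line : lines) {
            if (line.contains(pattern)) {
                count++;
            }
        }
        return count;
    }

    // Normalized form: the method name is replaced by the special token
    // "function", parameters become arg0..argN, local variables become
    // locs0..locsN, and any string literals would be replaced by a special token.
    public int function(String arg0, List<String> arg1) {
        int locs0 = 0;
        for (String locs1 : arg1) {
            if (locs1.contains(arg0)) {
                locs0++;
            }
        }
        return locs0;
    }
}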

The datasets are extended with metadata that enable the execution of each code example. These metadata include: (1) information needed for reverting the preprocessing, i.e., the mapping back to the original method name and to the strings replaced with special tokens; (2) information needed for substituting the generated code for the ground-truth code, i.e., the path to the source file and the original method signature; and (3) code execution drivers necessary for launching the execution, i.e., references to existing unit tests.
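
As a rough sketch of what these metadata cover, a per-example record could be pictured as follows; the field names and types are illustrative only and do not reflect the dataset's actual schema.

// Hypothetical sketch of the per-example metadata described above.
import java.util.List;
import java.util.Map;

class ExampleMetadata {
    // (1) Reverting the preprocessing.
    String originalMethodName;            // real name behind the "function" token
    Map<String, String> stringLiterals;   // special tokens mapped back to original strings

    // (2) Substituting the generated code for the ground truth.
    String sourceFilePath;                // path to the source file containing the method
    String originalSignature;             // original method signature at that location

    // (3) Code execution drivers.
    List<String> unitTestReferences;      // existing unit tests that launch the execution
}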