AI in Software Engineering
CodePromptEval is a dataset of 7,072 prompts designed to evaluate five prompt techniques (few-shot, persona, chain-of-thought, function signature, and list of packages) and their effect on the correctness, similarity, and quality of the complete functions that LLMs generate. Each data point in the dataset includes a function-generation task, a combination of prompt techniques to apply, the natural-language prompt that applies those techniques, the ground-truth function (a human-written function from the CoderEval dataset by Yu et al.), and the tests used to evaluate the correctness of the generated function.
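To make the five techniques concrete, here is a minimal sketch of how they might be composed into a single prompt. The function name, keyword arguments, and technique labels are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical composition of the five CodePromptEval prompt techniques.
# All names here are illustrative; the dataset stores pre-built prompts.

def build_prompt(task, techniques, signature=None, packages=None, examples=None):
    """Assemble a function-generation prompt from a set of techniques."""
    parts = []
    if "persona" in techniques:
        parts.append("You are an expert Python developer.")
    if "few-shot" in techniques and examples:
        for ex in examples:
            parts.append(f"Example:\n{ex}")
    parts.append(f"Task: {task}")
    if "function-signature" in techniques and signature:
        parts.append(f"Implement exactly this signature: {signature}")
    if "list-of-packages" in techniques and packages:
        parts.append("You may use these packages: " + ", ".join(packages))
    if "chain-of-thought" in techniques:
        parts.append("Think step by step before writing the code.")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Write a function that parses an ISO-8601 date string.",
    techniques={"persona", "function-signature", "chain-of-thought"},
    signature="def parse_date(s: str) -> datetime.date:",
)
print(prompt)
```

Evaluating every combination of such techniques (including leaving each one out) is what yields the dataset's grid of 7,072 prompts over the underlying tasks.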
The database contains PROCON metric values extracted from more than 30,400 source-code files (covering 14,950 bug reports) from GitHub repositories. Various machine-learning (ML) models trained on PROCON metrics outperform models trained on the object-oriented (OO) metrics of the PROMISE repository.
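The comparison above can be illustrated with a toy experiment. This is not the paper's actual pipeline: it uses synthetic data and a trivial nearest-centroid classifier, with the assumption (mirroring the claim) that the "process-style" feature carries more signal about bugginess than the "OO-style" one.

```python
# Illustrative only: compare how well two feature sets separate buggy
# vs. clean files. Data is synthetic; `signal` controls how informative
# a feature is about the bug label.
import random

random.seed(0)

def make_data(n, signal):
    """Generate n (feature, is_buggy) pairs; larger signal = easier separation."""
    data = []
    for _ in range(n):
        buggy = random.random() < 0.5
        feature = random.gauss(signal if buggy else 0.0, 1.0)
        data.append((feature, buggy))
    return data

def accuracy(data):
    """Nearest-centroid split: predict buggy if feature exceeds the midpoint."""
    buggy_vals = [f for f, b in data if b]
    clean_vals = [f for f, b in data if not b]
    mid = (sum(buggy_vals) / len(buggy_vals)
           + sum(clean_vals) / len(clean_vals)) / 2
    correct = sum((f > mid) == b for f, b in data)
    return correct / len(data)

# Assumption baked in: the process-style metric is the more informative one.
acc_process = accuracy(make_data(2000, signal=2.0))
acc_oo = accuracy(make_data(2000, signal=0.5))
print(f"process-metric accuracy: {acc_process:.2f}, OO-metric accuracy: {acc_oo:.2f}")
```

The real study trains standard ML classifiers on the extracted PROCON metrics; the sketch only shows the shape of the comparison, not its result.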