CodePromptEval
- Submitted by: Ranim Khojah
- Last updated: Sun, 12/22/2024 - 11:51
- DOI: 10.21227/sj94-ez71
Abstract
CodePromptEval is a dataset of 7,072 prompts designed to evaluate five prompt techniques (few-shot, persona, chain-of-thought, function signature, and list of packages) and their effect on the correctness, similarity, and quality of the generated complete functions. Each data point includes a function-generation task, the combination of prompt techniques to apply, the natural-language prompt that applies those techniques, the ground-truth function (a human-written function based on the CoderEval dataset by Yu et al.), and the tests used to evaluate the correctness of the generated function. The prompts in the dataset are carefully designed to apply the five prompt techniques.
The dataset is provided in CSV format. It is recommended to load it into a Pandas dataframe, as in the sketch below.
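A minimal sketch of loading the dataset with pandas. The filename "CodePromptEval.csv" is an assumption and should be replaced with the name of the downloaded file; the column names will depend on the actual CSV schema.

```python
import pandas as pd

# Load the CodePromptEval CSV into a dataframe.
# "CodePromptEval.csv" is a placeholder filename for the downloaded dataset.
df = pd.read_csv("CodePromptEval.csv")

# Inspect the schema and a few data points (task, prompt-technique
# combination, natural-language prompt, ground-truth function, tests).
print(df.columns.tolist())
print(df.head())
```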