Standard Dataset
LLM-Generated Python Fuzzing Seeds
- Submitted by: Gavin Black
- Last updated: Wed, 06/19/2024 - 13:57
- DOI: 10.21227/z8bn-9n91
Abstract
This dataset comprises over 38,000 seed inputs generated by a range of Large Language Models (LLMs), including ChatGPT-3.5, ChatGPT-4, Claude-Opus, Claude-Instant, and Gemini Pro 1.0, designed for use in fuzzing Python functions. The seeds were produced as part of a study evaluating how well LLMs can automate the creation of effective fuzzing inputs, a technique crucial for uncovering software defects in Python, where traditional seed-generation methods show limitations. The dataset targets 50 commonly used Python functions across various libraries, highlighting the diversity and potential of LLM-generated inputs to improve software testing. Each seed input has been evaluated for its effectiveness in improving code coverage and instruction count, supporting a framework for determining which LLMs are most effective for fuzzing tasks. Results from this dataset show that LLM-generated seeds can match or exceed the outcomes of conventional fuzzing campaigns, supporting the advancement of automated and scalable fuzzing technologies.
This dataset is intended for measuring coverage and resource usage of various Python functions. The seeds serve as inputs to the included Python harness files; examples of this process are available at https://github.com/gavin-black-dsu/fuzzing_seeds and in the included scripts.
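As a rough illustration of that workflow, the sketch below feeds a single seed to a harness function while recording line coverage with the standard coverage.py library. The seed file layout (one JSON object per file), the directory names, and the harness entry point fuzz_target are assumptions for illustration only; the actual harness files and evaluation scripts are those included with the dataset and in the linked repository.

```python
# Minimal sketch, assuming one JSON seed per file and a harness function
# that wraps the target Python function. Names and paths are hypothetical.
import json
import coverage


def run_seed(harness, seed_path):
    """Execute the harness on one seed while measuring line coverage."""
    with open(seed_path, "r") as f:
        seed = json.load(f)          # assumed seed format: one JSON object per file
    cov = coverage.Coverage()
    cov.start()
    try:
        harness(seed)                # harness forwards the seed to the target function
    except Exception:
        pass                         # exceptions/crashes are themselves findings
    finally:
        cov.stop()
    return cov


# Hypothetical usage with one of the included harnesses:
# from harnesses.json_loads_harness import fuzz_target
# cov = run_seed(fuzz_target, "seeds/chatgpt4/json_loads/seed_0001.json")
# cov.report()  # prints per-file coverage for this single seed
```

In practice, coverage and instruction counts would be aggregated over all seeds for a given LLM and target function, which is the comparison the dataset is designed to support.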