LLM
Resource-usage fuzzing samples and related data. Contains samples from Python, random data, GPT-3.5, GPT-4, Gemini-1.0, Claude Instant, and Claude Opus, generated for 50 Python functions. Also included are resource measures for CPU time, instruction count, function calls, peak RAM usage, final RAM allocated, and coverage. These values were collected on an isolated system to account for interference from other processes.
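As an illustration of how measures like these can be gathered, the sketch below approximates CPU time, function-call count, and peak/final RAM for a single Python function using only the standard library. It is a minimal sketch, not the dataset's actual collection harness: the instruction counts and coverage figures would need dedicated tooling, and the helper name `measure` is illustrative.

```python
import sys
import time
import tracemalloc


def measure(func, *args, **kwargs):
    """Run func once and return rough resource measures.

    Illustrative only: instruction counts and coverage (as in the
    dataset) require dedicated tracing/coverage tools; here we
    approximate CPU time, call count, and peak/final RAM.
    """
    call_count = 0

    def profiler(frame, event, arg):
        nonlocal call_count
        if event == "call":  # count Python-level function calls
            call_count += 1

    tracemalloc.start()
    sys.setprofile(profiler)
    cpu_start = time.process_time()
    try:
        result = func(*args, **kwargs)
    finally:
        cpu_time = time.process_time() - cpu_start
        sys.setprofile(None)
        final_ram, peak_ram = tracemalloc.get_traced_memory()
        tracemalloc.stop()

    return {
        "cpu_time_s": cpu_time,
        "function_calls": call_count,
        "peak_ram_bytes": peak_ram,
        "final_ram_bytes": final_ram,
        "result": result,
    }


# Example: measure one sample input against a target function.
if __name__ == "__main__":
    print(measure(sorted, list(range(100_000, 0, -1))))
```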
This dataset has been curated to evaluate both the retrieval and generative accuracy of Retrieval-Augmented Generation (RAG) pipelines, with a particular focus on scenarios involving overlapping contexts. The dataset comprises two primary components: Motor data and Employee data. The Motor dataset includes master data for various motor models along with their corresponding manuals, linked by the motor's model name. Similarly, the Employee dataset comprises employee master data and associated policy documents, linked by department.
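As a rough sketch of how the two components can be linked for evaluation, the snippet below joins hypothetical master-data and document files on the stated keys (model name and department) using pandas. The file and column names are assumptions, not taken from the dataset.

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual dataset layout.
motors = pd.read_csv("motor_master_data.csv")   # one row per motor model
manuals = pd.read_csv("motor_manuals.csv")      # manual text per model

# Link master data to manuals on the motor's model name so each retrieval
# query can be checked against the document it should resolve to.
motor_corpus = motors.merge(manuals, on="model_name", how="left")

# The same pattern applies to the Employee component, keyed by department.
employees = pd.read_csv("employee_master_data.csv")
policies = pd.read_csv("policy_documents.csv")
employee_corpus = employees.merge(policies, on="department", how="left")
```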
The Text2RDF dataset is primarily designed to facilitate the transformation of text to RDF. It contains 1,000 annotated text segments, encompassing a total of 7,228 triplets. Fine-tuning large language models on this dataset enables them to extract triplets from text, which can ultimately be used to construct knowledge graphs.
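To illustrate the final step the description mentions, the sketch below turns model-extracted (subject, predicate, object) triplets into an RDF graph with rdflib. The example triplets and the `http://example.org/` namespace are placeholders; the dataset's actual annotation format may differ.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical triplets, in the (subject, predicate, object) form a
# fine-tuned model might emit for one annotated text segment.
extracted = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "population", "83 million"),
]

EX = Namespace("http://example.org/")  # placeholder namespace
g = Graph()

for subj, pred, obj in extracted:
    # Treat single-token objects as entities (URIs), everything else as literals.
    obj_node = EX[obj] if " " not in obj else Literal(obj)
    g.add((EX[subj], EX[pred], obj_node))

# Serialize the resulting knowledge-graph fragment as Turtle.
print(g.serialize(format="turtle"))
```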