Datasets
Standard Dataset
Replication Data for: Retrieval-Augmented Generation for Service Discovery: Chunking Strategies and Benchmarking

- Submitted by: Robin Pesl
- Last updated: Fri, 03/07/2025 - 16:29
- DOI: 10.21227/vdm4-k186
Abstract
Integrating multiple (sub-)systems is essential to create advanced Information Systems. Difficulties mainly arise when integrating dynamic environments, e.g., the design-time integration of services that do not yet exist. Traditionally, this has been addressed using a registry that provides the API documentation of the endpoints. Large Language Models (LLMs) have been shown to be capable of automatically creating system integrations (e.g., as service compositions) based on this documentation, but they require concise input due to input token limitations, especially for comprehensive API descriptions. Currently, it is unknown how best to preprocess these API descriptions. In the present work, we (i) analyze the use of Retrieval-Augmented Generation (RAG) for endpoint discovery and the chunking, i.e., preprocessing, of state-of-practice OpenAPI specifications to reduce the input token length while preserving the most relevant information. To further reduce the input token length of the composition prompt and improve endpoint retrieval, we propose (ii) a Discovery Agent that receives only a summary of the most relevant endpoints and retrieves specification details on demand. We evaluate RAG for endpoint discovery using (iii) SOCBench-D, a proposed novel service discovery benchmark representing a general setting across numerous domains, and the real-world RestBench benchmark, first measuring endpoint retrieval accuracy across the different chunking strategies and parameters. We then assess the Discovery Agent on the same test data set. The prototype shows how to successfully employ RAG for endpoint discovery to reduce the token count. Our experiments show that endpoint-based approaches outperform naive chunking methods for preprocessing. Relying on an agent significantly improves precision but tends to decrease recall, revealing the need for further reasoning capabilities.
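The abstract contrasts endpoint-based chunking with naive chunking of OpenAPI descriptions. As a minimal sketch of the endpoint-based idea (one chunk per path/HTTP-method pair), assuming a hypothetical helper `chunk_by_endpoint` and a toy spec that is not part of the dataset:

```python
import json

def chunk_by_endpoint(openapi: dict) -> list[dict]:
    """Split an OpenAPI spec into one chunk per endpoint (path + method).

    Illustrative only; the dataset's actual chunking implementation may differ.
    """
    chunks = []
    for path, methods in openapi.get("paths", {}).items():
        for method, operation in methods.items():
            chunks.append({
                "endpoint": f"{method.upper()} {path}",
                # Serialize only this single operation, keeping the chunk small
                # so it fits within LLM input token limits.
                "text": json.dumps({path: {method: operation}}),
            })
    return chunks

# Tiny illustrative spec (hypothetical, not taken from the benchmarks).
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/pets": {
            "get": {"summary": "List pets"},
            "post": {"summary": "Create a pet"},
        },
        "/pets/{id}": {"get": {"summary": "Get a pet"}},
    },
}

chunks = chunk_by_endpoint(spec)
```

Each chunk can then be embedded and indexed (e.g., in a FAISS store) for endpoint retrieval.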
The dataset contains the results of the RAG experiments:
- Experiment results for the RAG: {benchmark}/{embedding_model}/{chunking_strategy}/results_{top-k}.json.
- Experiment results for the Discovery Agent: {benchmark}/oai/agent/results_{top-k}.json.
- FAISS store (intermediate data required for exact reproduction of results; one folder for each embedding model): {benchmark}/{embedding_model}/{chunking_strategy}/{domain}/.
- Intermediate data of the LLM-based refinement methods required for the exact reproduction of results: *_parser.json.
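The result files above follow fixed path templates; a minimal sketch for locating them programmatically, assuming hypothetical helper names and example placeholder values (the actual benchmark, embedding-model, and strategy folder names are those used in the dataset):

```python
from pathlib import Path

def rag_results_path(base: str, benchmark: str, embedding_model: str,
                     chunking_strategy: str, top_k: int) -> Path:
    # Layout from the dataset description:
    # {benchmark}/{embedding_model}/{chunking_strategy}/results_{top-k}.json
    return (Path(base) / benchmark / embedding_model /
            chunking_strategy / f"results_{top_k}.json")

def agent_results_path(base: str, benchmark: str, top_k: int) -> Path:
    # Discovery Agent layout: {benchmark}/oai/agent/results_{top-k}.json
    return Path(base) / benchmark / "oai" / "agent" / f"results_{top_k}.json"

# Example with placeholder values (folder names are illustrative).
rag_path = rag_results_path(".", "restbench", "embedding_model", "endpoint", 10)
agent_path = agent_results_path(".", "restbench", 10)
```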