

Tarp-data

Citation Author(s):
Tao Wang
Hao Zeng
Jiahao Huang
Yuewen Wu
Heng Wu
Wenbo Zhang
Submitted by:
Jiahao Huang
Last updated:
DOI:
10.21227/6a5f-7e87
Data Format:

Abstract

This dataset contains the source data and experimental result data required by the tarp project. The cover image depicts our proposed adaptive resource allocation approach, based on graph neural networks, for optimizing QoS-aware interactive microservices in cloud computing. The approach uses the DAG topology to extract the global characteristics of the microservices and adaptively generates microservice resource allocation strategies, making efficient use of resources while preserving quality of service. It uses EGAT to extract microservice features and reinforcement learning to generate resource allocation policies. First, we define the microservice state graph. Then, we use EGAT to generate an embedding for each node in the graph by extracting hidden features from resource and network metrics. Based on the message-passing paradigm of graph neural networks (GNNs), we design microservice feature passing to capture correlations between microservices, thereby improving the transferability of our approach. Finally, we use DDPG to model microservices in a uniform and self-adaptive manner.
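
To make the pipeline above concrete, the following is a minimal sketch in Python, assuming PyTorch and a recent DGL release that provides EGATConv. The graph, feature dimensions, layer sizes, and the two-dimensional (CPU/memory) action are illustrative assumptions and are not taken from the tarp source code.

# Minimal sketch of the described pipeline: an EGAT layer embeds the
# microservice state graph, and a DDPG-style actor maps each embedding to a
# per-microservice resource allocation. All names and dimensions here are
# illustrative assumptions, not the tarp implementation.
import dgl
import torch
import torch.nn as nn
from dgl.nn.pytorch import EGATConv  # edge-featured graph attention


class MicroserviceEncoder(nn.Module):
    """Embed each microservice node from resource and network metrics."""

    def __init__(self, node_dim, edge_dim, hid_dim=32, heads=4):
        super().__init__()
        self.egat = EGATConv(node_dim, edge_dim, hid_dim, hid_dim, num_heads=heads)

    def forward(self, g, nfeat, efeat):
        h, _ = self.egat(g, nfeat, efeat)   # (N, heads, hid_dim)
        return h.mean(dim=1)                # average the attention heads


class Actor(nn.Module):
    """DDPG-style deterministic policy: embedding -> allocation in (0, 1)."""

    def __init__(self, hid_dim=32, act_dim=2):  # e.g. CPU and memory shares
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hid_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Sigmoid(),
        )

    def forward(self, h):
        return self.net(h)


if __name__ == "__main__":
    # Toy call graph 0 -> 1, 0 -> 2, made bidirected so every node receives
    # messages during attention (a common preprocessing assumption).
    g = dgl.graph(([0, 0, 1, 2], [1, 2, 0, 0]))
    nfeat = torch.randn(3, 8)               # per-node resource/QoS metrics
    efeat = torch.randn(4, 4)               # per-edge network metrics
    allocation = Actor()(MicroserviceEncoder(8, 4)(g, nfeat, efeat))
    print(allocation.shape)                 # torch.Size([3, 2])

In the actual experiments the embeddings would also feed a critic and be trained with DDPG's replay buffer and target networks, which are omitted here for brevity.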

Instructions:

Tarp experimental data

We collected the experimental data through a series of online experiments and saved it. You can refer directly to the open-source tarp project on GitHub [1] to run the experiments without collecting the data again. If you would like details of the experimental data and the collection process, please contact me by email.

Directory structure

Introduction

We implemented the SVM-RL approach adopted by FIRM [2] and the CNN-RL approach adopted by Sinan [3] as comparison baselines.

CNN-RL: uses a CNN to extract features of the static topology and RL to provision resources for key microservices

SVM-RL: uses an SVM to detect key microservices and RL to provision resources for them

Ours: the method proposed in this paper

dgl_graph: graph dataset in DGL format, used for GNN training in Experiments I and II (effectiveness of key microservice detection and transferability); see the loading sketch after this list

gnn: result data from Experiments I and II (effectiveness of key microservice detection and transferability)

load: simulated workloads. To reflect the complexity of real microservice application environments, the workloads are derived from the typical open-source cluster traces provided by Alibaba [4].

reward: reward data for the reinforcement learning process (Experiment III)
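
As referenced above, the .g files under dgl_graph are stored in DGL's binary graph format. Assuming they were written with dgl.save_graphs, one of them can be read back as sketched below; the stored feature and label keys are not documented on this page, so the code only inspects whatever is present.

# Hedged example of reading one of the .g graph files, assuming they were
# written with dgl.save_graphs (DGL's standard binary format). Key names are
# not documented here, so we only inspect what is stored.
import dgl

graphs, labels = dgl.load_graphs("dgl_graph/ms/1.g")
g = graphs[0]
print(g)                                   # node/edge counts and feature schemes
print(list(g.ndata.keys()), list(g.edata.keys()), list(labels.keys()))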

Directory structure

│  Readme.md          # readme file
│
├─CNNRL               # Raw data for CNN-RL
│      avg-160.csv
│      avg.csv
│      p95-160.csv
│      Readme.md
│      resource.csv
│      tp-80.csv
│
├─dgl_graph           # Graph dataset in dgl format
│  │   40-test.g      # graph data for trainticket
│  │
│  ├─ms               # graph data for media service
│  │      1.g
│  │      11.g
│  │      2.g
│  │      3.g
│  │      9.g
│  │      no.g
│  │
│  └─ss               # graph data for social network
│         1.g
│         2.g
│         3.g
│         4.g
│         5.g
│         6.g
│         no.g
│
├─gnn                 # The resulting data from Experiments I and II
│      acc.csv        # accuracy source data
│      loss.csv       # loss source data
│
├─load                # simulation data of workloads
│      load-b.csv
│      load-b.png
│      load.png
│      load1.csv
│      load1.png
│      load2.png
│      load3.csv
│
├─ours                # Raw data for our method
│      avg.csv
│      resource.csv
│      rewards-50.csv
│      slo.csv
│
├─reward              # Reward data for the reinforcement learning process
│      reward.csv
│      reward.png
│
└─SVMRL               # Raw data for SVM-RL
       avg-1.csv
       p95-1.csv
       resource-1.csv
       rewards-1.csv
       slo-1.csv
       tp-160.csv
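
The result CSVs can be inspected with standard tooling; a small pandas sketch is shown below. The exact column layout of each file is not documented here, so the snippet only prints the headers and first rows.

# Quick inspection of some result CSVs; column names are not documented on
# this page, so we only print what each file contains.
import pandas as pd

for path in ["gnn/acc.csv", "gnn/loss.csv", "reward/reward.csv", "ours/slo.csv"]:
    df = pd.read_csv(path)
    print(path, list(df.columns))
    print(df.head())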

References

[1] Tarp project. https://github.com/Zenghao-CQ/tarp.

[2] H. Qiu, S. S. Banerjee, S. Jha, Z. T. Kalbarczyk, and R. K. Iyer. FIRM: An Intelligent Fine-grained Resource Management Framework for SLO-Oriented Microservices. In OSDI, 2020.

[3] Y. Zhang, W. Hua, Z. Zhou, G. E. Suh, and C. Delimitrou. Sinan: ML-based and QoS-aware resource management for cloud microservices. In ASPLOS, pages 167–181. ACM, April 2021.

[4] Alibaba Cluster Trace Program. https://github.com/alibaba/clusterdata.

Funding Agency
National Key Research and Development Program
Grant Number
2023YFB3308402