
GPL_Dataset

Citation Author(s):
Kunhao Li (AnHui University)
Submitted by:
Kunhao Li
DOI:
10.21227/e85y-3q38

Abstract

Graph neural networks (GNNs) are widely applied to graph data modeling. However, existing GNNs are often trained in a task-driven manner that fails to fully capture the intrinsic nature of the graph structure, resulting in sub-optimal node and graph representations. To address this limitation, we propose a novel Graph structure Prompt Learning method (GPL) to enhance the training of GNNs, inspired by prompt mechanisms in natural language processing. GPL employs task-independent graph structure losses to encourage GNNs to learn intrinsic graph characteristics while simultaneously solving downstream tasks, producing higher-quality node and graph representations. In extensive experiments on eleven real-world datasets, GNNs trained with GPL significantly outperform their original counterparts on node classification, graph classification, and edge prediction tasks (by up to 10.28%, 16.5%, and 24.15%, respectively). By capturing the inherent structural prompts of graphs, GPL alleviates the over-smoothing issue and achieves new state-of-the-art performance, opening a novel and effective direction for GNN research with potential applications across various domains.
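The abstract describes combining a downstream task loss with a task-independent graph structure loss during training. The following is a minimal sketch of that idea in plain Python; the function names (`structure_loss`, `total_loss`), the edge-reconstruction form of the structure term, and the weighting factor `lam` are illustrative assumptions, not the paper's actual definitions.

```python
# Hedged sketch: a task-driven loss combined with a task-independent
# graph-structure term, as the abstract describes. The concrete loss
# form (binary edge reconstruction from node embeddings) is an
# assumption for illustration only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def structure_loss(embeddings, edges, non_edges):
    """Edge-reconstruction loss: embeddings of linked nodes should have
    high dot-product similarity, embeddings of unlinked nodes low."""
    loss = 0.0
    for u, v in edges:
        s = sum(a * b for a, b in zip(embeddings[u], embeddings[v]))
        loss -= math.log(sigmoid(s) + 1e-12)
    for u, v in non_edges:
        s = sum(a * b for a, b in zip(embeddings[u], embeddings[v]))
        loss -= math.log(1.0 - sigmoid(s) + 1e-12)
    return loss / max(1, len(edges) + len(non_edges))

def total_loss(task_loss, embeddings, edges, non_edges, lam=0.5):
    """Combined objective: downstream task loss plus a weighted,
    task-independent structure term (lam is a hypothetical weight)."""
    return task_loss + lam * structure_loss(embeddings, edges, non_edges)

# Toy example: 3 nodes with 2-d embeddings, one observed edge (0, 1)
# and one sampled non-edge (0, 2).
emb = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [-1.0, 0.5]}
print(total_loss(0.7, emb, edges=[(0, 1)], non_edges=[(0, 2)]))
```

Because the structure term does not depend on task labels, it can in principle be added to any GNN's training objective alongside the downstream loss.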

Instructions:

Scripts and data for GPL.

Dataset Files

Files have not been uploaded for this dataset

DATASET SCRIPTS