
Self-Supervised Hypergraph Transformer for Recommender Systems

125 Citations · 2022
Lianghao Xia, Chao Huang, Chuxu Zhang

SHT, a novel Self-Supervised Hypergraph Transformer framework that augments user representations by explicitly exploring global collaborative relationships, is proposed; it performs data augmentation over the user-item interaction graph so as to enhance the robustness of recommender systems.

Abstract

Graph Neural Networks (GNNs) have been shown to be promising solutions for collaborative filtering (CF) through the modeling of user-item interaction graphs. The key idea of existing GNN-based recommender systems is to recursively perform message passing along user-item interaction edges to refine the encoded embeddings. Despite their effectiveness, however, most current recommendation models rely on sufficient, high-quality training data so that the learned representations can accurately capture user preferences. User behavior data in many practical recommendation scenarios is often noisy and exhibits a skewed distribution, which may result in suboptimal representation performance in GNN-based models. In this paper, we propose SHT, a novel Self-Supervised Hypergraph Transformer framework that augments user representations by explicitly exploring global collaborative relationships. Specifically, we first empower the graph neural CF paradigm to maintain global collaborative effects among users and items with a hypergraph transformer network. With the distilled global context, a cross-view generative self-supervised learning component is proposed for data augmentation over the user-item interaction graph, so as to enhance the robustness of recommender systems. Extensive experiments demonstrate that SHT can significantly improve performance over various state-of-the-art baselines. Further ablation studies show the superior representation ability of our SHT recommendation framework in alleviating the data sparsity and noise issues. The source code and evaluation datasets are available at: https://github.com/akaxlh/SHT.
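To make the message-passing idea in the abstract concrete, here is a minimal sketch of one propagation round over a user-item bipartite interaction graph, using LightGCN-style symmetric degree normalization. This is an illustrative assumption about the general GNN-CF paradigm the abstract describes, not the actual SHT implementation; all names (`R`, `U`, `V`) are hypothetical.

```python
import numpy as np

# Toy user-item interaction matrix R (3 users x 4 items); 1 = observed interaction.
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

d = 8  # embedding dimension
rng = np.random.default_rng(0)
U = rng.normal(size=(3, d))  # user embeddings
V = rng.normal(size=(4, d))  # item embeddings

# Symmetric degree normalization of the bipartite adjacency,
# i.e. entry (u, i) becomes 1 / sqrt(deg(u) * deg(i)).
deg_u = R.sum(axis=1, keepdims=True)        # user degrees, shape (3, 1)
deg_i = R.sum(axis=0, keepdims=True)        # item degrees, shape (1, 4)
R_norm = R / (np.sqrt(deg_u) * np.sqrt(deg_i))

# One round of message passing: each side aggregates its neighbors' embeddings.
U_next = R_norm @ V      # users gather messages from interacted items
V_next = R_norm.T @ U    # items gather messages from interacting users
```

Stacking several such rounds (and combining the per-layer embeddings) yields the recursive refinement the abstract refers to; SHT's contribution is to complement this local propagation with a hypergraph transformer that captures global collaborative effects.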