Title
Motifs-based recommender system via hypergraph convolution and contrastive learning
Abstract
Recently, the strategy of leveraging various motifs to model social semantic information and using self-supervised learning tasks to boost recommendation performance has proven very promising. In this paradigm, each channel encodes a common motif (e.g., a triangular social motif) via hypergraph convolution. Richer motif semantics can be captured through multiple channels, and self-supervised tasks (such as contrastive learning) built on this multichannel information can greatly improve recommendation performance when data labels are scarce. However, accurately determining the relationships between different channels and fully exploiting them while preserving the uniqueness of each channel is a problem that has not been well studied or resolved in this field. This paper experimentally demonstrates the drawbacks of directly constructing contrastive learning tasks on different channels and proposes a scheme for interactive modeling and matching of representations across channels. This is the first such attempt in the field of recommender systems, and we believe it will inspire future self-supervised learning research based on multichannel information. To address this problem, we propose a cross-motif matching representation model based on attentive interaction, which efficiently models the relationships among cross-motif information. Building on this, we further propose a hierarchical self-supervised learning model that performs self-supervision both within and between channels, improving the ability of self-supervised tasks to autonomously mine latent information at different levels. Extensive experiments on multiple public datasets show that the proposed method significantly outperforms state-of-the-art methods on various metrics, in both general and cold-start scenarios. An analysis of model variants further verifies the benefits of the proposed cross-motif matching representation model and hierarchical self-supervised model.
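Illustrative sketch (not from the paper): the following minimal PyTorch example shows how the ideas summarized in the abstract could fit together, with per-motif hypergraph convolution channels, an attentive cross-channel matcher, and an InfoNCE-style contrastive objective applied both within and across channels. All module names, tensor shapes, and the simplified propagation rule D^{-1} H H^T X W are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotifHypergraphConv(nn.Module):
    """One channel: propagate embeddings over a motif-induced hypergraph (simplified rule)."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, x, incidence):
        # incidence: (num_users, num_hyperedges) motif incidence matrix H
        deg = incidence.sum(dim=1, keepdim=True).clamp(min=1.0)   # node degrees D
        msg = incidence @ (incidence.t() @ x)                     # H H^T X
        return F.relu(self.weight(msg / deg))                     # D^{-1} H H^T X W


class CrossChannelMatcher(nn.Module):
    """Attentive interaction: each channel's view attends over the other channels."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, channel_embs):
        # channel_embs: (num_users, num_channels, dim)
        matched, _ = self.attn(channel_embs, channel_embs, channel_embs)
        return matched


def info_nce(anchor, positive, temperature=0.2):
    """InfoNCE contrastive loss between two views of the same users."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature                  # (N, N) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)   # diagonal = positives
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    num_users, dim = 64, 32
    x = torch.randn(num_users, dim)
    # Two toy motif incidence matrices (e.g., two different social motifs).
    h1 = (torch.rand(num_users, 16) > 0.8).float()
    h2 = (torch.rand(num_users, 16) > 0.8).float()

    conv1, conv2 = MotifHypergraphConv(dim), MotifHypergraphConv(dim)
    z1, z2 = conv1(x, h1), conv2(x, h2)                 # per-channel views

    matcher = CrossChannelMatcher(dim)
    matched = matcher(torch.stack([z1, z2], dim=1))     # cross-motif matching

    # Hierarchical self-supervision: a within-channel term and a cross-channel term.
    loss = info_nce(z1, z2) + info_nce(matched[:, 0], matched[:, 1])
    print(float(loss))
```

In this sketch the matched representations, rather than the raw channel outputs alone, feed the second contrastive term, mirroring the abstract's point that contrasting channels directly, without modeling their relationships, can be suboptimal.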
Year
2022
DOI
10.1016/j.neucom.2022.09.102
Venue
Neurocomputing
Keywords
Self-supervised learning, Graph neural networks, Hypergraph, Contrastive learning
DocType
Journal
Volume
512
ISSN
0925-2312
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
Yundong Sun, 1, 0, 1.01
Dongjie Zhu, 2, 4, 4.77
Haiwen Du, 3, 0, 1.01
Zhaoshuo Tian, 4, 0, 0.34