Recommendation model combining self-features and contrastive learning
Xingyao YANG, Yu CHEN, Jiong YU, Zulian ZHANG, Jiaying CHEN, Dongxiao WANG
Journal of Computer Applications    2024, 44 (9): 2704-2710.   DOI: 10.11772/j.issn.1001-9081.2023091264

To address the over-smoothing and noise problems in embedding representations that arise during the message passing of graph convolution in graph neural network based recommendation, a Recommendation model combining Self-features and Contrastive Learning (SfCLRec) was proposed. The model was trained with a pre-training and formal-training architecture. Firstly, the embedding representations of users and items were pre-trained, fusing the nodes' own features to preserve their feature uniqueness, and a hierarchical contrastive learning task was introduced to mitigate noisy information from higher-order neighboring nodes. Then, in the formal training stage, the collaborative graph adjacency matrix was reconstructed according to a scoring mechanism. Finally, the predicted score was obtained from the final embeddings. Compared with existing graph neural network recommendation models such as LightGCN and Simple Graph Contrastive Learning (SimGCL), SfCLRec achieves better Recall and Normalized Discounted Cumulative Gain (NDCG) on three public datasets, ML-latest-small, Last.FM and Yelp, validating its effectiveness.
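The contrastive objective used in the pre-training stage can be illustrated with a generic InfoNCE loss between two views of the node embeddings. This is a minimal NumPy sketch under assumed conventions, not SfCLRec's implementation: the choice of views, the in-batch negatives, and the temperature value are all illustrative.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.2):
    """InfoNCE contrastive loss between two views of node embeddings.

    z1, z2: (n_nodes, dim) arrays; row i of each view is treated as a
    positive pair and all other rows as negatives (illustrative choices,
    not the paper's exact pairing across layers).
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                               # (n, n) similarities
    # row-wise log-softmax; the positives sit on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Aligned views yield a small loss, while mismatched views yield a large one, which is what drives the embeddings of the same node under different views together.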

Sequential recommendation based on hierarchical filter and temporal convolution enhanced self-attention network
Xingyao YANG, Hongtao SHEN, Zulian ZHANG, Jiong YU, Jiaying CHEN, Dongxiao WANG
Journal of Computer Applications    2024, 44 (10): 3090-3096.   DOI: 10.11772/j.issn.1001-9081.2023091352

To address the noise arising from users' unexpected interactions in practical recommendation scenarios, and the difficulty of capturing short-term demand biases caused by the dispersed attention of the self-attention mechanism, a sequential Recommendation model based on hierarchical Filter and Temporal convolution enhanced self-Attention network (FTARec) was proposed. Firstly, a hierarchical filter was used to remove noise from the raw data. Then, user embeddings were obtained by combining a temporal convolution enhanced self-attention network with decoupled hybrid positional encoding; in this process, temporal convolution compensated for the self-attention network's deficiency in modeling short-term dependencies among items. Finally, contrastive learning was incorporated to refine the user embeddings, and predictions were made from the final user embeddings. Compared with existing sequential recommendation models such as Self-Attentive Sequential Recommendation (SASRec) and the Filter-enhanced Multi-Layer Perceptron approach for sequential Recommendation (FMLP-Rec), FTARec achieves higher Hit Rate (HR) and Normalized Discounted Cumulative Gain (NDCG) on three publicly available datasets: Beauty, Clothing, and Sports. Compared with the suboptimal model DuoRec, FTARec improves HR@10 by 7.91%, 13.27%, and 12.84%, and NDCG@10 by 5.52%, 8.33%, and 9.88% on the three datasets respectively, verifying the effectiveness of the proposed model.
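The idea of letting a temporal convolution supplement self-attention's short-term modeling can be sketched as a causal convolution branch added to a causally masked single-head attention output. This NumPy sketch is illustrative only: the kernel size, single head, and the additive combination are assumptions, not FTARec's exact design.

```python
import numpy as np

def causal_conv(x, kernel):
    """Causal 1-D convolution over time: position t only sees positions <= t."""
    k = len(kernel)
    x_pad = np.concatenate([np.zeros((k - 1, x.shape[1])), x], axis=0)
    return np.stack([(x_pad[t:t + k] * kernel[:, None]).sum(axis=0)
                     for t in range(x.shape[0])])

def causal_self_attention(x):
    """Single-head scaled dot-product attention with a causal mask (no
    learned projections, for clarity of the sketch)."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores[np.triu(np.ones_like(scores), k=1).astype(bool)] = -1e9  # mask future
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

def conv_enhanced_attention(x, kernel):
    # global dependencies from attention + local short-term pattern from the
    # causal convolution branch
    return causal_self_attention(x) + causal_conv(x, kernel)
```

Because both branches are causal, changing a later item in the sequence never alters the representation of earlier positions, which is the property a sequential recommender needs for next-item prediction.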

Cross-layer data sharing based multi-task model
Ying CHEN, Jiong YU, Jiaying CHEN, Xusheng DU
Journal of Computer Applications    2022, 42 (5): 1447-1454.   DOI: 10.11772/j.issn.1001-9081.2021030516

To address negative transfer and the difficulty of information sharing between loosely correlated tasks in multi-task learning models, a cross-layer data sharing based multi-task model was proposed. The proposed model focuses on fine-grained knowledge sharing, retaining the memorization ability of shallow-layer shared experts and the generalization ability of deep-layer task-specific experts. Firstly, multi-layer shared experts were unified to obtain public knowledge among complicatedly correlated tasks. Then, the shared information was transferred to task-specific experts at different layers, so that partial public knowledge was shared between the upper and lower layers. Finally, a data-sample-based gated network was used to let each task select the information it needs autonomously, thereby alleviating the harmful effects of sample dependence on the model. Compared with the Multi-gate Mixture-Of-Experts (MMOE) model, the proposed model improved the F1-scores of the two tasks by 7.87 and 1.19 percentage points respectively on the UCI census-income dataset; it also decreased the Mean Square Error (MSE) of the regression task to 0.0047 and increased the Area Under Curve (AUC) of the classification task to 0.642 on the MovieLens dataset. Experimental results demonstrate that the proposed model mitigates negative transfer and learns public information among complicatedly correlated tasks more efficiently.
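The sample-dependent gating described above can be sketched as an MMOE-style mixture, where a softmax gate conditioned on the input sample weights the expert outputs for one task. This is a minimal NumPy sketch; the single linear layer per expert and the tensor shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gated_expert_mix(x, expert_ws, gate_w):
    """Mix expert outputs with a gate conditioned on the input sample.

    x:         (in_dim,) one data sample
    expert_ws: list of (in_dim, out_dim) expert weight matrices
    gate_w:    (in_dim, n_experts) gate weights for one task
    """
    outs = np.stack([x @ W for W in expert_ws])  # (n_experts, out_dim)
    gate = softmax(x @ gate_w)                   # (n_experts,), sums to 1
    return gate @ outs                           # convex combination of experts
```

Because the gate is a function of the sample itself, different samples (and different tasks, each with its own `gate_w`) draw on different experts, which is the mechanism that lets each task pick the shared information it needs.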
