[1] KUNKEL J, DONKERS T, MICHAEL L, et al. Let me explain: impact of personal and impersonal explanations on trust in recommender systems [C]// Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2019: Article No. 487. 10.1145/3290605.3300717
[2] PEAKE G, WANG J. Explanation mining: post hoc interpretability of latent factor models for recommendation systems [C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2018: 2060-2069. 10.1145/3219819.3220072
[3] ZHU H P, ZHAO C C, LIU Q D, et al. Reciprocal-constrained interpretable job recommendation [J]. Journal of Computer Research and Development, 2021, 58(12): 2660-2672. (in Chinese) 10.7544/issn1000-1239.2021.20211008
[4] O'MAHONY M P, HURLEY N J, SILVESTRE G C M. Detecting noise in recommender system databases [C]// Proceedings of the 11th International Conference on Intelligent User Interfaces. New York: ACM, 2006: 109-115. 10.1145/1111449.1111477
[5] WANG H W, ZHANG F Z, WANG J L, et al. RippleNet: propagating user preferences on the knowledge graph for recommender systems [C]// Proceedings of the 27th ACM International Conference on Information and Knowledge Management. New York: ACM, 2018: 417-426. 10.1145/3269206.3271739
[6] YANG F, LIU N H, WANG S H, et al. Towards interpretation of recommender systems with sorted explanation paths [C]// Proceedings of the 2018 IEEE International Conference on Data Mining. Piscataway: IEEE, 2018: 667-676. 10.1109/ICDM.2018.00082
[7] FENG F L, HUANG W R, HE X N, et al. Should graph convolution trust neighbors? A simple causal inference method [C]// Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 1208-1218. 10.1145/3404835.3462971
[8] PEARL J, MACKENZIE D. The Book of Why: The New Science of Cause and Effect [M]. New York: Basic Books, 2018: 103-200.
[9] GHAZIMATIN A, BALALAU O, SAHA ROY R, et al. PRINCE: provider-side interpretability with counterfactual explanations in recommender systems [C]// Proceedings of the 13th International Conference on Web Search and Data Mining. New York: ACM, 2020: 196-204. 10.1145/3336191.3371824
[10] TRAN K H, GHAZIMATIN A, SAHA ROY R. Counterfactual explanations for neural recommenders [C]// Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 1627-1631. 10.1145/3404835.3463005
[11] FRIEDRICH G, ZANKER M. A taxonomy for generating explanations in recommender systems [J]. AI Magazine, 2011, 32(3): 90-98. 10.1609/aimag.v32i3.2365
[12] HERLOCKER J L, KONSTAN J A, RIEDL J. Explaining collaborative filtering recommendations [C]// Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work. New York: ACM, 2000: 241-250. 10.1145/358916.358995
[13] ABDOLLAHI B, NASRAOUI O. Using explainability for constrained matrix factorization [C]// Proceedings of the 11th ACM Conference on Recommender Systems. New York: ACM, 2017: 79-83. 10.1145/3109859.3109913
[14] SINGH J, ANAND A. Posthoc interpretability of learning to rank models using secondary training data [EB/OL]. (2018-06-29) [2021-09-16]. 10.1145/3471158.3472241
[15] SINGH J, ANAND A. Model agnostic interpretability of rankers via intent modelling [C]// Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. New York: ACM, 2020: 618-628. 10.1145/3351095.3375234
[16] ZHANG Y F, LAI G K, ZHANG M, et al. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis [C]// Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2014: 83-92. 10.1145/2600428.2609579
[17] CHEN C, ZHANG M, LIU Y Q, et al. Neural attentional rating regression with review-level explanations [C]// Proceedings of the 2018 World Wide Web Conference. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2018: 1583-1592. 10.1145/3178876.3186070
[18] XIN X, HE X N, ZHANG Y F, et al. Relational collaborative filtering: modeling multiple item relations for recommendation [C]// Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2019: 125-134. 10.1145/3331184.3331188
[19] WIEGREFFE S, PINTER Y. Attention is not not explanation [C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2019: 11-20. 10.18653/v1/D19-1002
[20] JAIN S, WALLACE B C. Attention is not explanation [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 3543-3556. 10.18653/v1/N19-1357
[21] RUAN L, WEN S S, NIU Y M, et al. Deep neural network visualization based on interpretable basis decomposition and knowledge graph [J]. Chinese Journal of Computers, 2021, 44(9): 1786-1805. (in Chinese) 10.11897/SP.J.1016.2021.01786
[22] WANG X, WANG D X, XU C R, et al. Explainable reasoning over knowledge graphs for recommendation [C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2019: 5329-5336. 10.1609/aaai.v33i01.33015329
[23] ZHANG Y, XU X R, ZHOU H N, et al. Distilling structured knowledge into embeddings for explainable and accurate recommendation [C]// Proceedings of the 13th International Conference on Web Search and Data Mining. New York: ACM, 2020: 735-743. 10.1145/3336191.3371790
[24] RIBEIRO M T, SINGH S, GUESTRIN C. "Why should I trust you?": explaining the predictions of any classifier [C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2016: 1135-1144. 10.1145/2939672.2939778
[25] TSANG M, CHENG D J, LIU H P, et al. Feature interaction interpretability: a case for explaining ad-recommendation systems via neural interaction detection [EB/OL]. (2020-07-19) [2021-11-05].
[26] SCHÖLKOPF B, LOCATELLO F, BAUER S, et al. Toward causal representation learning [J]. Proceedings of the IEEE, 2021, 109(5): 612-634. 10.1109/JPROC.2021.3058954
[27] NIU Y L, TANG K H, ZHANG H W, et al. Counterfactual VQA: a cause-effect look at language bias [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 12695-12705. 10.1109/CVPR46437.2021.01251
[28] XU S Y, LI Y Q, LIU S C, et al. Learning causal explanations for recommendation [EB/OL]. (2021-02-23) [2022-03-12].
[29] CHENG W Y, SHEN Y Y, HUANG L P, et al. Incorporating interpretability into latent factor models via fast influence analysis [C]// Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2019: 885-893. 10.1145/3292500.3330857
[30] YUAN H, YU H Y, GUI S R, et al. Explainability in graph neural networks: a taxonomic survey [EB/OL]. (2022-06-01) [2022-06-12]. 10.48550/arXiv.2012.15445
[31] HE X N, LIAO L Z, ZHANG H W, et al. Neural collaborative filtering [C]// Proceedings of the 26th International Conference on World Wide Web. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2017: 173-182. 10.1145/3038912.3052569