Paper Title
Explainable Recommender Systems via Resolving Learning Representations
Paper Authors
Paper Abstract
Recommender systems play a fundamental role in web applications by filtering massive amounts of information and matching user interests. While many efforts have been devoted to developing more effective models for various scenarios, the exploration of the explainability of recommender systems lags behind. Explanations can help improve user experience and reveal system defects. In this paper, after formally introducing the elements related to model explainability, we propose a novel explainable recommendation model that improves the transparency of the representation learning process. Specifically, to overcome the representation entanglement problem in traditional models, we revise the traditional graph convolution to discriminate information from different layers. In addition, each representation vector is factorized into several segments, where each segment relates to one semantic aspect of the data. Unlike previous work, in our model factor discovery and representation learning are conducted simultaneously, and we are able to handle extra attribute information and knowledge. In this way, the proposed model can learn interpretable and meaningful representations for users and items. Unlike traditional methods that must trade off explainability against effectiveness, the performance of our proposed explainable model is not negatively affected by accounting for explainability. Finally, comprehensive experiments are conducted to validate the performance of our model as well as the faithfulness of its explanations.
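The abstract mentions two mechanisms: a graph convolution that keeps information from different propagation layers distinguishable, and embeddings split into segments tied to semantic aspects. A minimal PyTorch sketch of how such a layer could look is given below; this is an illustrative assumption, not the authors' implementation, and the class name SegmentedGraphConv, the softmax layer weighting, and all dimensions are hypothetical choices.

```python
# Minimal illustrative sketch (assumed design, not the paper's actual code):
# (1) per-layer weights keep propagation layers distinguishable instead of
#     entangling them into a single average;
# (2) the final representation is viewed as K segments, one per semantic aspect.
import torch
import torch.nn as nn


class SegmentedGraphConv(nn.Module):
    def __init__(self, num_nodes: int, dim: int = 64,
                 num_segments: int = 4, num_layers: int = 3):
        super().__init__()
        assert dim % num_segments == 0, "dim must be divisible by num_segments"
        self.num_segments = num_segments
        self.num_layers = num_layers
        # Base (layer-0) embeddings for all users and items.
        self.embedding = nn.Embedding(num_nodes, dim)
        # One learnable weight per propagation layer, so information from
        # different layers can be discriminated rather than mixed uniformly.
        self.layer_weights = nn.Parameter(torch.ones(num_layers + 1))

    def forward(self, norm_adj: torch.Tensor) -> torch.Tensor:
        """norm_adj: (num_nodes, num_nodes) normalized user-item adjacency."""
        h = self.embedding.weight                     # (N, dim)
        layer_outputs = [h]
        for _ in range(self.num_layers):
            h = norm_adj @ h                          # one round of propagation
            layer_outputs.append(h)
        # Combine layers with separate, learned weights.
        weights = torch.softmax(self.layer_weights, dim=0)
        out = sum(w * z for w, z in zip(weights, layer_outputs))
        # View the representation as K segments, one per semantic aspect.
        return out.view(out.size(0), self.num_segments, -1)  # (N, K, dim/K)
```

Under this sketch, a segment-wise inner product between a user's and an item's k-th segments could be read as that aspect's contribution to the predicted score, which is the kind of explanation signal the abstract alludes to.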