Paper Title


Learning to Counterfactually Explain Recommendations

Authors

Yuanshun Yao, Chong Wang, Hang Li

Abstract


Recommender system practitioners are facing increasing pressure to explain recommendations. We explore how to explain recommendations using counterfactual logic, i.e., "Had you not interacted with the following items, we would not have recommended this item." Compared to traditional explanation logic, counterfactual explanations are easier to understand, more technically verifiable, and more informative in terms of giving users control over recommendations. The major challenge in generating such explanations is the computational cost, because it requires repeatedly retraining the model to measure the effect on a recommendation of removing parts of the user history. We propose a learning-based framework to generate counterfactual explanations. The key idea is to train a surrogate model to learn the effect that removing a subset of user history has on the recommendation. To this end, we first artificially simulate the counterfactual outcomes on the recommendation after deleting subsets of history. Then we train a surrogate model to learn the mapping between a history deletion and the corresponding change in the recommendation caused by the deletion. Finally, to generate an explanation, we find the history subset that the surrogate model predicts is most likely to remove the recommendation. Through offline experiments and online user studies, we show that, compared to baselines, our method generates explanations that are more counterfactually valid and that users find more satisfactory.
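The three-step pipeline in the abstract (simulate deletions, fit a surrogate, search for the explanation subset) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "recommender" is a toy linear scorer with a made-up threshold, the surrogate is an ordinary least-squares model, and the subset search is a simple greedy pass over the learned influence weights. All names (`rec_score`, `true_influence`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-in for a recommender (hypothetical, for illustration only) ---
# Each history item contributes a fixed hidden weight to the recommended
# item's score; the item stays recommended while the score exceeds a threshold.
n_items = 8
true_influence = rng.uniform(0.0, 1.0, size=n_items)  # hidden ground truth
threshold = true_influence.sum() * 0.6

def rec_score(keep_mask):
    """Score of the recommended item after deleting history items where mask == 0."""
    return float(true_influence @ keep_mask)

# --- Step 1: simulate counterfactual outcomes for random history deletions ---
n_samples = 500
masks = rng.integers(0, 2, size=(n_samples, n_items)).astype(float)
score_drops = np.array([rec_score(np.ones(n_items)) - rec_score(m) for m in masks])

# --- Step 2: fit a linear surrogate mapping a deletion to the score change ---
deletions = 1.0 - masks  # 1 where a history item was removed
w, *_ = np.linalg.lstsq(deletions, score_drops, rcond=None)

# --- Step 3: greedily pick the smallest subset the surrogate predicts will
# push the score below the recommendation threshold ---
order = np.argsort(-w)  # most influential history items first
explanation, predicted_score = [], rec_score(np.ones(n_items))
for i in order:
    if predicted_score <= threshold:
        break
    explanation.append(int(i))
    predicted_score -= w[i]

print("counterfactual explanation (history item ids):", explanation)
```

The explanation then reads exactly as in the abstract: "had you not interacted with these items, we would not recommend this one." The real method replaces the toy scorer with a black-box recommender, where each simulated deletion would otherwise require retraining; the surrogate amortizes that cost.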
