Paper Title
Plug-and-Play Model-Agnostic Counterfactual Policy Synthesis for Deep Reinforcement Learning based Recommendation
Paper Authors
Paper Abstract
Recent advances in recommender systems have demonstrated the potential of Reinforcement Learning (RL) to handle the dynamically evolving interaction between users and recommender systems. However, training an optimal RL agent is generally impractical given the sparse user feedback data common in recommender systems. To circumvent the lack of interaction data in current RL-based recommender systems, we propose to learn a general Model-Agnostic Counterfactual Synthesis (MACS) policy for counterfactual user-interaction data augmentation. The counterfactual synthesis policy aims to synthesise counterfactual states while preserving the information in the original state that is significant for the user's interests, building upon two training approaches we designed: learning from expert demonstrations and joint training. As a result, each counterfactual data point is synthesised based on the current recommendation agent's interaction with the environment, adapting to users' dynamic interests. We integrate the proposed policy with Deep Deterministic Policy Gradient (DDPG), Soft Actor-Critic (SAC) and Twin Delayed DDPG (TD3) recommendation agents in an adaptive pipeline that generates counterfactual data to improve recommendation performance. Empirical results on both online simulation and offline datasets demonstrate the effectiveness and generalisation of our counterfactual synthesis policy and verify that it improves the performance of RL recommendation agents.
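The data-augmentation idea in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual method: the synthesis policy is stood in for by a fixed random perturbation that leaves designated "interest" dimensions untouched (the paper instead *learns* this policy via expert demonstrations or joint training), and `STATE_DIM`, `PRESERVE_DIMS`, and the replay-buffer layout are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8           # assumed size of the user-state embedding
PRESERVE_DIMS = [0, 1]  # assumed indices encoding core user interests

def synthesis_policy(state, noise_scale=0.1):
    """Stand-in for a MACS-style policy: perturb the state to create a
    counterfactual while keeping interest-relevant dimensions intact."""
    perturbation = rng.normal(0.0, noise_scale, size=state.shape)
    perturbation[PRESERVE_DIMS] = 0.0  # preserve user-interest information
    return state + perturbation

def augment_replay_buffer(buffer, transition, n_counterfactuals=2):
    """Store the real transition plus counterfactual copies whose states
    are synthesised by the policy. Reusing the action and reward is a
    simplification of this sketch; in the paper the synthesis adapts to
    the current recommendation agent's interaction with the environment."""
    state, action, reward, next_state = transition
    buffer.append(transition)
    for _ in range(n_counterfactuals):
        cf_state = synthesis_policy(state)
        buffer.append((cf_state, action, reward, next_state))
    return buffer

# One real interaction yields three training transitions for the RL agent
# (e.g. a DDPG/SAC/TD3 recommender sampling from this buffer).
buffer = []
s = rng.normal(size=STATE_DIM)
transition = (s, 3, 1.0, rng.normal(size=STATE_DIM))
augment_replay_buffer(buffer, transition)
print(len(buffer))  # → 3
```

The counterfactual states agree with the original on the preserved dimensions but differ elsewhere, which is the property the abstract attributes to the learned synthesis policy.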