Paper Title
Evolutionary Action Selection for Gradient-based Policy Learning
Paper Authors
Paper Abstract
Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) have recently been integrated to take advantage of both methods for better exploration and exploitation. The evolutionary part of these hybrid methods maintains a population of policy networks. However, existing methods focus on optimizing the parameters of the policy network, a space that is usually high-dimensional and tricky for an EA. In this paper, we shift the target of evolution from the high-dimensional parameter space to the low-dimensional action space. We propose Evolutionary Action Selection-Twin Delayed Deep Deterministic Policy Gradient (EAS-TD3), a novel hybrid method of EA and DRL. In EAS, we focus on optimizing the actions chosen by the policy network and attempt to obtain high-quality actions that promote policy learning through an evolutionary algorithm. We conduct several experiments on challenging continuous control tasks. The results show that EAS-TD3 achieves superior performance over other state-of-the-art methods.
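The abstract does not spell out how the evolutionary search over actions is performed or how the evolved actions feed back into TD3, so the sketch below is only an illustrative guess, not the authors' algorithm. It assumes candidates are generated by perturbing the policy's action within the action bounds and scored with a learned critic; all names (`evolve_action`, `q_func`, `pop_size`, `sigma`, etc.) are hypothetical.

```python
import numpy as np

def evolve_action(state, base_action, q_func, act_low, act_high,
                  pop_size=32, elite_frac=0.25, sigma=0.1, iters=3, rng=None):
    """Hypothetical sketch of evolutionary action selection.

    Perturbs the policy's action `base_action`, scores each candidate with a
    critic `q_func(state, action) -> float`, and resamples around the elite
    mean (a small cross-entropy-style search in the low-dimensional action
    space). Returns the best candidate found and its critic score.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = np.asarray(base_action, dtype=np.float64)
    std = np.full_like(mean, sigma)
    n_elite = max(1, int(pop_size * elite_frac))
    best_a, best_q = mean.copy(), q_func(state, mean)
    for _ in range(iters):
        # Sample a population of actions around the current mean and clip to bounds.
        cand = np.clip(rng.normal(mean, std, size=(pop_size, mean.size)),
                       act_low, act_high)
        scores = np.array([q_func(state, a) for a in cand])
        # Keep the elite candidates and refit the sampling distribution.
        elite = cand[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        if scores.max() > best_q:
            best_q = float(scores.max())
            best_a = cand[scores.argmax()].copy()
    return best_a, best_q

# Toy usage with a dummy critic that prefers actions close to 0.5:
q = lambda s, a: -float(np.sum((a - 0.5) ** 2))
a_star, q_star = evolve_action(state=None, base_action=np.zeros(2),
                               q_func=q, act_low=-1.0, act_high=1.0)
```

One plausible (but here assumed) way such evolved actions could "promote policy learning" is to store them alongside their states and add an auxiliary regression term to the actor update, pulling the policy's output toward the higher-quality evolved actions in addition to the standard TD3 objective.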