Title

Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery

Authors

Felix Chalumeau, Raphael Boige, Bryan Lim, Valentin Macé, Maxime Allard, Arthur Flajolet, Antoine Cully, Thomas Pierrot

Abstract

Deep Reinforcement Learning (RL) has emerged as a powerful paradigm for training neural policies to solve complex control tasks. However, these policies tend to be overfit to the exact specifications of the task and environment they were trained on, and thus do not perform well when conditions deviate slightly or when composed hierarchically to solve even more complex tasks. Recent work has shown that training a mixture of policies, as opposed to a single one, that are driven to explore different regions of the state-action space can address this shortcoming by generating a diverse set of behaviors, referred to as skills, that can be collectively used to great effect in adaptation tasks or for hierarchical planning. This is typically realized by including a diversity term - often derived from information theory - in the objective function optimized by RL. However, these approaches often require careful hyperparameter tuning to be effective. In this work, we demonstrate that less widely-used neuroevolution methods, specifically Quality Diversity (QD), are a competitive alternative to information-theory-augmented RL for skill discovery. Through an extensive empirical evaluation comparing eight state-of-the-art algorithms (four flagship algorithms from each line of work) on the basis of (i) metrics directly evaluating the skills' diversity, (ii) the skills' performance on adaptation tasks, and (iii) the skills' performance when used as primitives for hierarchical planning; QD methods are found to provide equal, and sometimes improved, performance whilst being less sensitive to hyperparameters and more scalable. As no single method is found to provide near-optimal performance across all environments, there is a rich scope for further research which we support by proposing future directions and providing optimized open-source implementations.
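The abstract contrasts two routes to skill discovery: RL augmented with an information-theoretic diversity bonus (e.g. a DIAYN-style intrinsic reward of the form log q(z|s) - log p(z) that makes skills distinguishable from states) and Quality Diversity neuroevolution, which maintains an archive of diverse, high-performing policies. To make the QD side concrete, below is a minimal MAP-Elites-style sketch in plain Python/NumPy. This is not the paper's optimized open-source implementation; the toy `evaluate`, the `descriptor_to_cell` discretization, the grid size, and the mutation scale are all illustrative assumptions standing in for policy rollouts and task-specific behavior descriptors.

```python
import numpy as np

def evaluate(params):
    """Toy stand-in for an episode rollout: returns (fitness, behavior descriptor).
    In a locomotion task, fitness would be the return of a policy parameterized by
    `params`, and the descriptor something like the agent's final (x, y) position."""
    fitness = -float(np.sum(params ** 2))     # toy fitness to maximize
    descriptor = np.tanh(params[:2])          # toy 2-D descriptor in (-1, 1)
    return fitness, descriptor

def descriptor_to_cell(descriptor, cells_per_dim=10):
    """Discretize a descriptor in (-1, 1)^2 into a grid-cell index."""
    idx = np.floor((descriptor + 1.0) / 2.0 * cells_per_dim).astype(int)
    return tuple(np.clip(idx, 0, cells_per_dim - 1))

def map_elites(iterations=20_000, num_params=8, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    archive = {}                              # cell -> (fitness, params)
    for _ in range(iterations):
        if archive:
            # Select a random elite from the archive and mutate it.
            keys = list(archive.keys())
            _, parent = archive[keys[rng.integers(len(keys))]]
            candidate = parent + sigma * rng.standard_normal(num_params)
        else:
            candidate = rng.standard_normal(num_params)
        fitness, descriptor = evaluate(candidate)
        cell = descriptor_to_cell(descriptor)
        # Keep the candidate only if its cell is empty or it beats the incumbent elite.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, candidate)
    return archive                            # the discovered "skills": one elite per cell

if __name__ == "__main__":
    skills = map_elites()
    print(f"{len(skills)} skills discovered across the descriptor grid")
```

The key design difference this sketch highlights is that diversity in QD comes from the archive structure itself (one elite per behavior-descriptor cell) rather than from a learned diversity bonus added to the RL objective, which is why QD methods tend to expose fewer sensitive hyperparameters.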
