Title
Modular-Relatedness for Continual Learning
Authors
Abstract
In this paper, we propose a continual learning (CL) technique that benefits sequential task learners by improving their retained accuracy and reducing catastrophic forgetting. The principal aim of our approach is to automatically extract modular parts of the neural network and then estimate the relatedness between tasks given these modular components. The technique is applicable to different families of CL methods, such as regularization-based approaches (e.g., Elastic Weight Consolidation) and rehearsal-based approaches (e.g., Gradient Episodic Memory) that require episodic memory. Empirical results demonstrate a remarkable performance gain (in terms of robustness to forgetting) for methods such as EWC and GEM when combined with our technique, especially when the memory budget is very limited.
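As background to the regularization-based family the abstract mentions, the sketch below illustrates the standard EWC-style quadratic penalty: parameters important to a previous task (per a diagonal Fisher importance estimate) are anchored to their old values. This is a minimal illustration of the baseline penalty only, not the paper's module-extraction or relatedness method; the function name, the toy arrays, and the grouping of parameters into "modules" are assumptions for illustration.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    # Quadratic EWC-style penalty: anchors params to old_params,
    # weighted per-parameter by a diagonal Fisher importance estimate.
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# Toy example (hypothetical values): two "modules" of parameters,
# the first deemed important for task A, the second unimportant.
old = np.array([1.0, 1.0, 0.0, 0.0])     # parameters after task A
new = np.array([1.5, 1.0, 1.0, 0.0])     # parameters while learning task B
fisher = np.array([2.0, 2.0, 0.1, 0.1])  # high importance on the first module

# Drifting the important parameter costs much more than drifting
# the unimportant one, which is what discourages forgetting.
print(ewc_penalty(new, old, fisher))  # → 0.3
```

Note how the same unit of parameter drift (0.5 in the first module vs. 1.0 in the second) contributes 0.25 vs. 0.05 to the penalty, so the optimizer is steered to reuse less important parameters for the new task.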