Paper Title

Modeling Missing Annotations for Incremental Learning in Object Detection

Paper Authors

Fabio Cermelli, Antonino Geraci, Dario Fontanel, Barbara Caputo

Paper Abstract

Despite the recent advances in the field of object detection, common architectures are still ill-suited to incrementally detect new categories over time. They are vulnerable to catastrophic forgetting: they forget what has been already learned while updating their parameters in absence of the original training data. Previous works extended standard classification methods in the object detection task, mainly adopting the knowledge distillation framework. However, we argue that object detection introduces an additional problem, which has been overlooked. While objects belonging to new classes are learned thanks to their annotations, if no supervision is provided for other objects that may still be present in the input, the model learns to associate them to background regions. We propose to handle these missing annotations by revisiting the standard knowledge distillation framework. Our approach outperforms current state-of-the-art methods in every setting of the Pascal-VOC dataset. We further propose an extension to instance segmentation, outperforming the other baselines. Code can be found here: https://github.com/fcdl94/MMA
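
The key observation in the abstract is that region proposals without annotations may still contain objects of previously learned classes, so a standard classification loss would push them toward the background. The snippet below is a minimal, hypothetical PyTorch sketch of one way to express that idea: a proposal labelled as background is scored against the union "background or any old class" rather than background alone. The function name, class-index layout, and exact formulation are illustrative assumptions, not the precise losses of MMA; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def missing_annotation_aware_ce(logits: torch.Tensor,
                                targets: torch.Tensor,
                                num_old_classes: int) -> torch.Tensor:
    """Sketch of a classification loss that avoids pushing un-annotated
    old-class objects toward the background.

    Assumed (illustrative) class layout:
      index 0                      -> background
      indices 1 .. num_old_classes -> old classes (no annotations available)
      remaining indices            -> new classes (annotated)
    """
    log_probs = F.log_softmax(logits, dim=1)  # (N, C)

    # Proposals labelled as background: score the union event
    # "background OR any old class" via log-sum-exp over those entries.
    log_p_bg_or_old = torch.logsumexp(log_probs[:, :num_old_classes + 1], dim=1)

    # Annotated proposals: standard log-likelihood of the target class.
    log_p_target = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

    is_background = targets == 0
    return -torch.where(is_background, log_p_bg_or_old, log_p_target).mean()


# Toy usage: 4 proposals, 1 background + 2 old + 3 new classes (C = 6).
logits = torch.randn(4, 6)
targets = torch.tensor([0, 4, 0, 5])  # two background, two new-class proposals
loss = missing_annotation_aware_ce(logits, targets, num_old_classes=2)
print(loss.item())
```

For annotated proposals this reduces to ordinary cross-entropy, so the modification only changes how background-labelled regions contribute to the loss; an analogous adjustment can be made to the knowledge-distillation term.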
