Paper Title

Exploring Cross-Domain Pretrained Model for Hyperspectral Image Classification

Authors

Hyungtae Lee, Sungmin Eum, Heesung Kwon

Abstract

A pretrain-finetune strategy is widely used to reduce the overfitting that can occur when data is insufficient for CNN training. The first few layers of a CNN pretrained on a large-scale RGB dataset are capable of acquiring general image characteristics, which are remarkably effective in tasks targeted at different RGB datasets. However, when it comes to the hyperspectral domain, where each domain has its unique spectral properties, the pretrain-finetune strategy can no longer be deployed in the conventional way, as it presents three major issues: 1) inconsistent spectral characteristics among the domains (e.g., frequency range), 2) inconsistent numbers of data channels among the domains, and 3) the absence of a large-scale hyperspectral dataset. We seek to train a universal cross-domain model which can later be deployed for various spectral domains. To achieve this, we physically furnish the model with multiple inlets while having a universal portion designed to handle the inconsistent spectral characteristics among different domains. Note that only the universal portion is used in the finetune process. This approach naturally enables our model to learn on multiple domains simultaneously, which acts as an effective workaround for the absence of a large-scale dataset. We have carried out a study to extensively compare models trained using the cross-domain approach with ones trained from scratch. Our approach was found to be superior both in accuracy and in training efficiency. In addition, we have verified that our approach effectively reduces the overfitting issue, enabling us to deepen the model up to 13 layers (from 9) without compromising accuracy.
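For intuition, below is a minimal PyTorch sketch of the multi-inlet architecture the abstract describes: one lightweight inlet per source domain absorbs that domain's band count, a shared universal trunk learns cross-domain features, and only the trunk would be carried over to the finetune stage. All module names, layer widths, and dataset figures here (DomainInlet, CrossDomainNet, the 64-channel trunk, the band/class counts) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; module names and sizes are assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn


class DomainInlet(nn.Module):
    """Domain-specific entry: projects a domain's band count to a shared width."""

    def __init__(self, num_bands: int, shared_channels: int = 64):
        super().__init__()
        # A 1x1 convolution absorbs the inconsistent channel counts
        # across hyperspectral domains (issue 2 in the abstract).
        self.proj = nn.Sequential(
            nn.Conv2d(num_bands, shared_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class CrossDomainNet(nn.Module):
    """One inlet per pretraining domain plus a universal shared trunk."""

    def __init__(self, bands_per_domain: dict, classes_per_domain: dict):
        super().__init__()
        self.inlets = nn.ModuleDict(
            {name: DomainInlet(b) for name, b in bands_per_domain.items()}
        )
        # Universal portion: the only part reused at finetune time,
        # per the abstract's description.
        self.trunk = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(128, c) for name, c in classes_per_domain.items()}
        )

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        # Route the batch through its domain's inlet, then the shared trunk.
        return self.heads[domain](self.trunk(self.inlets[domain](x)))


# Hypothetical usage: jointly pretrain on two source domains (band and
# class counts shown here are the commonly cited figures for these
# benchmarks, used purely for illustration).
net = CrossDomainNet(
    bands_per_domain={"indian_pines": 200, "pavia": 103},
    classes_per_domain={"indian_pines": 16, "pavia": 9},
)
logits = net(torch.randn(4, 200, 27, 27), domain="indian_pines")
```

Because every domain shares the trunk, each pretraining batch from any domain updates the universal portion, which is how joint multi-domain training can stand in for a single large-scale hyperspectral dataset.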
