Paper Title

Domain Generalization with Correlated Style Uncertainty

Paper Authors

Zheyuan Zhang, Bin Wang, Debesh Jha, Ugur Demir, Ulas Bagci

Abstract

Domain generalization (DG) approaches intend to extract domain-invariant features that can lead to a more robust deep learning model. In this regard, style augmentation is a strong DG method that takes advantage of instance-specific feature statistics, which contain informative style characteristics, to synthesize novel domains. While it is one of the state-of-the-art methods, prior works on style augmentation have either disregarded the interdependence among distinct feature channels or solely constrained style augmentation to linear interpolation. To address these research gaps, in this work we introduce a novel augmentation approach, named Correlated Style Uncertainty (CSU), that surpasses the limitations of linear interpolation in style statistic space while simultaneously preserving vital correlation information. Our method's efficacy is established through extensive experimentation on diverse cross-domain computer vision and medical imaging classification tasks: the PACS, Office-Home, and Camelyon17 datasets, and the Duke-Market1501 instance retrieval task. The results showcase a remarkable improvement margin over existing state-of-the-art techniques. The source code is available at https://github.com/freshman97/CSU.
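To make the abstract's core idea concrete, below is a minimal sketch (not the authors' actual CSU implementation) of style-statistic augmentation with cross-channel correlation: per-instance channel means and standard deviations are perturbed with noise drawn from the empirical batch distribution of those statistics, so the perturbation respects channel interdependence instead of treating each channel independently or merely interpolating between instances. The function name and the numpy-based formulation are illustrative assumptions.

```python
import numpy as np

def correlated_style_perturb(x, alpha=1.0, rng=None):
    """Sketch of correlated style-statistic augmentation.

    x: feature maps of shape (B, C, H, W).
    Perturbs each instance's channel-wise mean/std with noise drawn as
    random combinations of the batch's centered statistics, so the noise
    carries the empirical cross-channel covariance (the "correlated"
    part), then re-applies the perturbed statistics to the features.
    """
    rng = np.random.default_rng() if rng is None else rng
    B, C, H, W = x.shape
    mu = x.mean(axis=(2, 3))                 # (B, C) channel means
    sig = x.std(axis=(2, 3)) + 1e-6          # (B, C) channel stds

    # Centered batch statistics; linear combinations of these rows
    # inherit the empirical cross-channel covariance.
    mu_c = mu - mu.mean(axis=0, keepdims=True)
    sig_c = sig - sig.mean(axis=0, keepdims=True)
    eps = rng.standard_normal((B, B)) / np.sqrt(B)
    new_mu = mu + alpha * eps @ mu_c         # correlated mean shift
    new_sig = np.clip(sig + alpha * eps @ sig_c, 1e-6, None)

    # Normalize out the old style, then apply the perturbed style.
    x_norm = (x - mu[:, :, None, None]) / sig[:, :, None, None]
    return x_norm * new_sig[:, :, None, None] + new_mu[:, :, None, None]
```

With `alpha=0` the function returns the input unchanged, which makes the augmentation strength easy to anneal; larger `alpha` moves instances farther into synthesized styles while staying on directions supported by the batch's own style statistics, rather than along a single linear-interpolation path between two instances.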
