Paper Title
Learning to Generalize across Domains on Single Test Samples
Paper Authors
Paper Abstract
We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, resulting in the learned model not being explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to train our model to acquire the ability of single-sample adaptation at training time, so that it can further adapt itself to each single test sample at test time. We formulate the adaptation to the single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditional into the generation of model parameters. The adaptation to each test sample requires only one feed-forward computation at test time, without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns the ability to adapt to each single sample by mimicking domain shifts during training. Further, our model achieves performance at least comparable to, and often better than, state-of-the-art methods on multiple benchmarks for domain generalization.
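To make the core idea concrete, below is a minimal, hypothetical PyTorch-style sketch of sample-conditioned parameter generation as described in the abstract: an inference network maps a single sample's features to a distribution over classifier weights, sampled via the reparameterization trick, so test-time adaptation is a single feed-forward pass. All names (`SampleConditionedClassifier`, `infer`, the backbone, the KL weight) and architectural details are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleConditionedClassifier(nn.Module):
    """Sketch: generate classifier weights conditioned on one sample's features."""

    def __init__(self, feat_dim=512, num_classes=7):
        super().__init__()
        self.feat_dim = feat_dim
        self.num_classes = num_classes
        # Inference network: predicts mean and log-variance of the classifier
        # weight matrix, conditioned on a single sample's features.
        self.infer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 2 * feat_dim * num_classes),
        )

    def forward(self, feat_query, feat_condition):
        # feat_condition: features of the single (test) sample used as the conditional.
        # feat_query: features to classify (the same sample at test time).
        stats = self.infer(feat_condition)                     # [B, 2*D*C]
        mu, logvar = stats.chunk(2, dim=-1)
        # Reparameterization: draw classifier weights from the inferred posterior.
        w = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        w = w.view(-1, self.num_classes, self.feat_dim)        # [B, C, D]
        logits = torch.einsum('bcd,bd->bc', w, feat_query)
        return logits, mu, logvar

# Meta-training episode (sketch): treat held-out source-domain samples as
# "meta-target" data, condition the classifier on each single sample, and
# classify that same sample -- mimicking the test-time procedure.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
head = SampleConditionedClassifier(feat_dim=512, num_classes=7)

x = torch.randn(8, 3, 32, 32)                                  # meta-target batch
y = torch.randint(0, 7, (8,))
feat = backbone(x)
logits, mu, logvar = head(feat, feat)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()     # KL to N(0, I) prior
loss = F.cross_entropy(logits, y) + 1e-3 * kl                  # assumed KL weight
loss.backward()
```

At test time the same single feed-forward pass through `head(feat, feat)` would yield the sample-adapted prediction, with no gradient updates on target-domain data.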