Paper Title
Intersection of Parallels as an Early Stopping Criterion
Paper Authors
Paper Abstract
A common way to avoid overfitting in supervised learning is early stopping, where a held-out set is used for iterative evaluation during training to find a sweet spot in the number of training steps that gives maximum generalization. However, such a method requires a disjoint validation set, so part of the labeled training data is usually left out for this purpose, which is not ideal when training data is scarce. Furthermore, when the training labels are noisy, the performance of the model on a validation set may not be an accurate proxy for generalization. In this paper, we propose a method to spot an early stopping point in the training iterations without the need for a validation set. We first show that, in the overparameterized regime, the randomly initialized weights of a linear model converge to the same direction during training. Using this result, we propose to train two parallel instances of a linear model, initialized with different random seeds, and to use their intersection as a signal for detecting overfitting. To detect the intersection, we use the cosine distance between the weights of the parallel models during the training iterations. Noticing that the final layer of an NN is a linear map from the penultimate-layer activations to the output logits, we build on our criterion for linear models and propose an extension to multi-layer networks using the new notion of counterfactual weights. We conduct experiments in two areas where early stopping has a noticeable impact on preventing overfitting of an NN: (i) learning from noisy labels; and (ii) learning to rank in IR. Our experiments on four widely used datasets confirm the effectiveness of our method for generalization. For a wide range of learning rates, our method, called Cosine-Distance Criterion (CDC), leads to better generalization on average than all the methods we compare against, in almost all of the tested cases.
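The core idea of the abstract can be illustrated with a minimal sketch: train two linear models from different random seeds in parallel, track the cosine distance between their weight vectors, and stop when that distance stops decreasing. Note this is an illustrative toy, not the paper's exact algorithm: the squared loss, the patience-based stopping rule, and all hyperparameters (`lr`, `patience`, the seeds) are assumptions for demonstration, and the paper's counterfactual-weights extension to multi-layer networks is not shown.

```python
import numpy as np

def cosine_distance(w1, w2):
    # 1 minus the cosine similarity between two weight vectors.
    return 1.0 - np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

def train_with_cdc(X, y, lr=0.1, max_steps=500, patience=10):
    """Train two parallel linear models with different random seeds;
    stop once the cosine distance between their weights has not
    decreased for `patience` steps (illustrative stopping rule)."""
    d = X.shape[1]
    # Two independent random initializations ("parallel instances").
    w1 = np.random.default_rng(0).normal(size=d)
    w2 = np.random.default_rng(1).normal(size=d)
    best_dist, best_step, best_w = np.inf, 0, w1.copy()
    for step in range(max_steps):
        # One full-batch gradient step on the squared loss for each model.
        for w in (w1, w2):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad  # in-place update of w1 / w2
        dist = cosine_distance(w1, w2)
        if dist < best_dist:
            best_dist, best_step, best_w = dist, step, w1.copy()
        elif step - best_step >= patience:
            break  # distance has stopped decreasing: early-stop here
    return best_w, best_step, best_dist
```

In an overparameterized setting (more features than samples), the shared component of the two weight trajectories is what drives the cosine distance down during training, which is the signal the criterion monitors.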