Paper Title

A Tight Lower Bound for Uniformly Stable Algorithms

Paper Authors

Qinghua Liu, Zhou Lu

Abstract

Leveraging algorithmic stability to derive sharp generalization bounds is a classic and powerful approach in learning theory. Since Vapnik and Chervonenkis [1974] first formalized the idea for analyzing SVMs, it has been used to study many fundamental learning algorithms (e.g., $k$-nearest neighbors [Rogers and Wagner, 1978], the stochastic gradient method [Hardt et al., 2016], linear regression [Maurer, 2017], etc.). In a recent line of work, Feldman and Vondrak [2018, 2019] and Bousquet et al. [2020b] proved a high-probability generalization upper bound of order $\tilde{\mathcal{O}}(\gamma + \frac{L}{\sqrt{n}})$ for any uniformly $\gamma$-stable algorithm with an $L$-bounded loss function. Although much progress has been made on proving generalization upper bounds for stable algorithms, our knowledge of lower bounds is rather limited. In fact, to the best of our knowledge, no nontrivial lower bound has been known since the study of uniform stability began [Bousquet and Elisseeff, 2002]. In this paper we fill this gap by proving a tight generalization lower bound of order $\Omega(\gamma + \frac{L}{\sqrt{n}})$, which matches the best known upper bound up to logarithmic factors.
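For reference, the following is a brief sketch of the standard definitions behind the bounds quoted above, following Bousquet and Elisseeff [2002]; the notation ($\ell$ for the loss, $\mathcal{D}$ for the data distribution, $\Delta$ for the gap) is illustrative, not taken from the paper itself:

```latex
% Uniform stability: an algorithm A is uniformly \gamma-stable if, for all
% neighboring datasets S, S' of size n (differing in a single example) and
% every test point z,
\[
  \sup_{z}\,\bigl|\ell(A(S), z) - \ell(A(S'), z)\bigr| \;\le\; \gamma .
\]
% Generalization gap: the quantity the cited upper bound
% \tilde{\mathcal{O}}(\gamma + L/\sqrt{n}) and the paper's matching lower
% bound \Omega(\gamma + L/\sqrt{n}) both control is the difference between
% population risk and empirical risk,
\[
  \Delta(S) \;=\; \mathbb{E}_{z \sim \mathcal{D}}\bigl[\ell(A(S), z)\bigr]
    \;-\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(A(S), z_i\bigr),
\]
% where \ell takes values in [0, L] (the L-bounded loss assumption).
```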
