Paper Title


Riemannian stochastic recursive momentum method for non-convex optimization

Authors

Andi Han, Junbin Gao

Abstract


We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a near-optimal complexity of $\tilde{\mathcal{O}}(\epsilon^{-3})$ to find an $\epsilon$-approximate solution with one sample per iteration. That is, our method requires $\mathcal{O}(1)$ gradient evaluations per iteration and does not require restarting with a large-batch gradient, which is commonly used to obtain a faster rate. Extensive experimental results demonstrate the superiority of our proposed algorithm.
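To illustrate the recursive momentum estimator the abstract refers to, below is a minimal Euclidean sketch of a STORM-style update, $d_t = g(x_t; z_t) + (1-a)\,(d_{t-1} - g(x_{t-1}; z_t))$, which uses a single fresh sample $z_t$ per iteration. This is an assumption-laden simplification, not the paper's algorithm: the Riemannian version would additionally transport $d_{t-1}$ and the old gradient to the tangent space at $x_t$ and replace the step `x - lr * d` with a retraction; the function and parameter names here are hypothetical.

```python
import numpy as np

def storm_sketch(stoch_grad, x0, T=500, lr=0.1, a=0.5, seed=0):
    """Euclidean sketch of a recursive-momentum (STORM-style) estimator.

    Each iteration draws ONE sample z and evaluates the stochastic
    gradient at both the new and the previous iterate with that sample:
        d = g(x; z) + (1 - a) * (d_prev - g(x_prev; z))
    The Riemannian method would transport d_prev to the tangent space at
    x and use a retraction instead of the linear step (omitted here).
    """
    rng = np.random.default_rng(seed)
    x_prev = np.asarray(x0, dtype=float)
    # Initialize the estimator with a single stochastic gradient.
    d = stoch_grad(x_prev, rng.standard_normal(x_prev.shape))
    x = x_prev - lr * d
    for _ in range(T):
        z = rng.standard_normal(x.shape)  # one sample per iteration
        # Recursive momentum / variance-reduced gradient estimator.
        d = stoch_grad(x, z) + (1 - a) * (d - stoch_grad(x_prev, z))
        x_prev, x = x, x - lr * d         # gradient step (retraction omitted)
    return x

# Usage: minimize f(x) = 0.5 * ||x||^2 with noisy gradients g(x; z) = x + 0.1 z.
x_final = storm_sketch(lambda x, z: x + 0.1 * z, x0=np.ones(3))
```

In this toy quadratic problem the iterates settle near the minimizer at the origin, with residual fluctuation governed by the momentum parameter `a` and the step size.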
