Paper Title

Robbins-Monro Augmented Lagrangian Method for Stochastic Convex Optimization

Authors

Rui Wang, Chao Ding

Abstract


In this paper, we propose a Robbins-Monro augmented Lagrangian method (RMALM) to solve a class of constrained stochastic convex optimization problems; it can be regarded as a hybrid of the Robbins-Monro type stochastic approximation method and the augmented Lagrangian method for convex optimization. Under mild conditions, we show that the proposed algorithm exhibits a linear convergence rate. Moreover, instead of verifying a computationally intractable stopping criterion, we show that the RMALM with an increasing number of subproblem iterations has a global complexity of $\mathcal{O}(1/\varepsilon^{1+q})$ for an $\varepsilon$-solution (i.e., $\mathbb{E}\left(\|x^k-x^*\|^2\right) < \varepsilon$), where $q$ is any positive number. Numerical results on synthetic and real data demonstrate that the proposed algorithm outperforms existing algorithms.
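To make the "hybrid" structure described in the abstract concrete, the following is a minimal sketch, not the paper's RMALM: an augmented-Lagrangian outer loop whose strongly convex subproblems are solved inexactly by Robbins-Monro-style stochastic approximation with diminishing step sizes $a_t = a_0/t$. The toy problem, noise model, and every parameter (`rho`, `a0`, iteration counts) are assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's RMALM): an augmented-Lagrangian
# outer loop with inner Robbins-Monro stochastic-approximation steps.
# The toy instance, noise level, and all parameters are assumptions.

rng = np.random.default_rng(0)

# Toy instance: minimize E[ 0.5 * ||x - (c + noise)||^2 ]  subject to  A x = b.
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])

def stochastic_grad(x):
    """Unbiased sample of the objective gradient: x - c - noise."""
    return x - c - 0.1 * rng.standard_normal(c.size)

def rmalm_sketch(outer_iters=50, inner_iters=200, rho=1.0, a0=0.5):
    x = np.zeros(c.size)
    lam = np.zeros(b.size)  # Lagrange-multiplier estimate
    for _ in range(outer_iters):
        # Inner loop: stochastic gradient steps on the augmented Lagrangian
        #   L_rho(x, lam) = f(x) + lam^T (Ax - b) + (rho/2) ||Ax - b||^2,
        # with Robbins-Monro step sizes a0 / t (diminishing, square-summable-free).
        for t in range(1, inner_iters + 1):
            g = stochastic_grad(x) + A.T @ (lam + rho * (A @ x - b))
            x = x - (a0 / t) * g
        lam = lam + rho * (A @ x - b)  # standard multiplier update
    return x

x_hat = rmalm_sketch()
# Exact solution of the noiseless problem: projection of c onto {Ax = b}.
x_star = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)  # = [-1/3, 2/3, 5/3]
```

Note one simplification: the complexity result quoted in the abstract relies on the subproblem iteration count growing across outer iterations (so that no stopping criterion need be checked), whereas this sketch keeps `inner_iters` fixed for brevity.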
