Paper Title

Projection-Free Adaptive Gradients for Large-Scale Optimization

Authors

Combettes, Cyrille W., Spiegel, Christoph, Pokutta, Sebastian

Abstract

The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
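To make the abstract's idea concrete, here is a minimal sketch of a projection-free Frank-Wolfe loop that blends in an AdaGrad-style adaptive gradient before calling the linear minimization oracle. This is an illustrative toy (the `l1` ball constraint, the quadratic objective, and the step-size schedule are assumptions for the example), not the authors' exact algorithm: feasibility is maintained by convex combinations of vertices rather than by projections.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    # Linear minimization oracle over the l1 ball:
    # argmin_{||s||_1 <= radius} <grad, s> is a signed vertex.
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def adaptive_frank_wolfe(grad_fn, x0, steps=200, radius=1.0, eps=1e-8):
    # Illustrative sketch only: a Frank-Wolfe loop whose gradient is
    # rescaled by an AdaGrad-style diagonal preconditioner before the
    # LMO call. Not the paper's exact method.
    x = x0.copy()
    G = np.zeros_like(x0)                # accumulated squared gradients
    for t in range(1, steps + 1):
        g = grad_fn(x)
        G += g * g
        g_hat = g / (np.sqrt(G) + eps)   # adaptive (preconditioned) gradient
        s = lmo_l1_ball(g_hat, radius)   # vertex minimizing the linearization
        gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s  # convex combination keeps x feasible
    return x

# Hypothetical toy problem: minimize ||x - b||^2 over the l1 ball of radius 1.
b = np.array([0.9, -0.3, 0.1])
x_star = adaptive_frank_wolfe(lambda x: 2 * (x - b), np.zeros(3))
```

Because every iterate is a convex combination of the previous iterate and a vertex of the constraint set, `x_star` is feasible by construction; no projection step is ever needed, which is the computational advantage the abstract highlights.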
