Paper Title
A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization
Paper Authors
Paper Abstract
Nonconvex sparse models have received significant attention in high-dimensional machine learning. In this paper, we study a new model consisting of a general convex or nonconvex objective and a variety of continuous nonconvex sparsity-inducing constraints. For this constrained model, we propose a novel proximal point algorithm that solves a sequence of convex subproblems with gradually relaxed constraint levels. Each subproblem, having a proximal point objective and a convex surrogate constraint, can be solved efficiently given a fast routine for projection onto the surrogate constraint. We establish the asymptotic convergence of the proposed algorithm to Karush-Kuhn-Tucker (KKT) solutions. We also establish new convergence complexities for reaching an approximate KKT solution when the objective is smooth or nonsmooth, deterministic or stochastic, and convex or nonconvex; in each case the complexity is on par with that of gradient descent for the corresponding unconstrained optimization problem. To the best of our knowledge, this is the first study of first-order methods with complexity guarantees for nonconvex sparse-constrained problems. We perform numerical experiments to demonstrate the effectiveness of the new model and the efficiency of the proposed algorithm on large-scale problems.
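The abstract outlines the algorithmic template: a proximal term anchors each subproblem at the current iterate, a convex surrogate replaces the nonconvex sparsity constraint, and the constraint level is gradually relaxed. Below is a minimal, hypothetical Python sketch of that template, not the authors' implementation: it assumes a smooth objective supplied via `grad_f`, uses an l1 ball as a stand-in convex surrogate (projected with the standard sorting-based method of Duchi et al., 2008), and solves each proximal subproblem by projected gradient descent. The names `level_prox_point` and `project_l1_ball`, the step sizes, and the level-relaxation schedule are all illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection onto {x : ||x||_1 <= radius}, radius > 0
    (sorting-based method of Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)        # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def level_prox_point(grad_f, x0, eta0, rho=1.0, relax=1.05,
                     n_outer=50, n_inner=100, lr=1e-2):
    """Hypothetical sketch of the outer loop: each convex subproblem
        min_x f(x) + (rho/2)||x - x_k||^2  s.t.  ||x||_1 <= eta_k
    is solved by projected gradient descent, and the constraint level
    eta_k is gradually relaxed between outer iterations."""
    x, eta = x0.copy(), eta0
    for _ in range(n_outer):
        x_k = x.copy()                           # proximal center
        for _ in range(n_inner):                 # inner projected-gradient solver
            g = grad_f(x) + rho * (x - x_k)      # gradient of proximal objective
            x = project_l1_ball(x - lr * g, eta)
        eta *= relax                             # relax the feasible level
    return x
```

For instance, with a least-squares objective one would pass `grad_f = lambda x: A.T @ (A @ x - b)` for data `A, b`; the outer loop then traces the sequence of gradually relaxed feasible levels described in the abstract, while the inner loop plays the role of the fast projection-based subproblem solver.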