On the convergence properties of the projected gradient method for convex optimization
Projected gradient method | convex optimization | quasi-Fejér convergence
When applied to an unconstrained optimization problem with a convex objective, the steepest descent method has stronger convergence properties than in the nonconvex case: the whole sequence converges to an optimal solution under the sole hypothesis of existence of minimizers (i.e., without assuming, e.g., boundedness of the level sets). In this paper we consider the projected gradient method for constrained convex optimization. Convergence of the whole sequence to a minimizer, assuming only existence of solutions, has already been established for the variant in which the stepsizes are exogenously given and square summable. Here we prove the same result for the more standard (and also more efficient) variant, namely the one in which the stepsizes are determined through an Armijo search.
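To make the object of study concrete, the following is a minimal sketch of one common variant of the projected gradient method with an Armijo search: the iterate takes the full gradient step, is projected back onto the feasible set, and a backtracking search is performed along the resulting feasible direction. The box constraint, the parameter values, and the function names are illustrative assumptions, not the precise scheme analyzed in the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Orthogonal projection onto the box [lo, hi]^n, a simple convex set
    # chosen here for illustration.
    return np.clip(x, lo, hi)

def projected_gradient_armijo(f, grad, x0, lo, hi,
                              beta=0.5, sigma=1e-4,
                              max_iter=200, tol=1e-8):
    # Projected gradient with Armijo backtracking along the feasible
    # direction d = P(x - grad f(x)) - x.  Parameter values are
    # illustrative defaults, not those prescribed in the paper.
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        z = project_box(x - g, lo, hi)  # projected full gradient step
        d = z - x                        # feasible descent direction
        if np.linalg.norm(d) < tol:      # stationarity: x is (near-)optimal
            break
        t = 1.0
        # Armijo condition: sufficient decrease along the segment [x, z].
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x = x + t * d
    return x

# Usage: minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box;
# the minimizer is the projection of c onto the box, here (1, 0).
c = np.array([2.0, -0.5])
f = lambda x: np.sum((x - c) ** 2)
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient_armijo(f, grad, [0.5, 0.5], 0.0, 1.0)
```

Under convexity of `f`, the paper's result asserts that the whole sequence of iterates generated by such a scheme converges to a solution whenever one exists, without level-set boundedness assumptions.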