How Projected Gradient Descent Works, Part 2 (Artificial Intelligence)

Monodeep Mukherjee
3 min read · Oct 2, 2022
Photo by American Public Power Association on Unsplash

1. On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares (arXiv)

Authors: Trung Vu, Raviv Raich

Abstract: Many recent problems in signal processing and machine learning such as compressed sensing, image restoration, matrix/tensor recovery, and non-negative matrix factorization can be cast as constrained optimization. Projected gradient descent is a simple yet efficient method for solving such constrained optimization problems. Local convergence analysis furthers our understanding of its asymptotic behavior near the solution, offering sharper bounds on the convergence rate compared to global convergence analysis. However, local guarantees often appear scattered in problem-specific areas of machine learning and signal processing. This manuscript presents a unified framework for the local convergence analysis of projected gradient descent in the context of constrained least squares. The proposed analysis offers insights into pivotal local convergence properties such as the conditions for linear convergence, the region of convergence, the exact asymptotic rate of convergence, and the bound on the number of iterations needed to reach a certain level of accuracy. To demonstrate the applicability of the proposed approach, we present a recipe for the convergence analysis of projected gradient descent and demonstrate it via a beginning-to-end application of the recipe on four fundamental problems, namely, linear equality-constrained least squares, sparse recovery, least squares with the unit norm constraint, and matrix completion.
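The iteration studied in this line of work is conceptually simple: take a gradient step on the least-squares objective, then project back onto the constraint set. The sketch below is a minimal, generic Python illustration of that loop, using one of the four problems named in the abstract, least squares with a unit-norm constraint, as the example projection; the step size, initialization, and iteration count are placeholder assumptions, not the paper's analysis framework.

```python
import numpy as np

def projected_gradient_descent(A, b, project, step=None, iters=500):
    """Minimize 0.5*||Ax - b||^2 over a constraint set via PGD."""
    m, n = A.shape
    if step is None:
        # 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = project(np.zeros(n))
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the least-squares objective
        x = project(x - step * grad)      # gradient step followed by projection
    return x

# Example constraint: the unit sphere {x : ||x||_2 = 1}; its Euclidean
# projection is plain normalization (any unit vector works for x = 0).
def project_unit_sphere(x):
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else np.eye(len(x))[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = project_unit_sphere(rng.standard_normal(20))
b = A @ x_true                            # noiseless measurements
x_hat = projected_gradient_descent(A, b, project_unit_sphere)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```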

2. Blind Super-resolution of Point Sources via Projected Gradient Descent (arXiv)

Authors: Sihan Mao, Jinchi Chen

Abstract: Blind super-resolution can be cast as a low-rank matrix recovery problem by exploiting the inherent simplicity of the signal and the low-dimensional structure of point spread functions. In this paper, we develop a simple yet efficient non-convex projected gradient descent method for this problem based on the low-rank structure of the vectorized Hankel matrix associated with the target matrix. Theoretical analysis indicates that the proposed method exactly converges to the target matrix at a linear convergence rate under conditions similar to those of convex approaches. Numerical results show that our approach is competitive with existing convex approaches in terms of recovery ability and efficiency.
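As a rough illustration of the non-convex approach described above, the sketch below runs projected gradient descent for low-rank matrix recovery, with a truncated SVD serving as the projection onto rank-r matrices. A generic Gaussian measurement operator stands in for the paper's vectorized Hankel structure, so this is an assumption-laden toy rather than the authors' algorithm.

```python
import numpy as np

def project_rank(X, r):
    """Best rank-r approximation of X via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lowrank_pgd(A, y, n, r, iters=300):
    """Minimize ||A vec(X) - y||^2 subject to rank(X) <= r via PGD."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    x = np.zeros(n * n)                      # vectorized matrix estimate
    for _ in range(iters):
        grad = A.T @ (A @ x - y)             # gradient of the data-fit term
        X = project_rank((x - step * grad).reshape(n, n), r)  # rank projection
        x = X.ravel()
    return X

rng = np.random.default_rng(1)
n, r, m = 10, 2, 1200
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
A = rng.standard_normal((m, n * n)) / np.sqrt(m)                    # generic linear measurements
y = A @ X_true.ravel()
X_hat = lowrank_pgd(A, y, n, r)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```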

3. Online Projected Gradient Descent for Stochastic Optimization with Decision-Dependent Distributions (arXiv)

Authors: Killian Wood, Gianluca Bianchin, Emiliano Dall’Anese

Abstract: This paper investigates the problem of tracking solutions of stochastic optimization problems with time-varying costs that depend on random variables with decision-dependent distributions. In this context, we propose the use of an online stochastic gradient descent method to solve the optimization, and we provide explicit bounds in expectation and in high probability for the distance between the optimizers and the points generated by the algorithm. In particular, we show that when the gradient error due to sampling is modeled as a sub-Weibull random variable, then the tracking error is ultimately bounded in expectation and in high probability. The theoretical findings are validated via numerical simulations in the context of charging optimization of a fleet of electric vehicles.
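A toy sketch of the setting: at each step the data are drawn from a distribution that depends on the current decision and drifts over time, and the decision is updated with a projected stochastic gradient step. The quadratic cost, drift model, and box constraint below are hypothetical stand-ins; the paper's electric-vehicle charging experiment is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, steps, eta = 5, 2000, 0.05
lo, hi = -1.0, 1.0                          # box constraints on the decision

def project_box(x):
    """Euclidean projection onto the box [lo, hi]^dim."""
    return np.clip(x, lo, hi)

def sample_data(x, t):
    """Decision-dependent, time-varying sample: mean drifts with t and shifts with x."""
    target = 0.5 * np.sin(2 * np.pi * t / steps) * np.ones(dim)   # slowly moving target
    return target + 0.3 * x + 0.1 * rng.standard_normal(dim)      # decision-dependent shift + noise

x = np.zeros(dim)
for t in range(steps):
    d = sample_data(x, t)                   # sample from the decision-dependent distribution
    grad = x - d                            # stochastic gradient of 0.5*||x - d||^2
    x = project_box(x - eta * grad)         # online projected stochastic gradient update
print("final decision:", np.round(x, 3))
```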
