Abstract
We propose an extension of a special form of gradient descent --- known in the literature as linearised Bregman iteration --- to a larger class of non-convex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance, based on a proper, convex and lower semi-continuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scale are introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine approach with respect to scale allows us to recover solutions of non-convex optimisation problems that are superior to those obtained with conventional gradient descent, or even projected and proximal gradient descent. The effectiveness of the linearised Bregman iteration in combination with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution as well as image classification with neural networks.
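The iteration itself is not spelled out in this record, but a minimal sketch may help fix ideas. Assuming the common elastic-net-type choice J(x) = ½‖x‖² + μ‖x‖₁ from the linearised Bregman literature (not necessarily the functional used in the paper), each step reduces to a gradient update on a subgradient variable followed by soft-thresholding; all names and parameters below are illustrative.

```python
import numpy as np

def soft_threshold(v, mu):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearised_bregman(grad_f, x0, tau, mu, n_iter=500):
    """Sketch of linearised Bregman iteration for min f(x), assuming the
    illustrative choice J(x) = 0.5*||x||_2^2 + mu*||x||_1, so that each
    Bregman step is a gradient update on the subgradient variable p
    followed by soft-thresholding."""
    x = np.zeros_like(x0)
    p = np.zeros_like(x0)  # p^0 lies in the subdifferential of J at x^0 = 0
    for _ in range(n_iter):
        p = p - tau * grad_f(x)      # p^{k+1} = p^k - tau * grad f(x^k)
        x = soft_threshold(p, mu)    # x^{k+1} = argmin_x J(x) - <p^{k+1}, x>
    return x

# Example usage on a small least-squares data-fit term f(x) = 0.5*||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = linearised_bregman(lambda x: A.T @ (A @ x - b), np.zeros(50),
                           tau=1.0 / np.linalg.norm(A, 2) ** 2, mu=0.5,
                           n_iter=2000)
```

Monitoring the iterates and stopping early, as the abstract suggests, is what exposes the coarse-to-fine behaviour: sparse, large-scale features of the solution appear in early iterations and finer details enter later.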
| Original language | English |
| --- | --- |
| Journal | SIAM Journal on Imaging Sciences |
| DOIs | |
| Publication status | Published - 22 Jun 2021 |
Bibliographical note
Funding: This work was funded by the Leverhulme Trust Early Career Fellowship ‘Learning from mistakes: a supervised feedback-loop for imaging applications’, the Isaac Newton Trust, the Engineering and Physical Sciences Research Council (EPSRC) grant ‘EP/K009745/1’, the Leverhulme Trust project ‘Breaking the non-convexity barrier’, the EPSRC grant ‘EP/M00483X/1’, the EPSRC centre ‘EP/N014588/1’, the Cantab Capital Institute for the Mathematics of Information and CHiPS (Horizon 2020 RISE project grant).
Keywords
- math.OC
- 49M37, 65K05, 65K10, 90C26, 90C30