Gradient descent in a generalised Bregman distance framework

Martin Benning, Marta M. Betcke, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb

Research output: Chapter in a published conference proceeding


Abstract

We discuss a special form of gradient descent that has become known in the literature as the linearised Bregman iteration. The idea is to replace the classical squared two-norm metric in the gradient descent setting with a generalised Bregman distance based on a more general proper, convex and lower semi-continuous functional. Gradient descent and the entropic mirror descent of Nemirovsky and Yudin are special cases, as is a specific form of non-linear Landweber iteration introduced by Bachmayr and Burger. We analyse the linearised Bregman iteration in a setting where the functional to be minimised is neither necessarily Lipschitz-continuous (in the classical sense) nor necessarily convex, and establish a global convergence result under the additional assumption that the functional satisfies the Kurdyka-Łojasiewicz property.
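To illustrate the iteration described in the abstract, the following minimal Python sketch assumes the common special case E(u) = ½‖Au − f‖² with the Bregman functional J(u) = ½‖u‖² + λ‖u‖₁, for which each step reduces to a gradient step on a subgradient variable followed by soft-thresholding. This is not the authors' released code (linked in the bibliographical note below); the function names, step size and parameters are illustrative.

    import numpy as np

    def soft_threshold(x, lam):
        # Elementwise soft-thresholding: the minimiser of 0.5*||u||^2 + lam*||u||_1 - <x, u>.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def linearised_bregman(A, f, lam=1.0, tau=None, iterations=500):
        # Illustrative linearised Bregman iteration for E(u) = 0.5*||A u - f||^2
        # with J(u) = 0.5*||u||^2 + lam*||u||_1 (an assumed, common special case).
        # p plays the role of the subgradient variable p^k in dJ(u^k).
        m, n = A.shape
        if tau is None:
            # Conservative step size based on the spectral norm of A.
            tau = 1.0 / np.linalg.norm(A, 2) ** 2
        u = np.zeros(n)
        p = np.zeros(n)
        for _ in range(iterations):
            grad = A.T @ (A @ u - f)   # gradient of E at the current iterate u^k
            p = p - tau * grad         # p^{k+1} = p^k - tau * grad E(u^k)
            u = soft_threshold(p, lam) # u^{k+1} = argmin_u J(u) - <p^{k+1}, u>
        return u

For J(u) = ½‖u‖² the subgradient step and thresholding collapse to plain gradient descent, which is the special case mentioned in the abstract.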
Original language: English
Title of host publication: MI Lecture Notes series of Kyushu University
Editors: G. Reinout W. Quispel, Philipp Bader, David I. McLaren, Daisuke Tagami
Pages: 40-45
Number of pages: 6
Volume: 74
Publication status: Published - 31 Mar 2017

Bibliographical note

Conference proceedings of the '2016 Geometric Numerical Integration and its Applications Maths Conference' at La Trobe University, Melbourne, Australia; MI Lecture Notes series of Kyushu University; six pages, one figure; program code: https://doi.org/10.17863/CAM.6714

Keywords

  • math.OC
  • 49M37, 65K05, 65K10, 90C26, 90C30
  • G.1.0; G.1.6
