### Abstract

We discuss a special form of gradient descent known in the literature as the linearised Bregman iteration. The idea is to replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance, based on a more general proper, convex and lower semi-continuous functional. Gradient descent as well as the entropic mirror descent by Nemirovsky and Yudin are special cases, as is a specific form of non-linear Landweber iteration introduced by Bachmayr and Burger. We analyse the linearised Bregman iteration in a setting where the functional we want to minimise is neither necessarily Lipschitz-continuous (in the classical sense) nor necessarily convex, and establish a global convergence result under the additional assumption that the functional we wish to minimise satisfies the so-called Kurdyka-Łojasiewicz property.
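The special cases mentioned in the abstract can be illustrated concretely. The sketch below is not taken from the paper; it is a minimal numerical illustration, assuming the standard linearised Bregman update \(x^{k+1} = \arg\min_x \, \tau \langle \nabla E(x^k), x \rangle + D_J(x, x^k)\): choosing \(J(x) = \tfrac{1}{2}\|x\|_2^2\) recovers plain gradient descent, while choosing \(J\) as the negative entropy on the probability simplex recovers the entropic mirror descent of Nemirovsky and Yudin (a multiplicative update followed by renormalisation). The objective \(E\), the target `b`, the step size `tau`, and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def gradient_step(x, g, tau):
    """Linearised Bregman step with J(x) = 0.5*||x||_2^2: plain gradient descent."""
    return x - tau * g

def entropic_mirror_step(x, g, tau):
    """Linearised Bregman step with J = negative entropy on the simplex:
    entropic mirror descent (multiplicative update, then renormalise)."""
    y = x * np.exp(-tau * g)
    return y / y.sum()

# Illustrative smooth objective: E(x) = 0.5*||x - b||^2, with b on the simplex,
# so its minimiser over the simplex is b itself.
b = np.array([0.1, 0.2, 0.7])
x = np.ones(3) / 3.0          # uniform starting point on the simplex
for _ in range(200):
    x = entropic_mirror_step(x, x - b, tau=0.5)

print(np.round(x, 3))         # iterates approach b
```

The only difference between the two special cases is the choice of the convex functional \(J\) generating the Bregman distance; the multiplicative form of the entropic step is what keeps the iterates on the probability simplex without any explicit projection.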

Original language | English
---|---
Title of host publication | MI Lecture Notes series of Kyushu University
Editors | G. Reinout W. Quispel, Philipp Bader, David I. McLaren, Daisuke Tagami
Pages | 40-45
Number of pages | 5
Volume | 74
Publication status | Published - 31 Mar 2017

### Keywords

- math.OC
- 49M37, 65K05, 65K10, 90C26, 90C30
- G.1.0; G.1.6

## Cite this

Benning, M., Betcke, M. M., Ehrhardt, M. J., & Schönlieb, C. B. (2017). Gradient descent in a generalised Bregman distance framework. In G. R. W. Quispel, P. Bader, D. I. McLaren, & D. Tagami (Eds.), *MI Lecture Notes series of Kyushu University* (Vol. 74, pp. 40-45). https://www.imi.kyushu-u.ac.jp/eng/files/imipublishattachment/file/math_58ec341a238fe.pdf