Efficient learning methods for large-scale optimal inversion design

Julianne Chung, Matthias Chung, Silvia Gazzola, Mirjeta Pasha

Research output: Contribution to journal › Article › peer-review


Abstract

In this work, we investigate various approaches that use learning from training data to solve inverse problems, following a bi-level learning approach. We consider a general framework for optimal inversion design, where training data can be used to learn optimal regularization parameters, data fidelity terms, and regularizers, thereby resulting in superior variational regularization methods. In particular, we describe methods to learn optimal $p$ and $q$ norms for $L_p$-$L_q$ regularization and methods to learn optimal parameters for regularization matrices defined by covariance kernels. We exploit efficient algorithms based on Krylov projection methods for solving the regularized problems, both at training and validation stages, making these methods well-suited for large-scale problems. Our experiments show that the learned regularization methods perform well even when there is some inexactness in the forward operator, resulting in a mixture of model and measurement error.
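To illustrate the bi-level idea, the following is a minimal sketch, not the paper's actual method: the lower-level problem is plain Tikhonov regularization (rather than the learned $L_p$-$L_q$ or covariance-kernel regularizers studied in the article), solved with SciPy's Krylov-based LSQR, while the upper level fits a single scalar regularization parameter to a training pair. All names and the toy forward operator are illustrative assumptions.

```python
# Hedged sketch: bi-level learning of a Tikhonov regularization parameter.
# Not the paper's code; a simplified stand-in for the general framework.
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))                # toy forward operator
x_true = rng.standard_normal(100)                  # training ground truth
b = A @ x_true + 0.05 * rng.standard_normal(200)   # noisy training data

def lower_level(lam):
    # Lower level: min_x ||A x - b||^2 + lam^2 ||x||^2, solved by LSQR,
    # a Krylov projection method suited to large-scale problems.
    return lsqr(A, b, damp=lam)[0]

def upper_level(lam):
    # Upper level: reconstruction error against the training ground truth.
    return np.linalg.norm(lower_level(lam) - x_true)

# Learn the regularization parameter from the training data.
res = minimize_scalar(upper_level, bounds=(1e-6, 10.0), method='bounded')
print(f"learned lambda = {res.x:.4f}")
```

In the paper's more general setting, the scalar learned here would be replaced by richer design variables, such as the norms $p$ and $q$ or the parameters of a covariance-kernel regularization matrix, with the same train-then-validate structure.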
Original language: English
Pages (from-to): 137-159
Journal: Numerical Algebra, Control and Optimization
Volume: 14
Issue number: 1
Early online date: 31 Dec 2022
DOIs
Publication status: E-pub ahead of print - 31 Dec 2022
