Abstract
In this work, we investigate various approaches that use learning from training data to solve inverse problems, following a bi-level learning approach. We consider a general framework for optimal inversion design, where training data can be used to learn optimal regularization parameters, data fidelity terms, and regularizers, thereby resulting in superior variational regularization methods. In particular, we describe methods to learn optimal p and q norms for Lp−Lq regularization and methods to learn optimal parameters for regularization matrices defined by covariance kernels. We exploit efficient algorithms based on Krylov projection methods for solving the regularized problems, both at training and validation stages, making these methods well-suited for large-scale problems. Our experiments show that the learned regularization methods perform well even when there is some inexactness in the forward operator, resulting in a mixture of model and measurement error.
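The bi-level idea summarized above — an outer problem that selects regularization parameters using training pairs of ground truths and measurements, and an inner problem that performs the regularized inversion — can be illustrated with a minimal sketch. This is not the paper's algorithm: it learns only a single Tikhonov parameter by grid search on a toy problem (all sizes, operators, and noise levels below are illustrative assumptions), whereas the paper learns norms and kernel parameters and uses Krylov projection solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Toy ill-conditioned forward operator (cumulative averaging) and a smooth ground truth
A = np.tril(np.ones((n, n))) / n
x_true = np.exp(-0.5 * ((np.arange(n) - n / 2) / 5.0) ** 2)

def solve_tikhonov(A, b, lam):
    """Inner problem: min_x ||Ax - b||^2 + lam ||x||^2, solved in closed form."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

# Training data: noisy measurements of known ground truths
train = [(x_true, A @ x_true + 1e-2 * rng.standard_normal(n)) for _ in range(5)]

# Outer (bi-level) problem: pick the lambda that minimizes reconstruction
# error of the inner solver over the training set
lams = np.logspace(-6, 1, 50)
errs = [sum(np.linalg.norm(solve_tikhonov(A, b, lam) - x) for x, b in train)
        for lam in lams]
lam_opt = lams[int(np.argmin(errs))]

# Validation: apply the learned parameter to a fresh noisy measurement
b_val = A @ x_true + 1e-2 * rng.standard_normal(n)
x_rec = solve_tikhonov(A, b_val, lam_opt)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print("learned lambda:", lam_opt)
print("validation relative error:", rel_err)
```

For large-scale problems the closed-form inner solve above would be replaced by an iterative (e.g., Krylov projection) method, which is the efficiency point the abstract emphasizes.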
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 143-165 |
| Number of pages | 23 |
| Journal | Numerical Algebra, Control and Optimization |
| Volume | 14 |
| Issue number | 1 |
| Early online date | 31 Dec 2022 |
| DOIs | |
| Publication status | Published - 31 Mar 2024 |
Funding
The first author is supported by NSF grants DMS-1654175 and DMS-1723005. The second author is supported by NSF grants DMS-1723005 and DMS-215266. The third author is supported by EPSRC grant EP/T001593/1. The fourth author is supported by NSF grant DMS-1502640. This paper is handled by Andreas Mang as the guest editor.

Acknowledgments. This work was initiated as a part of the Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Numerical Analysis in Data Science in 2020. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF). MP gratefully acknowledges the support from ASU Research Computing facilities for the computing resources used for testing purposes.

2020 Mathematics Subject Classification. Primary: 65F22, 65K10; Secondary: 62F15.

Key words and phrases. Bi-level learning, learning priors, variational regularization, Krylov projection methods, inverse problems.

∗ Corresponding author: Matthias Chung, [email protected].
| Funders | Funder number |
|---|---|
| SAMSI | |
| National Science Foundation | DMS-1654175, DMS-1723005, DMS-215266, DMS-1502640 |
| Engineering and Physical Sciences Research Council | EP/T001593/1 |
Keywords
- Bi-level learning
- Krylov projection methods
- inverse problems
- learning priors
- variational regularization
ASJC Scopus subject areas
- Control and Optimization
- Applied Mathematics
- Algebra and Number Theory
Fingerprint
Dive into the research topics of 'Efficient learning methods for large-scale optimal inversion design'. Together they form a unique fingerprint.
Projects
- 1 Finished
Fast and Flexible Solvers for Inverse Problems
Gazzola, S. (PI)
Engineering and Physical Sciences Research Council
15/09/19 → 14/09/22
Project: Research council