Abstract
This thesis concerns deep learning and inverse imaging problems, and looks at the use of generative models in the framework of variational regularisation. The first chapter, the introduction, motivates why inverse problems, and specifically deep learning approaches to inverse problems, are exciting and relevant, before setting out the key research questions that this thesis aims to address. A more detailed summary of the work presented in this thesis sets out a narrative that runs through the research. The chapter concludes with a discussion of how the research will be evaluated.
The background of this thesis is set out in chapters 2 and 3. Chapter 2 starts with a brief introduction to inverse problems, the variational regularisation approach, and the optimisation methods used in this work. The chapter then moves on to introduce deep learning and provides a critical analysis of deep learning applied to inverse problems. This sets the scene for the learned regularisation methods presented in this thesis. Chapter 3 looks at generative models: deep learning approaches for producing images similar to those in a training set. The chapter provides an introduction to a range of state-of-the-art models, including autoencoders, variational autoencoders and generative adversarial networks, with a unified notation and approach. The derivations for the variational autoencoder, in particular, will be useful in later chapters.
Chapter 4 introduces the main theme of this thesis: generative regularisers. Generative regularisers penalise solutions to the inverse problem that are far from the range of a trained generative model. A literature review highlights and unifies key emerging themes. The success of generative regularisers depends on the quality of the generative model, and one of the main contributions of this chapter is a set of desired criteria for generators that are to be used in an inverse problem context. These criteria are not sufficient for success, but they are a useful starting point that could guide future generative model research. The chapter finishes with a range of numerical experiments that test and compare generative models and generative regularisation methods.
The thesis then forks into two possible extensions of the generative regularisers set out in chapter 4. Chapter 5 looks more closely at how generative regularisers measure the distance from the range of a generative model, and uses an adapted variational autoencoder to learn this distance in a way that adapts both across an image and across the set of images. Numerical experiments demonstrate the flexibility of this approach and compare it to other learned and unlearned reconstruction methods.
Chapter 6 considers how to train generative models, specifically variational autoencoders, without access to ground truth or high-quality reconstructed data. The resulting generators are tested for their ability to regularise inverse imaging problems.
Finally, chapter 7 gives a summary of the contributions of this thesis, linking to the key areas highlighted in the introduction, and outlines directions for future work following on from this thesis.
| Date of Award | 13 Sept 2023 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Matthias Ehrhardt (Supervisor) & Neill Campbell (Supervisor) |