NEURAL RENDERING AND INVERSE RENDERING USING PHYSICAL INDUCTIVE BIASES

  • Thu Nguyen Phuoc

Student thesis: Doctoral Thesis (PhD)

Abstract

The computer graphics rendering pipeline is designed to generate realistic 2D images from 3D virtual scenes, with most research focusing on simulating elements of the physical world, such as light transport and material appearance. This rendering pipeline, however, can be limited and expensive: even highly trained 3D artists and designers need months or years to produce high-quality images, games or movies. Additionally, most renderers are not differentiable, which makes them hard to apply to inverse rendering tasks, where scene properties must be recovered from images.
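To make the role of differentiability concrete, here is a minimal sketch (an illustration only, not the renderer developed in this thesis) of inverse rendering by gradient descent: a toy differentiable "renderer" draws a Gaussian blob from three hypothetical scene parameters (centre and size), and those parameters are then recovered from a target image purely by following gradients through the renderer. The sketch is written in JAX; all names and values are illustrative assumptions.

    import jax
    import jax.numpy as jnp

    H = W = 64
    ys, xs = jnp.meshgrid(jnp.linspace(0.0, 1.0, H), jnp.linspace(0.0, 1.0, W), indexing="ij")

    def render(params):
        """Toy differentiable renderer: a Gaussian blob with centre (cx, cy) and size sigma."""
        cx, cy, sigma = params
        return jnp.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

    def loss(params, target):
        return jnp.mean((render(params) - target) ** 2)

    true_params = jnp.array([0.7, 0.3, 0.15])   # hypothetical "scene" to be recovered
    target = render(true_params)                # the observed image
    params = jnp.array([0.5, 0.5, 0.10])        # initial guess
    grad_fn = jax.jit(jax.grad(loss))           # gradients flow through the renderer

    for _ in range(2000):                       # plain gradient descent on the scene parameters
        params = params - 0.1 * grad_fn(params, target)

    print(params)  # estimate moves towards the true scene parameters (0.7, 0.3, 0.15)

A classical, non-differentiable renderer provides no such gradients, so recovering scene parameters from images would instead require search or separately trained predictors.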

Computer vision investigates the inference of scene properties from 2D images and has recently achieved great success with the adoption of neural networks and deep learning. Representations learnt by these computer vision models have also been shown to be useful for computer graphics tasks. For example, powerful image generative models can create images whose quality rivals that of images produced by traditional computer graphics approaches. However, these models make few explicit assumptions about the physical world or about how images are formed from it, and therefore still struggle with tasks such as novel-view synthesis, re-texturing or relighting. More importantly, they offer almost no control over the generated images, making it non-trivial to adapt them for computer graphics applications.

In this thesis, we propose to combine inductive biases about the physical world with the expressiveness of neural networks for the tasks of neural rendering and inverse rendering. We show that this yields a differentiable neural renderer that achieves both high image quality and generalisation across different 3D shape categories, and that can also recover scene structure from images. We also show that, with this added knowledge about the 3D world, unsupervised image generative models can learn representations that allow explicit control over object positions and poses without using pose labels, 3D shapes, or multiple views of the same objects or scenes. This suggests the potential of learning representations specifically for neural rendering tasks, offering both powerful priors about the world and intuitive control over the generated results.
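As a loose illustration of the kind of physical inductive bias referred to above (not the actual architecture developed in the thesis), the sketch below applies an explicit 3D rigid-body rotation to a feature volume and then projects it to a 2D feature map, so the object pose is an ordinary, controllable input rather than something entangled in a latent code. The volume, shapes and function names are illustrative assumptions, again written in JAX.

    import jax
    import jax.numpy as jnp
    from jax.scipy.ndimage import map_coordinates

    def rotate_volume(volume, angle):
        """Resample a (D, H, W) volume rotated by `angle` about the vertical axis."""
        D, H, W = volume.shape
        zs, ys, xs = jnp.meshgrid(jnp.arange(D), jnp.arange(H), jnp.arange(W), indexing="ij")
        cz, cx = (D - 1) / 2.0, (W - 1) / 2.0
        z, x = zs - cz, xs - cx                      # centre, rotate in the depth-width plane
        cos, sin = jnp.cos(angle), jnp.sin(angle)
        src_z = cos * z - sin * x + cz
        src_x = sin * z + cos * x + cx
        coords = jnp.stack([src_z, ys.astype(jnp.float32), src_x])
        return map_coordinates(volume, coords, order=1, mode="constant")

    def project(volume):
        """Toy orthographic projection: collapse the depth axis to a 2D feature map."""
        return volume.max(axis=0)

    key = jax.random.PRNGKey(0)
    features = jax.random.uniform(key, (32, 32, 32))            # stand-in for a learned 3D feature volume
    image_feat = project(rotate_volume(features, jnp.pi / 6))   # pose is an explicit, controllable input
    print(image_feat.shape)  # (32, 32)

Because the rotation and projection are themselves differentiable, such a bias can sit inside a generative model and be trained end to end, while keeping pose as an interpretable control.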
Date of Award: 25 May 2022
Original language: English
Awarding Institution
  • University of Bath
Supervisors: Yongliang Yang & Eamonn O'Neill
