TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis

Benjamin Attal, Eliot Laidlaw, Aaron Gokaslan, Changil Kim, Christian Richardt, James Tompkin, Matthew O'Toole

Research output: Chapter in a published conference proceeding



Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors that are now available on modern smartphones.
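The abstract's key idea is to model raw continuous-wave ToF measurements rather than processed depth maps. As background, a minimal sketch of the standard CW-ToF image formation model follows: the sensor records correlation samples of the returned signal at several phase offsets, and depth is recovered from the demodulated phase, wrapping at the unambiguous range c / (2 f_mod). This is a generic single-bounce, noise-free illustration, not the paper's actual representation; the 30 MHz modulation frequency is an assumed typical value.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 30e6   # modulation frequency (Hz); assumed typical CW-ToF value

def raw_tof_measurements(depth, amplitude=1.0, offset=0.5):
    """Simulate the four raw correlation samples a continuous-wave ToF
    sensor records for a surface at `depth` metres (single bounce, no noise)."""
    phase = 4 * np.pi * F_MOD * depth / C            # round-trip phase shift
    shifts = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
    return offset + amplitude * np.cos(phase + shifts)

def depth_from_measurements(m):
    """Recover depth by quadrature demodulation of the four samples;
    the result wraps at the unambiguous range C / (2 * F_MOD)."""
    phase = np.arctan2(m[3] - m[1], m[0] - m[2])     # = round-trip phase, wrapped
    return (phase % (2 * np.pi)) * C / (4 * np.pi * F_MOD)

unambiguous_range = C / (2 * F_MOD)                  # 5 m at 30 MHz
```

A surface at 6 m with this setup is reported at 1 m (6 mod 5), illustrating the limited unambiguous depth range the abstract mentions; modeling the raw samples directly, as the paper proposes, sidesteps committing to a wrapped depth estimate.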
Original language: English
Title of host publication: Advances in Neural Information Processing Systems
Subtitle of host publication: NeurIPS 2021
Number of pages: 13
Publication status: Published - 6 Dec 2021
Event: NeurIPS 2021: Conference on Neural Information Processing Systems - Virtual
Duration: 6 Dec 2021 - 12 Dec 2021



