LIME: Live Intrinsic Material Estimation

Abhimitra Meka, Maxim Maximov, Michael Zollhöfer, Hans-Peter Seidel, Christian Richardt, Christian Theobalt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)
9 Downloads (Pure)

Abstract

We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder–decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow the network to learn an effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even at run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and show its effectiveness through quantitative and qualitative evaluation.
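
The perceptual rendering loss mentioned in the abstract can be pictured as re-rendering the object from the estimated quantities and comparing the result to the input photograph. The sketch below is only an illustration under a simple Blinn-Phong-style image-formation assumption, not the authors' implementation; every function and variable name in it is hypothetical.

    import numpy as np

    def recombine_image(diffuse_albedo, specular_albedo,
                        diffuse_shading, specular_shading):
        # Simple image-formation model: observed color = diffuse term
        # (diffuse albedo * diffuse shading) + specular term
        # (specular albedo * specular shading).
        # All inputs are H x W x 3 arrays with values in [0, 1].
        return np.clip(diffuse_albedo * diffuse_shading
                       + specular_albedo * specular_shading, 0.0, 1.0)

    def rendering_loss(materials, shading_layers, input_image, mask):
        # Re-render from the estimated material parameters and the predicted
        # shading layers, then average the per-pixel error over the
        # foreground segmentation mask (H x W, values in {0, 1}).
        rendered = recombine_image(materials["diffuse_albedo"],
                                   materials["specular_albedo"],
                                   shading_layers["diffuse"],
                                   shading_layers["specular"])
        error = np.abs(rendered - input_image) * mask[..., None]
        return error.sum() / max(3.0 * mask.sum(), 1.0)

In the actual system such a loss would be evaluated in a differentiable form inside the training loop; the NumPy version above only illustrates the arithmetic of recombining the intrinsic layers.
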
Original language: English
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher: IEEE
Pages: 6315-6324
Number of pages: 10
ISBN (Electronic): 978-1-5386-6420-9
ISBN (Print): 978-1-5386-6421-6
DOIs: 10.1109/CVPR.2018.00661
Publication status: Published - 18 Jun 2018
Event: International Conference on Computer Vision and Pattern Recognition - Salt Lake City, United States
Duration: 18 Jun 2018 - 22 Jun 2018
http://cvpr2018.thecvf.com/

Publication series

Name: Proceedings
Publisher: IEEE
ISSN (Electronic): 2575-7075

Conference

Conference: International Conference on Computer Vision and Pattern Recognition
Abbreviated title: CVPR
Country: United States
City: Salt Lake City
Period: 18/06/18 - 22/06/18
Internet address: http://cvpr2018.thecvf.com/

Fingerprint

Cloning
Surface properties
Mirrors
Image processing
Lighting
Color

Cite this

Meka, A., Maximov, M., Zollhöfer, M., Seidel, H-P., Richardt, C., & Theobalt, C. (2018). LIME: Live Intrinsic Material Estimation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6315-6324). (Proceedings). IEEE. https://doi.org/10.1109/CVPR.2018.00661

@inproceedings{e82eb6da9a0a45fb89c440411629689a,
title = "LIME: Live Intrinsic Material Estimation",
abstract = "We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder–decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow to learn the effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even during run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and demonstrate its effectiveness through quantitative and qualitative evaluation.",
author = "Abhimitra Meka and Maxim Maximov and Michael Zollh{\"o}fer and Hans-Peter Seidel and Christian Richardt and Christian Theobalt",
year = "2018",
month = "6",
day = "18",
doi = "10.1109/CVPR.2018.00661",
language = "English",
isbn = "978-1-5386-6421-6",
series = "Proceedings",
publisher = "IEEE",
pages = "6315--6324",
booktitle = "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition",
address = "USA United States",

}

TY - GEN

T1 - LIME: Live Intrinsic Material Estimation

AU - Meka, Abhimitra

AU - Maximov, Maxim

AU - Zollhöfer, Michael

AU - Seidel, Hans-Peter

AU - Richardt, Christian

AU - Theobalt, Christian

PY - 2018/6/18

Y1 - 2018/6/18

N2 - We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder–decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow the network to learn an effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even at run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and show its effectiveness through quantitative and qualitative evaluation.

AB - We present the first end-to-end approach for real-time material estimation for general object shapes with uniform material that only requires a single color image as input. In addition to Lambertian surface properties, our approach fully automatically computes the specular albedo, material shininess, and a foreground segmentation. We tackle this challenging and ill-posed inverse rendering problem using recent advances in image-to-image translation techniques based on deep convolutional encoder–decoder architectures. The underlying core representations of our approach are specular shading, diffuse shading and mirror images, which allow the network to learn an effective and accurate separation of diffuse and specular albedo. In addition, we propose a novel, highly efficient perceptual rendering loss that mimics real-world image formation and obtains intermediate results even at run time. The estimation of material parameters at real-time frame rates enables exciting mixed-reality applications, such as seamless illumination-consistent integration of virtual objects into real-world scenes, and virtual material cloning. We demonstrate our approach in a live setup, compare it to the state of the art, and show its effectiveness through quantitative and qualitative evaluation.

UR - http://richardt.name/publications/lime/

U2 - 10.1109/CVPR.2018.00661

DO - 10.1109/CVPR.2018.00661

M3 - Conference contribution

SN - 978-1-5386-6421-6

T3 - Proceedings

SP - 6315

EP - 6324

BT - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

PB - IEEE

ER -