
3DPR: Single Image Portrait Relighting with 3D Generative Priors

Pramod Rao1,  Xilong Zhou1,  Abhimitra Meka2,  Gereon Fox1,  Mallikarjun B R1,  Fangneng Zhan3,  Tim Weyrich4,  Bernd Bickel5,  Hanspeter Pfister3,  Wojciech Matusik6,  Thabo Beeler2,  Mohamed Elgharib1,  Marc Habermann1,  Christian Theobalt1

1 Max Planck Institute for Informatics, Saarbrücken, Germany
2 Google AR/VR
3 Harvard University
4 Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
5 ETH Zurich
6 Massachusetts Institute of Technology

Abstract

Rendering novel, relit views of a human head from a monocular portrait image is an inherently underconstrained problem. The traditional graphics solution is to explicitly decompose the input image into geometry, material, and lighting via differentiable rendering; however, this approach is constrained by the many assumptions and approximations built into the underlying models and parameterizations of these scene components. We propose 3DPR, an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images captured in a light stage. We introduce a new diverse, large-scale, multi-view 4K OLAT dataset of 139 subjects to learn a high-quality prior over the distribution of high-frequency face reflectance. We leverage the latent space of a pre-trained generative head model, which provides a rich prior over face geometry learnt from in-the-wild image datasets. The input portrait is first embedded in the latent manifold of such a model through an encoder-based inversion process. A novel triplane-based reflectance network, trained on our light stage data, then synthesizes high-fidelity OLAT images to enable image-based relighting. Our reflectance network operates in the latent space of the generative head model, which crucially allows the reflectance model to be trained with a relatively small number of light stage images. Combining the generated OLATs according to a given HDRI environment map yields physically accurate environmental relighting results. Through quantitative and qualitative evaluations, we demonstrate that 3DPR outperforms previous methods, particularly in preserving identity and in capturing lighting effects such as specularities, self-shadows, and subsurface scattering.
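The final relighting step described in the abstract, combining OLAT images according to an HDRI environment map, rests on the linearity of light transport: a relit image is a weighted sum of the OLAT images, where each weight is the environment radiance sampled at that light's direction. The sketch below illustrates this principle with NumPy; the function name, array layouts, and the nearest-neighbor latitude-longitude lookup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relight_from_olats(olats, light_dirs, env_map):
    """Illustrative image-based relighting via OLAT combination.

    olats:      (N, H, W, 3) stack of one-light-at-a-time images
    light_dirs: (N, 3) unit direction toward each light (y-up, assumed)
    env_map:    (He, We, 3) latitude-longitude HDRI environment map
    Returns the relit (H, W, 3) image as a weighted sum of OLATs.
    """
    He, We, _ = env_map.shape
    x, y, z = light_dirs[:, 0], light_dirs[:, 1], light_dirs[:, 2]
    # Map each light direction to lat-long pixel coordinates
    # (nearest-neighbor sampling for simplicity).
    u = (np.arctan2(x, -z) / (2.0 * np.pi) + 0.5) * (We - 1)
    v = (np.arccos(np.clip(y, -1.0, 1.0)) / np.pi) * (He - 1)
    weights = env_map[v.astype(int), u.astype(int)]  # (N, 3) radiance per light
    # Linearity of light transport: sum OLATs weighted per channel.
    return np.einsum('nhwc,nc->hwc', olats, weights)
```

In practice a production renderer would integrate the environment map over each light's solid angle rather than point-sample it, but the weighted-sum structure is the same.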

Publication

3DPR: Single Image Portrait Relighting with 3D Generative Priors.
Pramod Rao, Xilong Zhou, Abhimitra Meka, Gereon Fox, Mallikarjun B R, Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Thabo Beeler, Mohamed Elgharib, Marc Habermann, Christian Theobalt.
Conditionally accepted to SIGGRAPH Asia 2025 Conference Papers, 12 pages, December 2025.

ACM: Pramod Rao, Abhimitra Meka, Xilong Zhou, Gereon Fox, Mallikarjun B R, Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Thabo Beeler, Mohamed Elgharib, Marc Habermann, and Christian Theobalt. 3DPR: Single image 3D portrait relighting with generative priors. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers, SA Conference Papers ’25, pages 108:1–108:12, New York, NY, USA, December 2025. Association for Computing Machinery.

ACM (author-year): Rao, P., Meka, A., Zhou, X., Fox, G., B R, M., Zhan, F., Weyrich, T., Bickel, B., Pfister, H., Matusik, W., Beeler, T., Elgharib, M., Habermann, M., and Theobalt, C. 2025. 3DPR: Single image 3D portrait relighting with generative priors. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers, Association for Computing Machinery, New York, NY, USA, SA Conference Papers ’25, 108:1–108:12.

IEEE: P. Rao, A. Meka, X. Zhou, G. Fox, M. B R, F. Zhan, T. Weyrich, B. Bickel, H. Pfister, W. Matusik, T. Beeler, M. Elgharib, M. Habermann, and C. Theobalt, “3DPR: Single image 3D portrait relighting with generative priors,” in Proceedings of the SIGGRAPH Asia 2025 Conference Papers, ser. SA Conference Papers ’25. New York, NY, USA: Association for Computing Machinery, Dec. 2025, pp. 108:1–108:12. [Online]. Available: https://doi.org/10.1145/3757377.3763962

Privacy: This page is free of cookies or any means of data collection.

Copyright disclaimer: The documents contained in these pages are included to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.