

Single Image Portrait Relighting with 3D Generative Priors

Pramod Rao1,  Xilong Zhou1,  Abhimitra Meka2,  Gereon Fox1,  Mallikarjun B R1,  Fangneng Zhan3,  Tim Weyrich4,  Bernd Bickel5,  Hanspeter Pfister3,  Wojciech Matusik6,  Thabo Beeler2,  Mohamed Elgharib1,  Marc Habermann1,  Christian Theobalt1

1 Max Planck Institute for Informatics, Saarbrücken, Germany
2 Google AR/VR
3 Harvard University
4 Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
5 ETH Zurich
6 Massachusetts Institute of Technology

Abstract

Rendering novel, relit views of a human head, given a monocular portrait image as input, is inherently challenging. One possible solution would be to explicitly decompose the input into geometry, material, and lighting via differentiable rendering, but such an approach would be constrained by the assumptions and approximations made by the respective models used for these three components. Instead, we propose 3DPR, a learned image-based relighting model trained on One-Light-at-A-Time (OLAT) images. Capturing OLAT images requires a light stage, a device not commonly available even to researchers, so such data is scarce and limited in diversity, which constrains the generality of models trained on it. We therefore leverage a pre-trained generative but non-relightable head model that has learned a rich prior of faces from large-scale in-the-wild image datasets. The input portrait is first embedded in the latent manifold of this model through an encoder-based inversion process, and a novel triplane-based reflectance network then synthesizes high-fidelity OLAT images. Notably, since the reflectance network operates in the latent space of the generative head model, a relatively small number of light stage images is sufficient to train it. Combining the generated OLATs according to a given HDRI environment map yields accurate relighting results. Through quantitative and qualitative evaluations, we demonstrate that 3DPR outperforms previous methods, particularly in preserving identity and in capturing lighting effects such as specularities, self-shadows, and subsurface scattering.
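The final compositing step described in the abstract is classical image-based relighting: each generated OLAT image is weighted by the environment radiance arriving from its light direction, and the weighted images are summed. The sketch below illustrates this linear combination in NumPy; the function name, the nearest-neighbour environment-map lookup, and the sin(θ) solid-angle weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relight_from_olats(olats, light_dirs, env_map):
    """Linearly combine OLAT images according to an HDRI environment map.

    olats:      (N, H, W, 3) array, one image per light-stage light.
    light_dirs: (N, 3) unit vectors pointing towards each light.
    env_map:    (He, We, 3) equirectangular (latitude-longitude) HDRI.
    Returns a relit (H, W, 3) image.
    """
    He, We, _ = env_map.shape
    x, y, z = light_dirs[:, 0], light_dirs[:, 1], light_dirs[:, 2]
    theta = np.arccos(np.clip(y, -1.0, 1.0))   # polar angle from the +y (up) axis
    phi = np.arctan2(x, z)                     # azimuth around the up axis
    # Map each light direction to equirectangular pixel coordinates.
    u = ((phi / (2.0 * np.pi)) + 0.5) * (We - 1)
    v = (theta / np.pi) * (He - 1)
    # Nearest-neighbour lookup of the incident radiance per light.
    radiance = env_map[v.astype(int), u.astype(int)]    # (N, 3)
    # sin(theta) approximates the solid angle covered by each equirectangular sample.
    weights = radiance * np.sin(theta)[:, None]         # (N, 3)
    # Weighted sum of OLAT images per colour channel -> relit portrait.
    return np.einsum('nc,nhwc->hwc', weights, olats)
```

Depending on how the OLAT captures and the environment map are calibrated, an additional normalization (e.g. by the total solid angle of the light samples) may be needed to match exposure.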

Citation

Single Image Portrait Relighting with 3D Generative Priors.
Pramod Rao, Xilong Zhou, Abhimitra Meka, Gereon Fox, Mallikarjun B R, Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Thabo Beeler, Mohamed Elgharib, Marc Habermann, Christian Theobalt.
Conditionally accepted to SIGGRAPH Asia 2025 Conference Papers, 12 pages, December 2025.

