Projector-camera systems (ProCams) simulation aims to model the physical project-and-capture process and the associated scene parameters of a ProCams, and is crucial for spatial augmented reality (SAR) applications such as ProCams relighting and projector compensation. Recent advances use end-to-end neural networks to learn the project-and-capture process. However, these network-based methods often implicitly encapsulate scene parameters, such as surface material, gamma, and white balance, in the network weights, making them less interpretable and hard to apply to novel scene simulation. Moreover, such networks usually learn indirect illumination implicitly via image-to-image translation, which leads to poor performance when simulating complex projection effects such as soft shadows and interreflections. In this paper, we introduce differentiable projector-camera systems (DPCS), a novel path tracing-based differentiable ProCams simulation method that explicitly integrates multi-bounce path tracing. DPCS models the physical project-and-capture process with differentiable physically-based rendering (PBR), so scene parameters can be explicitly decoupled and learned from far fewer samples. Moreover, our physically-based method not only enables high-quality downstream ProCams tasks, such as ProCams relighting and projector compensation, but also allows novel scene simulation using the learned scene parameters. In experiments, DPCS shows clear advantages over previous approaches in ProCams simulation: better interpretability, more effective handling of complex interreflections and shadows, and fewer required training samples.
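To give a concrete sense of what "explicitly decoupled and learned" scene parameters can look like in a differentiable, path-tracing-based pipeline, the sketch below fits a surface reflectance texture to a camera-captured image via gradient descent using Mitsuba 3's automatic differentiation. This is an illustrative sketch, not the authors' implementation: the scene file, parameter key, capture file, and hyperparameters are hypothetical placeholders.

import drjit as dr
import mitsuba as mi

mi.set_variant("cuda_ad_rgb")  # use "llvm_ad_rgb" for a CPU-only setup

# Hypothetical scene description: the projector modeled as a textured emitter,
# the camera as the sensor, and the projection surface carrying the unknown BSDF.
scene = mi.load_file("procams_scene.xml")
params = mi.traverse(scene)

# Hypothetical key of the surface reflectance texture we want to recover.
key = "surface.bsdf.reflectance.data"

opt = mi.ad.Adam(lr=0.02)
opt[key] = params[key]
params.update(opt)

# Camera capture under a known projector pattern (placeholder file).
ref = mi.TensorXf(mi.Bitmap("captured.exr"))

for it in range(200):
    # Differentiable multi-bounce path tracing of the project-and-capture process.
    img = mi.render(scene, params, spp=16, seed=it)
    # Photometric loss between the simulated render and the real capture.
    loss = dr.mean((img - ref) ** 2)
    dr.backward(loss)
    opt.step()
    params.update(opt)

Once fitted this way, the explicit parameters (surface reflectance, and analogously projector/camera response terms) can be reused to simulate novel projector inputs or edited scenes, which is the property the abstract contrasts with implicit network-based simulation.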
@ARTICLE{Li2025TVCG,
author = {Li, Jijiang and Deng, Qingyue and Ling, Haibin and Huang, Bingyao},
journal = {IEEE Transactions on Visualization and Computer Graphics},
title = {DPCS: Path Tracing-Based Differentiable Projector-Camera Systems},
year = {2025},
pages = {1-10},
}