Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering
This paper was presented at the 2024 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) in Philadelphia, USA, on 8-10 May 2024.
Authors: William Donnelly, Alan Wolfe, Judith Bütepage, Jon Valdés.
Download the paper (PDF 8 MB).
Stochastic sampling techniques are ubiquitous in real-time rendering, where performance constraints force the use of low sample counts, which leads to noisy intermediate results. To remove this noise, temporal and spatial denoising in post-processing is an integral part of the real-time graphics pipeline.
This paper's main insight is that we can optimize the samples used in stochastic sampling to minimize the post-processing error.
The core of our method is an analytical loss function that measures post-filtering error for a class of integrands – multidimensional Heaviside functions. These integrands approximate the discontinuous functions commonly found in rendering. Our analysis applies to arbitrary spatial and spatio-temporal filters, scalar and vector sample values, and uniform and non-uniform probability distributions.
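The idea of measuring error *after* filtering can be illustrated with a toy 1D experiment (this is an independent sketch, not the paper's loss function or optimizer): each "pixel" takes a single stochastic sample of a Heaviside integrand f(x) = 1 if x < t else 0, whose true integral over [0,1] is t, and a small spatial filter is then applied across pixels. Samples that are merely independent (white noise) leave energy at low frequencies that the filter cannot remove; samples arranged so that neighboring pixels cover complementary strata push the noise to high frequencies, which the filter suppresses.

```python
import random

random.seed(7)

N = 10000          # number of pixels
t = 0.37           # Heaviside threshold; true integral of f over [0, 1] is t
kernel = [0.25, 0.5, 0.25]  # a simple 3-tap spatial filter (stand-in for a denoiser)

def filtered_rmse(samples):
    # One-sample Monte Carlo estimate of the Heaviside integral per pixel,
    # then spatial filtering, then RMS error against the true value t.
    est = [1.0 if u < t else 0.0 for u in samples]
    err2 = 0.0
    for i in range(1, N - 1):
        f = kernel[0] * est[i - 1] + kernel[1] * est[i] + kernel[2] * est[i + 1]
        err2 += (f - t) ** 2
    return (err2 / (N - 2)) ** 0.5

# Independent (white) samples.
white = [random.random() for _ in range(N)]
# Filter-aware samples: neighboring pixels draw from complementary strata,
# so their estimates anti-correlate and the filter cancels the noise.
strat = [(i % 2) * 0.5 + random.random() * 0.5 for i in range(N)]

print("white noise, filtered RMSE:", filtered_rmse(white))
print("stratified,  filtered RMSE:", filtered_rmse(strat))
```

Both sample sets are unbiased estimators of t; the difference shows up only after filtering, which is exactly the quantity a filter-aware loss would target. The two-stratum trick here is a hand-made stand-in for the optimized noise textures the paper produces.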
We show that the spectrum of the Monte Carlo noise produced by our sampling method adapts to the shape of the filter, yielding less noisy final images. We demonstrate improvements over state-of-the-art sampling methods in three representative rendering tasks: ambient occlusion, volumetric ray-marching, and color image dithering.
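The dithering task gives the most direct intuition for why noise spectrum matters. The sketch below (again independent of the paper's method) dithers a constant gray level to 1-bit using two threshold patterns: per-pixel white noise, and a tiled 4x4 Bayer ordered-dither matrix, whose error is concentrated at high frequencies. A box filter, standing in for the low-pass behavior of the eye or a denoiser, removes far more of the ordered pattern's error.

```python
import random

random.seed(1)

W = H = 64
g = 0.4  # constant gray level to dither to 1-bit

# Classical 4x4 Bayer ordered-dither matrix (a stand-in for an optimized
# high-frequency noise texture; not the paper's noise).
BAYER4 = [[0,  8,  2, 10],
          [12, 4, 14,  6],
          [3, 11,  1,  9],
          [15, 7, 13,  5]]

def dither(threshold_at):
    # Binarize the constant image g against a per-pixel threshold pattern.
    return [[1.0 if g > threshold_at(x, y) else 0.0 for x in range(W)]
            for y in range(H)]

def box_rmse(img, r=4):
    # r x r box filter as a stand-in viewer/denoiser blur, compared to g.
    err2 = n = 0
    for y in range(H - r):
        for x in range(W - r):
            s = sum(img[y + dy][x + dx] for dy in range(r) for dx in range(r)) / (r * r)
            err2 += (s - g) ** 2
            n += 1
    return (err2 / n) ** 0.5

white = dither(lambda x, y: random.random())
bayer = dither(lambda x, y: (BAYER4[y % 4][x % 4] + 0.5) / 16.0)

print("white noise, post-filter RMSE:", box_rmse(white))
print("ordered,     post-filter RMSE:", box_rmse(bayer))
```

Since every 4x4 window of the tiled Bayer pattern contains each threshold exactly once, its box-filtered error is a small constant quantization term, while the white-noise error decays only statistically with the filter footprint.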
Common noise textures and noise-generation code are available at https://github.com/electronicarts/fastnoise.