Test Suite Overview


We provide a full-image comparison of the rendering results obtained with different transmittance estimators: P-series CMF [Georgiev et al. 2019], Ratio Tracking [Cramer 1978], Residual Ratio Tracking [Novák et al. 2014], Unbiased Ray Marching (ours), and Biased Ray Marching (ours).
The renderings are done at 1 sample per pixel, and all transmittance estimators use the same number of density evaluations per pixel. For the unbiased methods (P-series CMF, Ratio Tracking, Residual Ratio Tracking, Unbiased Ray Marching) we report the variance, and for our slightly biased ray marching method we report the mean squared error (MSE), which accounts for the bias. We also report the inverse efficiency, i.e., variance times cost, which measures the per-density-evaluation efficiency of the estimators, with lower (purple) being better than higher (yellow).
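To make the relationship between these metrics concrete, the following sketch (ours, not the paper's evaluation code) computes per-pixel variance, MSE, and inverse efficiency from repeated 1-spp renders against a converged reference; the inputs renders, reference, and density_evals_per_pixel are hypothetical names.

```python
# Minimal sketch of the reported error metrics, assuming `renders` holds N
# independent 1-spp renderings with shape (N, H, W), `reference` is a
# converged ground-truth image of shape (H, W), and
# `density_evals_per_pixel` is the per-pixel evaluation budget.
import numpy as np

def error_metrics(renders, reference, density_evals_per_pixel):
    mean = renders.mean(axis=0)
    # Per-pixel variance of a single 1-spp estimate.
    variance = renders.var(axis=0, ddof=1)
    # MSE also accounts for bias: MSE = variance + bias^2.
    bias = mean - reference
    mse = variance + bias**2
    # Inverse efficiency: variance times cost; lower (purple) is better.
    inverse_efficiency = variance * density_evals_per_pixel
    return variance, mse, inverse_efficiency
```

For the unbiased estimators the bias term is zero in expectation, so their MSE reduces to the variance; this is why variance and MSE are directly comparable in the figures.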
In many of the scenes, especially Cloud, the improvement from our transmittance estimators is somewhat masked by other sources of noise.
In the Plume scene, too, our unbiased ray marcher improves the image substantially compared to earlier methods. However, some residual noise remains in the form of darker pixels that can stand out from the otherwise relatively smooth image. These darker pixels are not the kind of strong outliers often present in Monte Carlo renderings as fireflies; such strong outliers would be clearly visible as variance in the variance images. Slightly increasing the tuple size (e.g., via a multiplier for the control optical thickness) should be an effective remedy. These small outliers also average out quickly in multi-spp images. Alternatively, our biased ray marcher can be used when a small amount of bias is acceptable, as it does not seem to exhibit these residual artifacts.
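The tuple-size remedy mentioned above can be illustrated roughly as follows; this is only our loose sketch of the knob, not the paper's actual parameterization, and the names control_optical_thickness and multiplier are hypothetical.

```python
# Loose illustration (assumption, not the paper's algorithm): the tuple size
# is taken proportional to the control optical thickness, and a multiplier
# slightly above 1 allots more density evaluations per elementary estimate,
# trading a little cost for fewer dark-pixel outliers.
import math

def tuple_size(control_optical_thickness: float, multiplier: float = 1.2) -> int:
    return max(1, math.ceil(multiplier * control_optical_thickness))
```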

Image comparisons: Plume, Box, Plume, Glass with Smoke.