I remember someone saying somewhere that "0" means the old behaviour (eye pass only), which is probably what this means.
Yes, but why?
I perfectly understand the issue when the path is just Eye-Glossy-Light and the light is a delta one, but in the case of the kitchen scene the path is Eye-Glossy(floor)-Diffuse(Curtain)-SSS-Light, so hitpoints may be stored on the Curtain and then allow gathering... So I'm wondering why the highlight on the floor does not appear.
Also, I'm really wondering about the real reason for not storing on glossy, or for storing on the first diffuse. Imagine we have an EGGDL path (Eye-Glossy-Glossy-Diffuse-Light). Depending on which surface we store on, the light contributions are:
(Here, F_X is the evaluation of the BSDF on surface X (i.e. G1: first glossy, G2: second glossy, D: the diffuse one), and SampleF_X is the sampled bounce on surface X, i.e. F() / pdf, which is generally close to a constant value.)
- on the first glossy: EyeThroughput * F_G1 * SampleF_G2 * SampleF_D * LightThroughput
- on the second glossy: EyeThroughput * SampleF_G1 * F_G2 * SampleF_D * LightThroughput
- on the diffuse: EyeThroughput * SampleF_G1 * SampleF_G2 * F_D * LightThroughput
We can assume that the variance of EyeThroughput and LightThroughput is the same in all three cases. We can also assume that the variances of SampleF_G1 and SampleF_G2 are the same and nearly constant. So the only difference between the three estimators is that two involve the variance of a glossy BSDF evaluation and one involves the variance of a diffuse BSDF evaluation. Usually the variance of the diffuse is lower than the variance of the glossy one, so we get less variance by storing the hitpoint on the diffuse surface. So mainly I guess we can conclude that it is better to store on the surface of the path with the lowest variance.
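To make the three estimators above concrete, here is a minimal C++ sketch (not actual code from the renderer; all names are invented for illustration):

[code]
// Hypothetical illustration of the three EGGDL estimators above.
// F_* is the BSDF evaluated for the connection direction at the stored
// vertex; SampleF_* is the f/pdf weight of a sampled bounce, which is
// roughly constant for a well-importance-sampled BSDF.
enum StoreVertex { FIRST_GLOSSY, SECOND_GLOSSY, DIFFUSE };

float Contribution(StoreVertex store,
                   float eyeThroughput, float lightThroughput,
                   float F_G1, float SampleF_G1,
                   float F_G2, float SampleF_G2,
                   float F_D,  float SampleF_D) {
	switch (store) {
	case FIRST_GLOSSY:   // G2 and D bounces are sampled on the light path
		return eyeThroughput * F_G1 * SampleF_G2 * SampleF_D * lightThroughput;
	case SECOND_GLOSSY:  // G1 sampled on the eye path, D on the light path
		return eyeThroughput * SampleF_G1 * F_G2 * SampleF_D * lightThroughput;
	default:             // DIFFUSE: both glossy bounces are sampled
		return eyeThroughput * SampleF_G1 * SampleF_G2 * F_D * lightThroughput;
	}
}
[/code]

The only factor that is a raw BSDF evaluation (rather than a nearly-constant f/pdf weight) is the one at the stored vertex, which is why its variance dominates.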
Knowing this, why does the fact of not storing on low-gloss surfaces imply more noise in the image?
Our estimate comes from N paths (i.e. the N photons which fall on the hitpoint). The main issue is that if we store on the first glossy surface, we get N paths which are really different. If instead we store on the next surface, we still get N paths, but they all share the same path prefix, so we get less variance inside the path family we are sampling, but more bias (because we restrict our sampling to a very limited path family). And because each pixel may have a totally different path prefix, we get a lot of variance between pixels which are close to each other.
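(For what it's worth, this is just the law of total variance applied to the path prefix; my formalisation, not something from the code: Var[I] = E[Var[I | prefix]] + Var[E[I | prefix]]. Storing deeper in the eyepath shrinks the first term, since the N photons share the prefix, but the second term does not average out because each pixel gets only one prefix per eyepass, and it shows up as noise between neighbouring pixels.)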
The issue is the same for specular surfaces, like glass, and can only be solved by increasing the number of eyepasses.
So we have a few solutions:
- a) Increase the number of eyepasses (by reducing the number of photons per pass and improving the code of the eyepass)
- b) Find a smarter heuristic for storing on glossy surfaces, but it appears that the one which best reduces variance is to store on glossy...
- c) Use a coherent eyepass sampler, which will reduce the noise of the eyepass, at the cost of a coherent bias...
- d) By the way, none of this solves at all the issue of EGL paths with a delta light, where we may be forced to store hitpoints on the glossy surface whatever happens. (And honestly I don't have any idea how to fix this smartly in the code, because it means that in such cases we need to store on glossy whatever happens; a rough sketch of a possible rollback follows the two cases below:
a) the eyepath hits a glossy surface, then hits nothing: easy, we use the last intersected surface;
b) the eyepath hits a glossy surface, then hits a surface which is a delta light: must we also roll back? Do delta area lights exist? I don't think so.)
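For what it's worth, here is a minimal sketch of how such a rollback could look (purely hypothetical code, names invented, not from the actual tracer):

[code]
// Hypothetical rollback for case d): walk the eyepath vertices and decide
// where to store the hitpoint. We normally refuse to store on glossy, but
// if the path then escapes the scene (case a above; a delta-light hit,
// case b, would be handled the same way), we fall back to the last glossy
// vertex so that EGL paths with a delta light are not lost.
struct Vertex {
	bool isDiffuse;
	bool isGlossy;
};

// Returns the index of the vertex to store at, or -1 for none.
int ChooseHitpoint(const Vertex *path, int n, bool escaped) {
	int lastGlossy = -1;
	for (int i = 0; i < n; ++i) {
		if (path[i].isDiffuse)
			return i;                 // preferred: store on the first diffuse
		if (path[i].isGlossy)
			lastGlossy = i;           // remember it in case we must roll back
	}
	return escaped ? lastGlossy : -1; // rollback to the last glossy vertex
}
[/code]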
So I'm proposing for now the following solutions:
a) Never store on a glossy surface, except if the next non-specular, non-glossy bounce hits nothing (to allow delta light evaluation). It will generate noise during the eyepass, but so what? Glass has the same issue.
b) Introduce a coherent sampler for the eyepass (by the way, we can use the same random numbers for each eyepath). It will reduce the noise (and SPPM users mostly care about noise).
c) For materials with several behaviours (i.e. diffuse + glossy + specular), use Russian roulette to decide between bouncing (the glossy/specular behaviour) and storing (the diffuse behaviour). We may be able to do this by importance sampling (see the sketch after this list).
d) Definitely split the glossy material in two: a diffuse component and a glossy component.
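For c), a minimal sketch of what I have in mind (hypothetical code, names invented; the reflectance values would come from whatever the material can report, and at least one of them is assumed non-zero):

[code]
// Hypothetical Russian roulette for a diffuse + glossy material: decide
// whether this eyepath vertex stores a hitpoint (diffuse behaviour) or
// keeps bouncing (glossy/specular behaviour). Using the component
// reflectances as weights makes the decision importance sampled.
bool StoreOrBounce(float diffuseReflectance, float glossyReflectance,
                   float u /* uniform random number in [0,1) */,
                   float *weight) {
	const float pStore =
		diffuseReflectance / (diffuseReflectance + glossyReflectance);
	if (u < pStore) {
		*weight = 1.f / pStore;       // compensate the roulette probability
		return true;                  // store a hitpoint here
	}
	*weight = 1.f / (1.f - pStore);
	return false;                     // continue the eyepath (bounce)
}
[/code]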
Ideas? Comments?