SSGINode: Improve GI Blending #31882
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
Consider if SSRT3's Blending is necessary
Pull Request Overview
This PR improves the blending behavior of Screen Space Global Illumination (SSGI) by changing from additive blending to physically correct multiplicative blending. The GI contribution is now multiplied against the diffuse color instead of being simply added on top of the scene color.
- Changed GI blending from additive to multiplicative approach
- Added diffuse color buffer to the Multi-Render Target (MRT) for proper GI calculation
- Updated the composite pass to multiply GI against diffuse color for more realistic lighting
Hmm, mulling over something… after playing around with it for a bit… SSRT3 and initial intuition suggest the blending should have the GI*Albedo added on AFTER the AO darkens the scene… but after playing with it, this creates well lit regions that cannot be darkened by the AO… As a hack, I tried applying the AO after the GI, and it looked nice and preserved the juicy AO-ness… but it’s probably not “physically correct”.

So, ideally, the AO-darkened scene should be used when calculating the GI bounce lighting! This is physically plausible and I think it will make the Cornell Box scene look better than the default… Buuuut the only issue is that it would need another pass… so nevermind… I suppose it’s nice that the compositing step happens in the client code so they can decide how they want it composited depending on their game’s style 🧐
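For reference, here is a rough sketch of the two orderings discussed above, reusing the names from the example's compositing code (`scenePassColor`, `scenePassDiffuse`, `gi`, `ao`). It only illustrates the trade-off and is not a recommendation:

```js
// Option A (SSRT3-style): AO darkens the scene first, then the albedo-modulated GI bounce
// is added on top. Bright GI regions can no longer be darkened by the AO.
const compositeA = vec4( add( scenePassColor.rgb.mul( ao ), scenePassDiffuse.rgb.mul( gi ) ), scenePassColor.a );

// Option B (the "hack"): the GI bounce is added first and the AO then darkens everything,
// which preserves the AO look but also attenuates the bounce light.
const compositeB = vec4( add( scenePassColor.rgb, scenePassDiffuse.rgb.mul( gi ) ).mul( ao ), scenePassColor.a );
```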
@@ -471,7 +471,7 @@ class SSGINode extends TempNode {

	If( this.useScreenSpaceSampling.equal( true ), () => {

-		stepRadius.assign( RADIUS.mul( this._resolution.x.div( 2 ) ).div( float( STEP_COUNT ) ) );
+		stepRadius.assign( RADIUS.mul( this._resolution.x.div( 2 ) ).div( float( 16 ) ) ); // SSRT3 has a bug where stepRadius is divided by STEP_COUNT twice; fix here
Is `16` an arbitrary number? Why not use `8` or `32`?
> Also fixed an issue where the radius would change with the step count in ScreenSpace Sampling mode. (This bug is also present in SSRT3: https://github.com/cdrinmatane/SSRT3/blob/main/HDRP/Shaders/Resources/SSRTCS.compute#L239-L240)
/ping @cdrintherrieno Are we doing the right thing here?
`16` is arbitrary… I suppose we want it to scale across the long axis of the image, so since the example’s radius caps out at 25, this number could be 12.5, so that when the radius is maxed out, it propagates light all the way from the left side to the right side.

Is 25 an arbitrary number?

As for whether it’s the right thing to do… just try adjusting the step count and radius in dev (in a better scene) and you can see that they’re intertwined in a way they shouldn’t be (and that is inconsistent with Worldspace Sampling).
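To make the coupling concrete, here is a small back-of-the-envelope sketch (the numbers and `halfWidth` are hypothetical; per the diff comment, the ray march divides by `STEP_COUNT` once more downstream, so the value assigned in this line should represent the full screen-space radius):

```js
const RADIUS = 25;     // the example's maximum radius
const halfWidth = 960; // hypothetical this._resolution.x / 2

for ( const STEP_COUNT of [ 8, 16, 32 ] ) {

	const oldValue = ( RADIUS * halfWidth ) / STEP_COUNT; // old code: shrinks as the step count grows
	const newValue = ( RADIUS * halfWidth ) / 16;         // new code: independent of the step count

	console.log( STEP_COUNT, oldValue, newValue );

}
```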
-	const compositePass = vec4( add( scenePassColor.rgb, gi ).mul( ao ), scenePassColor.a );
+	const compositePass = vec4( add( scenePassColor.rgb.mul( ao ), scenePassDiffuse.rgb.mul( gi ) ), scenePassColor.a );
Sidenote: If we want to incorporate GI and AO into the lighting in a more physically correct way, we need something like #28863.
AO should only modulate the indirect light (diffuse+specular). The computed GI should only contribute to the indirect diffuse.
For this, I would create a different (physically correct) example though since the setup is going to be way more complex. I believe we also need enhancements in the core.
Unfortunately, we need an additional MRT output for the new formula.

Can we please preserve the previous composite as a comment? That way, if developers reach the maximum number of MRT attachments, they see a simpler formula that does not require `scenePassDiffuse`. Reducing the GI intensity should approximate a similar result and is maybe good enough for certain use cases.

When we improve packing with MRT in the future, additional MRT data should hopefully have less impact on performance.
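A minimal sketch of what that could look like in the example, keeping the old additive composite as a comment next to the new one (names as in the example code):

```js
// Simpler fallback that does not require the extra 'diffuseColor' MRT attachment;
// reduce the GI intensity if it over-brightens the scene.
// const compositePass = vec4( add( scenePassColor.rgb, gi ).mul( ao ), scenePassColor.a );

// New formula: AO darkens the scene color, the GI bounce is modulated by the diffuse color.
const compositePass = vec4( add( scenePassColor.rgb.mul( ao ), scenePassDiffuse.rgb.mul( gi ) ), scenePassColor.a );
```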
I wonder if the multiplication is correct though 🤔 . Isn't that producing an overall too weak indirect diffuse? As far as I understand, SSGI should add an additional source of indirect diffuse light to the scene. If areas receive no indirect diffuse, the SSGI can't contribute any light because of the multiplication. You would have to brighten up the areas first e.g. with an ambient light so the SSGI is visible. That workflow seems strange to me.
I think if we want to treat SSGI as an alternative to an ambient/hemisphere light, light probe or indirect diffuse environment map, we need to stick to an additive formula.
In dev, if you bounce white light onto a black surface, it will turn the surface white. You HAVE to at least multiply it by the albedo; it’s what SSRT3 does. I tried using the normalized scene color instead and it didn’t look nearly as correct.
What are the challenges with an extra MRT attachment? What is the maximum? Is it a WebGL fallback issue? Maybe we can make a sceneColor that has its deviation from albedo in the alpha channel (which would approximate monochromatic lighting intensity)?
I’m all for a proper GBuffer compositing example, but it’s worth noting that people will come to this example to incorporate the GI into their larger project. It shouldn’t be too wrong… 💀
> What are the challenges with an extra MRT attachment? What is the maximum?
If you additionally need, for example, emissive or metal/roughness, you already reach the maximum. We lately had a case where this happened with WebGPU.

So it's not a fallback issue; it happens with WebGPU as well. We just need to be aware of that restriction and try to keep the number of attachments as low as possible, optimize the texture formats like below, and start using packing.
@WestLangley Does the new formula for incorporating the additional indirect diffuse from the SSGI look plausible to you? Like outlined in #31882 (comment), it is not physically correct, but is multiplying the GI with the scene's diffuse color more sensible than just adding the GI on top of the scene's color? Do you see other options that we could use?

@cdrintherrieno It would be good for us to know what the `albedo` term in the Unity shader represents. Is it just the diffuse or the total reflected light of a pixel? Is it a combined direct and indirect light term?
@Mugen87 Albedo is the base color of a material (it does not contain any lighting). It's typically stored directly in the G-Buffer in deferred mode, or accessible as an input material texture in forward. You should multiply the diffuse lighting by the albedo to obtain the correct lit values (any form of diffuse light needs to be multiplied by the albedo, whether it's direct, ambient or indirect diffuse). Here is a basic lighting example: https://learnopengl.com/PBR/Lighting.
Yeah… with packing we can probably also save on the normal map by just computing it from the depth map, and the depth map can probably be packed into the alpha of the velocity or albedo channels.
I assume we can also just render to more targets across multiple passes… but I guess that’s slow?
Perhaps when the number of MRT attachments goes above the WebGPU limit, it should transparently split the rendering into multiple passes?
AAA compositing pipelines aren’t going to want fewer GBuffers 😅
> with packing we can probably also save on the normal map by just computing it from the depth map
Normal reconstruction is already supported. If you pass no normal node to the GI pass, it automatically reconstructs the data from the depth. However, if you are not bandwidth limited, I would recommend using the normal map since the reconstruction is more expensive than a single texture lookup.
@Mugen87 AO should only attenuate indirect (ambient) light sources, and GI should add to indirect light sources. Total indirect light is then attenuated by the diffuse reflectance of the material.
Given the model appears to have both direct and indirect lighting baked into diffuse, I think this PR is a reasonable improvement.
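A rough sketch of that split, assuming separate direct and indirect lighting buffers were available (`directLight` and `indirectDiffuse` are hypothetical nodes; the example's current MRT setup does not provide them):

```js
// AO attenuates only the indirect light, SSGI adds to the indirect diffuse, and the
// summed indirect diffuse is modulated by the material's diffuse reflectance.
const idealComposite = vec4(
	add( directLight.rgb, scenePassDiffuse.rgb.mul( add( indirectDiffuse.rgb, gi ).mul( ao ) ) ),
	scenePassColor.a
);
```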
	normal: normalView,
	velocity: velocity
 } ) );

 const scenePassColor = scenePass.getTextureNode( 'output' );
+const scenePassDiffuse = scenePass.getTextureNode( 'diffuseColor' );
Do you mind adding this below the `getTextureNode()` block? This will change the format from RGBA16 to RGBA8, saving half of the memory and bandwidth. For modulating the GI, this precision should be sufficient.
const diffuseTexture = scenePass.getTexture( 'diffuseColor' );
diffuseTexture.type = THREE.UnsignedByteType;
Related issue: #31839 @Mugen87
Description
The current behavior of the SSGI just adds the GI color on top of the scene:
[Screenshot: GI added on top of the scene color]
But with this fix, it multiplies the GI color against the diffuse color:
[Screenshot: GI multiplied against the diffuse color]
This makes the GI's contribution to the current demo scene much more subtle (we should replace this scene with something like the Cornell Box scene I put together).
Also, I fixed the issue where the TRAA would be active even though the `temporal filtering` checkbox was disabled.

Also fixed an issue where the radius would change with the step count in ScreenSpace Sampling mode. (This bug is also present in SSRT3: https://github.com/cdrinmatane/SSRT3/blob/main/HDRP/Shaders/Resources/SSRTCS.compute#L239-L240)
TODO: Eventually SSRT3's Full Blending function accounting for fog should be included (but I don't know anything about how to work with fog in TSL and it's not in the example scene, so I'll leave this to somebody else):
https://github.com/cdrinmatane/SSRT3/blob/main/HDRP/Shaders/Resources/SSRT.shader#L263-L264