I brought this up on discord, but I never posted this here.
some quotes from Beyond3D
> At this point, the biggest downside of DLSS (besides improper integrations, which unfortunately still happen) is its inability to properly reconstruct the ray-traced portions of the frame, I'd say. They need to do something about that next.

> RT is stochastic, and it uses its own spatial denoising, temporal accumulation, and jittering, so DLSS is effectively another layer of reconstruction on top of the ray-tracing reconstruction, which means more losses.

> My guess would be that most ray-traced pixels themselves use various forms of temporal accumulation, which means DLSS can't produce "native" results from the same number of frames as it can for rasterized pixels.

> Generally, with DLSS it's a good idea to increase the number of rays per pixel where possible, since that would allow DLSS to reconstruct back to native better.

> But there is likely a better solution for that - some form of DLSS and RT denoising integration, or something along those lines.
these quotes are interesting given RedGamingTech's supposed "leaks" about DLSS 3.0 and how heavily it supposedly ties into ray tracing. if my interpretation is correct, and I'm not making really bad assumptions about how real-time RT works, one could, theoretically, use AI to better fill in the gaps of lower-resolution RT. for example, if you look at UE4's GI or Metro Exodus EE's GI up close, you can see "boiling". those blotches are gaps that are being filled in with temporal information, and they get more noticeable the lower the resolution of the RT. an AI temporal filter could give you better results, leading to better quality at a given setting, or better performance with acceptable quality at a lower ray count. so how good could 270p GI look on a 540p internal resolution that's DLSSed to 1080p?
then again, how about a 180p reflection on a 360p frame DLSSed to 720p?
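to make the "filling gaps with temporal information" idea concrete, here's a minimal sketch of temporal accumulation: an exponential moving average that blends each new noisy frame into a running history buffer. this assumes a static scene and skips reprojection with motion vectors and history clamping, which real RT denoisers need; the blend weight `alpha` is a made-up illustrative parameter, not anything from an actual denoiser.

```python
import numpy as np

def temporal_accumulate(noisy_frames, alpha=0.1):
    """Blend each new noisy frame into a running history buffer.

    Low alpha = more history: smoother result, but slower to react
    (lag/ghosting shows up as the "boiling" blotches when the scene
    or lighting changes). High alpha = more responsive but noisier.
    Sketch only: assumes a static scene, no reprojection or clamping.
    """
    history = noisy_frames[0].astype(np.float64)
    for frame in noisy_frames[1:]:
        history = (1.0 - alpha) * history + alpha * frame
    return history

# simulate a flat patch lit with 1 stochastic sample per pixel:
# every frame is the true value 0.5 plus heavy per-pixel noise
rng = np.random.default_rng(0)
true_value = 0.5
frames = [true_value + rng.normal(0.0, 0.25, size=(8, 8)) for _ in range(64)]

accumulated = temporal_accumulate(frames, alpha=0.1)

# the accumulated buffer has far less variance than any single frame,
# which is exactly the gap-filling the quotes above describe
print("single-frame std:", float(np.std(frames[-1])))
print("accumulated std: ", float(np.std(accumulated)))
```

the trade-off in `alpha` is the whole story: a low ray count forces heavy history reliance, and that history lag is what you see as boiling. an AI temporal filter, as speculated above, would in effect be learning a smarter, per-pixel version of this blend.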
![B0sH4OW.png](https://i.imgur.com/B0sH4OW.png)