Nintendo wouldn't have to disable the RT cores, just not use them. I don't think it's worth it, since they can still be viable for some lighter RT tasks, like assisting with some GI solutions or shadow testing
I suppose you're right - my thinking was that Nintendo can't control what 3rd parties do, and if they want to manage battery life, disabling the cores prevents their use entirely
I'd love a more in-depth explanation! Your posts are great to read and I almost always learn something
(Caveat - @ILikeFeet is the resident RT expert here)
Imagine a picture of a basketball. If you're not American you might not have played with a basketball, but they're heavily dimpled so they're easy to grip onto.
Now imagine rendering that picture as a texture in a video game. And imagine that, because of the resolution of the game, you're going to lose some detail. That dimpling gets lost, or turns into a low res mush.
What if the player steps very, very slightly to the left? It's still a low res mush, but a different low res mush. Why? Because all those dimples are sub-pixel detail. The tiny curves and shadows have detail that is smaller than a single pixel on the final screen, so when the player moves, some of the detail gets captured - sampled - by the new camera angle, and other detail gets lost. If you've ever been playing a game and something like a fence or trees in the distance seems to fizz, this is why. An edge of a leaf is suddenly appearing, but the tiny, 1-pixel branch which connects it to the tree vanishes.
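Here's a toy sketch of that fizz (made-up numbers, nearest-neighbour sampling - real GPUs filter textures, but the principle holds): a texture with detail finer than the output resolution looks completely different depending on a sub-pixel camera offset.

```python
# A "texture" with detail finer than our sample spacing: 8 texels of
# alternating bright/dark, rendered to only 4 output pixels.
texture = [1, 0, 1, 0, 1, 0, 1, 0]

def render(offset, out_width=4):
    # Nearest-neighbour sample: 2 texels per output pixel, so half the
    # texture's detail is invisible at any single camera position.
    step = len(texture) / out_width
    return [texture[int(i * step + offset) % len(texture)] for i in range(out_width)]

frame_a = render(offset=0)  # camera at rest
frame_b = render(offset=1)  # camera nudged by one texel (half an output pixel)
print(frame_a)  # [1, 1, 1, 1] - only the bright texels got sampled
print(frame_b)  # [0, 0, 0, 0] - the nudge exposed the dark texels instead
```

Neither frame is wrong - each captured real detail the other missed, which is exactly what a single low-res frame can't avoid.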
One of the things DLSS does is keep that sub-pixel detail from previous frames. In fact, developers introduce tiny, unnoticeable camera jitter every frame, so even if the player is standing still, DLSS sees new detail every frame. In the case of the basketball, that gives DLSS a full picture of all those dimples. This is the super sampling part of Deep Learning Super Sampling.
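For the curious: that jitter usually comes from a low-discrepancy sequence rather than pure random numbers, so the sample positions spread evenly across the pixel over time. A Halton sequence is a common choice in TAA/DLSS integrations (this is a general pattern, not anything confirmed about any specific console):

```python
def halton(index, base):
    # Radical-inverse sequence: deterministic, evenly-spread values in [0, 1).
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter(frame):
    # Sub-pixel camera offset for this frame, centred on 0 (-0.5..+0.5),
    # cycling through 16 distinct positions (base 2 for x, base 3 for y).
    i = frame % 16 + 1
    return (halton(i, 2) - 0.5, halton(i, 3) - 0.5)

for f in range(4):
    print(jitter(f))
```

Each frame the whole scene shifts by a fraction of a pixel, and the upscaler un-shifts it when it accumulates the history - that's how "standing still" still yields new samples.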
Now, let's add a Ray Traced reflection of that basketball texture. Ray Tracing draws lines (rays) between light sources and the various objects in a scene. When a ray hits an object it samples the color at that point, and then carries that color data along the rest of the ray's path. This emulates how light takes on the color of things it bounces off. In the case of a reflection, you take all the bounced colors off the basketball texture and apply them to the reflective surface. Ta dah! Ray Traced reflection.
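A minimal sketch of the "carries that color" part, assuming a simple per-channel multiply (real renderers use proper BRDFs; this is just the idea):

```python
# Each bounce tints the ray's carried colour (its "throughput") by the
# colour of whatever it hit, so later hits receive the accumulated tint.
def bounce(throughput, surface_rgb):
    return tuple(t * s for t, s in zip(throughput, surface_rgb))

ray = (1.0, 1.0, 1.0)               # white light leaves the source
ray = bounce(ray, (1.0, 0.5, 0.2))  # hits the orange basketball texture
print(ray)  # (1.0, 0.5, 0.2) - the reflective surface receives this tint
```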
Each ray cast is expensive, and more rays mean higher resolution reflections. By default, most games increase the number of rays with increased resolution, and drop them with lower resolution. So far so good. But let's add DLSS to the mix.
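That "rays scale with resolution" default is really just this (assuming a naive one-ray-per-pixel budget - real games vary it per effect):

```python
def rays_for_resolution(width, height, rays_per_pixel=1):
    # Naive budget: the ray count rises and falls with the render resolution.
    return width * height * rays_per_pixel

print(rays_for_resolution(1920, 1080))  # 2073600 rays at 1080p
print(rays_for_resolution(960, 540))    # 518400 rays at 540p - a quarter
```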
The game starts with a 1080p image, and an appropriate number of rays for 1080p. It displays the basketball texture and its reflection on frame one. On frame two, the camera gets jittered, and DLSS begins combining the frames to generate a 4K image. Our basketball gets more and more detailed.
But the reflection doesn't.
Because the reflection's maximum level of detail is determined not by the texture, but by the number of rays you cast. When you move the camera, you get a new angle on the texture, and the texture is higher res than the game is actually displaying, so more sub-pixel detail gets exposed. But the reflection doesn't have that deeper detail to expose.
Developers have two options - keep the reflection low res, or cast a number of rays based on the output resolution, rather than the input resolution, which would give DLSS more detail to work with. And, to (finally) bring this around to [REDACTED], both of those situations favor handheld mode over docked.
When you slow down the GPU for handheld mode, you slow down the tensor cores and the RT cores by the same amount. For RT, as long as the ratio of performance matches the ratio of resolution, then there isn't really a compromise. If you're half as powerful, but running at half the res, you can probably just cast half as many rays, and your RT effects will scale along with the res of the image.
Tensor cores see a similar drop in performance, but DLSS performance isn't linear in the same way. When switching between handheld mode and docked mode, you probably want to change the DLSS scaling factor as well. So a 4x factor in docked mode (1080p->4K) probably becomes something like a 2x factor in handheld mode (540p->720p).
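Putting those two paragraphs together as one hypothetical settings table (the mode names and resolutions are illustrative guesses, not anything confirmed about the hardware): rays scale linearly with render resolution, while the DLSS factor is a separate per-mode choice.

```python
# Hypothetical per-mode configuration - made-up numbers, not a real API.
MODES = {
    # mode:     (render resolution, output resolution)
    "docked":   ((1920, 1080), (3840, 2160)),  # 1080p -> 4K
    "handheld": ((960, 540),   (1280, 720)),   # 540p -> 720p
}

def settings(mode, rays_per_pixel=1):
    (rw, rh), (ow, oh) = MODES[mode]
    return {
        "rays": rw * rh * rays_per_pixel,      # RT budget tracks render res
        "dlss_factor": (ow * oh) / (rw * rh),  # pixel-count upscale factor
    }

print(settings("docked"))    # dlss_factor 4.0 (the "4x factor")
print(settings("handheld"))  # dlss_factor ~1.78 ("something like 2x")
```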
Okay, so remember, devs have two options - run RT at the input resolution or the output resolution. If you choose input resolution, then in the handheld case, RT is half the resolution of the final image - but in docked mode it is only a quarter. On the other hand, if they choose output resolution, consider the gap between 720p and 4K. That's 9x the pixels. No way handheld mode is only 1/9th of docked mode's power. Anything that a game can do in docked mode at 4K* should be a breeze at 720p.
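The pixel math behind that, for anyone who wants to check it:

```python
# Pixel counts of the resolutions in play - pure arithmetic, no assumptions.
p4k   = 3840 * 2160  # docked output
p1080 = 1920 * 1080  # docked render (input) res
p720  = 1280 * 720   # handheld output
p540  =  960 * 540   # handheld render (input) res

print(p4k / p720)   # 9.0   - docked output vs handheld output
print(p4k / p1080)  # 4.0   - docked DLSS upscale factor
print(p720 / p540)  # ~1.78 - handheld DLSS upscale factor
```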
* I'm not suggesting that [REDACTED] can do 4k RT reflections, by the way. I'm just saying "whatever RT effects developers choose to enable" at 4K.