I would like to ask a technical question: can Drake's 48 tensor cores theoretically render results comparable to 9th gen consoles (PS5/XSX)?
Though if someone wants to explain why this is the case, please feel free to.
You've already gotten some good answers, but just to add on top (and assuming you did mean RT, not DLSS)
Let's start with how ray tracing works:
Light has color - we all know this from every neon sign you've ever seen. When light hits an object, it bounces, but it also changes color, mixing the original color of the light with the color of the object.
If white light from the sun comes in through my office window and hits my grey Ikea dresser, the light becomes grey. When that light reaches my eye I see the "grey". Places where more light hits are brighter than others. When the sun starts to set, the light from the sun turns from white, to something closer to orange, and all of the objects I see - even the grey dresser - have an orange tint.
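The color mixing described above can be sketched as a component-wise multiplication of the light's color with the surface's reflectance. This is purely illustrative (the color values are made up, and real renderers do this per-bounce with physically based models):

```python
def mix(light, albedo):
    # Component-wise multiply: each RGB channel of the light is scaled
    # by how strongly the surface reflects that channel.
    return tuple(l * a for l, a in zip(light, albedo))

white_sun = (1.0, 1.0, 1.0)     # midday sunlight
sunset = (1.0, 0.6, 0.3)        # illustrative orange-ish sunset light
grey_dresser = (0.5, 0.5, 0.5)  # a neutral grey surface

print(mix(white_sun, grey_dresser))  # neutral grey
print(mix(sunset, grey_dresser))     # grey with an orange tint
```

Under white light the dresser reads as grey; under sunset light the same surface picks up the orange tint, exactly as described above.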
Ray tracing is a computer simulation of light bouncing and changing color. It makes the lighting in a game very accurate to real life. Ray tracing could be used to draw every part of a screen - that's how movie CGI works - but it's too expensive to do that in real time in most games.
Instead, game engines use traditional rendering for parts of the game, and ray tracing for other parts, where traditional rendering struggles to get good results. Things like reflections, or shadows.
How ray tracing cores work
RT cores don't draw *anything*. The regular shader cores of the GPU that draw everything else also draw the ray traced bits. This is important, because those shaders are how artists control what a scene looks like. For example, cel-shading, like *Breath of the Wild* - the artist wants to take an accurate version of the light in the scene, but render it in that cel-shaded cartoon look.
RT cores accelerate the math used to compute the light bounces, and the color changes. The RT core will take a ray of light, and check if it hits a certain part of an object. If it hits, it will calculate how the light color changes, and the angle at which the light bounces. AMD and Nvidia's RT cores both do these things.
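The two operations described above - the hit check and the bounce - can be sketched in a toy form. This is a simplification (real RT cores test rays against triangles, millions at a time, in fixed-function hardware; a sphere just keeps the math short):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_hits_sphere(origin, direction, center, radius):
    # Does the ray origin + t*direction ever touch the sphere?
    # Substitute the ray into |p - center|^2 = radius^2 and check
    # whether the resulting quadratic in t has any real roots.
    oc = [o - c for o, c in zip(origin, center)]
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    return b * b - 4 * a * c >= 0  # real roots => the ray hits

def bounce(direction, normal):
    # Mirror reflection off a surface: r = d - 2*(d . n)*n
    d_dot_n = dot(direction, normal)
    return [d - 2 * d_dot_n * n for d, n in zip(direction, normal)]

# A ray falling straight down hits a unit sphere at the origin...
print(ray_hits_sphere((0, 5, 0), (0, -1, 0), (0, 0, 0), 1.0))  # True
# ...and bounces straight back up off the top of the sphere.
print(bounce((0, -1, 0), (0, 1, 0)))  # [0, 1, 0]
```

Pair this with the color mixing from earlier and you have the skeleton of a ray tracer: trace, hit, bounce, tint, repeat.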
But there are *so many* lights, and *so many* objects in even a simple scene, that the answer to "does this ray of light hit this corner of this particular object?" is almost always *no*. Ray tracing can spend a lot of time chasing paths of light that either can never happen, or would never be visible even if they did. Ray tracing is expensive, and this is a huge waste.
The trick is to use a special description of the objects in your scene (called a *BVH tree*) that lets you very quickly eliminate big chunks of the scene that a given path of light can never reach. So instead of checking every ray of light against every triangle in every object, you use this quick scan to cut it down to only (for example) 20% of the objects in the scene, and then run the individual tests on those.
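The culling idea can be sketched with one flat level of bounding boxes. (A real BVH nests these boxes into a tree, so whole subtrees get skipped at once; the scene and object names here are made up for illustration.)

```python
def ray_hits_box(origin, direction, box_min, box_max):
    # Standard "slab" test: the ray hits an axis-aligned box if the
    # intervals where it sits between each pair of faces all overlap.
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:  # ray parallel to this axis
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0

def candidates(origin, direction, scene):
    # scene: list of (box_min, box_max, objects). Only objects whose
    # bounding box the ray can touch survive; everything else is
    # culled without a single per-triangle test.
    surviving = []
    for box_min, box_max, objects in scene:
        if ray_hits_box(origin, direction, box_min, box_max):
            surviving.extend(objects)
    return surviving

scene = [((0, 0, 0), (1, 1, 1), ["dresser"]),
         ((10, 0, 0), (11, 1, 1), ["lamp"])]
print(candidates((0.5, 5, 0.5), (0, -1, 0), scene))  # ['dresser']
```

A ray falling toward the dresser never pays for a test against the lamp - that's the whole point of the structure.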
Both AMD and Nvidia use BVH trees - but AMD's RT hardware doesn't accelerate the tree search itself. AMD's Ray Accelerators only speed up the individual hit tests; the walk through the BVH runs as ordinary shader code on the GPU's compute units. Not only is this slower, it creates a back-and-forth conversation between the shader cores and the RT hardware.
Game engine: cast a ray, please
Shader cores: Let me walk the BVH... okay, I've whittled it down.
RT cores: Okay, let me run the hit tests and bounces on this small section... Here you go!
Shader cores: Great, now I can draw some reflections with this!
Nvidia's RT cores *do* accelerate the BVH searches. Nvidia also places the RT cores and the shader cores together in such a way that they can communicate back and forth, instead of just one way. So the conversation looks like this:
Game engine: cast a ray, please
RT cores: Let me run a BVH search... great, I can check the bounces in this section... here you go shaders
Shaders: Drawing reflections now
You can see from just this how much faster the Nvidia solution should be. But it's not just that the conversation is shorter and more efficient - Nvidia's RT cores can do the BVH search faster than AMD's shader-based traversal can. This leads to a huge increase in performance.
So RT on Drake is as good as PS5???
Well, no. But also yes. But not really, though. Let's zoom out, some.
If you looked at just the ray casting part of the pipeline - from when the game engine starts casting a ray to when the shader gets the result - Drake will probably perform about twice as fast as the Series S. Not quite PS5 good, but pretty damned good. But that's just one part of the pipeline.
Light needs *objects* to bounce off of. Objects in a video game are made of *geometry* - the polygons in a model. We've seen downports to Switch use lower quality models, and we should expect the same in the future. How good an effect looks to your eye will be impacted by that geometry.
Shaders still need to actually perform the drawing operations. Just because the RT cores can handle all the rays of light in the scene doesn't mean the shaders can draw every RT effect in the same amount of time. Nvidia's efficiency won't overcome just how big the GPUs are in the other consoles.
And GPUs still need to draw the non-RT parts of a scene. Even if, somehow, Nvidia's RT hardware in Drake could do every RT effect, beginning to end, at the same speed as the PS5, developers would still likely need to pull the effects back to make room for the rest of the rendering.
What do we expect then?
Well, I can only speak for myself. And I will say that I don't have a sense of what Nintendo will do for first party games. No game engine has ever had to support only Nvidia's hardware before, so I don't have a sense of how far they could push it. Nintendo might choose to go all in, or to be more subtle - not for performance reasons, but artistic ones.
But when it comes to ports, here's a rule of thumb that I think will hold up: Nintendo's console can keep ray tracing *on* where Series S ports need to turn it *off*. I realize I wrote a lot of words here for a pretty short conclusion, but I think that's the most succinct answer I can give.
That doesn't mean that every port will have RT, or that it will look as good. Some games have RT as an afterthought, where it's not transformative; discarding it gives a performance boost at a low visual cost. And even where the RT effects are identical, they won't necessarily look as good when there are other cutbacks to geometry and resolution.
But where RT *is* transformative, I think Drake's RT performance will be high enough that keeping it on - along with the cutbacks elsewhere - will be worth it, and technically possible, in a way it's not on Series S.
Can you give me an example, so it's something I can see?
Actually, I kinda can!
Control is a game that 1) has RT, 2) has versions on lots of hardware, and 3) was tested by Digital Foundry in their T239 video.
Here is a comparison of the Series S version with the Series X RT mode. On the left you see the Series S running at 60fps. On the right, you see the Series X running at half that frame rate with RT reflections on. The differences are night and day. The Series X can also hit the higher frame rate by turning RT off, while still running at a higher resolution.
Series S only has the high frame rate, low res, no-RT mode - implying that even at 30fps, the Series S couldn't enable RT without dropping to a resolution so low it was unplayable.
Here is Control tested on an ultra low spec Nvidia GPU. You'll see an upscaled-to-1080p version of the game, just like Series S, but with those high-quality RT reflections enabled, at the same frame rate as the PS5. The PS5 version still looks better, because it's running at a higher resolution. But the resolution cutback that wasn't enough to get RT running on Series S is absolutely enough on this low spec hardware, which is in the same performance bracket as Drake.
Does this make sense? You still need all the resolution cutbacks and the frame rate tweaks, because fundamentally, it's still a much smaller piece of kit. But where RT falls away on the AMD hardware, it stays on here, because of the superior design.