ArchedThunder
The number you’re giving is for 4K performance mode at 60fps, while the cost at 30fps is much lower, and this isn’t even taking into account the benefits of going with Ultra Performance or a DLSS branch optimized specifically for the hardware (both of which he also mentioned in the video). I also pointed out something else from the video: for 60fps games they would likely target a lower output resolution, like 1440p, which reduces the render-time cost of DLSS (a rough estimate of that saving is sketched below). On top of that, the system seems like it might be a good bit beefier than the theoretical spec he used as a reference, so his render-time numbers for DLSS on a 4K Switch may not be accurate. It’s also odd that you use that video as your evidence, given that he was rather happy with what the theoretical Switch 4K in his video would be able to achieve with DLSS.
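To put a rough number on the 1440p point, here is a minimal sketch, assuming (purely as a simplification, not something stated in the video) that the fixed DLSS cost scales linearly with output pixel count:

```python
# Rough sketch: how the fixed DLSS cost might shrink at a lower output
# resolution, ASSUMING the cost scales linearly with output pixel count
# (a simplification; the real cost depends on the tensor-core setup).

def scaled_dlss_cost_ms(cost_at_4k_ms: float, out_width: int, out_height: int) -> float:
    """Scale a 4K DLSS cost estimate to another output resolution."""
    pixels_4k = 3840 * 2160
    return cost_at_4k_ms * (out_width * out_height) / pixels_4k

cost_4k = 10.0  # ms, the estimate quoted later in this thread
print(f"4K output:    {cost_4k:.1f} ms")
print(f"1440p output: {scaled_dlss_cost_ms(cost_4k, 2560, 1440):.1f} ms")
# 1440p has ~44% of 4K's pixels, so under this assumption the estimate
# drops to ~4.4 ms, leaving much more of a 16.7 ms (60fps) frame budget.
```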
This is a misconception. DLSS has a fixed cost that is tied to the output resolution; that cost is higher or lower depending on how many tensor cores you have. In the case of a portable console like the Switch, Alex Battaglia estimated that applying DLSS would eat up around 10 ms of each frame, leaving a paltry ~6 ms of the 16.7 ms (60fps) frame budget to render the game at the base resolution (in this case, 4K output from a 1080p base).
A game that renders a frame in 6 ms is running at roughly 167 fps. If it has to sustain that at 1080p just so DLSS can deliver 60fps at 4K, there is a slim chance that this path ends up faster than what the game could manage rendering natively. So there is no clear way to predict (in this example at least) that DLSS is indeed the smarter way to render graphics. And that leads to the next question:
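To make the arithmetic above concrete, a quick back-of-the-envelope sketch; the 10 ms figure is the estimate quoted above, and everything else just follows from the 60fps budget:

```python
# Frame-budget arithmetic for the example above.
FRAME_BUDGET_MS = 1000 / 60      # ~16.7 ms per frame at 60fps
DLSS_COST_MS = 10.0              # estimated fixed cost quoted above

render_budget_ms = FRAME_BUDGET_MS - DLSS_COST_MS   # ~6.7 ms left to render
required_fps_at_base = 1000 / render_budget_ms      # pace needed at 1080p

print(f"Frame budget at 60fps: {FRAME_BUDGET_MS:.1f} ms")
print(f"Left after DLSS:       {render_budget_ms:.1f} ms")
print(f"Equivalent 1080p pace: {required_fps_at_base:.0f} fps")
# -> without rounding, ~6.7 ms remain, i.e. the game must effectively run
#    at ~150 fps at 1080p (the post's 6 ms rounding gives ~167 fps) for
#    the DLSS'd output to hold 60fps at 4K, given a 10 ms DLSS cost.
```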
For the reason above. DLSS is simply not a net gain in every use case.
A fixed render-time cost determined by the hardware and better performance or lower power draw with DLSS are not mutually exclusive. For example, if a Switch 2 game could hit 4K 60fps natively, it could instead drop the base resolution and use DLSS to reduce power draw (a rough comparison is sketched below). The system would have to ship with fewer tensor cores than literally anybody expects for DLSS to not be worth it compared to native rendering.
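One way to frame that trade-off, assuming (purely for illustration) that render time scales linearly with pixel count, which real games only approximate:

```python
# Rough comparison of the two paths for a game that can hit 4K 60fps
# natively (16.7 ms/frame), ASSUMING render time scales linearly with
# pixel count -- a simplification real games only approximate.

NATIVE_4K_MS = 1000 / 60                           # already makes 60fps at 4K
PIXEL_RATIO_1080P = (1920 * 1080) / (3840 * 2160)  # 1080p is 1/4 of 4K pixels

base_render_ms = NATIVE_4K_MS * PIXEL_RATIO_1080P  # ~4.2 ms at 1080p
dlss_cost_ms = 10.0                                # estimate quoted earlier
dlss_total_ms = base_render_ms + dlss_cost_ms      # ~14.2 ms

print(f"Native 4K:          {NATIVE_4K_MS:.1f} ms/frame")
print(f"1080p + DLSS to 4K: {dlss_total_ms:.1f} ms/frame")
# Even with the pessimistic 10 ms DLSS estimate, the DLSS path finishes
# ~2.5 ms early each frame -- headroom that could go to lower clocks
# (and thus lower power draw) instead of extra frames.
```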