StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST|

Unless Nvidia has a use case for RT in self-driving cars/robotics/AI, there is no reason to highlight the RT cores.
RT cores are supposed to execute physics and mathematical calculations very efficiently and quickly, and in the case of self-driving cars, fast physics calculations are imperative to making the system as safe as possible, alongside its other features.

Since this is more interesting than the conversation about screens (to me, anyway), what we can gather is that Drake seems to be a hybrid layout/architecture of Ampere and Lovelace. Of course, the two aren't worlds apart, but the number of RT cores present in Drake lines up with Ampere (1 per SM), while the cache setup and the number of SMs per GPC seem to line up with Lovelace.
 
RT cores are supposed to execute physics and mathematical calculations very efficiently and quickly, and in the case of self-driving cars, fast physics calculations are imperative to making the system as safe as possible, alongside its other features.
The RT cores are very specialised, though. They are uniquely focused on traversing BVH trees and evaluating ray-triangle intersections in order to quickly find which object in the scene the ray through a given pixel will hit, after which the ALUs can do all the shading computations. This presupposes that the system already knows a scene description and needs to render an image, which is the case in computer graphics but is different from most AI applications (where it needs to discover the scene description from an input sequence of images). As such, I'm not sure the RT cores can be of use in those applications.
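To make that specialisation concrete, here's a rough sketch of the kind of ray-triangle test an RT core bakes into fixed-function hardware (a pure-Python toy of the standard Möller-Trumbore algorithm, mine rather than anything Nvidia-specific; the BVH traversal just narrows down which triangles get this test):

# Möller-Trumbore ray-triangle intersection: the inner-loop test that
# RT cores accelerate in hardware. Illustrative toy code only.
def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:               # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det         # distance along the ray to the hit
    return t if t > eps else None

Running millions of these per frame (plus the BVH box tests that cull most of them) is the whole job description of an RT core; nothing in there knows how to infer a scene from camera input.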
 
From what I recall, the leak for Drake mentioned RT cores, correct?
How many RT cores per SM was it supposed to have?
 
Do we think Nintendo would use Drake as an opportunity to update or improve the Joy-Cons? Maybe add analog triggers or a mic somewhere?
Update or improve how? By their own words they are constantly doing that anyway. I think they would improve the components already in there; I doubt they want to shove a mic in them. Analog triggers are dead: Nintendo probably see them as too cumbersome to fit the profile, and they haven't used them for two generations now.
 
Found a place that says GA10F has 128 KB of L1 cache per SM compared to GA10B's 192 KB.

Perpetual reminder, all of the values I've seen are coming from disparate software layers that have been updated at different times, not authoritative datasheets.
 
But Control used 1080p to 4K DLSS back when it was only at 2.0. That video was what convinced me DLSS is legit. Does the Super Switch not have enough tensor cores to pull that off, or are its tensor cores not strong enough?
Yes, DLSS is great, but you need many tensor cores to upscale to a much higher resolution. Sometimes it would be faster to render at that resolution natively than to upscale with DLSS. You can't have many RT, tensor, and CUDA cores on a 15 W chip.
 
Yes, DLSS is great, but you need many tensor cores to upscale to a much higher resolution. Sometimes it would be faster to render at that resolution natively than to upscale with DLSS. You can't have many RT, tensor, and CUDA cores on a 15 W chip.
Doesn't DLSS offer several different quality settings?

What are the requirements for using Ultra Performance compared to the Quality preset? (Perhaps Ultra Performance does not have as high a tensor core requirement as one suspects, since the quality of the upscale isn't as good as the other 3 presets.)
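For reference, here's what the published per-axis scale factors of the DLSS 2.x presets imply for the internal render resolution at a 4K output (the Balanced factor is approximate; a quick sketch, not an official table):

# Per-axis scale factors for the DLSS 2.x presets (Balanced is ~1.72).
# Internal render resolution implied for a 3840x2160 output:
PRESETS = {"Quality": 1.50, "Balanced": 1.72,
           "Performance": 2.00, "Ultra Performance": 3.00}

out_w, out_h = 3840, 2160
for name, factor in PRESETS.items():
    w, h = round(out_w / factor), round(out_h / factor)
    print(f"{name:>17}: {w}x{h}")
# Quality: 2560x1440, Balanced: ~2233x1256,
# Performance: 1920x1080, Ultra Performance: 1280x720

As discussed further down the thread, the tensor-core cost appears tied mostly to the output resolution, so what Ultra Performance chiefly buys you is fewer pixels to shade, not a cheaper upscale.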
 
The RT cores are very specialised, though. They are uniquely focused on traversing BVH trees and evaluating ray-triangle intersections in order to quickly find which object in the scene the ray through a given pixel will hit, after which the ALUs can do all the shading computations. This presupposes that the system already knows a scene description and needs to render an image, which is the case in computer graphics but is different from most AI applications (where it needs to discover the scene description from an input sequence of images). As such, I'm not sure the RT cores can be of use in those applications.
I think what Nvidia does on the Orin platform for self-driving cars would be utilizing RT for better spatial-perception data.

Yes, DLSS is great, but you need many tensor cores to upscale to a much higher resolution. Sometimes it would be faster to render at that resolution natively than to upscale with DLSS. You can't have many RT, tensor, and CUDA cores on a 15 W chip.
If they couldn’t do it, they wouldn’t have it supported in the graphics API :p
 
Doesn't DLSS offer several different quality settings?

What are the requirements for using Ultra Performance compared to the Quality preset? (Perhaps Ultra Performance does not have as high a tensor core requirement as one suspects, since the quality of the upscale isn't as good as the other 3 presets.)

If they couldn’t do it, they wouldn’t have it supported in the graphics API :p
You don't want Ultra Performance mode; Quality mode at least. If the Super Switch docked is at a Series S level of performance, then you need very little DLSS for a sharp, clean image. I'm sure the Super Switch can render modern games at native 720p in handheld mode. The next Switch will be a little beast for Nintendo games: 4K 60, maybe even 120 in some games.
 
You don't want Ultra Performance mode; Quality mode at least. If the Super Switch docked is at a Series S level of performance, then you need very little DLSS for a sharp, clean image. I'm sure the Super Switch can render modern games at native 720p in handheld mode. The next Switch will be a little beast for Nintendo games: 4K 60, maybe even 120 in some games.
Quality mode is the highest preset, though. I'd say Balanced could also be a viable option should the requirements for Quality prove too steep. As for "not wanting Ultra Performance mode", that should be an option left to the player if they want to play under VRR/high refresh rates and don't mind the sacrifice in visual fidelity.

Who knows? There may well be bespoke "Drake" presets optimized for the SoC's particular hardware configuration, which thus wouldn't fall under the same tensor core requirements as desktop Ampere.
 
Yes, DLSS is great, but you need many tensor cores to upscale to a much higher resolution. Sometimes it would be faster to render at that resolution natively than to upscale with DLSS. You can't have many RT, tensor, and CUDA cores on a 15 W chip.
This is the thing that was missing, IMO, from Alex Battaglia's video. If a game must run at xxx FPS at the base resolution for DLSS to be viable at playable frame rates, when does it start making sense to just render at the higher resolution in the first place?
 
So Nintendo filed an interesting patent regarding the dock on 12 July 2021, which was published on 20 January 2022.

I wonder if the DLSS model*'s dock could inherit some of the interesting features mentioned in the patent. (Although it's very unlikely to happen, I would personally like to see Nintendo release a smaller, more compact dock in a similar vein to the Insignia Dock Kit.)
We have images of this in the patent document; it's just a normal dock with a swivel block of connectors on the back:

[Patent figure: rear view of the dock with the swivel connector block]
This dock patent is intended for a very specific use case—allowing cable connection from two opposite directions (left or right). It is unlikely to later be productized because the OLED model dock already addressed the "troublesome" issue. The design process probably went like this in the Nintendo HQ:

Product Manager: [walks in] Remember I asked you to support cabling both ways?
Product Designer: [beams] Yep, my design proposal is rad, isn't it?
Product Manager: [checks spreadsheet] The cost would eat too much into our profit margin though.
Product Designer: [crestfallen] What do you suppose we do?
Product Manager: [thoughtful] ... Remove the panel so the cables can bend freely?
Product Designer: [grimaces] But that'd look terrible!
Product Manager: [sighs] It's on the backside. Nobody would see it.
Product Designer: [calms down] Ok, sure.
Product Manager: [checks spreadsheet again] With the money saved, we may bundle a more flexible HDMI cable.
Product Designer: [perks up] Can we also make the dock in white?
Product Manager: ... [Arthur fist]

Why bother with such an obscure problem, though? An inflexible cable requires more torque to bend; when the cable is moved even slightly (e.g., when docking or undocking), the stored torque may be released, causing the dock and Switch to lose balance and possibly fall to the floor. While this seems an infrequent occurrence, the cost of repair is high when it does happen. Nintendo must have done a cost analysis and deemed it worth fixing. Apple's MagSafe is another example.
 
I've seen DLSS lift performance at 4K from 25 fps to 55 fps on an RTX 2060 (using Performance mode, 1080p -> 4K), which is the closest card to the Drake specs, although depending on the clocks Drake could have as little as 55% of its tensor core performance. I think it is very much an open question whether DLSS all the way up to 4K is possible. DLSS is potent, but definitely not a magic button you can press for free extra resolution. The Ampere architecture might give the Drake chip an advantage over the RTX 2060 if overlapped computation is possible, but in the end there are more factors than just the number of native pixels you have to render (not everything scales linearly with resolution).

To get a bit more into the numbers, here is what we have seen from NVIDIA:
[NVIDIA chart: DLSS 2.x execution time per GPU, 1080p -> 4K]

The RTX 2060S is slightly more performant than the vanilla RTX 2060, but not by much. For rounding purposes, let us say that DLSS from 1080p to 4K takes 6 ms on Drake. For a 60 fps target, that would leave about 10 ms for native rendering if we assume no overlapping of tensor core and ALU execution. This would mean a game needs to render at an equivalent of roughly 100 fps to make the jump to a 4K DLSS result at 60 fps. For a 30 fps game, you have 27 ms left, which would be the equivalent of a 37 fps 1080p game. That is the basic calculation, I think. Now we need to problematise it. First off, the tensor cores are used for more than just DLSS: they perform FP16 computations for the ALUs as well, so that will take up some of the tensor core performance. Next, we need to realise that post-processing still has to happen on the 4K output image, so the post-processing step will be more expensive as well. Thirdly, the relative weight of the native resolution is not constant: scene complexity does not necessarily scale linearly with target resolution (before DLSS), since all N objects in the scene still need to be mapped to fragments, and thus the number of candidate fragments per pixel increases at smaller resolutions. That's one big reason why performance won't scale completely linearly. Despite that, I believe there could definitely be a decent number of games for which a 4K output via DLSS is in the cards.

I think it's quite possible that not all games (especially the more demanding ones) will do 4K and will instead go for 1440p, and that will make sense on a game-by-game basis for sure. Personally, I'm not yet convinced (nor adamantly opposed to the notion) that 4K DLSS would be a priori impossible due to the intrinsic computational cost of DLSS, for some of the reasons I mentioned above. Edit: I noticed that this last paragraph reads like I expect everything but the most demanding games to do 4K. I'm not really of that opinion, but I think there could be a good number of games that do 1080p -> 4K DLSS based on my reading of the Drake chip's specs (and an estimated 900 MHz clock frequency).
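For what it's worth, the basic calculation above in code form (the flat 6 ms DLSS pass on Drake is my rounded assumption from the chart, as stated; no tensor/ALU overlap assumed):

# Frame-budget arithmetic from the post above. DLSS_MS is a rounded
# assumption for a 1080p -> 4K pass on Drake.
DLSS_MS = 6.0

def required_native_fps(target_fps, dlss_ms=DLSS_MS):
    budget_ms = 1000.0 / target_fps   # total frame budget
    render_ms = budget_ms - dlss_ms   # time left for native 1080p rendering
    return 1000.0 / render_ms         # equivalent pre-DLSS frame rate

print(required_native_fps(60))  # ~94 fps at 1080p for 4K60 (~100 with my rounding)
print(required_native_fps(30))  # ~37 fps at 1080p for 4K30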
 
I've seen DLSS lift performance at 4K from 25 fps to 55 fps on an RTX 2060 (using Performance mode, 1080p -> 4K), which is the closest card to the Drake specs, although depending on the clocks Drake could have as little as 55% of its tensor core performance. I think it is very much an open question whether DLSS all the way up to 4K is possible. DLSS is potent, but definitely not a magic button you can press for free extra resolution. The Ampere architecture might give the Drake chip an advantage over the RTX 2060 if overlapped computation is possible, but in the end there are more factors than just the number of native pixels you have to render (not everything scales linearly with resolution).

To get a bit more into the numbers, here is what we have seen from NVIDIA:
[NVIDIA chart: DLSS 2.x execution time per GPU, 1080p -> 4K]

The RTX 2060S is slightly more performant than the vanilla RTX 2060, but not by much. For rounding purposes, let us say that DLSS from 1080p to 4K takes 6 ms on Drake. For a 60 fps target, that would leave about 10 ms for native rendering if we assume no overlapping of tensor core and ALU execution. This would mean a game needs to render at an equivalent of roughly 100 fps to make the jump to a 4K DLSS result at 60 fps. For a 30 fps game, you have 27 ms left, which would be the equivalent of a 37 fps 1080p game. That is the basic calculation, I think. Now we need to problematise it. First off, the tensor cores are used for more than just DLSS: they perform FP16 computations for the ALUs as well, so that will take up some of the tensor core performance. Next, we need to realise that post-processing still has to happen on the 4K output image, so the post-processing step will be more expensive as well. Thirdly, the relative weight of the native resolution is not constant: scene complexity does not necessarily scale linearly with target resolution (before DLSS), since all N objects in the scene still need to be mapped to fragments, and thus the number of candidate fragments per pixel increases at smaller resolutions. That's one big reason why performance won't scale completely linearly. Despite that, I believe there could definitely be a decent number of games for which a 4K output via DLSS is in the cards.

I think it's quite possible that not all games (especially the more demanding ones) will do 4K and will instead go for 1440p, and that will make sense on a game-by-game basis for sure. Personally, I'm not yet convinced (nor adamantly opposed to the notion) that 4K DLSS would be a priori impossible due to the intrinsic computational cost of DLSS, for some of the reasons I mentioned above. Edit: I noticed that this last paragraph reads like I expect everything but the most demanding games to do 4K. I'm not really of that opinion, but I think there could be a good number of games that do 1080p -> 4K DLSS based on my reading of the Drake chip's specs (and an estimated 900 MHz clock frequency).
These performance numbers predate the introduction of Ultra Performance mode, which arrived around the time Ampere debuted. Ultra Performance allows a more flexible, wider scaling factor, at the expense of a bit of visual fidelity compared to the other three modes.

There was also the introduction of scaling from variable resolutions with DLSS 2.2 (or was it 2.1?), if I'm not mistaken, so the tensor cores don't have to be taxed as much when a scene can be rendered at a higher base resolution.

I can expect Drake to utilize DLSS Performance/Ultra Performance/variable-resolution modes to hit 4K targets, with the most intense and busy scenes taking a hit to visual fidelity (it will still scale appropriately to 4K, just with more noticeable upscaling artifacts). Think of the way Xenoblade Chronicles 2 got really blurry because it lowered its resolution so much in addition to applying TAA: while (Ultra Performance) DLSS is no magic bullet, it can mitigate most of the issues caused by such low internal resolutions in dynamic-resolution games. It won't hide them completely, but it will at least mask many of the flaws compared to not having it enabled.

If so, games could run at 1080p when upscaled to 4K, with more demanding games utilizing Ultra Performance if they use variable resolutions that drop as low as 900p. I do think Nvidia might create a special driver configuration for Drake that optimizes DLSS quality dynamically depending on rendering resolution plus tensor load (DLSS 2.4?).
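Purely as a sketch of what such a driver-side configuration might look like (the preset factors are the public DLSS 2.x ones; the selection logic and everything else here is my speculation):

# Hypothetical DRS + DLSS glue: given the internal width the DRS system
# settled on this frame, pick the least aggressive DLSS preset that can
# still reach the 4K target. Speculative illustration only.
PRESETS = [("Quality", 1.50), ("Balanced", 1.72),
           ("Performance", 2.00), ("Ultra Performance", 3.00)]

def pick_preset(internal_w, target_w=3840):
    needed = target_w / internal_w        # per-axis scale required
    for name, factor in PRESETS:          # ordered best-looking first
        if factor >= needed:
            return name
    return None                           # below the 3x floor of Ultra Performance

print(pick_preset(1920))  # Performance (needs exactly 2x)
print(pick_preset(1600))  # Ultra Performance (a 900p frame needs 2.4x)
print(pick_preset(1100))  # None: too far below 4K even for Ultra Performance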
 
This dock patent is intended for a very specific use case—allowing cable connection from two opposite directions (left or right). It is unlikely to later be productized because the OLED model dock already addressed the "troublesome" issue. The design process probably went like this in the Nintendo HQ:

Product Manager: [walks in] Remember I asked you to support cabling both ways?
Product Designer: [beams] Yep, my design proposal is rad, isn't it?
Product Manager: [checks spreadsheet] The cost would eat too much into our profit margin though.
Product Designer: [crestfallen] What do you suppose we do?
Product Manager: [thoughtful] ... Remove the panel so the cables can bend freely?
Product Designer: [grimaces] But that'd look terrible!
Product Manager: [sighs] It's on the backside. Nobody would see it.
Product Designer: [calms down] Ok, sure.
Product Manager: [checks spreadsheet again] With the money saved, we may bundle a more flexible HDMI cable.
Product Designer: [perks up] Can we also make the dock in white?
Product Manager: ... [Arthur fist]

Why bother with such an obscure problem, though? An inflexible cable requires more torque to bend; when the cable is moved even slightly (e.g., when docking or undocking), the stored torque may be released, causing the dock and Switch to lose balance and possibly fall to the floor. While this seems an infrequent occurrence, the cost of repair is high when it does happen. Nintendo must have done a cost analysis and deemed it worth fixing. Apple's MagSafe is another example.
Any even semi-unique idea is quite often something most companies will attempt to patent, even if in practice it's a pretty poor solution to a previously solved problem.

That's just how it is: the more patents and patents pending you have, the better it looks for your company and the more opportunity you have for infringement suits, regardless of whether you have any intention of selling the patented product.
 
Yes, DLSS is great, but you need many tensor cores to upscale to a much higher resolution. Sometimes it would be faster to render at that resolution natively than to upscale with DLSS. You can't have many RT, tensor, and CUDA cores on a 15 W chip.
I have seen this issue literally once, and it was a very stupid use case in Edge of Eternity. Not to mention we don't know the lower bounds for this particular algorithm; it's possible Drake gets a bespoke algorithm to make do with fewer tensor cores.
 
You don't want Ultra Performance mode; Quality mode at least. If the Super Switch docked is at a Series S level of performance, then you need very little DLSS for a sharp, clean image. I'm sure the Super Switch can render modern games at native 720p in handheld mode. The next Switch will be a little beast for Nintendo games: 4K 60, maybe even 120 in some games.
Now this makes less sense. You mentioned that doing RT and DLSS would be too much for the device to handle within its 15 W range, even saying that it's better to just render natively since that's faster. But now you're suggesting 4K60 or even 120 could be possible?



When Alex did his video, he took the very lowest common denominator and compared it to the 2060, finding that 4K60 would be unlikely but 4K30 is very much on the table. This chip seems close to the Series X in theoretical ML TOPS once all features are leveraged, and when he compared the Series X to the 2060, he found it would require 5.5 ms to supersample an image from 1080p to 4K, well below the 16.6 ms frame time needed for 60 fps.

Now, you can argue whether Nintendo will actually utilize features like the fine-grained sparsity that comes with Ampere GPUs (I believe Turing supports it as well), or whether they'll exceed the 15 W range and target 20-25 W when docked; those are fair questions. But I don't think it's going to be impossible for them to reach 4K30 or 4K60 with the tools available.


Performance, Balanced, and Quality are the modes I expect them to mostly utilize anyway, especially for third-party titles.
 
Unless Nvidia has designed an even less expensive option, Nintendo will go for Ultra Performance DLSS.
 
Unless Nvidia has designed an even less expensive option, Nintendo will go for Ultra Performance DLSS.
I doubt Nintendo will go for that option except for their most demanding games later in the life of this device. We're talking about a 3 TF-ish machine, so it should be able to render very demanding, above-gen-8-level visuals at 1080p at least, and then potentially apply DLSS from there.

Unless Nintendo start developing Horizon Forbidden West-level visuals (as they are on PS5), they are IMO likely to get to 1080p before applying DLSS.
 
Now this makes less sense. You mentioned that doing RT and DLSS would be too much for the device to handle within its 15 W range, even saying that it's better to just render natively since that's faster. But now you're suggesting 4K60 or even 120 could be possible?



When Alex did his video, he took the very lowest common denominator and compared it to the 2060, finding that 4K60 would be unlikely but 4K30 is very much on the table. This chip seems close to the Series X in theoretical ML TOPS once all features are leveraged, and when he compared the Series X to the 2060, he found it would require 5.5 ms to supersample an image from 1080p to 4K, well below the 16.6 ms frame time needed for 60 fps.

Now, you can argue whether Nintendo will actually utilize features like the fine-grained sparsity that comes with Ampere GPUs (I believe Turing supports it as well), or whether they'll exceed the 15 W range and target 20-25 W when docked; those are fair questions. But I don't think it's going to be impossible for them to reach 4K30 or 4K60 with the tools available.


Performance, Balanced, and Quality are the modes I expect them to mostly utilize anyway, especially for third-party titles.
DLSS and raster are separate things. They will most likely go with more "meat" and fewer tensor and RT cores. Why wouldn't they have 4K 60 first-party games with maybe 10x stronger hardware?
 
DLSS and raster are separate things. They will most likely go with more "meat" and fewer tensor and RT cores. Why wouldn't they have 4K 60 first-party games with maybe 10x stronger hardware?
This doesn't in any fashion address why you think 120 is even on the table while DLSS is seemingly not viable because "it doesn't have enough tensor cores".
 
Honestly, I don't see much issue with a 1080p screen. Nintendo has never maintained resolution across previous handheld generations; with the one exception of the DS/3DS lower touch panel, we've always gotten scaled or cropped images in BC games. And with Drake they could always just run Switch games in docked mode anyway, which targets 1080p.

720p or 1080p (or maybe even something else) will probably be more a cost concern than a picture-quality one: more about balancing price and manufacturing scale than anything.
 
Honestly, I don't see much issue with a 1080p screen. Nintendo has never maintained resolution across previous handheld generations; with the one exception of the DS/3DS lower touch panel, we've always gotten scaled or cropped images in BC games. And with Drake they could always just run Switch games in docked mode anyway, which targets 1080p.

720p or 1080p (or maybe even something else) will probably be more a cost concern than a picture-quality one: more about balancing price and manufacturing scale than anything.
Yeah, that's actually a very good point. I don't think there are many problems with switching to docked mode, considering most games don't change much in terms of interface between the two modes (looking at you, tiny-text Three Houses).

On an unrelated note, I just had a look at some YT videos, and The Witcher 3 can run at 100 fps on a GTX 1650 at low settings and 60 fps at high settings, which is around 3 TF of Turing. Just imagine a performance patch for that game that goes to or beyond 1080p/60fps...
 
I doubt Nintendo will go for that option except for their most demanding games later in the life of this device. We're talking about a 3 TF-ish machine, so it should be able to render very demanding, above-gen-8-level visuals at 1080p at least, and then potentially apply DLSS from there.

Unless Nintendo start developing Horizon Forbidden West-level visuals (as they are on PS5), they are IMO likely to get to 1080p before applying DLSS.
In Performance or Ultra Performance mode, DLSS takes exactly the same time to render an image at a given upscaled resolution, going by Alex's video. The difference is that there are fewer pixels to shade with the latter.

Personally, I can see a base resolution of 540p that is then upscaled to 1620p at either 30 or 40 fps.
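A toy cost model of the first point (the upscale cost is fixed by the output resolution; only the shading term scales with base pixels). Both constants are made up just to show the shape:

# The upscale cost depends on the OUTPUT resolution, so it is identical
# for Performance and Ultra Performance at a given target; only the
# shading term shrinks. Both constants are invented for illustration.
DLSS_MS_AT_1620P = 3.0        # assumed fixed upscale cost to 2880x1620
SHADE_NS_PER_PIXEL = 4.0      # assumed average shading cost per pixel

def frame_ms(base_w, base_h):
    shade_ms = base_w * base_h * SHADE_NS_PER_PIXEL * 1e-6
    return shade_ms + DLSS_MS_AT_1620P

print(frame_ms(1440, 810))  # Performance base (2x): ~7.7 ms
print(frame_ms(960, 540))   # Ultra Performance base (3x): ~5.1 ms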
 
In Performance or Ultra Performance mode, DLSS takes exactly the same time to render an image at a given upscaled resolution, going by Alex's video. The difference is that there are fewer pixels to shade with the latter.

Personally, I can see a base resolution of 540p that is then upscaled to 1620p at either 30 or 40 fps.
...You know that the GPU in Drake (adjusted to compare against other uArchs) is pretty much a PS4 Pro's GPU in raster performance if they can get it to 1 GHz, right?

That isn't far behind the Series S, and GA10F has 12 RT cores, which would let it handle RT far better than even the PS5, plus the tensor cores to enable DLSS, etc.

So I'd say 720p will be the average resolution floor when docked, and even if it dips to 540p docked, that would only be in an RT mode with expensive effects like RT reflections, or in a DRS+DLSS solution that bottoms out at 540p with the output scaling up to 4K, shifting between DLSS's scale factors.
 
4K 120 is possible, but not in graphics-intensive games. DLSS is possible and it will be used, but not from 540p to 4K as many people think.
Yeah, 720p to 4K is DLSS Ultra Performance.

Although I will say 480p to 1440p (or 540p to 1620p) Ultra Performance would also be interesting to see, more so as a floor in a DRS+DLSS implementation.
 
In Performance or Ultra Performance mode, DLSS takes exactly the same time to render an image at a given upscaled resolution, going by Alex's video. The difference is that there are fewer pixels to shade with the latter.

Personally, I can see a base resolution of 540p that is then upscaled to 1620p at either 30 or 40 fps.
Alex's video is awesome, but he starts from the assumption that the chip will be a good deal weaker than the Drake chip we are currently talking about. If Drake is the chip it is looking like it will be, and Nintendo don't clock it unexpectedly low, then the need for Ultra Performance mode will likely be limited to the most demanding gen-9 titles that could be ported to the system, like GTA 6, IMO.

Ultra Performance mode is visually less appealing than the higher-quality modes (the roughly 4x, 3x, and 2x pixel-count scales), so if native rendering allows you to start from a higher base resolution, that is definitely preferable. A great use of Ultra Performance would be a 60 fps performance mode, though.

Edit: Oh, and Alex also uses a Turing GPU, which allegedly doesn't have overlapped DLSS evaluation while Ampere does. Whether that translates to anything real remains to be seen; I don't think anyone has tested it yet.
 
Yeah, 720p to 4K is DLSS Ultra Performance.

Although I will say 480p to 1440p (or 540 to 1620p) Ultra Performance would also be interesting to see moreso as a bottom in a DRS+DLSS implementation.
The Super Switch will have enough power to run even PS5/Series games at much higher resolutions than 540p: 720p minimum. I would not worry about DLSS performance on the Super Switch. It will have custom DLSS settings best suited to its hardware.
 
4K 120 is possible, but not in graphics-intensive games. DLSS is possible and it will be used, but not from 540p to 4K as many people think.
No one here actually expects 540p to 2160p.

Most of us are expecting 720p to 1440p or 1080p to 2160p.

And everything in between.

Nothing ridiculous like 360p to 1440p-2160p.
 
A 16x scale is ridiculous even for movies, and they have all the time in the world to spend on upscaling.
Yeah, and even then my testing of DLSS+NIS from 360p is a bit... not great.

Now, I do say that going from, say, 540p to 1440p/1620p and then using NIS Quality/Ultra Quality to 4K is more reasonable.
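In numbers, that two-stage idea is just chained scale factors (nothing hardware-specific here; the 3x is Ultra Performance's per-axis factor and NIS covers whatever is left to 4K):

# Two-stage upscale: DLSS Ultra Performance (3x per axis) to an
# intermediate resolution, then a cheap spatial NIS pass to 4K.
def chain(base_w, base_h, dlss_factor=3.0, target_w=3840):
    mid_w, mid_h = round(base_w * dlss_factor), round(base_h * dlss_factor)
    nis_factor = target_w / mid_w     # residual spatial scale for NIS
    return (mid_w, mid_h), nis_factor

print(chain(960, 540))    # (2880, 1620), NIS ~1.33x the rest of the way
print(chain(1280, 720))   # (3840, 2160), NIS 1.0x (nothing left to do)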
 
The Super Switch will have enough power to run even PS5/Series games at much higher resolutions than 540p: 720p minimum. I would not worry about DLSS performance on the Super Switch. It will have custom DLSS settings best suited to its hardware.

This really depends on the game, don't you think? I wouldn't be surprised if a 1080p PS5/Series X game came out. Best we could hope for is something like 540p in that scenario.
 
They should make a Souls compilation plus Elden Ring for the Switch Plus.
Good luck with that: the compilation would be split into full-price releases that are somehow the last-gen versions, with issues that never get patched (like the first game's audio stutter) and weird marketing. Elden Ring won't come for another decade because of reasons, not least continued complaints about system power.
 
Small correction: the article states that 1440p is the average resolution (4K is the max) and 900p is the minimum resolution in split mode (not the average or max). But resolutions can indeed drop significantly below native 4K.

That said, this was made by a smaller studio, so I'm hesitant to take it as a solid indicator of the game hitting the hardware's limits per se, rather than it being the maximum amount of polish they could manage given their team size and how early in the generation it is.
 
Small correction: the article states that 1440p is the average resolution (4K is the max) and 900p is the minimum resolution in split mode (not the average or max). But resolutions can indeed drop significantly below native 4K.
I did mention "without DRS", since John mentions that DRS averages out to ~1440p in full-screen mode on the Xbox Series X|S. The point I was trying to make is that a game running below native 4K (without DRS) on at least the Xbox Series X|S already exists.
 
Not sure if this has been remarked upon already, but what just struck me about Alex's video is that the 1.9 ms time he found for DLSS plus higher-res post-processing is noticeably smaller than the number NVIDIA cited for the RTX 2060S (a better card!), which was 2.55 ms for 1080p -> 4K. This suggests to me that DLSS has gotten quite a bit more efficient compared with the numbers I cited earlier.
 
DLSS and raster are separate things. They will most likely go with more "meat" and fewer tensor and RT cores. Why wouldn't they have 4K 60 first-party games with maybe 10x stronger hardware?
I wanted to re-address this.

While the hardware is 10x stronger, that does not mean Nintendo plans to offer only Switch-fidelity visuals. If they did, you'd get 4-6 CPU cores and 4 SMs offering a "Pro"-style upgrade: a nice bump over the current Switch that mainly resolves Switch games above 1080p.

Which in consumer speak is the equivalent of needing a 4K TV to display the content.

Nintendo doesn't need 10x the performance to get Switch games to 4K; Drake is well above what even the PS4 can do.
 
Honestly, I don't see much issue with a 1080p screen. Nintendo has never maintained resolution across previous handheld generations; with the one exception of the DS/3DS lower touch panel, we've always gotten scaled or cropped images in BC games. And with Drake they could always just run Switch games in docked mode anyway, which targets 1080p.

720p or 1080p (or maybe even something else) will probably be more a cost concern than a picture-quality one: more about balancing price and manufacturing scale than anything.
The GBA and DS had similar resolutions.
 
Not sure if this has been remarked upon already, but what just struck me about Alex's video is that the 1.9 ms time he found for DLSS plus higher-res post-processing is noticeably smaller than the number NVIDIA cited for the RTX 2060S (a better card!), which was 2.55 ms for 1080p -> 4K. This suggests to me that DLSS has gotten quite a bit more efficient compared with the numbers I cited earlier.

Isn't that what Nvidia has done with the 2.x updates? Each makes it a bit more efficient, which gives a few percent boost to frame rate depending on the game.
 
Isn't that what Nvidia has done with the 2.x updates? Each makes it a bit more efficient, which gives a few percent boost to frame rate depending on the game.
I imagine so. It's just that we now appear to have a somewhat concrete estimate of the algorithm's own performance improvement.
 
This video from a year ago tests DLSS at crazy low resolutions. He starts off with 240p -> 720p. The results are pretty damn decent.

 
Honestly I don't see much issue with a 1080p screen. Nintendo's never maintained resolution in previous handheld generations, with the one exception being the DS/3DS lower touch panel, we've always gotten scaled or cropped images with BC games. And with Drake they could always just run Switch games in docked mode anyway which targets 1080p.

720p or 1080p (or maybe even something else) will probably be more a cost concern over the picture itself. More about balancing price and scale of manufacturing than anything.
Hmm, now that you mention it, I wonder if Drake could simply run base Switch clocks as its "handheld" mode (hence the beefier specs), and then have its "docked" mode as a third tier of clocks.

So:
OG handheld -> [ OG docked -> "Super" docked ]
The brackets indicate the range this new chip operates in.
This video from a year ago tests DLSS at crazy low resolutions. He starts off with 240p -> 720p. The results are pretty damn decent.


I actually wanted to reference this video with regard to the successor needing to upscale from resolutions varying degrees of magnitude below 4K. The "target" could still be 4K, but the quality would be closer to 720p, just like at the start of the video, and even then, 720p/1080p-ish Ultra Performance DLSS still looks orders of magnitude cleaner than Xenoblade Chronicles 2 at its worst.

There's Vaseline, and then there's Petroleum Jelly. DLSS is more like splotches of Vaseline on a base 240p image, and that was the old version... Petroleum Jelly would be more like Gormott Province on a rainy day.

(And if all else fails, there's always FSR 2.0...)
 
Hmm, now that you mention it, I wonder if Drake could simply run base Switch clocks as its "handheld" mode (hence the beefier specs), and then have its "docked" mode as a third tier of clocks.

So:
OG handheld -> [ OG docked -> "Super" docked ]
The brackets indicate the range this new chip operates in.

I actually wanted to reference this video with regard to the successor needing to upscale from resolutions varying degrees of magnitude below 4K. The "target" could still be 4K, but the quality would be closer to 720p, just like at the start of the video, and even then, 720p/1080p-ish Ultra Performance DLSS still looks orders of magnitude cleaner than Xenoblade Chronicles 2 at its worst.

There's Vaseline, and then there's Petroleum Jelly. DLSS is more like splotches of Vaseline on a base 240p image, and that was the old version...

(And if all else fails, there's always FSR 2.0...)
The rumoured specs of Drake should punch well above OG Switch docked mode, so I feel they should have a new profile for that.
 