You're still not responding to anything I said in my post about the nature of the test cases, so I guess this is pointless to continue. Merely pointing to the power consumption numbers, and to the fact (which I acknowledged repeatedly) that the profiles as a whole are almost certainly built around handheld/docked operation, doesn't justify anything beyond what I already said here:
So, okay.
Now, for the benefit of other people who aren't clear on the full context, I will address the subject of the power consumption numbers. (This is not an argument about the meaning of the clock speeds past this point; regardless of the power consumption numbers, everything I said about the test cases and why the clocks aren't significant to them still applies.) If you want the actual numbers, see the original post, which I won't quote here since it was in hide tags.
The power consumption numbers are filenames. They're labels for a collection of test case settings. They aren't outputs or estimates, and they aren't measured or tracked in any way.
Given the relationship between the settings they define, there's a pretty clear pattern that would be relevant to Nintendo's hardware: lower power entails lower (relative) clocks and a 1080p output resolution; higher power entails higher (relative) clocks and a 4K output resolution. It makes sense to benchmark the DLSS code under these conditions, since this is very close to how it would run in practice on Nintendo's hardware. The goal is to track the code's KPIs: you establish a baseline on day 1, then check each change you make against the last baseline. If your next code change produces better results than the previous test run, great; if it produces worse results, you work on it some more before you commit. You need the path through your code to be as close as possible to the final hardware, since that way you know any improvements will carry over, and you aren't missing slowdowns that would only manifest on the final hardware because the code runs differently there.
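To make the workflow concrete, here's a minimal sketch of that kind of baseline comparison. Everything in it is hypothetical: the function, the labels, and the tolerance are invented for illustration, and nothing here reflects Nvidia's actual test harness.

```python
# Hypothetical sketch of a baseline-comparison workflow (all names invented).

def check_against_baseline(baseline_ms, current_ms, tolerance=0.02):
    """Compare per-test-case frame times (ms) against the last baseline.

    Returns a dict of test case -> verdict: 'improved', 'ok', or 'regressed'.
    A case counts as regressed if it runs more than `tolerance` (2%) slower
    than the baseline; small noise within the tolerance is accepted as 'ok'.
    """
    verdicts = {}
    for case, base in baseline_ms.items():
        cur = current_ms[case]
        if cur < base:
            verdicts[case] = "improved"
        elif cur <= base * (1 + tolerance):
            verdicts[case] = "ok"
        else:
            verdicts[case] = "regressed"
    return verdicts

# Example run; labels mimic the wattage-style filenames, values are made up.
baseline = {"8w_1080p": 1.80, "15w_4k": 3.10}
latest   = {"8w_1080p": 1.75, "15w_4k": 3.40}
print(check_against_baseline(baseline, latest))
```

On a result like "regressed," the change wouldn't be committed until the slowdown was fixed; on "improved," the new numbers would become the next baseline.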
But since the power consumption isn't actually part of the test, where did these specific wattage numbers come from, and why is power consumption used as the topline definition of the test cases in the first place? Well, my belief is that they came directly from Nintendo.
Beginning with the Switch (or actually with the Indy/Switch prototypes), power consumption is basically Nintendo's starting point for each component or block in new hardware. They have a power budget and everything has to fit within it. So long before something like clock speeds would be determined, or core count or even architecture, Nintendo could tell Nvidia that the board is going to have a certain power draw limit and that there will be X watts allocated to the SoC, and then continue dividing that into separate budgets for the CPU and GPU. Nvidia's job then isn't to deliver a GPU hitting certain clocks, reaching certain TFLOPs, or outputting certain resolutions and FPS. It's to deliver the best possible GPU that can fit into that power budget (as well as the, er, money budget).
So as far as Nvidia is concerned, "handheld mode" is really "X watt mode." And that's what I think those test case labels are saying. Not everything is clear -- for example, why are there three profiles instead of two? Or, even if the wattages are real values, what part of the power budget do they represent: the GPU only, the whole SoC, or something else? As I said at the time, we can't draw firm conclusions from the specific numbers (although we may have had some validation of one of them via the 1080p screen rumors), but the overall picture was obvious in showing Nvidia testing DLSS execution suitable for both the handheld and docked modes of a future hybrid console in the same vein as the Switch, and with the same performance areas of interest.