Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

You found the test and shared it with the community; if you look at my posts where I use your name, I do say that my conclusion is in conflict with yours. Now, these tests were done in the summer of 2021, when there was no card on the market that could match the test's configuration.

The reality is that this test gives more than just random clocks: it estimates power consumption at those clocks. That is something your explanation can't account for. It's far more likely that they were just adding the DLSS function to the NVN2 API and that they chose target clocks because they tied them to estimated power consumption. If they were not sure of the clocks and power consumption they would be targeting with T239, they wouldn't have been designing the chip yet, but we know they were.
Your statement about the power consumption "estimate" is incorrect and backwards, but I won't get into that right now because it's a red herring.

You didn't address anything I said in my last post, so I have to conclude that you don't have any reason why they would use T239 clocks in these tests. Just that they may have arbitrarily chosen to do so, if we assume the clocks were known, because they were already simulating other aspects of the potential handheld/docked profiles -- so why not use the same clock speeds too, even though those clock speeds won't actually contribute to simulating anything, since they're being applied to unrelated hardware and are completely irrelevant to the test in the first place.

And that is, in fact, not impossible. But we have no reason to make that conclusion beyond "hey, what if?" It's miles away from "the only reasonable conclusion" and not grounds to continually tell people that I found numbers that prove something or other.
 
The tests were KPI profiling of DLSS execution under NVN2. That means they needed to keep everything the same between runs so that any change in the output metrics -- execution time, memory usage, and image tests -- could be tracked as they made changes to the code. They wanted to see improvements and guard against regressions. That code is what was being tested, not clocks, not power consumption, not T239. The resolution, quality mode, and P-State are relevant for exercising the code paths closely to what T239 would do, but the clocks are not relevant because they have zero effect on how the test executes, only how fast it executes. And how fast or slow the test runs is meaningless in absolute terms, only relative ones. But even if you hypothetically wanted the absolute time to match T239 for some reason, you would not be able to accomplish that by applying its clock speeds to a random RTX 20 series GPU.
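For anyone unfamiliar with that kind of workflow, here's a rough sketch of what a KPI regression check looks like in practice. Every file name, metric name, and threshold below is made up for illustration; this is just the general shape of "run the test, compare the numbers against the last baseline," not anything from the leaked scripts.

```python
# Rough sketch of a KPI regression check: compare the current run's metrics
# against a stored baseline and flag anything that got worse.
# All names and the 2% threshold are invented for illustration.
import json

def load_metrics(path):
    """Load a run's output metrics (e.g. execution time, memory use, image error)."""
    with open(path) as f:
        return json.load(f)

def find_regressions(baseline_path, current_path, tolerance=0.02):
    baseline = load_metrics(baseline_path)
    current = load_metrics(current_path)
    regressions = {}
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        # For these KPIs, higher is worse (milliseconds, megabytes, error score).
        if cur_value is not None and cur_value > base_value * (1 + tolerance):
            regressions[metric] = (base_value, cur_value)
    return regressions

# Example usage after a test run has written current_metrics.json:
# print(find_regressions("baseline_metrics.json", "current_metrics.json"))
```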

I could go on. We could talk about how timeline-wise, T239 didn't even physically exist at the time of the hack, and the test cases could have been written months or years earlier on top of that. But I've already made enough of an argument. If you want to keep going around telling people that these are T239 clocks -- invoking my name, no less -- then you really need to offer an argument for why they would use T239 clocks in these tests in light of everything I've laid out here. My belief is that the best possible argument one could make is "they did it arbitrarily," which is pretty weak, but I guess better than having no argument at all.

I think this is what's confusing me, because everyone agrees that this DLSS test was KPI (Key Performance Indicator) profiling of how the DLSS source code runs under NVN2.

Why is it then crazy to believe that the metrics the test was run under were chosen to see how well the DLSS code would hold up at specific resolutions and clocks? Nintendo and Nvidia have been playing around with DLSS for the next Switch, even testing on Xavier hardware. Something led them to a 12 SM GPU (which we know because NVN2 targets that) and to these supposedly arbitrary, apparently random clocks, even though the clock profiles line up with handheld and docked targets...

Part of Nvidia's talk at GTC today was about using AI to simulate and emulate steps of the chip fabrication process to speed things along efficiently. How is any of this different from how they estimate T239 might behave in final production, versus any of their other SoCs for which they give early preliminary specs, TDP, and process node (before any actual hardware exists)?
 
Your statement about the power consumption "estimate" is incorrect and backwards, but I won't get into that right now because it's a red herring.

You didn't address anything I said in my last post, so I have to conclude that you don't have any reason why they would use T239 clocks in these tests. Just that they may have arbitrarily chosen to do so, if we assume the clocks were known, because they were already simulating other aspects of the potential handheld/docked profiles -- so why not use the same clock speeds too, even though those clock speeds won't actually contribute to simulating anything, since they're being applied to unrelated hardware and are completely irrelevant to the test in the first place.

And that is, in fact, not impossible. But we have no reason to make that conclusion beyond "hey, what if?" It's miles away from "the only reasonable conclusion" and not grounds to continually tell people that I found numbers that prove something or other.
You found a test, and in that test there were clocks with power consumption figures for names. I've said before, in the post, that you don't believe those to be clocks used for Switch 2; I do. I have given you my reasoning. I'm not trying to change your mind, and I'm perfectly fine with my conclusion. If you want to stick with yours, that is cool with me; one day we will probably know what the clocks are, and hey, I might be proven right, which is cool enough for me. In the meantime it makes very little sense not to use those clocks in our speculation thread, and I have absolutely used the caveat "we don't have complete context for this test." However, I don't see any reasonable conclusion other than that these are target clocks for Switch 2. I've worked in sales, and Nvidia is adding DLSS to NVN2/Switch 2. Using a Turing card with target specs, and later using T239 and showing how much of an improvement T239 is over that Turing test, would absolutely be a sales tactic, even after the sale of T239, because you want continued business with Nintendo if you are Nvidia. Jensen sought a 20-year partnership from the onset... we are less than halfway through that right now.

"The only reasonable conclusion" is in regards to these clocks having power consumptions that Turing obviously could NEVER hit. 4.2w 660MHz in Drake's configuration would be absolutely impossible for Turing, and it doesn't get any better at the higher clocks, they actually track very closely to a 5nm Ampere power curve, which makes the argument that these clocks are related to T239 even more hard to deny. You do you, but don't tell me that there is another reasonable explanation for this information (The test with these clocks and these expected power consumptions) if they were not referencing a target spec for a future chip that was already being designed for NVN2 at this time, and literally ~9 months before physical samples started showing up, and they just randomly named clocks on even more random power consumptions, because they were testing on a Turing card and not T239... Isn't that far less reasonable than what I'm suggesting. Heck you even want to say "hey what if?" I'm just more convinced by the data then you are. I'm unable to change the outcome and perfectly content in all rumors so far, matching my expectations, so why even bother with trying to change my mind? It's best if this thread is not a hivemind anyways.

EDIT (hopefully this catches you before any reply you might make): We can't know 100% that these clocks are used in Switch 2; if that is your position, I agree. I believe it's very close to a certainty, but it isn't quite one yet. This is also 3 years old at this point, so the clocks might have changed.
 
Why is it then crazy to believe that the metrics the test was run under were chosen to see how well the DLSS code would hold up at specific resolutions and clocks?
"Under specific clocks" is fundamentally missing the key point here. Running a DLSS test app at 600 MHz on a 30 SM Turing desktop RTX 2060 doesn't tell you anything meaningful that will carry over to running it at 600 MHz on a 12 SM Ampere Tegra T239. There is simply no reason to choose that clock. The only reason a clock speed is even present in the test case at all is that the frequency needs to be locked to prevent it from being dynamically adjusted, and so that you get reproducible results across runs.
 


This isn't really directly applicable, since it's focused on PC and Intel, but it does provide a pretty decent overview of how GPU drivers generally function.

Notably, console games will typically have a built in shader cache, rather than compiling shaders on the fly (though this is not universal), and on Switch in particular, it's generally understood from dataminers that the user mode driver comes packaged with each individual game for compatibility purposes.
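To make the shader cache point concrete, here's a toy illustration of the difference between compiling at runtime and shipping a prebuilt cache; the function names and cache format are invented, and this isn't NVN or any real driver API.

```python
# Toy illustration: runtime shader compilation vs. a prebuilt cache shipped with
# the game. Names and the "compile" step are invented; not a real driver API.
import hashlib

prebuilt_cache = {}  # source hash -> compiled blob, normally built offline and shipped

def compile_shader(source: str) -> bytes:
    # Stand-in for an expensive driver compile (the step that causes PC hitching).
    return b"compiled:" + source.encode()

def get_shader(source: str) -> bytes:
    key = hashlib.sha256(source.encode()).hexdigest()
    blob = prebuilt_cache.get(key)
    if blob is None:
        # PC-style fallback: compile on the fly at runtime.
        blob = prebuilt_cache[key] = compile_shader(source)
    return blob
```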
 
In order to avoid my speculation being taken as fact, I'm going to refrain from posting in this thread.

Did you just confirm that Nintendo bought Square Enix?

 
You found a test, and in that test there were clocks with power consumption figures for names. I've said before, in the post, that you don't believe those to be clocks used for Switch 2; I do. I have given you my reasoning. I'm not trying to change your mind, and I'm perfectly fine with my conclusion. If you want to stick with yours, that is cool with me; one day we will probably know what the clocks are, and hey, I might be proven right, which is cool enough for me. In the meantime it makes very little sense not to use those clocks in our speculation thread, and I have absolutely used the caveat "we don't have complete context for this test." However, I don't see any reasonable conclusion other than that these are target clocks for Switch 2. I've worked in sales, and Nvidia is adding DLSS to NVN2/Switch 2. Using a Turing card with target specs, and later using T239 and showing how much of an improvement T239 is over that Turing test, would absolutely be a sales tactic, even after the sale of T239, because you want continued business with Nintendo if you are Nvidia. Jensen sought a 20-year partnership from the onset... we are less than halfway through that right now.

"The only reasonable conclusion" is in regards to these clocks having power consumptions that Turing obviously could NEVER hit [snip] in Drake's configuration would be absolutely impossible for Turing, and it doesn't get any better at the higher clocks, they actually track very closely to a 5nm Ampere power curve, which makes the argument that these clocks are related to T239 even more hard to deny. You do you, but don't tell me that there is another reasonable explanation for this information (The test with these clocks and these expected power consumptions) if they were not referencing a target spec for a future chip that was already being designed for NVN2 at this time, and literally ~9 months before physical samples started showing up, and they just randomly named clocks on even more random power consumptions, because they were testing on a Turing card and not T239... Isn't that far less reasonable than what I'm suggesting. Heck you even want to say "hey what if?" I'm just more convinced by the data then you are. I'm unable to change the outcome and perfectly content in all rumors so far, matching my expectations, so why even bother with trying to change my mind? It's best if this thread is not a hivemind anyways.

EDIT (hopefully this catches you before any reply you might make): We can't know 100% that these clocks are used in Switch 2; if that is your position, I agree. I believe it's very close to a certainty, but it isn't quite one yet. This is also 3 years old at this point, so the clocks might have changed.
You're still not responding to anything I said in my post about the nature of the test cases, so I guess this is pointless to continue. Just pointing to the power consumption numbers and the fact (which I acknowledged repeatedly) that the profiles as a whole are almost certainly based around handheld/docked profiles, doesn't justify anything beyond what I already said here:

Just that they may have arbitrarily chosen to do so, if we assume the clocks were known, because they were already simulating other aspects of the potential handheld/docked profiles -- so why not use the same clock speeds too, even though those clock speeds won't actually contribute to simulating anything, since they're being applied to unrelated hardware and are completely irrelevant to the test in the first place.
So, okay.



Now, for the benefit of other people who aren't clear on the full context, I will address the subject of the power consumption numbers (this is not an argument about the meaning of the clock speeds past this point, since regardless of the power consumption numbers, everything I said about the test cases and why the clocks aren't significant to them still applies). If you want the actual numbers, see the original post, which shouldn't be quoted here as it was in hide tags.

The power consumption numbers are filenames. They're labels for a collection of test case settings. They aren't outputs or estimates and aren't measured or tracked in any way.

Given the relationship between the settings they define, there's a pretty clear pattern that would be relevant to Nintendo's hardware: lower power entails lower (relative) clocks and a 1080p output resolution, higher power entails higher (relative) clocks and a 4K output resolution. It makes sense to benchmark the DLSS code under these conditions, when this is very close to how it would run in practice on Nintendo's hardware. The goal is to track the code's KPIs, so you're starting from a baseline on day 1 and then checking each change you make against the last baseline. If your next code change produces better results than the previous test run, great; if it gives you worse results, you need to work on it some more before you commit. You need the path through your code to be as close to the final hardware as possible, since that way you know any improvements will carry over to it, and you aren't missing any slowdowns that will only manifest on the final hardware because the code runs differently.
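To make the "labels, not measurements" point concrete, here's a purely hypothetical sketch of what wattage-named test profiles bundling settings together could look like. Every number below is invented for illustration and is not from the leak; the point is only that the wattage in the name never feeds into the test itself.

```python
# Purely hypothetical sketch: profiles named by a power figure, where the name is
# just a label for a bundle of settings. Every value here is invented; these are
# not the leaked numbers. Nothing measures or tracks the wattage at test time.
TEST_PROFILES = {
    "profile_4w":  {"gpu_clock_mhz": 600,  "output_res": (1920, 1080), "dlss_mode": "performance"},
    "profile_8w":  {"gpu_clock_mhz": 1000, "output_res": (3840, 2160), "dlss_mode": "performance"},
    "profile_10w": {"gpu_clock_mhz": 1200, "output_res": (3840, 2160), "dlss_mode": "quality"},
}

def run_profile(name):
    settings = TEST_PROFILES[name]
    # Lock the clock, render at the given resolution, record time/memory/image KPIs.
    # The "4w"/"8w"/"10w" in the profile name never enters the test logic at all.
    ...
```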

But since the power consumption isn't actually part of the test, where did these specific wattage numbers come from, and why is power consumption used as the topline definition of the test cases in the first place? Well, my belief is that they came directly from Nintendo.

Beginning with the Switch (or actually with the Indy/Switch prototypes), power consumption is basically Nintendo's starting point for each component or block in new hardware. They have a power budget and everything has to fit within it. So long before something like clock speeds would be determined, or core count or even architecture, Nintendo could tell Nvidia that the board is going to have a certain power draw limit and that there will be X watts allocated to the SoC, and then continue dividing that into budgets for the CPU and GPU. Nvidia's job then isn't to deliver a GPU hitting certain clocks, reaching certain TFLOPs, or outputting certain resolutions and FPS. It's to deliver the best possible GPU that can fit into that power budget (as well as the, er, money budget).
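As a worked illustration of that budget-first flow (all of these numbers are made up on the spot, not actual Switch 2 figures):

```python
# Made-up numbers purely to illustrate the top-down budgeting described above.
# None of these are actual Switch 2 figures.
board_budget_w = 10.0                                # total handheld power draw allowed
display_and_misc_w = 2.5                             # screen, wireless, storage, losses
soc_budget_w = board_budget_w - display_and_misc_w   # what the SoC gets to work with
cpu_budget_w = 2.5
gpu_budget_w = soc_budget_w - cpu_budget_w           # "best possible GPU that fits in this"
print(f"GPU gets {gpu_budget_w:.1f} W out of a {board_budget_w:.1f} W board budget")
```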

So as far as Nvidia is concerned, "handheld mode" is really "X watt mode." And that's what I think those test case labels are saying. Not everything is clear -- for example, why are there three profiles instead of two? Or, even if the wattages are real values, what part of the power budget do they represent: GPU only, the whole SoC, or something else? As I said at the time, we can't draw firm conclusions from the specific numbers (although we may have had some validation of one of them via the 1080p screen rumors), but the overall picture was obvious in showing Nvidia testing DLSS execution suitable for both the handheld and docked modes of a future hybrid console in the same vein as the Switch, and with the same performance areas of interest.
 
In order to avoid my speculation being taken as fact, I'm going to refrain from posting in this thread.
"We found a X00 MHz clock speed for Switch 2 in the Nvidia hack" is a statement of fact. I disagree with that interpretation of what was in the hack, so I think the statement is incorrect. That's not the same as telling someone to stop speculating. You should post what you want and you can just ignore me if you don't want to have a debate about it. I'm still just going to tell people my opinion about the information the hack when it comes up.
 
Because AAA developers make their games for PC, PS and Xbox, most probably won't make day and date Nintendo releases. Just later ports. They have their priority platforms, and Switch 2 will be a bonus platform for them that comes later.

Capcom isn't just any random AAA developer, and the MH series isn't just any random IP. For the most part, since the Tri days it has been basically synonymous with Nintendo consoles and handhelds.

Multiple MH games (Tri, 3U, 4, G, GU, Rise) were initially released exclusively on a Nintendo console or handheld.

Some people have been playing the MH series for much longer than MH World has existed.

I don't know why it would be such a shocker if Nintendo and Capcom have decided to make MH: Wilds part of the Switch 2 launch lineup. Maybe it's not in the launch lineup or window, but if it is, that really shouldn't be so surprising either.
 
"Under specific clocks" is fundamentally missing the key point here. Running a DLSS test app at 600 MHz on a 30 SM Turing desktop RTX 2060 doesn't tell you anything meaningful that will carry over to running it at 600 MHz on a 12 SM Ampere Tegra T239. There is simply no reason to choose that clock. The only reason a clock speed is even present in the test case at all is that the frequency needs to be locked to prevent it from being dynamically adjusted, and so that you get reproducible results across runs.
What if they ran it on a 12 SM Ampere card? How would that be so different from Drake? It's easy for Nvidia to disable hardware.
 
In order to avoid my speculation being taken as fact, I'm going to refrain from posting in this thread.
Forgive me for butting in without being invited. I respect your decision; it seems sound to me.

Actually, I would go as far as to say that you should have taken it earlier.
 
What if they ran it on a 12 SM Ampere card? How would that be so different from Drake? It's easy for Nvidia to disable hardware.
If there was any evidence of that, I would have included it in the original post. There could have been a comment stating it or even another script to configure the GPU that way, but there isn't. It's still not an important part of the test, since as a local test script checked in alongside the source code, its purpose was for each developer to compare before and after their own code changes, which means the differences are relative and don't have to be consistent across different PCs. But yeah, there are plenty of comments for how a developer needs to set things up to make use of the scripts, such as a link to download the (Windows) executable that locks the GPU clocks, but nothing about configuring the GPU beyond that.

 
You can't just say they can "reasonably be assumed to be" something without justifying your interpretation.

There is no reason to use T239 target clocks on a different GPU on Windows. There's literally no value to doing that. You might as well use a random number generator for how meaningful those clocks would be in the test. Using clocks that are relative to each other similar to the (not-yet-existent) handheld and docked profiles means that you'll get execution times distinct from each other, which makes looking at your results more intuitive, and that's the reason why the clocks are different at all. Beyond that, they're arbitrary.

The tests were KPI profiling of DLSS execution under NVN2. That means they needed to keep everything the same between runs so that any change in the output metrics -- execution time, memory usage, and image tests -- could be tracked as they made changes to the code. They wanted to see improvements and guard against regressions. That code is what was being tested, not clocks, not power consumption, not T239. The resolution, quality mode, and P-State are relevant for exercising the code paths closely to what T239 would do, but the clocks are not relevant because they have zero effect on how the test executes, only how fast it executes. And how fast or slow the test runs is meaningless in absolute terms, only relative ones. But even if you hypothetically wanted the absolute time to match T239 for some reason, you would not be able to accomplish that by applying its clock speeds to a random RTX 20 series GPU.

I could go on. We could talk about how timeline-wise, T239 didn't even physically exist at the time of the hack, and the test cases could have been written months or years earlier on top of that. But I've already made enough of an argument. If you want to keep going around telling people that these are T239 clocks -- invoking my name, no less -- then you really need to offer an argument for why they would use T239 clocks in these tests in light of everything I've laid out here. My belief is that the best possible argument one could make is "they did it arbitrarily," which is pretty weak, but I guess better than having no argument at all.
So if I get you right, you think the relevant numbers here might be 70.45% and 109%? Because that's the relative increase between the "portable" profile and the two docked profiles in these tests.
 
"We found a X00 MHz clock speed for Switch 2 in the Nvidia hack" is a statement of fact. I disagree with that interpretation of what was in the hack, so I think the statement is incorrect. That's not the same as telling someone to stop speculating. You should post what you want and you can just ignore me if you don't want to have a debate about it. I'm still just going to tell people my opinion about the information the hack when it comes up.
I responded to this thread because of this post. It's not a statement of fact. It's actually a premise, a statement of opinion based on a set of logic.
You're still not responding to anything I said in my post about the nature of the test cases, so I guess this is pointless to continue. Just pointing to the power consumption numbers and the fact (which I acknowledged repeatedly) that the profiles as a whole are almost certainly based around handheld/docked profiles, doesn't justify anything beyond what I already said here:


So, okay.



Now, for the benefit of other people who aren't clear on the full context, I will address the subject of the power consumption numbers (this is not an argument about the meaning of the clock speeds past this point, since regardless of the power consumption numbers, everything I said about the test cases and why the clocks aren't significant to them still applies). If you want the actual numbers, see the original post, which shouldn't be quoted here as it was in hide tags.

The power consumption numbers are filenames. They're labels for a collection of test case settings. They aren't outputs or estimates and aren't measured or tracked in any way.

Given the relationship between the settings they define, there's a pretty clear pattern that would be relevant to Nintendo's hardware: lower power entails lower (relative) clocks and a 1080p output resolution, higher power entails higher (relative) clocks and a 4K output resolution. It makes sense to benchmark the DLSS code under these conditions, when this is very close to how it would run in practice on Nintendo's hardware. The goal is to track the code's KPIs, so you're starting from a baseline on day 1 and then checking each change you make against the last baseline. If your next code change produces better results than the previous test run, great; if it gives you worse results, you need to work on it some more before you commit. You need the path through your code to be as close to the final hardware as possible, since that way you know any improvements will carry over to it, and you aren't missing any slowdowns that will only manifest on the final hardware because the code runs differently.

But since the power consumption isn't actually part of the test, where did these specific wattage numbers come from, and why is power consumption used as the topline definition of the test cases in the first place? Well, my belief is that they came directly from Nintendo.

Beginning with the Switch (or actually with the Indy/Switch prototypes), power consumption is basically Nintendo's starting point for each component or block in new hardware. They have a power budget and everything has to fit within it. So long before something like clock speeds would be determined, or core count or even architecture, Nintendo could tell Nvidia that the board is going to have a certain power draw limit and that there will be X watts allocated to the SoC, and then continue dividing that into budgets for the CPU and GPU. Nvidia's job then isn't to deliver a GPU hitting certain clocks, reaching certain TFLOPs, or outputting certain resolutions and FPS. It's to deliver the best possible GPU that can fit into that power budget (as well as the, er, money budget).

So as far as Nvidia is concerned, "handheld mode" is really "X watt mode." And that's what I think those test case labels are saying. Not everything is clear -- for example, why are there three profiles instead of two? Or, even if the wattages are real values, what part of the power budget do they represent: GPU only, the whole SoC, or something else? As I said at the time, we can't draw firm conclusions from the specific numbers (although we may have had some validation of one of them via the 1080p screen rumors), but the overall picture was obvious in showing Nvidia testing DLSS execution suitable for both the handheld and docked modes of a future hybrid console in the same vein as the Switch, and with the same performance areas of interest.
This conclusion wasn't made in a vacuum; we have years of information about Switch 2 gathered here. For instance, I know (I'll say "I" specifically for this bit, because I don't know how widespread this information is, but some people in this thread know it) that Nintendo is targeting ~3 hours of battery life for Switch 2. The battery for the Switch 2 is very unlikely to be half the size of the Switch's, thus the "4.2 W" listed here is the GPU's power budget -- given by Nintendo, I agree -- and Nvidia is targeting 660 MHz in this test for that power budget. That is what I've always said here.
Beginning with the Switch (or actually with the Indy/Switch prototypes), power consumption is basically Nintendo's starting point for each component or block in new hardware. They have a power budget and everything has to fit within it. So long before something like clock speeds would be determined, or core count or even architecture
Yes, but in the actual context of this test, the power consumption figures come a year or more after Nintendo and Nvidia would have chosen an architecture and designed T239 to some specs, with some target goals in mind. The engineering sample was less than a year away, and we know that T239 was in testing at the time, thanks to another user I won't name here. Clocks and power consumption were clearly estimated by this point. That user even told us that they wouldn't be producing the chip in 2021, and engineering samples didn't happen until April 2022. How do we know that? We don't KNOW it; we infer it from heavy speculation around the information we have for T239 at that time -- things like hardware components being added for support. That could all still have been done in preparation for real hardware at a later time, but it doesn't make a lot of sense to do it that far ahead.

They knew clocks, they knew power consumption targets for Switch 2, but these tests, which are supposed to improve over time, were done without certain GPU parameters in place? Unlikely. The assumption that should be made is the one requiring the fewest assumptions in the first place, and that is that these clocks and power consumptions were listed in the test because they were targeting them. This was all almost 3 years ago now, so even if it were 100% fact, we can't say for sure that the clocks didn't change, but that shouldn't be the assumption either. The assumption/speculation for this thread is that these are Switch 2 clocks. Yes, you reported the test, but you don't have to tie yourself to the conclusion that the thread comes to, and telling people not to come to that conclusion because it's somehow no longer speculation is just an opinion.



Speculation is a theory, a set of logic that leads to a premise. We obviously should speculate about the DLSS test found in NVN2. That leads to us speculating on the clocks being for different profile modes (portable, docked, boost?), which leads to the naming of these profiles being power consumptions, which leads to them being unrelated to the hardware they were tested on (no Nvidia hardware in summer 2021 could reach these performance targets), which leads to Switch 2 being invoked as the target for NVN2's hardware -- and, as you said, these are numbers Nintendo and Nvidia would have known at least a year earlier, before an architecture was even decided on.

A premise is a statement or an idea that serves as the basis of an argument. I've laid out my argument for why I've concluded that these are Switch 2 specs. I state it because it's a solid premise. Premises are not facts, and not all statements are facts either. The statement that Switch 2 GPU clocks are known is simply a statement of my opinion, based on my premise, built on speculation... all made in a speculation thread for Switch 2. What you want is facts, and we should discuss facts as often as we can, but we should never shy away from good speculation, and I fully believe my speculation is good and valid. I get why it would annoy people if it were used in the news, for instance, but inside a speculation thread these types of statements should absolutely be valid.

The reason I was moving to refrain from posting in this thread is simply that me making these types of statements has rubbed people the wrong way. Many people will make a statement like "Switch 2 isn't coming until 2026," for instance; it's not too different from what I'm doing, my statement just has, in my opinion, a much stronger premise to back it up. You not wanting to make these statements doesn't mean you can control me making them, but if we are going to drain the fun out of these speculation threads and only discuss facts, I'm not interested in that... It's not what I've spent 25 of my 39 years alive doing.
 
So if I get you right, you think the relevant numbers here might be 70.45% and 109%? Because that's the relative increase between the "portable" profile and the two docked profiles in these tests.
I don't think we can draw any conclusions from the clock numbers, absolute or percentage-wise. I think the only reason the clock speeds are even in the DLSS test is because they need to be locked at some value to create reproducible results. The other (unrelated) test I mentioned above says that explicitly, and gives an example with arbitrary frequencies for an RTX 20 GPU.

How did they come up with three arbitrary values for the DLSS test? Well, it's a possibility that they arbitrarily chose the values from T239, because it just seems fitting, I suppose, even though those clocks would be of no value to the actual test. I haven't discounted that. I just don't think it's more likely than, say, choosing three adequately spaced values to have some separation in the test output that is roughly proportional to the separations you would see in the actual hardware profiles. Especially if they created this test early in development, when they needed to start exercising the code under NVN before they had a target for core count, let alone clock speed.

Maybe that arbitrariness sounds silly spelled out like that, but it's definitely a software developer thing. I've often written test cases where I'm just plugging in plausible numbers that have no specific meaning. Like picking 720p for a video output test, where resolution isn't even what's being tested, because 1080p would take longer and waste my time, but I don't want to use 360p because it feels like it wouldn't put the machine through its paces if it finishes too fast and the fan doesn't spin up.
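As a trivial illustration of what "plausible but arbitrary" looks like in a test (names and values invented, not from any real test suite):

```python
# Trivial illustration of plausible-but-arbitrary test parameters: the resolution
# isn't what's under test, it just has to be realistic enough to exercise the code.
# Everything here is invented for the example.
import unittest

TEST_WIDTH, TEST_HEIGHT = 1280, 720   # arbitrary: big enough to mean something,
                                      # small enough not to waste the developer's time

def fake_encode(frame: bytes) -> bytes:
    # Stand-in for the code actually under test.
    return frame[: len(frame) // 2]

class VideoOutputTest(unittest.TestCase):
    def test_encode_produces_smaller_output(self):
        frame = bytes(TEST_WIDTH * TEST_HEIGHT * 4)   # fake RGBA frame
        encoded = fake_encode(frame)
        self.assertTrue(0 < len(encoded) < len(frame))

if __name__ == "__main__":
    unittest.main()
```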

I don't know about the "handheld" clock, but the two other ones are standard frequencies for Nvidia desktop GPUs you can find plenty of results for by just searching the number online. That's enough for me to see it as purely arbitrary.
 
Considering how much Rise sold, Capcom would be absolute fools to not make the Switch 2 a "priority platform". Hell, Rise was a Switch exclusive for 10 months, so the MH team clearly see the value in supporting – and even giving special treatment to – Nintendo platforms when the technology is there, as they have since Tri.
But Rise showed them that they don't even have to release the main game day and date on Switch. They can make a separate game much later and still sell lots of copies. Switch 2 could very well just get Rise 2 in a few years.
 
I don't think we can draw any conclusions from the clock numbers, absolute or percentage-wise. I think the only reason the clock speeds are even in the DLSS test is because they need to be locked at some value to create reproducible results. The other (unrelated) test I mentioned above says that explicitly, and gives an example with arbitrary frequencies for an RTX 20 GPU.

How did they come up with three arbitrary values for the DLSS test? Well, it's a possibility that they arbitrarily chose the values from T239, because it just seems fitting, I suppose, even though those clocks would be of no value to the actual test. I haven't discounted that. I just don't think it's more likely than, say, choosing three adequately spaced values to have some separation in the test output that is roughly proportional to the separations you would see in the actual hardware profiles. Especially if they created this test early in development, when they needed to start exercising the code under NVN before they had a target for core count, let alone clock speed.

Maybe that arbitrariness sounds silly spelled out like that, but it's definitely a software developer thing. I've often written test cases where I'm just plugging in plausible numbers that have no specific meaning. Like picking 720p for a video output test, where resolution isn't even what's being tested, because 1080p would take longer and waste my time, but I don't want to use 360p because it feels like it wouldn't put the machine through its paces if it finishes too fast and the fan doesn't spin up.

I don't know about the "handheld" clock, but the two other ones are standard frequencies for Nvidia desktop GPUs you can find plenty of results for by just searching the number online. That's enough for me to see it as purely arbitrary.

Thanks for the updated information, and not to seem combative, but I'm just not one who believes in coincidences when it comes to science and engineering. Tests are almost always an expansive way to gauge the boundaries of where something either holds up or falls apart, so curiosity is the determining factor in wanting a definitive answer...

You have been a valued and strong contributing member in deep diving through all of this information when these leaks were happening in real-time, so don't take any of this the wrong way at all.
 
But Rise showed them that they don't even have to release the main game day and date on Switch. They can make a separate game much later and still sell lots of copies. Switch 2 could very well just get Rise 2 in a few years.

They can do fully multiplatform Wilds (including ReDraketed) AND do a "Rise 2" game from the portable team, also fully multiplatform!
 
They can do fully multiplatform Wilds (including ReDraketed) AND do a "Rise 2" game from the portable team, also fully multiplatform!
I'm just very sceptical about Capcom's Switch 2 support; they did the whole "increase the Switch's RAM" schtick with Nintendo, and the only thing they used that increased RAM for was... cloud ports. I would be positively surprised if that changes to such a large degree that they make day and date releases of their new games on Switch 2.
 
I'm just very sceptical about Capcom's Switch 2 support; they did the whole "increase the Switch's RAM" schtick with Nintendo, and the only thing they used that increased RAM for was... cloud ports. I would be positively surprised if that changes to such a large degree that they make day and date releases of their new games on Switch 2.

Pretty sure that Capcom RAM thing is just an urban myth / misinterpretation. And a funny running gag, of course. ;D

The difference between Capcom's stance pre-Switch 1 and pre-Switch 2 is ~8 million MonHun Rise units sold. (This is the last number before the PC port hit, if anyone has more recent numbers for Switch only, hit me up.)
 
Moore's law, and Math.

The guy (I believe his name was Cain) who discovered Rock did more than just think of a cool way to kill people. Rock could be used to make Sharp Rock. Sharp Rock could be made into knives, knives could be attached to Stick (older technology) to make Spear, Spear could be made small to make Arrow. Sharp Rock could also be turned into Axe, Axe could fell tree, Tree plus Rock could become Trebuchet.

The early discovery of a new class of technology always leads to rapid innovation, as there are lots of obvious and untapped applications. But this era of innovation can be intensified if the new technology can be used to refine itself. Rock can be used to make better Rock.

The transistor was not just a new class of technology, but it was one that had an obvious technological improvement - make it smaller. And how would you do that? With a whole bunch of automation made possible by... the transistor.

This led to 30 years of increasing power, simply by riding the node shrink. Which is not to be dismissive of the electrical engineers and physicists who worked hard to make this possible, but the fact that Gordon Moore was able to chart three decades of technological progress pretty accurately shows that it was an era of low hanging fruit.

Along the way came Math - Math is obviously thousands of years old, and the basis of software is Math. The neural network is 81 years old, but practical software applications didn't exist until we made powerful enough GPUs. The math was waiting there for the hardware to catch up. There have been huge advances in the field of software engineering, and of course, advances in Math - but again, Ada Lovelace wrote the first program well before the first computer. These things were ready to go the moment Electronic Math Machines were viable.
Wow! Thanks for the clarification, oldpuck. It really answered my question; I'd always wondered how technology could make leaps and bounds so fast.

Pretty sure that Capcom RAM thing is just an urban myth / misinterpretation. And a funny running gag, of course. ;D
Wasn't it discovered in a leak that the Switch's RAM was already decided well before Capcom commented on it?
 
Pretty sure that Capcom RAM thing is just an urban myth / misinterpretation. And a funny running gag, of course. ;D

The difference between Capcom's stance pre-Switch 1 and pre-Switch 2 is ~8 million MonHun Rise units sold. (This is the last number before the PC port hit, if anyone has more recent numbers for Switch only, hit me up.)
That, and if, say, Nintendo told any publisher they would use the specs of an Nvidia Shield, the response would be "cool, but pls add some more RAM".
 
Maybe that arbitrariness sounds silly spelled out like that, but it's definitely a software developer thing. I've often written test cases where I'm just plugging in plausible numbers that have no specific meaning.

"I sat at my desk, stared into the garden and thought '42 will do'. I typed it out. End of story."
 
and the new spatial hash radiance cache, for GPUs that can't run NRC



SHARC is looking really interesting, wondering what the frametime cost is compared to NRC.

Top is no SHaRC, bottom is SHaRC.

This slider shows SHaRC featured in Cyberpunk 2077. With SHaRC enabled, the GI lighting is able to cover more of the scene with negligible performance impact.

[Comparison screenshots: light reaching further into the scene, SHaRC off vs. SHaRC on]
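Very roughly, the idea behind a spatial hash radiance cache is to quantize world-space positions into cells, hash each cell into a table, and accumulate radiance samples there so later shading can reuse them instead of tracing new rays. Here's a toy CPU-side sketch of just that concept; it is not the actual SHaRC SDK or its API, and the cell size is arbitrary.

```python
# Toy sketch of the spatial-hash-radiance-cache idea: quantize a position into a
# cell, hash the cell, accumulate radiance there, and reuse it on later lookups.
# Concept only; not the actual SHaRC SDK or its API.
CELL_SIZE = 0.5                 # metres per cache cell (arbitrary)
cache = {}                      # cell key -> (accumulated RGB, sample count)

def cell_key(pos):
    return tuple(int(c // CELL_SIZE) for c in pos)

def add_sample(pos, radiance):
    rgb, n = cache.get(cell_key(pos), ((0.0, 0.0, 0.0), 0))
    cache[cell_key(pos)] = (tuple(a + b for a, b in zip(rgb, radiance)), n + 1)

def lookup(pos):
    rgb, n = cache.get(cell_key(pos), ((0.0, 0.0, 0.0), 0))
    return tuple(c / n for c in rgb) if n else None   # averaged cached radiance, or a miss
```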
 
Did uhhh



Did people expect Blackwell to actually be good?

Feels like Kepler should have had more realistic expectations (that it would be bad)

The only interesting thing NVIDIA can do right now for their cards is release a card with a much higher percentage dedicated to the tensor cores... That would be interesting...
 
My main question for the future of Nintendo hardware is whether Ray Reconstruction, Super Resolution, and NRC can run at FP4.

I believe they currently run at FP16? FP4 could increase the inference speed by 4x... in theory, as it's not clear they would even work at that low a precision.
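For rough intuition on where a "4x" figure comes from, and why precision is the open question: FP4 stores a quarter of the bits of FP16, so memory traffic shrinks by 4x and tensor throughput typically scales with the narrower format, but each value only has 16 possible codes. Here's a toy sketch of that trade-off using a generic uniform 4-bit quantizer (not NVIDIA's actual FP4 format):

```python
# Toy illustration of the FP16 -> 4-bit trade-off: 4x fewer bits per weight, but
# only 16 representable levels, which is where the accuracy worry comes from.
# Generic uniform quantizer; not NVIDIA's actual FP4 format.
import random

def quantize_dequantize_4bit(values):
    lo, hi = min(values), max(values)
    step = (hi - lo) / 15 or 1.0                      # 16 levels -> 15 steps
    return [lo + round((v - lo) / step) * step for v in values]

weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
approx = quantize_dequantize_4bit(weights)
err = sum(abs(a - b) for a, b in zip(weights, approx)) / len(weights)
print(f"storage: {16 * len(weights)} bits -> {4 * len(weights)} bits (4x smaller)")
print(f"mean absolute quantization error: {err:.4f}")
```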

Separating the RT cores and tensor cores out into separate chips (without a latency penalty), so that Nintendo could order a chip with a relatively higher proportion of RT and tensor cores, would also be very helpful for the Switch 3.
 
Anyone have a quick vague rundown that will subsequently have to be corrected and detailed repeatedly over the next 4 pages?

(i kid obviously, don't do it - unless you're going to do it properly)
 
The delay will definitely give me enough time to solve oldpuck's puzzle hinting at the codename, at least

And I'm glad I almost stopped to think about it since Rebirth released anyway!
 
Nate has a great point in his video: maybe Nintendo wants a stronger first party lineup, but it messes up all the third parties' hopes for the holiday season. If I were a developer and my game had been planned to launch with the console this November/December and was now moved to 2025, I would be pissed sooo much.
 
But Rise showed them that they don't even have to release the main game day and date on Switch. They can make a separate game much later and still sell lots of copies. Switch 2 could very well just get Rise 2 in a few years.
Rise 2?

Seriously though, how familiar are you with the MH series in general? If you've been playing various MH games for years like I have, you would know that "Rise 2" is not going to be a thing.
 
Nintendo has always looked out for number one, and perhaps correctly since it's first party software that ultimately pushes Nintendo hardware. Still, Nintendo has had a decent relationship with third parties this gen that I hope will get better next gen, so hopefully the delay doesn't damage business relationships too much. Especially if it is a "selfish" software-caused delay vs. a manufacturing or hardware issue.
 
Nate has a great point in his video: maybe Nintendo wants a stronger first party lineup, but it messes up all the third parties' hopes for the holiday season. If I were a developer and my game had been planned to launch with the console this November/December and was now moved to 2025, I would be pissed sooo much.
If Nintendo can't sell first party games, then the third parties' own games won't sell either. They can be pissed all they want, but the alternative is worse for them.

Nintendo has always looked out for number one, and perhaps correctly since it's first party software that ultimately pushes Nintendo hardware. Still, Nintendo has had a decent relationship with third parties this gen that I hope will get better next gen, so hopefully the delay doesn't damage business relationships too much. Especially if it is a "selfish" software-caused delay vs. a manufacturing or hardware issue.
It's not going to do anything. Not supporting a system because it launched 3-6 months late just shows you're not cut out to be in business.
 
Nate Podcast

crude summary of key points:

nate doesn't know anything about an announcement timing, pretty uncertain

nintendo were asking developers for assets for trailers (switch 2 titles) (no exact date given for when this occurred)

delay could be based on many things, pretty uncertain

that's pretty much it
 

