
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Seeing it all listed as “SPRed” honestly made me think of the word “spread”, like spread your wings, meaning wings on your cap. And wings on Mario's cap made me think of Mario 64.

Super Mario 64 remake inbound!!!

A direct Super Mario 64 sequel (i.e. in another castle with paintings leading to different worlds) would be a dream of mine.

Chance of happening:

[image: "I'm not saying..." reaction meme]
 
How would PC help the Switch 2 more?

In the sense that more developers will target lower-end PCs, for example the RTX 2000 series?

Also, I think oldpuck mentioned that the Switch 2 would actually help Microsoft more, mostly because developers will most likely optimise for the Switch 2 first, then the Series S later.
Yeah, except if it's a Japanese game. 80% of those games love to avoid Xbox.
 
Most likely either the P stands for "Production"...
It does not. Prod vs. dev etc. is indicated by the environment identifier, the part after 'p01' in all of these URLs:
lp1: Retail.
dp1: Development on retail(?).
dd1: Third-party development.
td1: SDK/firmware development.
jd1: Firmware QA/compatibility testing.
sp1: Lotcheck on retail.
sd1: Lotcheck on development.
xd1/yd1/zd1: Internal first-party game development.
 
If I understand what you're getting at properly, I think you're right. So if it was a 30fps game but the concurrent DLSS (and whatever further post-processing) work took less than 16.6ms, the output wouldn't need to be 33.3ms behind just because that's what each frame has been given.

30fps frame output (on a 60Hz screen) could go from
BBCCDDEE
to
ABBCCDDE

and maybe the difference could be even less on a variable refresh rate screen? Though I don't feel I have a solid enough understanding of how all the timing stuff works there to say with certainty.
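
If it helps to picture the shift, here's a quick toy script (all made-up numbers on my part, nothing measured from real DLSS or real games) that maps 30fps frames onto 60Hz refresh slots and prints which frame each refresh can show:

```python
# Toy model: frames start every 33.3ms; each becomes displayable once its
# (hypothetical) total work time has elapsed, and is shown at the next 60Hz
# refresh after that. Letters A, B, C... are successive frames, '-' = nothing yet.

REFRESH = 1000 / 60   # ~16.7ms per 60Hz refresh

def display_sequence(work_ms, n_frames=4):
    out = []
    for slot in range(n_frames * 2):                 # two refreshes per 30fps frame
        refresh_time = slot * REFRESH
        ready = [i for i in range(n_frames)
                 if i * 2 * REFRESH + work_ms <= refresh_time]
        out.append(chr(ord('A') + ready[-1]) if ready else '-')
    return ''.join(out)

print(display_sequence(33.3))   # work fills the whole frame: "--AABBCC"
print(display_sequence(16.0))   # work fits in one refresh:   "-AABBCCD" (one refresh earlier)
```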

In my day we called those "blue coins"!

This is what I was thinking, but then there's this:

The thing is that when you overlap DLSS, you get the following situation: On the CPU, the updates for frame 1 (including, importantly, the user input) have finished, and now the GPU will render the base image before DLSS using the ALUs (non-tensor compute hardware); let's call it stage A. Then frame 1 needs to go through DLSS, which we call stage B, and at the end frame 1 needs to have post-processing applied to the post-DLSS image, called stage C. By the time frame 1 enters stage B, frame 2 has performed its CPU processes (and gathered input based on what is on the screen) and can enter stage A. Once stage B in frame 1 finishes, stage C can start for frame 1. Only once stage C is finished do we output frame 1 to the screen.

What this means is that we can only output frame 1 once stage C is finished. The benefit we extract from overlapping DLSS is that instead of having stage A + B + C take the total time of one frame (17 ms for 60 fps, 33 ms for 30 fps), we can now have stage A + C take up almost all of the frame time without stage B reducing the available time for native rendering and for post-processing. In practice, this likely means that there is more time to produce a high quality base image before DLSS. However, the consequence is that the full rendering of a frame, which includes stages A, B, and C, does not finish within the boundary of a frame. This means that we "miss" outputting the current frame, and have to wait one frame before we can output the current frame. Because of overlapping of the stages, this does not compound, but it does mean that each frame's display happens one frame later than the moment the CPU gathered user input for it, and therefore we experience one frame's worth of input lag. Essentially, the CPU has gathered input for frame 2 while the player is viewing frame 1, and therefore their input is mismatched by one frame.
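
Here's a tiny toy sketch of that timing, with completely made-up stage costs (none of these are real DLSS or game numbers), comparing the overlapped case against squeezing A + B + C into a single frame:

```python
import math

FRAME = 33.3               # 30fps frame budget in ms
A, B, C = 26.0, 4.0, 3.0   # base render, DLSS, post-process (all hypothetical)

def input_to_display_latency(overlap: bool, n_frames: int = 3) -> None:
    for i in range(n_frames):
        input_ms = i * FRAME                     # CPU samples input at frame start
        if overlap:
            # Stage A gets frame i's whole slot; B and C spill into the next
            # slot, so frame i misses its own vsync and waits for the one after.
            done = (i + 1) * FRAME + B + C
        else:
            # A + B + C all have to fit inside frame i's slot.
            done = i * FRAME + A + B + C
        shown = FRAME * math.ceil(done / FRAME)  # next vsync at or after 'done'
        print(f"overlap={overlap} frame {i}: latency {shown - input_ms:.1f} ms")

input_to_display_latency(overlap=False)   # ~33.3 ms per frame
input_to_display_latency(overlap=True)    # ~66.6 ms: one extra frame of lag
```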

This is my understanding of it, at least. If my understanding was wrong or someone wants to add additional nuance to it, please reply (it helps the discussion and everyone's understanding of the process)!

I do see what you're saying - though I'm not sure why the completed DLSS frame cannot be delivered in the middle of the next frame's rendering process? I do know about the issues that come with poor frame-pacing, but isn't that just when you're already hitting the framerate cap? What about when the cap isn't being hit?

Also, I believe the part about post-processing needing to come after DLSS is not actually a requirement. It's recommended by Nvidia for quality reasons, but it does come with a performance cost. Alan Wake 2 offers a toggle for it.

Edit:

Yeah. Even if the frame is only 1ms late, it will only be displayed at the next display refresh, so a 16.6ms delay for a 60Hz display. The good part is not having to wait the full 33.3ms for 30fps games.


Maybe they don't even need VRR. If DLSS cost is constant enough... maybe they could start the frame later to reduce input delay.

For example, let's say DLSS costs ~4ms. Instead of starting the first frame at 0ms, they start it at 28ms. Then the first frame will be ready before 66.6ms. The input delay would be about 38.6ms, only around 5ms more than 33.3ms.
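
Spelling out that arithmetic (the ~29ms render figure below is just a placeholder so that render + DLSS comes out to roughly one frame):

```python
# Quick check of the "start the frame later" idea, using the rough numbers
# above (33.3ms frame, ~4ms DLSS). The render cost is a placeholder chosen so
# that render + DLSS together take about one frame.

FRAME = 33.3
DLSS = 4.0
RENDER = FRAME - DLSS        # ~29.3ms, hypothetical

start = 28.0                 # delay the start of the frame (and the input sample)
ready = start + RENDER + DLSS
vsync = 2 * FRAME            # 66.6ms, the refresh this frame has to hit

print(f"ready at {ready:.1f} ms -> beats the {vsync:.1f} ms vsync: {ready <= vsync}")
print(f"input-to-display delay: {vsync - start:.1f} ms (vs {FRAME} ms frame time)")
```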

Ah, right, of course, the limits of the display refresh rate. Honestly, that's a solid argument for a 120Hz screen and 120Hz containers for every game. I was hoping that maybe the first frame would simply be delayed and every following frame would come after it with an equal DLSS cost, but the variability of the rest of the rendering complicates it.

Actually, that makes me wonder - the post-processing time varies for each frame, correct? Or is it like DLSS with a fixed cost based on the output resolution?
 
I would be very surprised if Switch 2 CPU clock isn't below 2GHz.
Yeah, I'm expecting something between 1.5 and 1.8GHz. 2.5GHz is not happening lol. 2GHz is the absolute limit and I doubt that too.
I don't disagree with you, exactly, but you should know that the power curve below 1.5GHz on these cores is nearly totally flat. For a whole 8-core cluster in Orin, the difference between 728MHz and 1.5GHz is 0.6W. Not per core, but for the entire cluster. The difference between 1.5GHz and 2GHz is 0.8W, 0.1W per core.

Past 2GHz the power curve gets steep quickly, but below that, performance is very cheap. By comparison, raising the GPU from 510MHz to 612MHz costs 1W in Orin. So I wouldn't expect anything below 1.5GHz, simply because the power saved isn't enough to move the needle on battery life, or to push the GPU more than an inch. And if there are a few tenths of a watt left over in the power budget once the GPU is in place, it's possible that there are huge performance gains to be squeezed out of the CPU for pennies.

So I totally agree with your ranges here, but I think the power curves favor the tops of those ranges. 1.75GHz seems pretty safe.
 
Where's the new Doom for Switch 2 news?
They can't just announce the game for a console that has not even been announced yet. In fact, they're not even allowed to; I'm pretty sure every studio that has a Switch 2 devkit had to sign an NDA about not making any comments about the Switch 2.

But you can rest assured, once the Switch 2 has been announced, every third party developer who has been secretly developing for it will come forward and make their own announcements too.

We can also rest assured that one way or the other, the Switch 2 will get plenty of doom.

I don't disagree with you, exactly, but you should know that the power curve below 1.5GHz on these cores is nearly totally flat. For a whole 8-core cluster in Orin, the difference between 728MHz and 1.5GHz is 0.6W. Not per core, but for the entire cluster. The difference between 1.5GHz and 2GHz is 0.8W, 0.1W per core.

Past 2GHz the power curve gets steep quickly, but below that, performance is very cheap. By comparison, raising the GPU from 510MHz to 612MHz costs 1W in Orin. So I wouldn't expect anything below 1.5GHz, simply because the power saved isn't enough to move the needle on battery life, or to push the GPU more than an inch. And if there are a few tenths of a watt left over in the power budget once the GPU is in place, it's possible that there are huge performance gains to be squeezed out of the CPU for pennies.

So I totally agree with your ranges here, but I think the power curves favor the tops of those ranges. 1.75GHz seems pretty safe.

Wait, if the performance is that cheap even on Orin, what would it be like on TSMC 4N? Is there any way to work that out?
 
Ah, right, of course, the limits of the display refresh rate. Honestly, that's a solid argument for a 120Hz screen and 120Hz containers for every game.
Yeah, you got it. It's the display refresh that makes the latency always be measured in full frames rather than some sub-tick.

But 120Hz still doesn't let you send out frames "when they're ready". That creates two problems. The first is tearing. When the display refreshes, it displays all the pixels from a chunk of memory called the front buffer. If games write directly to the front buffer, you get absolute minimal latency, but you get tearing on nearly every frame, where the GPU is in the middle of writing a new frame over the front buffer, and you get part of the new frame and part of the old frame.

What games do instead is write to a second buffer - or multiple buffers, even - called the back buffer. Then when the back buffer is ready, the two buffers are "flipped". And even then, you can't just flip the instant the back buffer is ready. The reason is frame pacing.

A 120Hz screen is still on a steady, ticking clock of 1 refresh every 8.3ms. Imagine a game that runs at an unsteady 60fps. Over the course of a second, you get 60 frames, but a few are a little early and a few are a little late. Let's imagine a cadence of rendering times that look like this

16ms, 12ms, 13ms, 13.4ms, 16ms, 18ms, 14ms, 16ms...

But even with a super fast 120Hz screen, you have to round these times up to the next refresh. If you do so, the frame persistence - i.e. how long your eyeballs see the frame - looks like this.

16.6ms, 16.6ms, 8.3ms, 16.6ms, 16.6ms, 25ms, 16.6ms, 16.6ms....

Those drops and jumps, where the amount of time your eyeball sees a frame on screen halves, then doubles, then goes up by 50% and then back down again, are still super jarring to the eye. Like a flipbook instead of a smooth animation. You will 100% notice these things as stutter.

What you want isn't 120Hz, it's VRR. VRR screens support lots of weird, close-to-each-other frame rates - things like 55Hz, 56Hz, 57Hz, etc - and can smoothly transition between them in real time. And there is a protocol that the console can support that essentially tells the display, "hey, this is when I'm actually flipping the buffer." The display can then adjust the refresh rate on a frame-by-frame basis. So the frames would persist something like

16.6ms, 14.2ms, 14.2ms, 13.8ms, 16.6ms, 18.1ms, 14.2ms, 16.6ms...

VRR isn't microflexible down to the 10th or hundredth of a millisecond, but it can get close enough that stutter is eliminated - you still get increasing and decreasing smoothness as the frame rate changes, but no shocking stutters.

And latency improves, too. With 120Hz, no VRR, you either improve latency by 8.3ms, or not at all. But with VRR, you really can just push a frame out 1 or 2ms faster, shaving that 1 or 2 (or 5 or 6) ms of latency off the experience. Of course, as @Dakhil will point out, true VRR screens are almost all 120Hz anyway...

BIG ASS SIDE NOTE: There is a way to alter latency (for good or for bad) at rates smaller than the refresh rate, but it has nothing to do with the GPU and graphics rendering, and everything to do with the CPU and the controller. But that's a tangent and this post is already too long as it is...
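
If anyone wants to poke at the rounding themselves, here's a tiny model of it (simplified so that each frame starts rendering the moment the previous one finishes, which is not how real engines queue work, so treat it as illustrative only):

```python
from itertools import accumulate
from math import ceil

REFRESH_120 = 1000 / 120                        # 8.33ms per 120Hz refresh

render_ms = [16, 12, 13, 13.4, 16, 18, 14, 16]  # the made-up cadence from above
finish = list(accumulate(render_ms))            # when each frame is actually ready

# Fixed 120Hz: a frame can only go out on the next refresh tick after it's ready.
fixed = [ceil(t / REFRESH_120) * REFRESH_120 for t in finish]
fixed_persist = [b - a for a, b in zip(fixed, fixed[1:])]

# Idealised VRR: the display refreshes whenever the frame is ready.
vrr_persist = [b - a for a, b in zip(finish, finish[1:])]

print("fixed 120Hz persistence:", [round(p, 1) for p in fixed_persist])
print("VRR persistence:        ", [round(p, 1) for p in vrr_persist])
```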
 
Wait, if the performance is that cheap even on Orin, what would it be like on TSMC 4N? Is there any way to work that out?
New process nodes make everything cheaper, but they also change the shape of the power curve. Meaning that some clock speeds might get a lot cheaper and other clock speeds might not move at all. Without someone doing some benchmarking on an A78 core built on 4N, I think our guess is just... well, a guess. 40% lower? That seems to roughly track with other products that make the jump, but when your power draw is already so low (a tenth of a watt per core, for a huge clock difference) a 40% reduction can be so small as to be meaningless.
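
Back-of-the-envelope for why even a big percentage cut barely matters at these power levels (both inputs are guesses, not measurements):

```python
per_core_delta_8nm_w = 0.1   # ~W per core for the 1.5GHz -> 2GHz bump on Orin (8nm)
node_saving = 0.40           # hypothetical "40% lower" from the newer node

saved_per_core = per_core_delta_8nm_w * node_saving
print(f"saving per core: {saved_per_core * 1000:.0f} mW")
print(f"saving for 8 cores: {saved_per_core * 8 * 1000:.0f} mW")
```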
 
Yeah, you got it. It's the display refresh that makes the latency always be measured in full frames rather than some sub-tick.

But 120Hz still doesn't let you send out frames "when they're ready". That creates two problems. The first is tearing. When the display refreshes, it displays all the pixels from a chunk of memory called the front buffer. If games write directly to the front buffer, you get absolute minimal latency, but you get tearing on nearly every frame, where the GPU is in the middle of writing a new frame over the front buffer, and you get part of the new frame and part of the old frame.

What games do instead is write to a second buffer - or multiple buffers, even - called the back buffer. Then when the back buffer is ready, the two buffers are "flipped". And even then, you can't just flip the instant the back buffer is ready. The reason is frame pacing.

A 120Hz screen is still on a steady, ticking clock of 1 refresh every 8.3ms. Imagine a game that runs at an unsteady 60fps. Over the course of a second, you get 60 frames, but a few are a little early and a few are a little late. Let's imagine a cadence of rendering times that look like this

16ms, 12ms, 13ms, 13.4ms, 16ms, 18ms, 14ms, 16ms...

But even with a super fast 120Hz screen, you have to round these times up to the next refresh. If you do so, the frame persistence - i.e. how long your eyeballs see the frame - looks like this.

16.6ms, 16.6ms, 8.3ms, 16.6ms, 16.6ms, 25ms, 16.6ms, 16.6ms....

Those drops and jumps, where the amount of time your eyeball sees a frame on screen halves, then doubles, then goes up by 50% and then back down again, are still super jarring to the eye. Like a flipbook instead of a smooth animation. You will 100% notice these things as stutter.

What you want isn't 120Hz, it's VRR. VRR screens support lots of weird, close-to-each-other frame rates - things like 55Hz, 56Hz, 57Hz, etc - and can smoothly transition between them in real time. And there is a protocol that the console can support that essentially tells the display, "hey, this is when I'm actually flipping the buffer." The display can then adjust the refresh rate on a frame-by-frame basis. So the frames would persist something like

16.6ms, 14.2ms, 14.2ms, 13.8ms, 16.6ms, 18.1ms, 14.2ms, 16.6ms...

VRR isn't microflexible down to the 10th or hundredth of a millisecond, but it can get close enough that stutter is eliminated - you still get increasing and decreasing smoothness as the frame rate changes, but no shocking stutters.

And latency improves, too. With 120Hz, no VRR, you either improve latency by 8.3ms, or not at all. But with VRR, you really can just push a frame out 1 or 2ms faster, shaving that 1 or 2 (or 5 or 6) ms of latency off the experience. Of course, as @Dakhil will point out, true VRR screens are almost all 120Hz anyway...

BIG ASS SIDE NOTE: There is a way to alter latency (for good or for bad) at rates smaller than the refresh rate, but it has nothing to do with the GPU and graphics rendering, and everything to do with the CPU and the controller. But that's a tangent and this post is already too long as it is...

Thanks. Yeah, I knew about the frame-pacing issues, but I thought that such things might not matter if a game was already failing to hit its target framerate cap, so 120Hz might help in some games.

Do you know if it would be at all possible to mitigate the problem by delaying the first frame? Since the DLSS cost is (apparently) fixed and predictable, the frames could be delivered as soon as it's done? Or does the post-processing coming after DLSS make that impossible?

New process nodes make everything cheaper, but they also change the shape of the power curve. Meaning that some clock speeds might get a lot cheaper and other clock speeds might not move at all. Without someone doing some benchmarking on an A78 core built on 4N, I think our guess is just... well, a guess. 40% lower? That seems to roughly track with other products that make the jump, but when your power draw is already so low (a tenth of a watt per core, for a huge clock difference) a 40% reduction can be so small as to be meaningless.

Hm, so it really could go either way. Makes me wonder if something over 2GHz might actually be feasible, if 1.8 was fine on 8nm.
 
Hm, so it really could go either way. Makes me wonder if something over 2GHz might actually be feasible, if 1.8 was fine on 8nm.
In the A78 slides, ARM says that the A78 @2.1GHz uses 50% less power than the A77 @2.3GHz. The former is on 5nm and the latter is on 7nm.

This is not conclusive, but I would expect them to use the best comparison possible. So I believe 2.1GHz is the last point on 5nm where the clock increase is cheap. Meanwhile, 2.3GHz was quite a bit past the point where the increases get steep on 7nm, making it a lot less efficient per clock.

So, 2.1GHz should be feasible. Anything above likely enters diminishing returns, so it's significantly less likely IMO.
 
In the A78 slides, ARM says that the A78 @2.1GHz uses 50% less power than the A77 @2.3GHz. The former is on 5nm and the latter is on 7nm.

This is not conclusive, but I would expect them to use the best comparison possible. So I believe 2.1GHz is the last point on 5nm where the clock increase is cheap. Meanwhile, 2.3GHz was quite a bit past the point where the increases get steep on 7nm, making it a lot less efficient per clock.

So, 2.1GHz should be feasible. Anything above likely enters diminishing returns, so it's significantly less likely IMO.

Fingers crossed that's what we get, then. Especially if the chip turns out to actually be on N4P/C rather than 4N.
 
I would say that I'm sorry for raising colour button discourse again but totally not sorry.

I had a thought about colour buttons and how it solves a novel problem that Nintendo has with its 2p out the box, anywhere, anytime kind of thing it has going on.

What if the buttons themselves were clear and used LEDs to colour them? When you use the joy con in landscape for single joy con play it could just adjust the colour scheme so it's always correct.

Not sure they would do it because the cost of putting RGB LEDs under 8 buttons per console could tip the margin over the edge, but still, would be a cool solution to a novel problem.
 
For a whole 8-core cluster in Orin, the difference between 728MHz and 1.5GHz is 0.6W. Not per core, but for the entire cluster. The difference between 1.5GHz and 2GHz is 0.8W, 0.1W per core.
Can you share the methodology and values found (or the source if it wasn't your own tests) for this?

There's quite a bit of discrepancy between what you're saying and Thraktor's findings (2.7W for the 1.5GHz to 2.0GHz jump on Orin):
The figures I got for an 8 core CPU are as follows:

1113.6MHz - 2.2W
1267.2MHz - 2.5W
1497.6MHz - 3.1W
1651.2MHz - 3.8W
1728.0MHz - 4.1W
1881.6MHz - 4.9W
2035.2MHz - 5.8W
2188.8MHz - 7.1W
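
(For reference, that 2.7W figure is just the difference between the 1497.6MHz and 2035.2MHz rows above:)

```python
# Thraktor's 8-core Orin figures, as quoted above (MHz -> W).
orin_8core_watts = {
    1113.6: 2.2, 1267.2: 2.5, 1497.6: 3.1, 1651.2: 3.8,
    1728.0: 4.1, 1881.6: 4.9, 2035.2: 5.8, 2188.8: 7.1,
}

jump = orin_8core_watts[2035.2] - orin_8core_watts[1497.6]
print(f"1.5GHz -> 2.0GHz: +{jump:.1f} W for the cluster, "
      f"~{jump / 8 * 1000:.0f} mW per core")
```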
 
They can't just announce the game for a console that has not even been announced yet. In fact, they're not even allowed to; I'm pretty sure every studio that has a Switch 2 devkit had to sign an NDA about not making any comments about the Switch 2.

But you can rest assured, once the Switch 2 has been announced, every third party developer who has been secretly developing for it will come forward and make their own announcements too.
That PS5 Doom post from Nate is not an official report from MS, right? It's inside information.
 
A direct Super Mario 64 sequel (IE: In another castle with paintings leading to different worlds) would be a dream of mine.

Chance of happening:

[image: "I'm not saying..." reaction meme]

That's my thoughts exactly!
What wacky ideas would Nintendo come up with if they made Mario 64 today?
It would definitely be the perfect tech demo to show off Switch 2's capabilities. The painting portals, Metal Mario and water levels would look amazing...
 
I would say that I'm sorry for raising colour button discourse again but totally not sorry.

I had a thought about colour buttons and how it solves a novel problem that Nintendo has with its 2p out the box, anywhere, anytime kind of thing it has going on.

What if the buttons themselves were clear and used LEDs to colour them? When you use the joy con in landscape for single joy con play it could just adjust the colour scheme so it's always correct.

Not sure they would do it because the cost of putting RGB LEDs under 8 buttons per console could tip the margin over the edge, but still, would be a cool solution to a novel problem.
I was thinking about it being customizable or adjustable as well. As it is, for two player they use the direction of the button to signify which they mean, which works well but colors would obviously be better.
 
I sincerely hope nintendo ninjas aren't already on their way to ruin their life with some stupid lawsuit.

Well, it wouldn't exactly be a stupid lawsuit when she's leaking private info that is crucial to their billion-dollar business, would it?

She should be aware of the risks, but people sometimes simply go too far.
 
Let's hope Pokémon Legends Z-A can run at 720p/30fps on Switch 2 🙏
I'm interested in seeing how the base Switch will handle Pokémon Z-A.

Since I'm expecting the Switch 2 version to look and play quite nicely, maybe 1440p/60.

But on the base Switch I'm expecting 720p/30, with the textures looking like grease.
 
so the person who told him about the PS5 version would know if it was coming to Switch 2 or not.
I wouldn't say so. Here are some scenarios where they wouldn't:
  • The leaker saw the trailer/presentation
  • The leaker worked on the video
  • The leaker is from the QA team and they don't have dev kits
  • Nintendo is wary of giving MS a devkit yet, so Bethesda hasn't started work on the Nintendo version even if management has already decided to do one.
 
People are saying the Switch 2 doesn't "exist", so you can't announce a game for the console in those terms, but Nate is not MS, so the person who told him about the PS5 version would know if it was coming to Switch 2 or not.
Depends on who Nate's contact is, e.g. if they're in marketing and the multiplatform announcement for DOOM comes later (like a week after the showcase), then they would only be aware of the PS5/PC versions.
 
People are saying the Switch 2 doesn't "exist", so you can't announce a game for the console in those terms, but Nate is not MS, so the person who told him about the PS5 version would know if it was coming to Switch 2 or not.
Microsoft will have PS5 as their priority when it comes to ports; like all AAA publishers, they see PC, Xbox and PS as part of the same demographic of gamers. Sony's Jim Ryan said that "The people that play Mario and Zelda are not the same as those that play Call of Duty". That is probably the view of all AAA publishers: that the Nintendo ecosystem is entirely different from the other platforms and thus automatically a low priority when it comes to porting their games over.
 
I would say that I'm sorry for raising colour button discourse again but totally not sorry.

I had a thought about colour buttons and how it solves a novel problem that Nintendo has with its 2p out the box, anywhere, anytime kind of thing it has going on.

What if the buttons themselves were clear and used LEDs to colour them? When you use the joy con in landscape for single joy con play it could just adjust the colour scheme so it's always correct.

Not sure they would do it because the cost of putting RGB LEDs under 8 buttons per console could tip the margin over the edge, but still, would be a cool solution to a novel problem.
Sounds unnecessarily over-engineered, with a BOM cost penalty and no benefit to the end user besides aesthetics. The most they will do is colored buttons, if they do it at all.
 
That PS5 Doom post from Nate is not an official report from MS, right? It's inside information.
The information regarding the upcoming Doom game itself was a leak, let alone that it was coming to the PS5. However, we need to understand how these leaks happen in the first place - let's say it originated from promotional materials for the new Doom game which happened to contain references to the PS5 in them. That however also means said promotional materials would not contain any references to the Switch 2, since it's under NDA.

At any rate, regardless of the source, any information broken from a developer/studio will be a breach of NDA. Of course insiders are always breaching others' NDAs when they do their job, but if they say "studio X is making a game for the Switch 2" it makes it pretty obvious studio X broke NDA. They wouldn't do something that obvious with Nintendo ninjas breathing down their necks.
 
Microsoft will have PS5 as their priority when it comes to ports; like all AAA publishers, they see PC, Xbox and PS as part of the same demographic of gamers. Sony's Jim Ryan said that "The people that play Mario and Zelda are not the same as those that play Call of Duty". That is probably the view of all AAA publishers: that the Nintendo ecosystem is entirely different from the other platforms and thus automatically a low priority when it comes to porting their games over.

Which is rather ironic given how successful the Switch has been for the past 7 years. But even shithead CEO Bobby Botick said they regretted not putting COD on the system, especially earlier, though in their defense, I don’t think many thought the Switch was going to sell as well as it did. That also being said, by 2019 or so, the system was clearly running multiple circles around the Wii U in terms of success, so more AAA publishers could’ve capitalized on it, but many chose not to because…reasons.

So I actually disagree with Jim Ryan on this because he’s overly generalizing the audiences who game on PS and Xbox, and Nintendo. And in generations past, Nintendo platforms were usually that 2nd console which gamers had as well as PS, or Xbox.

The success of the Switch has given way to the idea of, “Wait, you can play Mario, AND COD on a Nintendo system.” Which again, is ironic given both Medal of Honor, and COD historically were almost always on Nintendo platforms, going back to the GCN days when those franchises were in their heydays. The Wii U, and probably to an extent the Wii, made AAA publishers lose faith and interest in Nintendo, and the latter is to blame for much of that for sure. The Switch though was that pivot point, a back to basics if you will, and AAA publishers, on breaks between counting their money in their golden bathtubs, should've seen the Switch was not like other Nintendo platforms. Even comparing the success of the Wii to the Switch, by year 3, the Wii was running out of gas, whereas Switch was gaining more momentum, at least partially due to the pandemic. But that would've been a good moment for AAAs to begin porting efforts for COD, and other franchises. And also by that point, the Switch had proven itself to be a capable system with its "impossible" ports, and talented teams who knew the hardware like the back of their hand.

I think the overall view of AAA publishers in 2024 is "Yeah, we should've done this." Take-Two could've ported GTAV over to Switch early on, capitalized on the idea of "GTA Online on the go," and begun efforts to port over GTAIV as well. AB could've done the same for COD, perhaps even the remake-not-remake of Modern Warfare as a testing ground, and Ubisoft could've attempted more Ass Creed. But in all, woulda, shoulda, coulda, and they did not, so they'll have to contend with that.

But hey, if it means more high quality Indie titles can take the reins, fill that void, and allow more of these on the system, I think everybody wins. And besides, at the end of the day, does Switch actually "need" games like Ass Creed, or COD to be successful? No, not really. That's where Xbox, and PS come in, and if Switch is that 2nd console for gamers who already own Xbox, or PS, they already have their fix of AAA 3rd party games.

And if you're a PC gamer, nowadays you have games from both Xbox, and PS, so there's now less incentive to buy those, leaving only Switch as its own unique brand, and a library of games many of which cannot be found on other platforms. Given what we know of T239, and the latest shipping data regarding RAM quantity and type, Switch 2 should have little issue getting more games from AAA publishers, though its success will not be determined by that either.

“The Nintendo Switch 2: The power of Xbox Series in your hands”
 
Can you share the methodology and values found (or the source if it wasn't your own tests) for this?

I used the Jetson AGX Orin 32GB as it is the closest to the T239 (and I believe oldpuck used the same)
Everything except CPU disabled, EMC set to 3199 and load level set to low
CPU load level set to high

Now you can play around with the active CPU core and clockspeed sliders and calculate the power consumption values. Whatever you get, halve the number to get an estimated value for TSMC 4N because these are SEC8N numbers.

There's quite a bit of discrepancy between what you're saying and Thraktor's findings (2.7W for the 1.5 > 2.0 jump on Orin):
I believe Thraktor's methodology was slightly different, because afaik he went a step further and calculated the per core power usage at different frequencies and then multiplied each by a factor of 8.

Thus there are bound to be some discrepancies; these are estimates and don't forget these are SEC8N numbers, so realistically they mean nothing for TSMC 4N. That one's Voltage-Frequency curve will be completely different and it is entirely possible we will see a similar "sweet spot" for clockspeeds like we did for SEC8N between the 1.50-1.73GHz range. What I do know is that sweet spot is likely to be higher than SEC8N's - this is probably the only useful information we can glean from the tool with a given CPU power budget.

Additionally, A78C will be a single 8-core cluster unlike the dual 4-core cluster Orin configuration we're using here, so there will be some power savings due to not having to bother with the interconnects between the two clusters.
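
To be explicit, the conversion I'm describing is nothing fancier than this (the 50% factor is an assumption, and the example reading is just the 2.0GHz row from Thraktor's table, not an actual tool output):

```python
def guess_4n_cluster_power(sec8n_watts: float, scaling: float = 0.5) -> float:
    """Crude TSMC 4N ballpark from a SEC 8N Orin figure (pure assumption)."""
    return sec8n_watts * scaling

# e.g. a hypothetical 8-core cluster reading pulled from the power estimator tool
print(f"~{guess_4n_cluster_power(5.8):.1f} W as a rough 4N ballpark")
```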
 
I would not be surprised if Doom 3 is not day and date on Switch 2. It might not even be planned for the system at the moment. But it could come sometime after.

I mean, we know that MS is scrambling for a new direction right now. I would not be surprised if the PS5 version was a late addition. It's perfectly reasonable that a version for an unannounced system with an unknown release date was not one of MS's priorities.

If MS has Switch 2 games in the pipeline, it's probably Starfield and CoD (due to the contract). Maybe Sea of Thieves. I expect those games to be handed to third-party studios like the ones who have handled all their Switch ports.

After those release, they will start looking into porting other games.
 
Do you know if it would be at all possible to mitigate the problem by delaying the first frame? Since the DLSS cost is (apparently) fixed and predictable, the frames could be delivered as soon as it's done? Or does the post-processing coming after DLSS make that impossible?
Delaying the first frame is inevitable anyway, because you don't have any frames in the buffer to overlap with. I would imagine that inconsistencies in rendering time mean that you can't be too precise about trying to set your frame delivery schedule up well in advance like that. Post-processing is one of the wild cards, because it can be very expensive and inconsistent.
 
Depends on who Nate's contact is, e.g. if they're in marketing and the multiplatform announcement for DOOM comes later (like a week after the showcase), then they would only be aware of the PS5/PC versions.
Correction, Nate did say it would be revealed as a multiplatform release. If his contact had access to marketing materials (like for the showcase), they would most likely only know about the initial reveal. So there is no information either way about a Switch 2 port.
 
I was using low load when pulling my data; I believe Thraktor used medium. Thraktor's number is "better", but it was 2am, sue me ;)
XD

I used high load to find out the worst possible scenario, and honestly I don't think it's that bad. Gaming shouldn't be a constant high load, so the Switch 2's CPU shouldn't clock below 1.8GHz if we were to allocate a 4W power budget.
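
As a rough sanity check of that 4W line of thinking, you can run Thraktor's 8-core numbers through the same crude halving and see which listed clocks would fit (this mixes data sets and load levels, so it only shows the shape of the argument, not a prediction):

```python
# Thraktor's SEC 8N figures (MHz, W), halved as a very rough TSMC 4N guess.
orin_8core_watts = [
    (1113.6, 2.2), (1267.2, 2.5), (1497.6, 3.1), (1651.2, 3.8),
    (1728.0, 4.1), (1881.6, 4.9), (2035.2, 5.8), (2188.8, 7.1),
]

budget_w = 4.0
fits = [mhz for mhz, w in orin_8core_watts if w * 0.5 <= budget_w]

# With the halved figures even the top listed clock fits the budget, which is
# why clocks of 1.8GHz or above look comfortable under these assumptions.
print(f"highest listed clock under {budget_w} W (halved figures): {max(fits):.1f} MHz")
```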
 
That's my thoughts exactly!
What wacky ideas would Nintendo come up with if they made Mario 64 today?
It would definitely be the perfect tech demo to show off Switch 2's capabilities. The painting portals, Metal Mario and water levels would look amazing...

Damnit. I didn’t need to hear that.

..
Oh hell…

Did I just find my investor question?

“Hey Furukawa, how open is Nintendo to direct sequels to previously released titles in a “series”? For example: Super Mario 64 Part 2 - taking the same idea and iterating on it rather than making a brand new 3D Mario with the same overall theme of worlds + missions.”



Ty sir. That is very exciting, considering I've sworn off Microsoft after their layoffs. Even donated my Xbox Series X to a local orphanage. As a shareholder, I simply cannot approve of layoffs, but I am in the minority with that opinion, unfortunately.
 
Yeah, you got it. It's the display refresh that makes the latency always be measured in full frames rather than some sub-tick.


That’s a great read. OldPuck you’re a famiiboards national treasure.
 
But hey, if it means more high quality Indie titles can take the reins, fill that void, and allow more of these on the system, I think everybody wins. And besides, at the end of the day, does Switch actually "need" games like Ass Creed, or COD to be successful? No, not really. That's where Xbox, and PS come in, and if Switch is that 2nd console for gamers who already own Xbox, or PS, they already have their fix of AAA 3rd party games.
You are 100% correct, but there are a lot of people, myself included, who would like to make Switch 2 their 1st and only console, not their second console. With the diminishing returns Playstation and Xbox are giving, I wouldn't underestimate Switch 2's ability to take a shot at this market even if that isn't Nintendo's main goal.

I do think Indie titles are on their way to replace AAA games though for most gamers, or at least fill up that AA space.
 
Microsoft will have PS5 as their priority when it comes to ports; like all AAA publishers, they see PC, Xbox and PS as part of the same demographic of gamers. Sony's Jim Ryan said that "The people that play Mario and Zelda are not the same as those that play Call of Duty". That is probably the view of all AAA publishers: that the Nintendo ecosystem is entirely different from the other platforms and thus automatically a low priority when it comes to porting their games over.
Jim Ryan doesn't even have the evidence to back that up. Many big IPs that hit everything do well on everything. CoD isn't any different. We can see that from CoD on Wii, 3DS, and Wii U.
 
Minor nitpicking... why didn't they just keep the Nintendo Network label?
They wanted a clean break from the old account system on top of other things, but honestly, "Nintendo Network" is such a good platform agnostic name that I would be down for them recycling it for an NSO rebrand now that they've shut down online for 3DS/Wii U.
 

I used the Jetson AGX Orin 32GB as it is the closest to the T239 (and I believe oldpuck used the same)
Everything except CPU disabled, EMC set to 3199 and load level set to low
CPU load level set to high

Now you can play around with the active CPU core and clockspeed sliders and calculate the power consumption values. Whatever you get, halve the number to get an estimated value for TSMC 4N because these are SEC8N numbers.


I believe Thraktor's methodology was slightly different, because afaik he went a step further and calculated the per core power usage at different frequencies and then multiplied each by a factor of 8.

Thus there are bound to be some discrepancies; these are estimates and don't forget these are SEC8N numbers, so realistically they mean nothing for TSMC 4N. That one's Voltage-Frequency curve will be completely different and it is entirely possible we will see a similar "sweet spot" for clockspeeds like we did for SEC8N between the 1.50-1.73GHz range. What I do know is that sweet spot is likely to be higher than SEC8N's - this is probably the only useful information we can glean from the tool with a given CPU power budget.

Additionally, A78C will be a single 8-core cluster unlike the dual 4-core cluster Orin configuration we're using here, so there will be some power savings due to not having to bother with the interconnects between the two clusters.
I was using low load when pulling my data; I believe Thraktor used medium. Thraktor's number is "better", but it was 2am, sue me ;)
Rather than which number is better... we're all trying to figure out a 10k-piece jigsaw puzzle with a handful of pieces.

Checking different data and results and understanding why they differ helps me get the nuances/caveats and find more pieces.

If anything I'm the one shamelessly benefitting from yours, Thraktor's and others' hard work rather than doing my own research. But please don't sue me 😖
 
People are saying the Switch 2 doesn't "exist", so you can't announce a game for the console in those terms, but Nate is not MS, so the person who told him about the PS5 version would know if it was coming to Switch 2 or not.

Not saying he knows for sure for sure, but Nate is aware of the total shitstorm he would face if he was the first person to publicly confirm a Switch 2 game. So even if he knows, he's not going to welcome that to his doorstep. And he has sources he doesn't want to piss off as well.

There's a reason why once one person breaks a story first, people are more than happy to put additional info out there.
 
Probably because their server infrastructure is completely different between 3DS/Wii U and Switch.
So? Wasn't Xbox Live 1.0 different infrastructure from Xbox Live 2.0? The name is still great.
They wanted a clean break from the old account system on top of other things, but honestly, "Nintendo Network" is such a good platform agnostic name that I would be down for them recycling it for an NSO rebrand now that they've shut down online for 3DS/Wii U.
Exactly!
 
Rather than which number is better... we're all trying to figure out a 10k-piece jigsaw puzzle with a handful of pieces.

Checking different data and results and understanding why they differ helps me get the nuances/caveats and find more pieces.

If anything I'm the one shamelessly benefitting from yours, Thraktor's and others' hard work rather than doing my own research. But please don't sue me 😖
Nintendo fans are in many ways the opposite of Nintendo, so the probability of Famiboards Lawyers (and Ninjas) is pretty lo-…

…what was this shadow moving by my window. Maybe a-
 
So? Wasn't Xbox Live 1.0 different infrastructure from Xbox Live 2.0? The name is still great.
Xbox accounts weren't ditched with the upgrade, were they? The problem with keeping Nintendo Network as the name is that you'd then have to explain why people needed to make a new account which also wouldn't work between different systems, despite the service being the same to consumers.
 