
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

In fact, much like 20nm, I think Samsung would offer an impossibly good deal to move Drake to a 4nm/5nm Samsung node, as they have capacity that ends up costing them money if they can't fill it, and Drake would produce 10M+ chips a year for them
 
Probably because mobile OLED displays that have VRR support also happen to support refresh rates of up to 120 Hz (e.g. the iPhone 13 Pro and iPhone 13 Pro Max).
I'm aware that they have it, but their use case isn't comparable to a console's. VRR in a game console stretches far beyond simply reading text on a screen, which is what phones are built around. It doesn't automatically mean "120 FPS!" either; no one has suggested they would go for 120 FPS. The only use case raised for VRR here has been about lower frame rates, not higher ones.


Mobile VRR works differently from TV VRR anyway.

For the unaware, it works on fixed presets rather than being truly variable.
Then we should already know whether VRR is going to be supported.
Supposedly there's something about testing VRR in the NVN2 breach, but nothing really pointing to Nintendo using it.
 
People thinking the plural of anecdote is data?

When so few similar hardware launches exist, using the few examples we have just to indicate what's possible shouldn't be belittled. It shows precedent.

We all know it doesn’t mean it’ll go down that way for certain.
 
I already left the 2022 hype train, but I'm still curious about why the funcle speculated about new hardware being revealed in September.
 
In fact, much like 20nm, I think Samsung would offer an impossibly good deal to move Drake to a 4nm/5nm Samsung node, as they have capacity that ends up costing them money if they can't fill it, and Drake would produce 10M+ chips a year for them
To play devil's advocate, Nvidia's been rumoured to have secured excessive capacity for TSMC's 4N process node and wants to reduce orders, but to no avail, since TSMC refuses to make concessions; Nvidia is responsible for finding replacement customers for any secured capacity it vacates. Although DigiTimes isn't the most reliable source, I do think there are some grains of truth to that rumour, since Nikkei Asia reported that TSMC has been warning customers about excessive capacity. So Nvidia offering Nintendo a really good deal to have Drake fabricated on TSMC's 4N process node is probable. (I think TSMC's N6 process node is also a probable choice for fabricating Drake, especially if Nintendo and Nvidia do plan to die-shrink Drake in the future.)
 
PS4 Pro, September reveal, November release.

New Nintendo 3DS, August reveal, October release.

Notice a pattern?
There's also the Nintendo Switch Lite: July reveal, September release. The OLED model was a July reveal and an October launch, but a November launch this time might mean an August or September reveal.
 
People thinking the plural of anecdote is data?
Please don't do this to me, I just finished my data science finals. 🤣

While that is true, on a serious note, it's not that I'm saying it WILL follow a pattern, but rather that two months from reveal to release is pretty standard for this kind of thing (same family, new more powerful model).
Switch got less than a year. DSi XL got barely any time. PS4 Slim was on the market before it was even announced.
 
I'm of the position that if it's not announced in August, like in the next 2 weeks basically, then this year is off the table. Some people think September is the cutoff but I don't think they'd do such a short announcement cycle this time of year.

Of course I'm willing to change my opinion if we hear some leaks soon.
If it’s not announced by December 31st it isn’t coming out this year.
 
To play devil's advocate, Nvidia's been rumoured to have secured excessive capacity for TSMC's 4N process node and wants to reduce orders, but to no avail since TSMC refuses to make concessions; and Nvidia's responsible for finding replacement customers for secured capacity that has been vacated. Although DigiTimes isn't the most reliable source, I do think there's some grains of truth to that rumour since Nikkei Asia reported that TSMC has been warning customers of excessive capacity. So Nvidia offering Nintendo a really good deal to have Drake fabricated at TSMC's 4N process node is probable. (I think TSMC's N6 process node is also a probable choice for fabricating Drake, especially if Nintendo and Nvidia do plan to die shrink Drake in the future.)
If Nvidia has too much 4nm capacity, then yes, that is the most reasonable course of action, and if so, clocks would be drastically better than we've been talking about: docked, the device would easily be 4 TFLOPs+, and even portable we could be looking at 2 TFLOPs+. CPU clocks would also have to be 2 GHz+ on 8 cores.
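For a rough sense of what those figures imply, here's a minimal back-of-the-envelope sketch, assuming the 1536 CUDA cores (12 SMs × 128) attributed to Drake in the NVN2 leak; the core count is the leak's figure, the rest is just arithmetic.

```python
# Clocks implied by the FP32 targets above, assuming 1536 CUDA cores (12 SMs x 128)
# per the NVN2 leak. FP32 throughput = cores * 2 ops (FMA) * clock.
CUDA_CORES = 1536

def clock_ghz_for_tflops(tflops: float) -> float:
    """GPU clock (GHz) needed to hit a given FP32 TFLOPs figure."""
    return tflops * 1e12 / (CUDA_CORES * 2) / 1e9

for target in (4.0, 2.0):
    print(f"{target:.1f} TFLOPs -> ~{clock_ghz_for_tflops(target):.2f} GHz")
# 4.0 TFLOPs -> ~1.30 GHz
# 2.0 TFLOPs -> ~0.65 GHz
```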
 
If Nvidia has too much 4nm capacity, then yes, that is the most reasonable course of action, and if so, clocks would be drastically better than we've been talking about: docked, the device would easily be 4 TFLOPs+, and even portable we could be looking at 2 TFLOPs+. CPU clocks would also have to be 2 GHz+ on 8 cores.
 

LOL, I'll put it this way: I'm at least half as excited about the idea as this kid. I think the minimum we've been talking about, ~1.4 TFLOPs portable and 2.36 TFLOPs docked, should be the expectation, and could be reached even at 8nm IMO. But yes, if it's 4nm TSMC because Nvidia ordered too much capacity for a node again and Nintendo saves the day AGAIN, then yeah, clocks could be 50% higher at minimum.
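Same arithmetic as the sketch above, applied to those baseline numbers (again assuming the leak's 1536 CUDA cores, not anything confirmed): ~1.4 TFLOPs portable works out to roughly 456 MHz, 2.36 TFLOPs docked to roughly 768 MHz, and a 50% clock bump would land around 2.1 and 3.5 TFLOPs respectively.

```python
# Baseline clocks implied by ~1.4 TFLOPs portable / 2.36 TFLOPs docked, and the
# resulting FP32 figures if those clocks ran 50% higher. Assumes 1536 CUDA cores.
CUDA_CORES = 1536

def clock_mhz(tflops: float) -> float:
    return tflops * 1e12 / (CUDA_CORES * 2) / 1e6

def tflops(mhz: float) -> float:
    return CUDA_CORES * 2 * mhz * 1e6 / 1e12

for label, target in (("portable", 1.4), ("docked", 2.36)):
    base = clock_mhz(target)
    print(f"{label}: ~{base:.0f} MHz baseline -> ~{tflops(base * 1.5):.2f} TFLOPs at +50%")
# portable: ~456 MHz baseline -> ~2.10 TFLOPs at +50%
# docked: ~768 MHz baseline -> ~3.54 TFLOPs at +50%
```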
 
If you find your thoughts tending toward unrealistic expectations of the new hardware, please remind yourself that it's not going to cost $600.
 
If you find your thoughts tending toward unrealistic expectations of the new hardware, please remind yourself that it's not going to cost $600.
Honestly, it would depend on the cost of the SoC. Would 4N be drastically more expensive than Samsung 5nm? I think there is a premium, sure, but it's not going to be $200 more for the SoC. Increasing the clocks doesn't cost anything, and if 4N is what Nvidia has excess capacity of, just like with 20nm in 2014 when Nvidia brought the deal to Nintendo, they could offer a good deal here, which would make this make sense... On its own I don't think it does; if Nvidia is fine with their 4N capacity, this won't make sense over Samsung 5nm, which would fit the sort of numbers we've been discussing for months. 8nm is going to be more limiting on the CPU side, I think, as clocks should only hit current Switch levels, which would still leave GPU performance relatively good.
 
In a best-case scenario, I would expect ~1 TFLOP of GPU performance in portable mode and ~2.25 TFLOPs docked.

It should be a fantastic machine.
 
I'm still of the mindset of ~700 GFLOPs handheld and ~1.5 TFLOPs docked, though my expectations are on the low end.
Maxwell is far better per FLOP than Ampere: 700 GFLOPs Maxwell is like ~900 GFLOPs Ampere, so ~700 GFLOPs Ampere would be closer to 550 GFLOPs Maxwell, basically the current docked Switch, which just doesn't make sense for the technology we are talking about. However, if your expectations are this low, you can only be positively surprised later.
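To make the conversion explicit, a minimal sketch; the ~1.29x factor is just the rule of thumb implied by the post (700 GFLOPs Maxwell ≈ 900 GFLOPs Ampere), not a measured constant.

```python
# Per-FLOP conversion used above: the ratio is derived from the post's own
# rule of thumb (700 GFLOPs Maxwell ~= 900 GFLOPs Ampere), not a benchmark.
MAXWELL_TO_AMPERE = 900 / 700  # ~1.29

def ampere_to_maxwell_equiv(gflops_ampere: float) -> float:
    """Rough Maxwell-equivalent GFLOPs for a given Ampere GFLOPs figure."""
    return gflops_ampere / MAXWELL_TO_AMPERE

print(round(ampere_to_maxwell_equiv(700)))  # ~544, i.e. roughly the "550 GFLOPs Maxwell" above
```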
 

I do wonder how Nvidia's going to keep DLSS's power consumption in check as DLSS becomes more and more accurate.
"Now let’s get to the new metrics and the beautiful new curves. While the power consumption of the GeForce RTX 2080 Ti with and without DLSS remains almost the same over the duration of the benchmark, the GPU’s curve with activated limiter fluctuates extremely, but is visibly far below the other two. This already indicates an extremely reduced load and thus also power consumption."
 
I've been setting myself up for an upper end (for docked) of an easy 896-1024 MHz if it's on N5/N4 with 128-bit LPDDR5 RAM. (How did I get those exact numbers? It's 768*7/6 and 768*8/6; paired with 6 times the SMs, that's 7x and 8x the raw grunt.)
Why am I settling around there?
Converted to TFLOPs, we get 2.752-3.145 TFLOPs.
128-bit LPDDR5 RAM means a bandwidth of 102.4 GB/s.
The max theoretical bandwidth/TFLOPs ratio (when the CPU is using 0, which is unrealistic, but hey, it's an upper bound) is 37.2 GB/s/TFLOP for the 896 MHz scenario and 32.55 GB/s/TFLOP for the 1024 MHz scenario.
If the CPU were to use up to something like... I dunno, 20 GB/s*, leaving 82.4 GB/s for the GPU? Then for the 896 MHz scenario it's 29.94 GB/s/TFLOP, while for 1024 MHz it's 26.2 GB/s/TFLOP.

Looking at how Nvidia balanced the desktop RTX 30 series...
The 3050's at 28.19 GB/s/TFLOP (base) and 24.62 GB/s/TFLOP (boost).
The 3060's at 38.04 GB/s/TFLOP (base) and 28.26 GB/s/TFLOP (boost).
The 3060 Ti's at 32.66 GB/s/TFLOP (base) and 27.66 GB/s/TFLOP (boost).
The 3070's at 25.36 GB/s/TFLOP (base) and 22.05 GB/s/TFLOP (boost).
The 3070 Ti's at 31.31 GB/s/TFLOP (base) and 28.02 GB/s/TFLOP (boost).
The 3080 (original 10 GB) is at 30.32 GB/s/TFLOP (base) and 25.53 GB/s/TFLOP (boost).
The 3080 (12 GB) is at 40.39 GB/s/TFLOP (base) and 29.76 GB/s/TFLOP (boost).
The 3080 Ti's at 32.62 GB/s/TFLOP (base) and 26.74 GB/s/TFLOP (boost).
The 3090's at 31.96 GB/s/TFLOP (base) and 26.31 GB/s/TFLOP (boost).
The 3090 Ti's at 30.04 GB/s/TFLOP (base) and 25.2 GB/s/TFLOP (boost).

And landing in a range of high 20s to ~30 GB/s/TFLOP kind of worked out.

*How did I pull that number out of nowhere? Yeah, it's kind of arbitrary. One night I randomly stumbled across this page. I figured, OK, an 8700K @ 3.7 GHz maxed out at close to 40 GB/s. That's 6c/12t Skylake. An octa-core A78C setup clocked in the mid-to-high 1 GHz range probably shouldn't even reach half that in the most RAM-intensive scenario (one that's still realistic, as opposed to deliberately hammering RAM). Thus, an allocation of 20 GB/s for the CPU.
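Here's a quick sketch reproducing the arithmetic above, assuming 1536 CUDA cores (12 SMs × 128) and 128-bit LPDDR5-6400 at 102.4 GB/s; the 20 GB/s CPU reservation is just the arbitrary allocation from the footnote.

```python
# Bandwidth-per-TFLOP arithmetic from the post above. Assumes 1536 CUDA cores
# (12 SMs x 128) and 128-bit LPDDR5-6400 (102.4 GB/s); the 20 GB/s CPU
# reservation is the footnote's arbitrary allocation.
CUDA_CORES = 1536
TOTAL_BW = 102.4   # GB/s
CPU_BW = 20.0      # GB/s reserved for the CPU

def fp32_tflops(clock_mhz: float) -> float:
    return CUDA_CORES * 2 * clock_mhz * 1e6 / 1e12

for clock in (896, 1024):
    t = fp32_tflops(clock)
    print(f"{clock} MHz: {t:.3f} TFLOPs, "
          f"{TOTAL_BW / t:.2f} GB/s/TFLOP (GPU alone), "
          f"{(TOTAL_BW - CPU_BW) / t:.2f} GB/s/TFLOP (after CPU)")
# 896 MHz: 2.753 TFLOPs, 37.20 GB/s/TFLOP (GPU alone), 29.94 GB/s/TFLOP (after CPU)
# 1024 MHz: 3.146 TFLOPs, 32.55 GB/s/TFLOP (GPU alone), 26.19 GB/s/TFLOP (after CPU)
```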
 
"Now let’s get to the new metrics and the beautiful new curves. While the power consumption of the GeForce RTX 2080 Ti with and without DLSS remains almost the same over the duration of the benchmark, the GPU’s curve with activated limiter fluctuates extremely, but is visibly far below the other two. This already indicates an extremely reduced load and thus also power consumption."
I think Igor Wallossek's talking about the behaviour after enabling Nvidia's frame rate limiter, which is certainly impressive, although not necessarily surprising, as far as power consumption is concerned.

Most of the graphs seem to show that the RTX 2080 Ti with DLSS enabled consumes a similar amount of power as the RTX 2080 Ti without DLSS enabled. But of course, the RTX 2080 Ti with DLSS enabled has at least considerably higher frame rates in comparison to the RTX 2080 Ti without DLSS enabled and the RTX 2080 Ti with DLSS and Nvidia's frame rate limiter enabled.

Anyway, I was talking about future iterations of DLSS when asking that rhetorical question, especially if there are grains of truth to the rumours from RedGamingTech about improved ray tracing performance being the focus for DLSS 3.0 (here and here), although I recommend taking what RedGamingTech said with a very healthy grain of salt.
 
I think Igor Wallossek's talking about the behaviour after enabling Nvidia's frame rate limiter, which is certainly impressive, although not necessarily surprising, as far as power consumption is concerned.

Most of the graphs seem to show that the RTX 2080 Ti with DLSS enabled consumes a similar amount of power as the RTX 2080 Ti without DLSS enabled. But of course, the RTX 2080 Ti with DLSS enabled has at least considerably higher frame rates in comparison to the RTX 2080 Ti without DLSS enabled and the RTX 2080 Ti with DLSS and Nvidia's frame rate limiter enabled.

Anyway, I was talking about future iterations of DLSS when asking that rhetorical question, especially if there are grains of truth to the rumours from RedGamingTech about improved ray tracing performance being the focus for DLSS 3.0 (here and here), although I recommend taking what RedGamingTech said with a very healthy grain of salt.
Yeah RGT lives on a salt mine as far as I'm concerned.
 
Drake already has power management and temperature readings that would prevent this; some employee heating something in the break room is not going to have all those safety features.
But what if the thing the employee was heating up in the break room was a Drake..?
i'm kidding
 
What's the expectation for storage? To market the OLED model as premium, Nintendo already moved from 32 GB to 64 GB, so I'd bet on the Pro/2 having at least 128 GB.

Is there a big difference in cost between 128 GB and 256 GB modules?
 
LOL, I'll put it this way: I'm at least half as excited about the idea as this kid. I think the minimum we've been talking about, ~1.4 TFLOPs portable and 2.36 TFLOPs docked, should be the expectation, and could be reached even at 8nm IMO. But yes, if it's 4nm TSMC because Nvidia ordered too much capacity for a node again and Nintendo saves the day AGAIN, then yeah, clocks could be 50% higher at minimum.
But can Nintendo/Nvidia realistically swap the silicon at such a late stage of development? Naive question; I don't know much about hardware design and fabrication processes.
 
I've been setting myself up for an upper end (for docked) of an easy 896-1024 MHz if it's on N5/N4 with 128-bit LPDDR5 RAM. (How did I get those exact numbers? It's 768*7/6 and 768*8/6; paired with 6 times the SMs, that's 7x and 8x the raw grunt.)
Why am I settling around there?
Converted to TFLOPs, we get 2.752-3.145 TFLOPs.
128-bit LPDDR5 RAM means a bandwidth of 102.4 GB/s.
The max theoretical bandwidth/TFLOPs ratio (when the CPU is using 0, which is unrealistic, but hey, it's an upper bound) is 37.2 GB/s/TFLOP for the 896 MHz scenario and 32.55 GB/s/TFLOP for the 1024 MHz scenario.
If the CPU were to use up to something like... I dunno, 20 GB/s*, leaving 82.4 GB/s for the GPU? Then for the 896 MHz scenario it's 29.94 GB/s/TFLOP, while for 1024 MHz it's 26.2 GB/s/TFLOP.

Looking at how Nvidia balanced the desktop RTX 30 series...
The 3050's at 28.19 GB/s/TFLOP (base) and 24.62 GB/s/TFLOP (boost).
The 3060's at 38.04 GB/s/TFLOP (base) and 28.26 GB/s/TFLOP (boost).
The 3060 Ti's at 32.66 GB/s/TFLOP (base) and 27.66 GB/s/TFLOP (boost).
The 3070's at 25.36 GB/s/TFLOP (base) and 22.05 GB/s/TFLOP (boost).
The 3070 Ti's at 31.31 GB/s/TFLOP (base) and 28.02 GB/s/TFLOP (boost).
The 3080 (original 10 GB) is at 30.32 GB/s/TFLOP (base) and 25.53 GB/s/TFLOP (boost).
The 3080 (12 GB) is at 40.39 GB/s/TFLOP (base) and 29.76 GB/s/TFLOP (boost).
The 3080 Ti's at 32.62 GB/s/TFLOP (base) and 26.74 GB/s/TFLOP (boost).
The 3090's at 31.96 GB/s/TFLOP (base) and 26.31 GB/s/TFLOP (boost).
The 3090 Ti's at 30.04 GB/s/TFLOP (base) and 25.2 GB/s/TFLOP (boost).

And landing in a range of high 20s to ~30 GB/s/TFLOP kind of worked out.

*How did I pull that number out of nowhere? Yeah, it's kind of arbitrary. One night I randomly stumbled across this page. I figured, OK, an 8700K @ 3.7 GHz maxed out at close to 40 GB/s. That's 6c/12t Skylake. An octa-core A78C setup clocked in the mid-to-high 1 GHz range probably shouldn't even reach half that in the most RAM-intensive scenario (one that's still realistic, as opposed to deliberately hammering RAM). Thus, an allocation of 20 GB/s for the CPU.

Yeah, I've been thinking along similar lines. The desktop cards typically have bandwidth around 2 times the texels/second number @ base clock. Switch's 16 TMUs at 768 MHz were the same.
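A quick sketch of that rule of thumb, checked against the current docked Switch (16 TMUs at 768 MHz, 25.6 GB/s of LPDDR4 bandwidth):

```python
# "Bandwidth ~= 2x texel rate" rule of thumb, checked against the current docked
# Switch: 16 TMUs at 768 MHz, 25.6 GB/s of LPDDR4 bandwidth.
def gtexels_per_sec(tmus: int, clock_mhz: float) -> float:
    return tmus * clock_mhz * 1e6 / 1e9

texrate = gtexels_per_sec(16, 768)
print(f"{texrate:.1f} GTexels/s -> rule of thumb: {2 * texrate:.1f} GB/s (actual: 25.6 GB/s)")
# 12.3 GTexels/s -> rule of thumb: 24.6 GB/s (actual: 25.6 GB/s)
```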
 
But can Nintendo/Nvidia realistically swap the silicon at such a late stage of development? Naive question, I don't know much about hardware design and fabrication processes.
Nvidia usually has a pretty good turnaround of about 2 years per product launch, and T239 has been in testing for at least a year and a half now. So yeah, if the power consumption issue existed, say, last spring when Brainchild brought up that it was in testing, a move to a smaller node could have easily happened by now, and certainly before mass production begins in the next 2 or 3 months (speculation on my part, but I do believe now that it will launch with Zelda in mid/late spring 2023).
 
Anyway, I was talking about future iterations of DLSS when asking that rhetorical question, especially if there are grains of truth to the rumours from RedGamingTech about improved ray tracing performance being the focus for DLSS 3.0 (here and here), although I recommend taking what RedGamingTech said with a very healthy grain of salt.
Given that DLSS is an anti-aliasing technique, I wonder if RGT watched this video and came up with DLSS 3.0 being RT-related. This uses ray tracing to reduce aliasing.


 
Maxwell is far better per FLOP than Ampere: 700 GFLOPs Maxwell is like ~900 GFLOPs Ampere, so ~700 GFLOPs Ampere would be closer to 550 GFLOPs Maxwell, basically the current docked Switch, which just doesn't make sense for the technology we are talking about. However, if your expectations are this low, you can only be positively surprised later.
My thought is that the new Switch can do docked clocks while in handheld mode.
 
Nvidia usually has a pretty good turnaround of about 2 years per product launch, and T239 has been in testing for at least a year and a half now. So yeah, if the power consumption issue existed, say, last spring when Brainchild brought up that it was in testing, a move to a smaller node could have easily happened by now, and certainly before mass production begins in the next 2 or 3 months (speculation on my part, but I do believe now that it will launch with Zelda in mid/late spring 2023).

I’ve had the feeling for a while now that Zelda won’t just get a resolution/frame rate boost but that it may actually feature greater graphical fidelity. Nothing too outlandish but enough for people to think ‘oh that’s a nice improvement’. It would be a great showcase for the new hardware.
 
I mean docked performance of the previous/current Switch while the new Switch is in handheld mode.
This is what I expected before the Nvidia leak. It makes sense to just borrow the docked profile and reuse it for the portable, but... with the Nvidia leak this is a very, very, very low expectation. It would need to be clocked lower than required for peak efficiency.
 
That's interesting. Do we have any idea whether other people (even insiders and such) received this gag order, or is it specific to him and other people at the factory? The latter would possibly imply the decision is hardware-related.
 
Nvidia usually has a pretty good turnaround of about 2 years per product launch, and T239 has been in testing for at least a year and a half now. So yeah, if the power consumption issue existed, say, last spring when Brainchild brought up that it was in testing, a move to a smaller node could have easily happened by now, and certainly before mass production begins in the next 2 or 3 months (speculation on my part, but I do believe now that it will launch with Zelda in mid/late spring 2023).
There's something I'd like to add to this and speculate on if I may.

I know that when we've previously talked about chip tape-out timelines and the amount of work a node shrink would take, we've been assuming Nvidia's typical ways of working. But I recently read an article about Nvidia using AI in chip design, and one of the articles I read specifically mentioned node shrinks; I can't find it now, so here's a similar one.


With Nvidia using AI to reduce the time it takes to design chips, it becomes more feasible, to me, for them to have shifted nodes. It's precisely the kind of work you would use AI for, despite the article not mentioning it explicitly.

In fact, this could have been tested or in development when Erista was shrunk to Mariko. If an early sample did exhibit unfavourable results and they were thinking of moving nodes, and either Samsung wants to sell 5nm capacity or Nvidia has excess TSMC 5nm capacity, then the burden in engineering time to shrink the existing chip, or even move across lithography, is reduced.

To summarise, I don't think the notion that they couldn't possibly move nodes a year before mass production holds anymore, given how chip design is changing.
 
My layman opinion:

Something has now changed to the point where further leaks from there would actually show meaty stuff.

Maybe production is starting.
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.