• Hey everyone, staff have documented a list of banned content and subject matter that we feel is not consistent with site values and doesn't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Hey, so this is just a theory, but I have a mystery that I think I just solved.

Nvidia secured a Samsung 5nm contract years ago, and they also secured a 5nm contract with TSMC... kopite7kimi has stated that Ada Lovelace is built only on TSMC's 5nm node... With Ada Lovelace coming this year, there is no more time for an Ampere Super series, so what is Nvidia using the Samsung 5nm process node they signed a contract for?

Drake is the answer. Nvidia secured TSMC 5nm not that long ago, and in the leak we can see that the codename changed from Dane to Drake (the confirmation of this is that Dane is a banned search word, at least as I remember from last week). There is nothing else Nvidia has Samsung's 5nm node to use for, and offering the node to Nintendo also solves the die size mystery we have been working with, as the die could literally be half the size of an 8nm die. It would also solve how this chip is so big and still able to be portable, since we have yet to see any evidence of Thraktor's working theory about disabling SMs (nothing is listed in the document for the GPU other than 12 SMs, which is odd if portable mode does disable some of them; that would be a huge issue for developers trying to make games for the platform).

Anyway, again, this is a theory, but I do think it is correct. I was 50:50 on a die shrink, but now I favor it. I think Nvidia had the same issue here as when they moved the desktop line from 20nm Maxwell (which never happened) to 16nm FinFET and offered the node to Nintendo (who they had worked with as early as 2014). This avoids a contract breach worth potentially hundreds of millions of dollars, just like it did then.
sounds good to me.
 
Hey, so this is just a theory, but I have a mystery that I think I just solved.

Nvidia secured a Samsung 5nm contract years ago, and they also secured a 5nm contract with TSMC... kopite7kimi has stated that Ada Lovelace is built only on TSMC's 5nm node... With Ada Lovelace coming this year, there is no more time for an Ampere Super series, so what is Nvidia using the Samsung 5nm process node they signed a contract for?

Drake is the answer. Nvidia secured TSMC 5nm not that long ago, and in the leak we can see that the codename changed from Dane to Drake (the confirmation of this is that Dane is a banned search word, at least as I remember from last week). There is nothing else Nvidia has Samsung's 5nm node to use for, and offering the node to Nintendo also solves the die size mystery we have been working with, as the die could literally be half the size of an 8nm die. It would also solve how this chip is so big and still able to be portable, since we have yet to see any evidence of Thraktor's working theory about disabling SMs (nothing is listed in the document for the GPU other than 12 SMs, which is odd if portable mode does disable some of them; that would be a huge issue for developers trying to make games for the platform).

Anyway, again, this is a theory, but I do think it is correct. I was 50:50 on a die shrink, but now I favor it. I think Nvidia had the same issue here as when they moved the desktop line from 20nm Maxwell (which never happened) to 16nm FinFET and offered the node to Nintendo (who they had worked with as early as 2014). This avoids a contract breach worth potentially hundreds of millions of dollars, just like it did then.
We still have no confirmation of Orin's node either, right? Could be that Orin is also on 5nm.
 
Damn, this would be a way more capable machine than most of us were anticipating...
So, would it be possible to hear something when fabrication begins, and which release time frame would it point to?
 
I think it's important to remember that a lot of what kopite7kimi said about the T239 was with respect to Orin. It's possible that includes the node.

As far as NateDrake's sources go, the node isn't something they need to know, so they could be regurgitating surface info, like "ampere = 8nm".
Yeah, and for those who think this way, it's also important to remember that Maxwell = 28nm wasn't true with Switch either.
 
It's a nice theory, but I wouldn't get too carried away with expectations for 5nm quite yet. Nate and kopite both indicated 8nm for this; that may have changed or just been inaccurate, but at the moment we don't have a concrete reason to believe it's not the case.
 
We still have no confirmation of Orin's node either, right? Could be that Orin is also on 5nm.
Not officially.

kopite7kimi still mentions that Orin is fabricated using Samsung's 8N process node.


I think kopite7kimi's uncertainty on whether or not Samsung's 8N process node is used is with respect to Drake (GA10F), not Orin.
 
We still have no confirmation of Orin's node either, right? Could be that Orin is also on 5nm.
Unlikely. The picture of Orin's SoC was pixel-counted, and it looks to be around ~400mm^2 with 21B transistors. That works out to ~50M transistors per mm^2, which indicates 8nm; 5nm would put the die at somewhere around half that size.
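The napkin math above can be sketched out like this; a minimal sketch, assuming the pixel-counted ~400mm^2 figure, and with density cutoffs that are my own rough assumptions rather than hard rules:

```python
# Rough napkin math for inferring a process node from die size and
# transistor count. The ~400 mm^2 figure is a pixel-counted estimate,
# and the density cutoffs below are loose assumptions, not hard rules.

def density_mtr_per_mm2(transistors: float, die_area_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors / die_area_mm2 / 1e6

def guess_node(density: float) -> str:
    # Very rough buckets based on densities of shipping chips.
    if density < 60:
        return "consistent with Samsung 8nm-class"
    if density < 110:
        return "consistent with a 7nm/5nm-class node"
    return "consistent with a leading-edge node"

orin = density_mtr_per_mm2(21e9, 400)  # ~52.5 MTr/mm^2
print(f"Orin: ~{orin:.1f} MTr/mm^2 -> {guess_node(orin)}")
```

The same arithmetic run at half the die area would land Orin in the 7nm/5nm bucket, which is the point being made above.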
 
Not officially.

kopite7kimi still mentions that Orin is fabricated using Samsung's 8N process node.


I think kopite7kimi's uncertainty on whether or not Samsung's 8N process node is used is with respect to Drake (GA10F), not Orin.

Ah okay, that makes sense as the way to read that, yeah.
Unlikely. The picture of Orin's SoC was pixel-counted, and it looks to be around ~400mm^2 with 21B transistors. That works out to ~50M transistors per mm^2, which indicates 8nm; 5nm would put the die at somewhere around half that size.
Interesting.


Well, this could then make sense as the reason why Drake has one 12SM GPC while Orin has two 8SM GPCs. A different microarchitecture would be kinda odd on the same node, but if they're able to fit things in differently on a different node, these different configurations make sense.
 
I think it would be a good idea for Nvidia to do what they can to get Nintendo on the best node possible to help promote their brand. Do you guys think they would be willing to foot a large part of the bill to get Nintendo on 5nm? I think the media hype behind the performance of a Switch 4K would do good things for their stock price and brand relevance. I've heard the word Tegra so many times because of the Switch lol. An even more powerful Switch upgrade than expected would make DLSS more likely to be used and marketed. But I don't know how big that bill would have to be, or if the tradeoff is worth it.
 
I think it would be a good idea for Nvidia to do what they can to get Nintendo on the best node possible to help promote their brand. Do you guys think they would be willing to foot a large part of the bill to get Nintendo on 5nm? I think the media hype behind the performance of a Switch 4K would do good things for their stock price and brand relevance. I've heard the word Tegra so many times because of the Switch lol. An even more powerful Switch upgrade than expected would make DLSS more likely to be used and marketed. But I don't know how big that bill would have to be, or if the tradeoff is worth it.
Nvidia is the leader in AI, graphics, compute, etc. I don't think they care about this. A lot of people don't even know the Switch uses an Nvidia chip. Now, Nvidia might be talking to Nintendo about moving to a new architecture, as that does help them standardize their feature set and spread DLSS usage among developers.
 
I’m thinking that Nvidia nudged Nintendo to use a more modern node so that this doesn’t become a repeat of the 20nm “fiasco”.

Perhaps Dane at 8nm wasn't good enough for DLSS, ray tracing, and pushing geometry/shaders in docked mode while being way underpowered in undocked mode (no DLSS or RT while having a short battery life?)

It would make a ton of sense to redesign Dane for 5nm while adding many more SMs, TCs, and so on.

Really hope it’s that way
 
I think it would be a good idea for Nvidia to do what they can to get Nintendo on the best node possible to help promote their brand. Do you guys think they would be willing to foot a large part of the bill to get Nintendo on 5nm? I think the media hype behind the performance of a Switch 4K would do good things for their stock price and brand relevance. I've heard the word Tegra so many times because of the Switch lol. An even more powerful Switch upgrade than expected would make DLSS more likely to be used and marketed. But I don't know how big that bill would have to be, or if the tradeoff is worth it.

Nvidia is the leader in AI, graphics, compute, etc. I don't think they care about this. A lot of people don't even know the Switch uses an Nvidia chip. Now, Nvidia might be talking to Nintendo about moving to a new architecture, as that does help them standardize their feature set and spread DLSS usage among developers.

Nintendo is probably Nvidia's biggest single customer right now. I'm guessing they've rarely sold 100M units of any single product before (though Erista and Mariko are not exactly the same chip), so they know this will be a big part of their business going forward.

Maybe not to the extent of footing the bill but they probably have a large stake in the design and development of this.
 
Nintendo is probably Nvidia's biggest single customer right now. I'm guessing they've rarely sold 100M units of any single product before (though Erista and Mariko are not exactly the same chip), so they know this will be a big part of their business going forward.

Maybe not to the extent of footing the bill but they probably have a large stake in the design and development of this.
Oh, right, that is true. I was just saying that Nvidia probably doesn't care that the Nintendo Switch can't run certain games, or about that affecting their image. That's very unlikely. But yeah, Nvidia probably has a large stake in the design and in helping Nintendo adopt a modern feature set and DLSS. Jensen even said that the Switch (and gaming laptops) changed their business profile. So Nintendo partnership/design wins aren't something Nvidia will take for granted, that's for sure.
 
Oh, right, that is true. I was just saying that Nvidia probably doesn't care that the Nintendo Switch can't run certain games, or about that affecting their image. That's very unlikely. But yeah, Nvidia probably has a large stake in the design and in helping Nintendo adopt a modern feature set and DLSS. Jensen even said that the Switch (and gaming laptops) changed their business profile. So Nintendo partnership/design wins aren't something Nvidia will take for granted, that's for sure.
It's more about adding to their image even more, and protecting their investment and relationship with Nintendo as well. Just because you don't need or care about something doesn't mean it can't be very useful. It's all about cost-benefit analysis.
 
Matt mentioned around the same time that it was a Day 0 buy for both handheld and docked players.

Imran mentioned it’d be more of a Pro than a successor - more for framerates and resolutions than anything else. He also said the following when the Bloomberg "4K Switch 2021" article dropped (pre-OLED announcement).

Great memory - I forgot about that! More fuel for the 2022 train.
 
NVIDIA sounded pretty proud that the Tegra X1 would be in a new Nintendo console, and about a partnership with Nintendo "that will last at least 20 years",
so I am pretty sure that Nvidia is very active when it comes to new Nintendo hardware, and that they are very actively pushing Nintendo towards new Nvidia hardware, technologies, processes, and features.
 
All this 2022 talk, but Nintendo still hasn't even met demand for the OLED model. I'm not a business strategist, but how can they possibly release another revision without being able to meet demand for their latest one? I'm not opposed to a 2022 release, but I really think it isn't a likely scenario.
 
All this 2022 talk, but Nintendo still hasn't even met demand for the OLED model. I'm not a business strategist, but how can they possibly release another revision without being able to meet demand for their latest one? I'm not opposed to a 2022 release, but I really think it isn't a likely scenario.
The OLED model is competing with the base model and the Lite model for SoCs; this one will have its own supply of SoCs. We don't really know the bottleneck right now either; it could be that this new device doesn't have the same bottleneck.

Also, the main reason for the slim OLED supply over the past few months has been shipping issues; I don't think there's a huge issue with production. They're still selling like 40k of them per week in Japan.
 
Just to frame expectations better, what prerequisites must be fulfilled (in terms of clock speeds, cache, node, and RAM) for Drake to be as powerful as a GTX 960/GTX 1050 Ti, and is there a realistic scenario in which it can match their rasterization performance?

For the record, here are the specs:

GeForce GTX 960
TSMC 28 nm
Die size 398mm2
1280 cores
Base clock core 924 MHz
4096 MB RAM GDDR5
120 GB/s bandwidth
Bus width 192 bit
L2 cache size 1.5 MB
2.3 TFlops
145 W

GeForce GTX 1050 Ti
Samsung 14 nm FinFET
Die size 132mm2
768 cores (6 SM)
Base core clock 1290 MHz
4096 MB RAM GDDR5
Bus width 128 bit
L2 cache size 1MB
1.9 TFlops
75 W

If Drake is anything like these, then we are in for a ride. A good one I might add.
 
Just to frame expectations better, what prerequisites must be fulfilled (in terms of clock speeds, cache, node, and RAM) for Drake to be as powerful as a GTX 960/GTX 1050 Ti, and is there a realistic scenario in which it can match their rasterization performance?

For the record, here are the specs:

GeForce GTX 960
TSMC 28 nm
Die size 398mm2
1280 cores
Base clock core 924 MHz
4096 MB RAM GDDR5
120 GB/s bandwidth
Bus width 192 bit
L2 cache size 1.5 MB
2.3 TFlops
145 W

GeForce GTX 1050 Ti
Samsung 14 nm FinFET
Die size 132mm2
768 cores (6 SM)
Base core clock 1290 MHz
4096 MB RAM GDDR5
Bus width 128 bit
L2 cache size 1MB
1.9 TFlops
75 W

If Drake is anything like these, then we are in for a ride. A good one I might add.
We don't know about RAM or clocks, but it appears the core count (1536) and cache size (2MB I think?) for Drake are higher than both of those.
 
[...] and we still know the codename changed over the past two years, as a banned search word from the data leak is Dane, which was identified as T239 by kopite7kimi around this time last year. It will be interesting to see this chip revealed.
To reiterate, there's a single file, out of several files that forbid strings like "t239" and "drake," that also forbids "dane." That's afaict the only place that mentions it in all 75 GB.

Personally, I'm not sure that a change in process node would cause a change in codename (while also not changing the name T239). The chip in the original Switch is only known as T210 or GM20B, with the distinction between Erista and Mariko seemingly not present in the leak at all, along with most other information on the "Nintendo side" of things. Also, it's my understanding that T239 refers to the design of the chip while GA10F refers to the GPU implementation, so if anything a reconfiguration of the same chip design should just lead to a GA10G GPU, whereas T239/Drake wouldn't change.

This is all guesswork though. And I understand the size/power consumption reasons to theorize about a smaller process node.
 
That’s where I’m at as well. I don’t see Nintendo marketing this as a new generation, nor will they treat it like one. Not when they’re putting out evergreen titles like Switch Sports in a month, and DLC for MK8 over the next 18 months.
They could do like last time, when they wanted to pretend they weren't releasing a successor to a strong performer and called it a third pillar.
 
That's what he flat out says around the 9 min 30 sec mark. He thinks it's going to be called a Switch 4K that's a revision and not a new gen; it won't be BC with everything, but they'll try to patch as many games as possible.
A revision that plays less games than base Switch is the most worthless concept for a console I've ever heard.
 
I’m thinking that Nvidia nudged Nintendo to use a more modern node so that this doesn’t become a repeat of the 20nm “fiasco”.

Perhaps Dane at 8nm wasn’t good enough for DLSS, ray tracing, and pushing geometry/shaders in docked mode while being way underpowered in undocked mode (no DLSS or RT while having a short battery life?)

It would make a ton of sense to redesign Dane for 5nm while adding many more SMs, TCs, and so on.

Really hope it’s that way
Nintendo wouldn't have any bearing on the node. Nvidia makes the chips and Nintendo buys them. This time, Nintendo has more say in what goes into the chip, but Nvidia still chooses the node, most likely.
Just to frame expectations better, what prerequisites must be fulfilled (in terms of clock speeds, cache, node, and RAM) for Drake to be as powerful as a GTX 960/GTX 1050 Ti, and is there a realistic scenario in which it can match their rasterization performance?

For the record, here are the specs:

GeForce GTX 960
TSMC 28 nm
Die size 398mm2
1280 cores
Base clock core 924 MHz
4096 MB RAM GDDR5
120 GB/s bandwidth
Bus width 192 bit
L2 cache size 1.5 MB
2.3 TFlops
145 W

GeForce GTX 1050 Ti
Samsung 14 nm FinFET
Die size 132mm2
768 cores (6 SM)
Base core clock 1290 MHz
4096 MB RAM GDDR5
Bus width 128 bit
L2 cache size 1MB
1.9 TFlops
75 W

If Drake is anything like these, then we are in for a ride. A good one I might add.
The funny thing is we were expecting sub-1050 Ti performance before this leak. Now it's gonna exceed it, quite easily.
 
Nintendo wouldn't have any bearing on the node. Nvidia makes the chips and Nintendo buys them. This time, Nintendo has more say in what goes into the chip, but Nvidia still chooses the node, most likely.

The funny thing is we were expecting sub-1050 Ti performance before this leak. Now it's gonna exceed it, quite easily.
The funny thing is that pre leak, this thread has been tsuw (wust backwards).
 
With what is speculated to be in this new console, would it fit inside the current shell, or would the shell need to be expanded? If it needs a bigger shell, could we expect a bigger OLED screen, or would we get the same screen as the current Switch OLED and go back to having large bezels again?
 
We don't know about RAM or clocks, but it appears the core count (1536) and cache size (2MB I think?) for Drake are higher than both of those.
Thanks. For the longest time, I had assumed that the equivalent of a 750 Ti would be the absolute maximum to expect for the succ, even after we realized that DLSS required some level of raw power to be usable at all.

The funny thing is we were expecting sub-1050 Ti performance before this leak. Now it's gonna exceed it, quite easily.
...what would be the formula (napkin-calculation style) that would take into account the clock speeds, SM counts, thermals, power consumption, and node, and spit out a TFlops figure at the end? We could make tables, and thus frame expectations accordingly.
 
To reiterate, there's a single file, out of several files that forbid strings like "t239" and "drake," that also forbids "dane." That's afaict the only place that mentions it in all 75 GB.

Personally, I'm not sure that a change in process node would cause a change in codename (while also not changing the name T239). The chip in the original Switch is only known as T210 or GM20B, with the distinction between Erista and Mariko seemingly not present in the leak at all, along with most other information on the "Nintendo side" of things. Also, it's my understanding that T239 refers to the design of the chip while GA10F refers to the GPU implementation, so if anything a reconfiguration of the same chip design should just lead to a GA10G GPU, whereas T239/Drake wouldn't change.

This is all guesswork though. And I understand the size/power consumption reasons to theorize about a smaller process node.
I'm not entirely sure how it works, but T239 was in the design phase when the codename changed. We also had something similar happen with Parker (TX2), which was supposed to be just Denver cores without A57 cores; it was also stated to be Maxwell on 16nm (which didn't exist), then changed to Pascal and added the A57 cores, but kept the codename Parker.

So it is a bit of a mystery, as TX1 was an offshoot of Logan (TK1), while TX2 always existed in some form before TX1.
 
With what is speculated to be in this new console, would it fit inside the current shell, or would the shell need to be expanded? If it needs a bigger shell, could we expect a bigger OLED screen, or would we get the same screen as the current Switch OLED and go back to having large bezels again?
We don't know. If this is on 8nm it might be tight; the die is probably about 50% bigger than Mariko's. But still possibly doable. If it's on 5nm, then yeah, it should definitely fit in the same shell.

IIRC the OLED Switch is a few millimeters wider than the original Switch, and this could probably be a few mm wider than that without breaking compatibility with accessories any more than the OLED does (i.e. Labo). The OLED dock also has about a millimeter of extra clearance in the thickness direction, so this could theoretically be a few mm thicker and still fit in there too.
 
From the leaks, it seems it's more likely we are getting 75% of a laptop RTX 3050, matching almost perfectly except for Drake having fewer ROPs but more cache.
The A2000 shows that Nvidia can get really close to the 3050 on just 70W, which is crazy. Going lower in voltage will probably produce some crazy efficiency numbers.

...what would be the formula (napkin-calculation style) that would take into account the clock speeds, SM counts, thermals, power consumption, and node, and spit out a TFlops figure at the end? We could make tables, and thus frame expectations accordingly.
there were some measures posted earlier in the thread, i'll see if I can find it

EDIT: here's one

I assume that yes, this is the number of GFLOPs for Drake at each base clock multiplier.

base clock: 235.93
x2: 471.85
x3: 707.78
x4: 943.71
x5: 1179.64
x6: 1415.57
x7: 1651.50
x8: 1887.43
x9: 2123.36
x10: 2359.29
x11: 2595.22
x12: 2831.15
x13: 3067.08

Personally, I think they will go with x4 and x8 multipliers.
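For what it's worth, the list above is consistent with a ~76.8 MHz base step (the clock granularity the original Switch's Tegra X1 used; treating that as an assumption here) times 2 FLOPs per core per cycle across 1536 cores. A quick sketch that regenerates it (rounding in the last digit may differ slightly from the figures quoted):

```python
# Regenerate the GFLOPs-per-multiplier table above, assuming Drake's
# 1536 CUDA cores and a 76.8 MHz base step. The 76.8 MHz step is an
# assumption borrowed from the original Switch's clock granularity,
# not a leaked figure.

CUDA_CORES = 1536
BASE_STEP_MHZ = 76.8

def gflops(multiplier: int) -> float:
    # FP32 throughput: 2 FLOPs per core per cycle (FMA), in GFLOPs.
    return 2 * CUDA_CORES * BASE_STEP_MHZ * multiplier / 1000

for m in range(1, 14):
    print(f"x{m}: {gflops(m):.2f} GFLOPs")
```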


Weren’t there old reports that the Shield TV’s X1 GPU couldn’t go past 1GHz because of thermal problems?
 
the A2000 shows that Nvidia can get really close to the 3050 on just 70W, which is crazy. going lower in voltage will probably produce some crazy efficiency numbers


there were some measures posted earlier in the thread, i'll see if I can find it

EDIT: here's one
If we look at the power consumption of the laptop RTX 3050 at different TGP configurations, it's interesting:


TGP (W):           35     40     45     50     60     70     80
Base clock (MHz):  713    938    1065   1178   1238   1403   1530
Boost clock (MHz): 1058   1223   1343   1455   1500   1635   1740

There seem to be diminishing returns going from 40W down to 35W, as the clock speed drop is huge vs. the other steps. This is with GDDR6, vs. what would be LPDDR5 in the new Switch. Throw in the possibility of using Samsung 5LPE to manufacture Drake, and I believe we could see something really impressive from this new Switch.
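One simple way to quantify those diminishing returns is base-clock MHz per watt at each TGP step, using the figures quoted above; a quick sketch:

```python
# Base clock (MHz) per watt for the laptop RTX 3050 at each TGP step,
# using the figures quoted above. Efficiency peaks in the middle of
# the range and falls off at both the low and high ends.

tgp_to_base_mhz = {35: 713, 40: 938, 45: 1065, 50: 1178,
                   60: 1238, 70: 1403, 80: 1530}

def mhz_per_watt(tgp: int) -> float:
    return tgp_to_base_mhz[tgp] / tgp

for tgp in sorted(tgp_to_base_mhz):
    print(f"{tgp} W: {mhz_per_watt(tgp):.1f} MHz/W")
```

The 45-50W steps come out most efficient by this metric, which matches the observation that the drop from 40W to 35W is unusually steep.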
 
...what would be the formula (napkin-calculation style) that would take into account the clock speeds, SM counts, thermals, power consumption, and node, and spit out a TFlops figure at the end?
The formula is just 2 x clock x number of CUDA cores. Drake has 12 SM with 128 cores each = 1536 total.

Twice that is 3,072. So, for Drake, just multiply the clock by 3,072 (or just 3k for a quick rough number, like 400MHz => 1.2 TF, 1GHz => 3 TF, 1.2GHz => 3.6 TF).
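That formula in code form, as a minimal sketch; the 1536-core count is from the leak, while the clocks below are illustrative guesses rather than leaked figures:

```python
# FP32 TFLOPs = 2 FLOPs per core per cycle (FMA) * cores * clock.
# Drake's 1536 CUDA cores come from the NVN2 leak; the clocks below
# are illustrative guesses, not leaked figures.

DRAKE_CUDA_CORES = 1536

def tflops(clock_ghz: float, cores: int = DRAKE_CUDA_CORES) -> float:
    return 2 * cores * clock_ghz / 1000

for clock in (0.4, 0.768, 1.0, 1.2):
    print(f"{clock:.3f} GHz -> {tflops(clock):.2f} TFLOPs")
```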
 
Damn, this would be a way more capable machine than most of us were anticipating...
So, would it be possible to hear something when fabrication begins, and which release time frame would it point to?
Potentially, yes, you could hear when fabrication begins, and that generally points to when a device is releasing.
Timeframes:
  • If you hear nothing by May 2022, then a release at the end of the year becomes less likely.
  • If you hear nothing by around October, then an early 2023 release is less likely.
 
If we look at the power consumption of the laptop RTX 3050 at different TGP configurations, it's interesting:


TGP (W):           35     40     45     50     60     70     80
Base clock (MHz):  713    938    1065   1178   1238   1403   1530
Boost clock (MHz): 1058   1223   1343   1455   1500   1635   1740

There seem to be diminishing returns going from 40W down to 35W, as the clock speed drop is huge vs. the other steps. This is with GDDR6, vs. what would be LPDDR5 in the new Switch. Throw in the possibility of using Samsung 5LPE to manufacture Drake, and I believe we could see something really impressive from this new Switch.
TechPowerUp's testing for the A2000 is very illuminating. When chips are selected for low power, they can go really low while matching the 3050.

[TechPowerUp charts: relative performance at 1920x1080, gaming power draw, and clock vs. voltage]


Clock Frequencies & Voltage

State                     GPU Clock                  Memory Clock           GPU Voltage
Idle                      210 MHz                    101 MHz                0.652 V (0.650 to 0.656 V)
Multi-Monitor             210 MHz                    101 MHz                0.656 V
Video Playback            227 MHz (210 to 727)       105 MHz (101 to 203)   0.662 V
Furmark                   826 MHz (772 to 1200)      1500 MHz               0.653 V (0.650 to 0.725 V)
Gaming (Cyberpunk 2077)   1265 MHz (1237 to 1282)    1500 MHz               0.725 V (0.718 to 0.731 V)
V-Sync (Cyberpunk 2077)   1342 MHz (1327 to 1357)    1500 MHz               0.750 V (0.743 to 0.756 V)
Gaming (23 Games)         1342 MHz (1192 to 1890)    1500 MHz               0.737 V (0.656 to 1.056 V)
 
Idle GPU voltage is probably an interesting measurement to look at, if we can find anything about it for 8nm Ampere. It would kinda indicate the minimum threshold for power savings when you lower the clocks.
 
All this 2022 talk, but Nintendo still hasn't even met demand for the OLED model. I'm not a business strategist, but how can they possibly release another revision without being able to meet demand for their latest one? I'm not opposed to a 2022 release, but I really think it isn't a likely scenario.
They haven't? The OLED model has been in stock on Amazon (shipped and sold by Amazon) for like the past 72 hours. And it's still in stock as of this post.

And all Best Buys in my area have them in stock.

It's nowhere near as hard to get a Switch OLED as it is a Series X or PS5.
 
They haven't? The OLED model has been in stock on Amazon (shipped and sold by Amazon) for like the past 72 hours. And it's still in stock as of this post.

And all Best Buys in my area have them in stock.

It's nowhere near as hard to get a Switch OLED as it is a Series X or PS5.
What region? It had been in extremely short supply basically everywhere from October to February.
 
Just to frame expectations better, what prerequisites must be fulfilled (in terms of clock speeds, cache, node, and RAM) for Drake to be as powerful as a GTX 960/GTX 1050 Ti, and is there a realistic scenario in which it can match their rasterization performance?

For the record, here are the specs:

GeForce GTX 960
TSMC 28 nm
Die size 398mm2
1280 cores
Base clock core 924 MHz
4096 MB RAM GDDR5
120 GB/s bandwidth
Bus width 192 bit
L2 cache size 1.5 MB
2.3 TFlops
145 W

GeForce GTX 1050 Ti
Samsung 14 nm FinFET
Die size 132mm2
768 cores (6 SM)
Base core clock 1290 MHz
4096 MB RAM GDDR5
Bus width 128 bit
L2 cache size 1MB
1.9 TFlops
75 W

If Drake is anything like these, then we are in for a ride. A good one I might add.
Well, Drake to our knowledge from the NVN2 driver leak is:
  • Node: Unknown (Samsung 8nm or 5nm)
  • Die size: Unknown (though likely <100mm^2 if on 5nm)
  • 1536 CUDA cores
  • Portable clock: at least 307 MHz, with 460 MHz as a boost option within power limits, because of the BC boost mode in portable mode
  • Docked clock: at least 768 MHz, for BC reasons
  • RAM: Unknown
    • Although it's highly likely to be 8GB of LPDDR5 at 88 GB/s or 102.4 GB/s
  • Cache
    • L1: Likely 2.3MB
    • L2: 4MB
  • Effective bandwidth: Likely ~200 GB/s, because of the large amount of on-die L1 and L2 cache and Ampere's memory-efficient uArch that Drake inherits
  • Bus width: 128-bit or 192-bit
  • TFLOPs:
    • You can't really compare TFLOPs across uArchs, but 3 TFLOP Drake should roughly match 4 TFLOP Polaris, which is why the 1GHz docked clock is a hopeful target: at 1GHz a GPU like this hits 3+ TFLOPs and therefore matches up to the PS4 Pro.
 
So, have we heard about or discussed this feature yet as something possibly being included?
It was definitely present in the A100 GPU, but I wonder if it will make its way over to Drake too...




"The NVIDIA Ampere architecture adds Compute Data Compression to accelerate unstructured sparsity and other compressible data patterns. Compression in L2 provides up to 4x improvement to DRAM read/write bandwidth, up to 4x improvement in L2 read bandwidth, and up to 2x improvement in L2 capacity."




:p
Likely a patch. It's not like DLSS needs an overhaul of an engine to get it working correctly, though I do wish more developers/publishers would tune for a proper balance of IQ/performance.
There does need to be an overhaul though; not a total or complete overhaul, but it's not like a drag-and-drop thing.

What if everyone is wrong and this isn’t a new Switch but rather a portable Wii U :shock:
The switch in portable mode is like the Wii U docked 😂

Also, regarding CPU cores, do you think 8 cores is now far more likely? We have plenty of examples of Samsung 5nm smartphones with octa-core setups (writing this on a Galaxy S21 with such a setup), and while I appreciate that phones run in bursts, 8 A78s at 1GHz should be doable on such an advanced node.
If it’s on the 5nm node, then it can run in the 1.9-2.4GHz range, not the 1GHz range; at 1GHz it might actually be wasting battery life.
This would be an enormous leap over the CPU in the current Switch, exciting if true. I wonder how close an 8-core A78C setup at 2GHz would get to the 8-core 3.5GHz setup in the PS5? Are we now talking somewhere around 50% of the power?
It would be around A14 Bionic range, I think.
We don't know about RAM or clocks, but it appears the core count (1536) and cache size (2MB I think?) for Drake are higher than both of those.
It’s 4MB!

Which is a lot.

It’s like its own mini Infinity Cache system (which is just more L3$ rebranded for marketing)!

And 1.5-2.3MB of L1$

Personally, I'm not sure that a change in process node would cause a change in codename (while also not changing the name T239). The chip in the original Switch is only known as T210 or GM20B, with the distinction between Erista and Mariko seemingly not present in the leak at all, along with most other information on the "Nintendo side" of things.
I wonder if the codename change has to do with the CPU cores being turned off. Maybe with Erista they (the A53s) were just turned off, but with Mariko they were outright disabled, aka fused off, plus the die shrink.

T210 has them, but T214 doesn’t have them at all.

Just a theory!

They could do like last time, when they wanted to pretend they weren't releasing a successor to a strong performer and called it a third pillar.
Or like the last time they pretended it was a new generation when it was, internally spec-wise, just a GameCube Pro 😝.

Nintendo has options for what they can do with this. Though as for the other convos going on about it, a platform can have a next-generation device without a clean break of the platform.
A revision that plays less games than base Switch is the most worthless concept for a console I've ever heard.
Right, that’s the thing that doesn’t make sense to me. A revision is, in essence, the base device, so it does what the base device does. If it’s a revision but it can’t do what the base device does, that’s not a revision; it’s just a copy of the original. A fraud (ok, harsh) on all accounts.

It would be the switch in concept only.

The funny thing is that pre leak, this thread has been tsuw (wust backwards).
The constant drive-bys in this thread, the last thread and the thread before that, plus the drive-by posts in other threads related to the subject, really gaslit us into believing the absolute worst, huh? 🤣

Hell, we had a few about if they would even utilize DLSS!
 
Fair, though Ada has been in the works for a while, and Nvidia had problems securing TSMC for Ampere and went with Samsung instead (at least, it was reported that AMD bought up all of TSMC's remaining 7nm capacity for the console launches before the pandemic chip shortage began).

I am leaning toward 5nm for the same reasons you mention, but I do think Nvidia probably held Samsung 5nm allocation in case they weren't able to secure TSMC. Also, moving away from Samsung with Ada is going to leave a hole in Samsung's chip manufacturing, where I'd expect Samsung had hoped to secure Ada on their process, so Drake could have just been a happy accident there. I do remember some power issues being brought up too, which would have been solved with 5nm. And we still know the codename changed over the past two years, as Dane is a banned data-leak search word, which Kopite7 identified as T239 around this time last year. It will be interesting to see this chip revealed.

I'd assume that Nvidia are in a much more flexible position with Samsung regarding allocation, but I do think how much demand Samsung has would impact the potential process. On the side of their EUV processes, the main question is whether Qualcomm are sticking with them for future Snapdragon chips, or if they're going to move back to TSMC. If they do, then Qualcomm will have a lot of free capacity starting next year, and I'm sure Nvidia would be able to strike a very good deal. On 8nm, Nvidia will obviously be scaling down GPU production on 8nm starting at the end of this year, so that's going to free up a lot of capacity there, which is really the best reason for Drake to stick with 8nm, as it's probably the only node where manufacturing capacity wouldn't be an issue. That said, Samsung probably have plans to replace some or all of those production lines with 3nm, etc., so perhaps they won't be quite as desperate to find clients for it as we think.

Of course, it's possible that none of this would have been known when they chose a manufacturing process. They could have made the decision in late 2019, before COVID or the chip shortage, and just have to deal with whatever the economic scenario is.

I don't think TSMC's N6 process node can be ruled out either, especially since I imagine demand for N6 isn't as ridiculously high as demand for N5.

Yeah, I wouldn't rule out TSMC N6 either, I was just listing N5 as a best case scenario. The only nodes I'd rule out at this point would be either TSMC's or Samsung's 4nm processes, as they're just a bit too new/risky/expensive.

Idle GPU voltage is probably an interesting measurement we should look at, if we can find anything about it on 8nm Ampere. It would kinda indicate the minimum threshold for power savings when you lower the clocks.

My (desktop) RTX 3070 idles at 681mV, and the A2000 idles a bit lower at 652mV. It could be that the A2000 is using a die that's binned to operate at lower voltages, or that the 3070 just doesn't drop down as low because it doesn't need to, but the idle voltage for 8nm Ampere should be in that range.
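To make the "minimum threshold for power savings" point concrete, here's a rough dynamic-power sketch using the standard P ∝ C·V²·f relation. The capacitance term is unknown, so only relative scaling is shown; the ~0.65–0.68V idle readings above act as the voltage floor:

```python
# Rough dynamic-power model: P ~ C * V^2 * f. Since C is unknown,
# we only compute power relative to a (v0, f0) baseline. Once the
# voltage hits its idle floor (~0.65-0.68 V per the readings above),
# further downclocking only saves power linearly, not ~cubically.
def relative_power(v, f, v0=1.0, f0=1.0):
    """Dynamic power relative to a (v0, f0) baseline operating point."""
    return (v / v0) ** 2 * (f / f0)

# Halving the clock while dropping from 1.0 V to a ~0.68 V floor:
print(round(relative_power(0.68, 0.5), 3))  # ~0.231, roughly a 4x saving
```

Past that floor, halving the clock again would only halve power, which is why the idle voltage is such an interesting number for handheld clock speculation.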
 
I was wrong about the TX1 revision -- there are a few references to T210/Erista vs. T214/Mariko in there (and a scant few to GM20B_B). Although, T214 is not just T210 with a die-shrunk GM20B, there were a number of other components that changed. I guess it's not impossible that Dane was the name for a previous version of T239 and that somehow changed while T239 stayed the same. But I wouldn't go so far as to say that's what the leaked files indicate.
 
Are you on the right link? This is what it's showing me right now.

Shipped and sold by Amazon. Plenty in stock.


Yeah, it's fine on my side as well. In general I don't think there's been much of a stock issue in the US for the last month or two. Even when Arceus launched it was still available, or available with a delayed shipment of a few days.
 