
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I just want to mention that I don't think die size is much of a relevant consideration when we talk about which process node Nintendo will choose for Drake. They could literally double the die size from Erista and that would only translate to an additional ~5mm (or 1/5th of an inch for the Americans) in chassis size in each direction, since doubling the area only scales each side by √2. This might be a consideration for smaller form factors such as phones, but for the Switch that would only correspond to a 3% increase in length and a 5% increase in height. A larger die would also benefit from increased passive cooling, so there's more to the story with regard to heat as well. I imagine the only major consideration for process node selection is going to be power consumption and the associated battery life implications.
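To make that arithmetic concrete, here's a quick back-of-envelope sketch; the die area and chassis dimensions are approximate figures I'm assuming for illustration, not official numbers:

```python
import math

# Back-of-envelope: how much would doubling the Erista (TX1) die grow the chassis?
# Assumed figures: TX1 die ~118 mm^2; Switch chassis ~239 mm long, ~102 mm tall.
die_area = 118.0                        # mm^2, approximate TX1 die size
side = math.sqrt(die_area)              # ~10.9 mm per side
side_doubled = math.sqrt(2 * die_area)  # doubling area scales each side by sqrt(2)
growth = side_doubled - side            # ~4.5 mm extra in each direction

chassis_len, chassis_h = 239.0, 102.0   # mm, approximate original Switch dimensions
print(f"extra per side: {growth:.1f} mm")                                   # 4.5 mm
print(f"length: +{growth / chassis_len:.1%}, height: +{growth / chassis_h:.1%}")
# -> roughly +2% length, +4.4% height: the same ballpark as the 3%/5% above
```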
It wouldn’t be that small of an increase unless you believe that the rest will actually decrease in size :p



[Image: the TX1 and TX1+ packages side by side]

This is the TX1 and the TX1+

If the chip is 50% bigger, the substrate it sits on will also be bigger

And if the substrate is bigger, the board is bigger, and if the board is bigger then the device will also be bigger in every direction (well, except depth)

Though the length would be OK, the height wouldn't really, as that would end up breaking Joy-Con compatibility, if they care about that.
 
I spent every ounce of effort to hold my tongue
I'll gladly carry the torch on that aspect. Not just out of personal preference, but because there is no evidence to support a larger chassis. They knew, years ago, what they would be aiming for with this. They wouldn't have been blindsided by the size of the SoC. They also know that the market is sensitive to aesthetics and portability, a Wii U failure they will not repeat. They have over a hundred million users they want to convert and every chance to make their new device a drop-in replacement.

I think people don't give Nintendo's engineers enough credit. That said, much of what I say comes from a marketing perspective. Capital-G Gamers might not care about how big the Steam Deck is, but you can bet Tom, Dick, and Harry do! The sleekness of the Nintendo Switch is part of its market appeal. It's just on the border of products that can fit in a small handbag. Much bigger and it loses appeal to children, to some women, and to lots of more "mobile" types, including, I think, a portion of the Japanese market.
 
For my simple mind, can someone make a list, from best to worst, of the process nodes that Drake could be on? My mind reels with all the 4N and N4 and Samsung this and TSMC that… I’m assuming the list would start with 3nm TSMC being best (but unlikely), and 8nm Samsung being worst, right? Where do all the 4Ns and N4s fit between there?
So there are basically 3 generations worth of suspects here.
Best: TSMC 4N, an Nvidia-specific variant within TSMC's N5 family, which is part of the 5 nm generation. We know that Nvidia has secured some supply of this node because of the Lovelace/RTX 40 series cards.
Middle: TSMC N6, that's a refinement/variant of TSMC's N7, which is part of the 7 nm generation (which is one generation behind 5 nm). The evidence of Nvidia having some supply here is the A100.
Worst: Samsung 8LPP, that's a refinement of Samsung's 10 nm node family (and one generation behind 7 nm, or two behind 5 nm). The RTX 30 cards as well as Orin are on this node.

(gee, why do I have to say which node is part of which generation despite it seeming obvious? Because of Samsung's 5LPE. That node is a refinement of Samsung's 7 nm node, so it's not a 5 nm generation process :p)
 
Question regarding backwards compatibility: if Nintendo went the route of including a TX1 in the Switch 2, so that when you play a Switch game the system reverts into 'OG Switch mode', would that then mean any type of boost mode or upgrade patches for Switch games would be impossible?
It would limit their options somewhat around having some types of middle ground modes, but there will only be barriers to full upgrade patches if they intentionally construct them.
 
For my simple mind, can someone make a list, from best to worst, of the process nodes that Drake could be on? My mind reels with all the 4N and N4 and Samsung this and TSMC that… I’m assuming the list would start with 3nm TSMC being best (but unlikely), and 8nm Samsung being worst, right? Where do all the 4Ns and N4s fit between there?
There are more possible nodes than you can shake a stick at, but most of them are just slightly tweaked versions of each other.

Samsung 8nm: the default. Orin and RTX 30's node. It's a long-lived node; it'll be around for a while, getting cheaper.

Samsung 7LPP/6LPP: this is highly unlikely, unless Samsung gave Nvidia a crazy deal. A "true" 7nm node; the 6nm version is just a variation.

TSMC 7nm: the wild card. Nvidia's data center Ampere chip (the A100) is here. TSMC is trying to retire it in favor of "6nm", which is really 7nm with a few extra bells and whistles.

Samsung 5nm LPE, 5nm LPP, 4nm LPE: Samsung’s 5nm nodes. LPE means “low power early” and “LPP” is “low power plus”. There is improvement across these things but they should be seen as fundamentally the same technology.

TSMC N5, N4, 4N: TSMC's 5nm nodes. 4N is Nvidia's modified version for their RTX 40 chips. Also a long-lived node, by design, but currently a near-bleeding-edge node.

3nm: again, both Samsung and TSMC are working on something here, and Nvidia is heavily rumored to be using it for RTX 50, though Nvidia is complaining about shouldering costs for developing immature nodes, and at least one person has claimed in my presence that they plan on staying on 5nm.

Short of something deeply wild happening, I think places where Nvidia has products are the only viable spots.
 
Formally, Samsung's 4LPE is actually separate from the 5LPE/5LPP nodes, at least as of last July, anyway.
[Image: Samsung Foundry process node roadmap chart]


Edit: Although, I should mention that in earlier versions of these roadmap charts, 4LPE was indeed originally listed as a further refinement of the 7LPP -> (6LPP, which has since disappeared) -> 5LPE/LPP line.
 
It wouldn’t be that small of an increase unless you believe that the rest will actually decrease in size :p […] if the substrate is bigger, the board is bigger, and if the board is bigger then the device will also be bigger in every direction (well, except depth)

Besides the SoC, the Switch has quite a few controllers on the board: power, peripherals, memory, display, Wi-Fi, etc. All of those may require quite a bit of circuitry and boilerplate. These are just guesses; I'm way out of my area here.

Could it be that Drake, being custom-made, would integrate some of those controllers, freeing some real estate on the board?
It would also simplify production and sourcing quite a bit, I believe.
 
So there are basically 3 generations worth of suspects here. […]

There are more possible nodes than you can shake a stick at, but most of them are just slightly tweaked versions of each other. […] Short of something deeply wild happening, I think places where Nvidia has products are the only viable spots.
Thanks! It does make sense that where Nvidia is involved, it’s more likely than others to be a candidate for Drake. I just gotta keep fingers crossed for TSMC 5nm. 🤞
 
Middle: TSMC N6, that's a refinement/variant of TSMC's N7, which is part of the 7 nm generation (which is one generation behind 5 nm). The evidence of Nvidia having some supply here is the A100.
And Nvidia mentioned during GTC 2021 (November 2021) datacentre products (BlueField-3, Quantum-2, ConnectX-7) that were fabricated using TSMC's 7N process node and were sampling at around late 2021 to early 2022.
To me, this is the sleeper hit. Both Ampere and ARM already build on it, there are no capacity or demand questions, unlike 5nm, and the density gain over 8nm is 80%. At that point, even the pessimistic numbers get us to modest clock increases and a modest improvement over launch Switch battery life. It's cheap, and it requires a relatively small amount of invested work.
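For a sense of scale on that density figure: a minimal sketch, assuming a purely hypothetical 8nm die size (not a known Drake number) and taking the ~80% claim at face value:

```python
# Die-area scaling from the claimed ~80% density gain of TSMC 7nm over Samsung 8nm.
# The 8nm starting area is a hypothetical placeholder, not a known Drake figure.
area_8nm = 200.0     # mm^2, assumed for illustration
density_gain = 1.8   # ~80% more transistors per mm^2, per the claim above

area_7nm = area_8nm / density_gain
print(f"{area_8nm:.0f} mm^2 on 8nm -> ~{area_7nm:.0f} mm^2 on 7nm")  # ~111 mm^2
```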
 
To me, this is the sleeper hit. Both Ampere and ARM already build on it, there are no capacity or demand questions, unlike 5nm, and the density gain over 8nm is 80%. At that point, even the pessimistic numbers get us to modest clock increases and a modest improvement over launch Switch battery life. It's cheap, and it requires a relatively small amount of invested work.
You mentioned cheap.

This is Nintendo we are talking about here. I'm sure they'd be happy to go with a smaller node as long as the cost is right.

Do we know what the cost differentials are on these various node options we are discussing here?
 
Question regarding backwards compatibility: if Nintendo went the route of including a TX1 in the Switch 2, so that when you play a Switch game the system reverts into 'OG Switch mode', would that then mean any type of boost mode or upgrade patches for Switch games would be impossible?
Not necessarily. You know that the Mariko Switch can be overclocked further, right? And the TX1 CPU is instruction-compatible with the A78 CPU. In that sense, Nintendo could just include the GPU part of the TX1, increase the clock of that GPU, and use the Drake CPU to give "boost" performance.
 
Besides the SoC, the Switch has quite a few controllers on the board: power, peripherals, memory, display, Wi-Fi, etc. All of those may require quite a bit of circuitry and boilerplate. These are just guesses; I'm way out of my area here.

Could it be that Drake, being custom-made, would integrate some of those controllers, freeing some real estate on the board?
It would also simplify production and sourcing quite a bit, I believe.
@ the bolded

Simply put, nah.

These aren’t things you can actually integrate into the SoC. Currently the Switch has the Wi-Fi and Bluetooth as one chip.
 
The quality of the port matters a lot on Switch. Panic Button's efforts are okay, but meager when compared to some other developers who work much closer to the metal, such as Feral Interactive and Iron Galaxy.

That was my point, yes. To me, Panic Button's merit (and that's a big one) was being the first to believe in porting those games to Switch.

Not true, because I couldn't functionally play it on my Switch.

If you argue that "you couldn't functionally play Doom on your Switch", well, you'd be wrong.

Everyone has a different threshold. Ark could be played from start to finish on Switch, even on release; I think it was a very bad port. I find Doom serviceable, but the frame rate cut in half, despite the huge visual compromises, impaired my enjoyment of the game. I find Witcher 3 great, with smart compromises where it doesn't matter too much and a roughly intact experience otherwise, especially when tuning the graphical options a bit.

The difference between you and me is that my original message was a personal opinion which I labeled as such, while your answer was your personal opinion presented as immutable truth.

I do not think that Doom is a great port.
 
Panic Button seems to have dropped off with their Switch output, and they were superseded by other devs who made better choices in what to cut, but they were among the first studios to show what could be done on Switch, and they deserve a lot of credit. I think the ability to play a relatively new high-end release like Doom 2016 was a very big revelation to the Switch userbase, which wasn't quite sure, even after launch, what sort of games we'd be getting on the platform.

They provided a template for other devs to try to top them, and many did. That made a lot of other ports possible.

Doom and Wolfenstein may be their highest-profile ports, but their work on Warframe is, I think, the most superb. I'm not sure if they are still involved or if Switch updates and support are being done in-house these days, but Warframe in 2023 looks even better than at launch. They've been slowly adding graphical features and improving performance with each major patch.
 
To me, this is the sleeper hit. Both Ampere and ARM already build on it, there are no capacity or demand questions, unlike 5nm, and the density gain over 8nm is 80%. At that point, even the pessimistic numbers get us to modest clock increases and a modest improvement over launch Switch battery life. It's cheap, and it requires a relatively small amount of invested work.
Depressing to think Nintendo would use TSMC 7nm, 3-4 years after Sony/MS.
 
While I understand people are making pessimistic predictions due to Nintendo's usual shenanigans, I'm really confident they actively want a powerful beast for the next hardware.

If Switch 1 is any indication, Switch 2 will have to carry an entire generation on its own, which means approximately 6-7 years until Switch 3 without a single power upgrade. It makes more sense for Nintendo to go all out on power now, since they won't upgrade it for a long time.
 
And Nvidia mentioned during GTC 2021 (November 2021) datacentre products (BlueField-3, Quantum-2, ConnectX-7) that were fabricated using TSMC's 7N process node and were sampling at around late 2021 to early 2022.

While these are Nvidia products now, all three of these would have started development under Mellanox prior to Nvidia's acquisition (which completed in April 2020), and the N7 process was almost certainly chosen before Nvidia's ownership of the company. We know from the Nvidia hack that Nvidia and Nintendo started work on T239 sometime between late 2019 and early 2020, which was also before Nvidia's acquisition of Mellanox completed, so if they did choose N7 for T239 it would likely have been little more than a coincidence that Mellanox also had products planned for the process.

While I don't think 4N is certain by any measure, it's worth noting that every single Nvidia product (excluding Mellanox products) planned for release from 2022 onwards is on 4N. This includes datacenter chips like Hopper and Grace, all the way down to entry-level Ada GPUs (keeping in mind that, although Jetson Orin shipped on 8nm in 2022, Orin itself had been shipping to automotive customers since 2021). The decision to move fully to 4N would have likely been made before work on T239 started, so it would be strange for Nvidia to decide that a mobile SoC, designed for a more power-limited use-case than any other Nvidia product, would be the sole exception to the decision to use this more power-efficient process.

The design of T239 also simply makes more sense from the point of view of a 4N manufacturing process. Many of us (including myself) were very surprised to learn the size of T239's GPU because we were expecting it to be manufactured on Samsung 8nm, and on 8nm it's a huge GPU for a Switch form-factor device, to the point where it's debatable if it's even viable. On 4N, though, a 12 SM GPU is perfectly reasonable. It's pretty much the sweet spot of performance and power consumption, allowing for a solid performance upgrade while keeping battery life in check, and the SoC would be compact (likely smaller than Mariko) and, for 4N, cheap.

I'm definitely not ruling out other processes, but with 4N reportedly cheaper on a per-transistor basis than Samsung 8nm as of last year, and Nvidia consolidating everything developed fully in-house onto the 4N process, I'm struggling to come up with good reasons why they wouldn't use 4N.
 
We've had the most Switch 2 activity since last year's leaks. I do wonder if it's just all people digging up the same stuff after the unverified Pokemon leak.
I am curious to see where this goes now. I still think Nintendo's strategic decision not to announce anything after July isn't an accident (granted, others have pointed out that it could simply be a matter of scheduling and timing), though I concede it may not be Switch 2 related, and they may just want to drop their H2 bombs all in one go in a separate Direct rather than teasing them 6+ months ahead of time.
 
This isn't what happened though. Just a few days after that post the leakers said it wasn't happening any time soon. The peaceful silence we're in now is very different from the mounting silence then.
It wasn't supposed to be a gotcha or anything
 
@Thraktor, you had posted some CPU perf ranges in 2021, and I am wondering if these are still current or if the A78 numbers are higher/lower.

So, that all said, single core Geekbench 5 figures for existing consoles should be:
Switch - A57 @1GHz - 140
PS4 - Jaguar @1.6GHz - 200
XBO - Jaguar @1.75GHz - 219
PS5 - Zen 2 @3.5GHz - 873

Hypothetical numbers for a new Switch with A78 cores at different clock speeds would then be:
A78 @1.2GHz - 390
A78 @1.6GHz - 520
A78 @2.0GHz - 650

I'd say 2GHz is unlikely, and only really plausible with 4 cores, but it gives you an idea of where the range of single-core performance is. Even with half as many cores, the new Switch could comfortably outperform the last-gen consoles on the CPU front, and my "best case scenario" of 6x A78 @1.6GHz (with a couple of A55s for the OS) would outperform them by over a factor of 2. Of course there's still a big gap to the PS5/XBSX, but if T239 being based on Orin means using A78 cores, then there's definitely scope for a very big CPU upgrade on the new model.

Doing some basic arithmetic, [REDACTED] is expected to have 8 cores to Switch's 4.

Assuming 1.2 GHz, the CPU performance leap is 390/140 × 8/4 = 5.6x, or 6.5x for games (390/140 × 7/3).
Assuming 1.6 GHz, the CPU performance leap is 520/140 × 8/4 = 7.4x, or 8.7x for games (520/140 × 7/3).

Performance at 1 GHz per core is 390 × 1/1.2 = 325.
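Here's that arithmetic as a quick sanity-check script; the 8-core/7-for-games split is this thread's assumption, and the Geekbench figures are the estimates quoted above:

```python
# Sanity check of the CPU-leap arithmetic above (Geekbench 5 single-core estimates).
# Assumptions: successor has 8 cores (7 for games); Switch has 4 (3 for games).
switch_score = 140                 # A57 @ 1.0 GHz
a78_scores = {1.2: 390, 1.6: 520}  # estimated A78 score by clock (GHz)

for clock, score in a78_scores.items():
    overall = score / switch_score * 8 / 4
    games = score / switch_score * 7 / 3
    print(f"A78 @ {clock} GHz: {overall:.1f}x overall, {games:.1f}x for games")

# Geekbench scales roughly linearly with clock, so at 1.0 GHz: 390 * 1.0 / 1.2
print(f"A78 @ 1.0 GHz ~ {390 * 1.0 / 1.2:.0f}")  # ~325
```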
 
That's one of those cases of "wanting more only because it was compared to something else". 7nm/6nm would be fine and would give us the clocks and performance we are expecting, with good battery life. Anything more is just a bonus.

Would Nintendo really be able to get to a Switch 1 level of electricity consumption at 7nm?

Nintendo would want something that consumes way less electricity than the Steam Deck, but if it's on the same node as the Steam Deck then wouldn't it have to be significantly less powerful than the Steam Deck?

I'm just not seeing where the electricity savings are going to be coming from if Nintendo can't go with a smaller node.
 
No, there is an expectation by many that docked mode will offer 3 TFLOPS of performance, but again, this requires a clock speed north of 1GHz. I do not believe Drake at 8nm can achieve this and fit within the power/thermal constraints of the Switch form factor.
Even on 8nm, it is hard to determine anything, because there is no test of an SoC without the automotive features. We can’t know the extent to which they contribute to the heat, or battery and power consumption. That’s why I found the Orin tools to be deeply flawed, and why I felt they shouldn’t be taken as any kind of indicator of performance capacity when they were posted on here.

For 3TF, it doesn’t have to be “north of 1GHz” - It requires a clock speed of 977MHz, to be precise. This would represent a 27% increase on the 2017 Switch’s docked clock (768MHz). It would be 62% of the XSS’s 1.565GHz, 54% of the XSX’s 1.825GHz, 44% of the PS5’s 2.23GHz, and 61% of the Steam Deck’s 1.6GHz.

For some perspective, 768MHz was around 90% of the XB1’s 853MHz and 96% of the PS4’s 800MHz. I’m putting this out there to show that although the Switch is portable, there was battery life to consider, and the lithography process at the time presented heating issues, Nintendo and Nvidia went as hard as they could on the 2017 Switch’s SoC without risking RRoD-level product failures. It wasn’t underclocked out of some aversion to more performance, in fact, they would’ve loved even more!! (Yoshi’s Crafted World is a testament to that). Further still, an ARM SoC clocked that highly was not so common in 2017, and there isn’t another mobile SoC from that time which can play some of the games in the Switch library (BOTW, Witcher 3, NieR: Automata, etc.) - So, when IGN or The Verge tell you it’s a tech fail, or that it was dated on launch, they’re loud, wrong, and their agenda is pathetic.

With all of that in mind, I would put it to this board that they could go just as hard on the successor, and we have evidence which says “dare to expect more”. So far, nobody, not even the most optimistic, is claiming that they’ll aim for PS5/XSX clocks, or even Steam Deck portable clocks - If they did, we would be looking at a prospective Up To 7TF docked device!! But that won’t happen.

However, we have seen tests for 660MHz (2TF, portable), 1.1GHz (3.4TF, docked) and 1.38GHz (4.2TF, docked). That 1.38GHz clock is 88% of the XSS’s 1.565GHz, putting it in almost the same percentage bracket as the current Switch’s clock speed relative to its contemporaries, only now we have much better processes, and if this was on 5nm, it would be a better one than the 7nm of the Steam Deck and PS5/XS. All of that is to say this: 977MHz for 3TF would appear to be a relatively conservative prediction for docked performance, and it would still be very impressive alongside the hardware-specific feature sets. 1.1GHz is still realistic, too, as is 1.38GHz, which would be going as hard as they did in 2017, relative to the XSS. We can say with confidence that these are realistic, because these ranges would not have been tested, and still be out there, if they had failed or were internally considered “overly optimistic”. 1.38GHz would be sweet, but I feel that 977MHz for 3TF isn’t impossible - I don’t believe it’ll be below that.
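For reference, all of those TF figures fall out of the standard FP32 formula (2 ops per core per clock), assuming the 12 SM / 1536 CUDA core configuration discussed in this thread. A quick sketch:

```python
# FP32 throughput for an Ampere GPU: 2 ops (FMA) x CUDA cores x clock.
# Assumes the 12 SM x 128 cores/SM = 1536-core T239 configuration from the leak.
cores = 12 * 128

def tflops(clock_mhz: float) -> float:
    return 2 * cores * clock_mhz / 1e6  # 2 ops * cores * MHz -> TFLOPS

for mhz in (660, 768, 977, 1100, 1380):
    print(f"{mhz} MHz -> {tflops(mhz):.2f} TFLOPS")
# 660 -> 2.03, 768 -> 2.36, 977 -> 3.00, 1100 -> 3.38, 1380 -> 4.24
```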
 
Would Nintendo really be able to get to a Switch 1 level of electricity consumption at 7nm? […] Nintendo would want something that consumes way less electricity than the Steam Deck, but if it's on the same node as the Steam Deck then wouldn't it have to be significantly less powerful than the Steam Deck?
you have to remember that Steam Deck is hobbled by a lot of inefficiencies: translation layers, non-native games, clocks being usage limited rather than power limited, etc. how much can power consumption be clawed back by fixing those? that's pretty impossible to really tell, but I would think that Drake would be much better on battery than the SD

GDC hasn't happened yet
 
you have to remember that Steam Deck is hobbled by a lot of inefficiencies: translation layers, non-native games, clocks being usage limited rather than power limited, etc. how much can power consumption be clawed back by fixing those? that's pretty impossible to really tell, but I would think that Drake would be much better on battery than the SD


GDC hasn't happened yet
x86 vs ARM goes a ways
 
I've mentioned this before, but one of the files in the illegal Nvidia leaks defined Samsung as the semiconductor foundry company being used for T239 (here, here, here, and here).
Could be Samsung 5nm if this holds true. It would have been obvious that 8nm was too big for 12SMs when they were doing simulations. If they had committed to Samsung for Drake, Samsung 5nm would have been a viable option for a processor being designed in 2020.
 
Would Nintendo really be able to get to a Switch 1 level of electricity consumption at 7nm? […] I'm just not seeing where the electricity savings are going to be coming from if Nintendo can't go with a smaller node.
Nintendo uses a CPU architecture, ARM, which is mainly used for mobile, where performance per watt is of utmost importance.

Steam Deck uses x86, which is mainly used for desktop and laptop PCs. And even if the latter care about battery life, smartphones with tiny batteries are on a different level. PC games are made for x86, so using ARM would have meant a very small library until developers ported their games to the Deck (and the vast majority wouldn't, since this is a product aiming to sell 1-2 million per year).

This alone makes a huge difference, but there's more.

Energy consumption doesn't scale linearly with clocks. If you double the clocks, it will consume a lot more than twice as much. For example, using real tests with Orin, 8 cores at 2.2GHz consume 7.1W, while 8 identical cores at 1.1GHz consume 2.2W, so halving the clock per core cut consumption to less than 1/3.

Drake has 3 times the number of GPU cores, and everything points to it having 2 times the number of CPU cores; if you use 1/3 and 1/2 of the Deck clocks, you get roughly the same theoretical performance with significant battery savings.

The downside of more cores is that the chip gets bigger and more expensive. With Nintendo expected to sell 15+ million every year and releasing 2+ years later, they will get a way better deal than Valve, and they can also afford to invest in customizing the chip for their needs.
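To put a rough shape on that non-linearity: a toy model fitted to the two Orin data points quoted above. The exponent here is an illustration derived from those two points, not a measured DVFS curve:

```python
import math

# Toy model: dynamic power scales like f * V^2, and voltage falls with frequency
# on the DVFS curve, so total power behaves roughly like f**alpha with alpha > 1.
# alpha is fitted to the two Orin data points quoted above; it is an illustration,
# not a measured curve.
p_hi, f_hi = 7.1, 2.2  # watts at 2.2 GHz (8 cores)
p_lo, f_lo = 2.2, 1.1  # watts at 1.1 GHz (8 cores)

alpha = math.log(p_hi / p_lo) / math.log(f_hi / f_lo)
print(f"implied exponent: {alpha:.2f}")  # ~1.69: halving clock cuts power to ~31%

def power_at(f_ghz: float) -> float:
    return p_hi * (f_ghz / f_hi) ** alpha

print(f"estimated 8-core power at 1.5 GHz: {power_at(1.5):.1f} W")  # ~3.7 W
```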
 
there was one instance where x86 matched and beat ARM at low end performance per watt, but it took ditching legacy for it
do you have an article on what was cut? I've often wondered if Microsoft ever considered relegating legacy Windows apps to a subsystem and telling AMD and Intel to update x86 to gain efficiency.
 
By the by, I'm probably a bit out of the loop, but I was wondering what the thing is with the Drake chip and whether it is solidly Ampere or if there are next-gen (Lovelace) features that we have been able to divine from the leak. Can someone help me understand this?

edit: Considering Ampere is on 8nm Samsung/7nm TSMC and we are looking at a potential move to a different node for Drake, that could also be some reason to suspect they might have Ada-fied the chip in the process, right (what do you think)?
 
We've had the most Switch 2 activity since last year's leaks. I do wonder if it's just all people digging up the same stuff after the unverified Pokemon leak.
90% of it is just that. I've seen multiple videos pop up on YouTube in the wake of @NintendoPrime's that all quote this thread. None attribute it, but the phrasing keeps being the same.

The media environment, from part-time YouTubers all the way up to the general-interest press, was toxic to the subject for so long. But now it's been long enough that the backlash has worn off, and it's no longer debatable that it's next-gen time, so all the things we've been looking at suddenly seem discussable.

And oh look, once you can discuss things suddenly the fact that Nvidia keeps putting this chip in their documentation makes it seem really real, doesn’t it?
 
By the by, I'm probably a bit out of the loop, but I was wondering what the thing is with the Drake chip and whether it is solidly Ampere or if there are next-gen (Lovelace) features that we have been able to divine from the leak. Can someone help me understand this?

edit: Considering Ampere is on 8nm and we are looking at a potential move to a different node for Drake, that could also be some reason to suspect they might have Ada-fied the chip in the process, right (what do you think)?
I think it's going to be the chip in the Nvidia leak, period.

I don't think the chip is at 8nm, as I don't think they would have considered that chip size at that node.

Drake has Ada's clock gating, along with some encoder changes, but other than that it's Ampere.

And lastly, I don't think it matters much. Most of the performance difference between Ada and Ampere comes down to the node. Ampere on 5nm wouldn't be that different from Lovelace.
 
Docked, how much power do we think they would be willing to let the device consume? The Erista Switch was about 13 watts, right? Is something closer to 17-18 watts reasonable?

I'd think that at 8nm, Nintendo would be more likely to spend budget on CPU vs GPU, just because you get more appreciable gains, but for docked mode, is there any reason the power envelope can't be 20 watts? Is it just that we don't feel the cooling solution is sufficient?

For handheld mode, I feel like bare minimum we get 1-1.2 TFLOPS, which would be more than adequate.
 
do you have an article on what was cut? I've often wondered if Microsoft ever considered relegating legacy Windows apps to a subsystem and telling AMD and Intel to update x86 to gain efficiency.

oh? I hadn’t read this. neat.
there was an article a long time ago that detailed it, which I can't find anymore. But I did find a review of a product showing Atom matching ARM at the time (and sometimes outperforming it in performance per watt). Unfortunately, Atom didn't progress much further, being relegated to servers. Intel's efficiency cores take up the mantle, but ARM made big strides in the meantime.


By the by, I'm probably a bit out of the loop, but I was wondering what the thing is with the Drake chip and whether it is solidly Ampere or if there are next-gen (Lovelace) features that we have been able to divine from the leak. Can someone help me understand this?

edit: Considering Ampere is on 8nm Samsung/7nm TSMC and we are looking at a potential move to a different node for Drake, that could also be some reason to suspect they might have Ada-fied the chip in the process, right (what do you think)?
the chip is straight-up Ampere. Whatever changes were made to it don't make it Lovelace, because it doesn't have much of Lovelace's changes. They don't need to turn it into Lovelace just to get it onto a different node.
 