StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST|

It's not 8nm vs. 4nm that ultimately matters. It's whether Nintendo wants to lopsidedly favor battery life over clock speeds, which is exactly what they did with the red-box Switch: the die shrink opened up a real possibility to raise clocks, but they chose battery life because their data suggested people used the Switch just as much as a portable as they did as a console.

Personally I'm expecting a good 4x leap in CPU performance, 2x the RAM, 2x the storage speed, and 1 TFLOP in handheld mode / 2 TFLOPs docked for the GPU. I know that's super, super low GPU-wise, but if it's any more, then I won't be disappointed lol.
I think these specs, especially the storage speed, 8 GB of RAM, and a very mid GPU, are a worst-case scenario; I'm expecting 12 GB minimum, because 8 GB really isn't a lot. 16 GB of unified LPDDR5 or LPDDR5X with a super lightweight OS would be ideal.
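As an aside on those TFLOP figures, here's a rough sketch of what they'd imply for clocks, assuming the 1536 CUDA cores (12 SMs) attributed to T239 in the Nvidia leak; treat the core count as an assumption, not a confirmed spec:

```python
# FP32 throughput for an Ampere-style GPU: 2 ops per FMA x cores x clock.
CUDA_CORES = 1536  # 12 SMs x 128 cores/SM; leaked T239 figure, unconfirmed

def tflops(clock_mhz: float, cores: int = CUDA_CORES) -> float:
    """FP32 TFLOPS at a given core clock in MHz."""
    return 2 * cores * clock_mhz * 1e6 / 1e12

# Clocks implied by the post's 1 TFLOP handheld / 2 TFLOP docked guesses:
for target_tflops in (1.0, 2.0):
    clock_mhz = target_tflops * 1e12 / (2 * CUDA_CORES) / 1e6
    print(f"{tflops(clock_mhz):.1f} TFLOPS -> ~{clock_mhz:.0f} MHz core clock")
# Output: 1.0 TFLOPS -> ~326 MHz, 2.0 TFLOPS -> ~651 MHz
```

So under that assumed core count, those expectations translate to very conservative clocks, well below where the original Switch ran its Maxwell GPU.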
 
But DF in particular had some very strange ideas about the NX SoC, such as the baffling part in their big July 2016 article where they said they believed TX1 might just be a placeholder in the devkits "with audible fan noise," and TX2 might be used in the final hardware. That betrays a startling level of ignorance about... a lot of things, really.

Hmmm, rereading the article and watching the video again, I get DF's speculation. I don't find it that weird for DF to be freer with speculation, as it was just off the cuff. The article and video coverage end with the notion that his sources said it would be a Tegra X1 in any case.
Most ARM processors in smartphones have a quick turnaround from announcement to product release (Apple's, for example), and the Tegra X2 was also taken into consideration in the discussion, although cost wasn't really argued (chips have gotten a lot more expensive). Then again, I guess the reality of rising costs wasn't factored in, as Nintendo's financial state wasn't that positive at the time (mainly held up by the 3DS). This may all be redundant talk. I'd have to read up on the state of the discussion again, as it's mostly fuzzy in my head, but I don't think it's fair to say they were being ignorant about it 😅

Either way, the situation is of course a lot different now, as we have no alternative "baseline" SoC to speculate about (in contrast to the Tegra X1 vs. X2 situation). So there's a tighter limit on how far you can speculate before you look silly (like I do most of the time ^^' )
 
It's not 8nm vs. 4nm that ultimately matters. It's whether Nintendo wants to lopsidedly favor battery life over clock speeds, which is exactly what they did with the red-box Switch: the die shrink opened up a real possibility to raise clocks, but they chose battery life because their data suggested people used the Switch just as much as a portable as they did as a console.

Personally I'm expecting a good 4x leap in CPU performance, 2x the RAM, 2x the storage speed, and 1 TFLOP in handheld mode / 2 TFLOPs docked for the GPU. I know that's super, super low GPU-wise, but if it's any more, then I won't be disappointed lol.
That's a very pessimistic take on clocks and even RAM, because if I understand correctly you expect just 8 GB of RAM lol. I don't think the Matrix demo or BOTW at 4K/60 FPS would be possible on these specs, even with DLSS.
 
That's a very pessimistic take on clocks and even RAM, because if I understand correctly you expect just 8 GB of RAM lol. I don't think the Matrix demo or BOTW at 4K/60 FPS would be possible on these specs, even with DLSS.
Yeah, and Nintendo knows next-gen games are going to take more time, money, and people, and they need third-party AAA games to fill the many-month gaps between first-party releases, so it's in their best interest to make a somewhat powerful device.
 
It's very possible that the final product is 4N but the dev kits are all 8nm. Even if it is a portable/hybrid, there's no need to make the dev kits portable.

Given some of the recent info I've been told regarding node processes, going from Samsung 8nm to TSMC's 4N on the same SoC wouldn't really be feasible. If you're correct, though, it would mean devs initially received bone-stock Jetson Orin 16GB kits to get the ball rolling while Nvidia was finalizing Drake on TSMC's 4N. I don't think it's unreasonable to suggest devs got Jetson Orins to begin their work. And hey, if we're correct, we'll see early dev kits in the wild in a decade or so, selling on eBay.

That said, this all assumes Drake is even on TSMC, or on Samsung 8nm for that matter. It could be an entirely different node altogether. We could be looking at Samsung 5nm for all we know, instead of TSMC 4N (which is also a 5nm-class process).
 
Given some of the recent info I've been told regarding node processes, going from Samsung 8nm to TSMC's 4N on the same SoC wouldn't really be feasible. If you're correct, though, it would mean devs initially received bone-stock Jetson Orin 16GB kits to get the ball rolling while Nvidia was finalizing Drake on TSMC's 4N. I don't think it's unreasonable to suggest devs got Jetson Orins to begin their work. And hey, if we're correct, we'll see early dev kits in the wild in a decade or so, selling on eBay.

That said, this all assumes Drake is even on TSMC, or on Samsung 8nm for that matter. It could be an entirely different node altogether. We could be looking at Samsung 5nm for all we know, instead of TSMC 4N (which is also a 5nm-class process).
Samsung's newer process nodes are known to be super inefficient for portable SoCs.
 
Given some of the recent info I've been told regarding node processes, going from Samsung 8nm to TSMC's 4N on the same SoC wouldn't really be feasible. If you're correct, though, it would mean devs initially received bone-stock Jetson Orin 16GB kits to get the ball rolling while Nvidia was finalizing Drake on TSMC's 4N. I don't think it's unreasonable to suggest devs got Jetson Orins to begin their work. And hey, if we're correct, we'll see early dev kits in the wild in a decade or so, selling on eBay.

That said, this all assumes Drake is even on TSMC, or on Samsung 8nm for that matter. It could be an entirely different node altogether. We could be looking at Samsung 5nm for all we know, instead of TSMC 4N (which is also a 5nm-class process).
Samsung's foundry is shit, and their 5nm is much worse than TSMC's 5nm.
 
Given some of the recent info I've been told regarding node processes, going from Samsung 8nm to TSMC's 4N on the same SoC wouldn't really be feasible. If you're correct, though, it would mean devs initially received bone-stock Jetson Orin 16GB kits to get the ball rolling while Nvidia was finalizing Drake on TSMC's 4N. I don't think it's unreasonable to suggest devs got Jetson Orins to begin their work. And hey, if we're correct, we'll see early dev kits in the wild in a decade or so, selling on eBay.

That said, this all assumes Drake is even on TSMC, or on Samsung 8nm for that matter. It could be an entirely different node altogether. We could be looking at Samsung 5nm for all we know, instead of TSMC 4N (which is also a 5nm-class process).
Samsung 5nm doesn't make any sense, not just for the reasons listed above, but because Nvidia doesn't make anything on it. It'll either be the Samsung 8nm used in Ampere chips or the TSMC 4N used in Ada Lovelace chips.
 
Samsung 5nm doesn't make any sense, not just for the reasons listed above, but because Nvidia doesn't make anything on it. It'll either be the Samsung 8nm used in Ampere chips or the TSMC 4N used in Ada Lovelace chips.
Yup.

TSMC 4N (as opposed to N4, etc.) is literally a node process customized for Nvidia and Nvidia alone.

We know Nvidia products use SEC8N (Samsung 8nm) and TSMC 4N (an optimized 5nm). The only question is which node Nvidia/Nintendo went with. Both choices were available at the time T239 was taped out.
 
At the risk of sounding dumb: I recall somebody saying there's no obvious path for improvements (die shrinks) from Samsung 8nm? Something something dead-end process?

Wouldn't that be enough of a reason for it not to be considered? They'd have known these things well in advance and would have weighed them when planning a new product with likely 8+ years on the market. And it's not like they aren't taking their time making it, which is totally unlike the urgently released Switch (1).
 
Yup.

TSMC 4N (as opposed to N4, etc.) is literally a node process customized for Nvidia and Nvidia alone.

We know Nvidia products use SEC8N (Samsung 8nm) and TSMC 4N (an optimized 5nm). The only question is which node Nvidia/Nintendo went with. Both choices were available at the time T239 was taped out.
Btw, SEC8N is just an optimized 10nm in Samsung's terms xD
 
Why would volume inherently drive prices down when, as was established, prices are already artificially too high at the current production volume? Part of something being artificial is that it does not have to change, because it disobeys what is natural.

The point here is that CFe has an established and predictable market. Is Nintendo guaranteed to still use CFe in 7-8 years when new hardware rolls around? No, but the pro photo/video market more than likely will. That's what I mean by consistency. But if that volume later vanishes, prices aren't likely to rise right away, if at all, since a lower price is a bell that's hard to un-ring, much as card makers would like to un-ring it.

So let's take a closer look. The cheapest 256 GB CFe card I could find on Amazon was a Type B for USD $83.61. If we're saying that higher volumes drive down prices, let's compare that to high-volume memory, where a top-of-the-heap 256 GB UHS-I card sets you back USD $20-30. Even granting that the difference in technology accounts for some of the price gap and adding 50% to the UHS-I price for it, that's USD $30-45 to meet microSD margins, so you're still looking at a $38-53 additional mark-up on top of whatever mark-up already exists on microSD retail prices. Across the likely minimum of 1 million CFexpress card sales per year right now (some camera owners buy multiple cards, some buy none at all), that's a minimum of USD $38-53 million in forgone profit if retail prices get pushed down. Even if we think Nintendo will sell one of these cards to each of its average 20 million hardware buyers per year, you'd still need to add USD $2-3 on top of that $30-45 with its reasonable markup just to break even on the money currently being made from CFexpress' artificial pricing. So there really isn't any additional money to be made converting CFexpress into a volume business, and card makers will try to retain as much of the over-inflated mark-up as they can. "CFexpress at a reasonable price" should basically be declared an oxymoron.
Unless every single manufacturer colludes, at least tacitly, to keep the prices high, it seems pretty unlikely that the current prices could hold with the introduction of a new customer base that is both far larger than the current one and much more price sensitive. Being used in a device like a Switch could even be the tipping point that leads to mass adoption of the format.
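For reference, here's the mark-up arithmetic from the quoted post written out as a quick script; the prices and the 1-million-card volume are that post's examples and assumptions, not market data:

```python
# Back-of-envelope CFexpress mark-up math from the post above.
cfe_price = 83.61                  # cheapest 256 GB CFexpress Type B found
uhs1_low, uhs1_high = 20.0, 30.0   # typical 256 GB UHS-I price range
tech_premium = 1.5                 # +50% allowance for the faster technology

fair_low, fair_high = uhs1_low * tech_premium, uhs1_high * tech_premium  # $30-45
markup_low, markup_high = cfe_price - fair_high, cfe_price - fair_low    # ~$39-54

cards_per_year = 1_000_000         # assumed floor for current CFe card volume
print(f"'fair' price range: ${fair_low:.0f}-{fair_high:.0f}")
print(f"extra mark-up: ${markup_low:.2f}-{markup_high:.2f} per card")
print(f"forgone profit: ${markup_low * cards_per_year / 1e6:.0f}M-"
      f"{markup_high * cards_per_year / 1e6:.0f}M per year")
```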
 
When did the Nvidia-customized 4N become available? How long before that would Nvidia have had to work with TSMC to get their very own special node? As close partners, would Nvidia have shared this info with Nintendo while designing T239, before even starting tapeout? Wouldn't it make sense for Nvidia to be like, "Hey, you're wanting a very efficient mobile-focused SoC again; it just so happens we're working on something that might be of interest to you"?
 
At the risk of sounding dumb: I recall somebody saying there's no obvious path for improvements (die shrinks) from Samsung 8nm? Something something dead-end process?

Wouldn't that be enough of a reason for it not to be considered? They'd have known these things well in advance and would have weighed them when planning a new product with likely 8+ years on the market. And it's not like they aren't taking their time making it, which is totally unlike the urgently released Switch (1).

I think that's exaggerated. Nvidia has all kinds of AI tools now that they didn't have when they shrank Erista, which was also on a different lithography.
 
They did. But it was a stupid decision, and they quickly rolled out 16nm for battery reasons. Perhaps the hardware exploits helped speed that up.
More like hacking reasons. It was known before launch that the TX1 left the system open to modification.
Again, Erista models from ~June 2018 onward aren't vulnerable to the USB exploit in RCM mode. New Switches weren't vulnerable to soft mods for over a year before Mariko officially released in a device; hacking was not a driving factor in getting the 16nm models out.

It's not 8nm vs. 4nm that ultimately matters. It's whether Nintendo wants to lopsidedly favor battery life over clock speeds, which is exactly what they did with the red-box Switch: the die shrink opened up a real possibility to raise clocks, but they chose battery life because their data suggested people used the Switch just as much as a portable as they did as a console.

Personally I'm expecting a good 4x leap in CPU performance, 2x the RAM, 2x the storage speed, and 1 TFLOP in handheld mode / 2 TFLOPs docked for the GPU. I know that's super, super low GPU-wise, but if it's any more, then I won't be disappointed lol.
Am I really living in a world where 3 hours of battery life counts as lopsidedly underclocked? It was the most they could get away with at the time, and that process shrink was years away. I wouldn't expect the new model to have less battery life than the launch Switch.
 
I think that's exaggerated. Nvidia has all kinds of AI tools now that they didn't have when they shrank Erista, which was also on a different lithography.

I see. Well then, the leading argument against Samsung 8nm still seems to be that the alternative is both more efficient and cheaper to produce. No?

Unless this isn't true, what situation would have caused Nintendo to still opt for Samsung 8nm? I'm seeing some say it's "obvious", but I fail to see how the choice is obvious outside of Kopite7kimi tweeting about it.
 
When did the Nvidia-customized 4N become available? How long before that would Nvidia have had to work with TSMC to get their very own special node? As close partners, would Nvidia have shared this info with Nintendo while designing T239, before even starting tapeout? Wouldn't it make sense for Nvidia to be like, "Hey, you're wanting a very efficient mobile-focused SoC again; it just so happens we're working on something that might be of interest to you"?
Around the time that the 40 series cards started development. I don't know when that would've been, but we do know that T239's design was finalized at the same time as the 40 series, with tapeout occurring in mid-2022. It's not at all hard to imagine that 4N was an option to Nintendo when development started.
 
I'm not sure Samsung 8nm would be cheaper, since the chip would be very big, compared to very small on TSMC 4N.
Chip design costs are mainly a function of the number of transistors and of the node used. The physical chip area is not a direct factor, unlike with manufacturing costs.

I don't know much about the R&D of consoles, but would the money saved by a cheaper design process actually offset the extra money spent to manufacture millions and millions of consoles? Seems to me like you'd want the manufacturing cost of the system to be low since you're gonna be making tens of millions of the things.
I think even @easternlights agrees the increased design cost is something you can easily amortize out over the life of the product, and it's not like Nintendo's hurting for money. Just wanted to clarify that they were not talking about wafer costs.
Precisely, I was replying to @Concernt's post about 4N being cheaper out of the gate. If IC designers and their customers only cared about the raw cost of manufacturing a single chip, nobody would ever design a chip for anything other than the very latest and most efficient process node, because all else being equal, a given design will always be cheaper to manufacture on a newer process. But all else is rarely equal: the design cost, photomasks, and prototyping for a cutting-edge node can be considerable, and in our case Nvidia also paid TSMC a hefty sum simply to reserve all that 4N capacity, some of which would presumably be passed on to Nintendo. So yeah, using 4N is not some kind of free lunch, but at the scale we're talking about here, it probably makes sense.
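To make the amortization point concrete, a toy calculation; every number below is an invented placeholder, not an actual design or wafer cost:

```python
# Hypothetical one-time premium for designing on a cutting-edge node,
# spread over the console's lifetime sales.
extra_design_cost = 150e6   # placeholder: masks, verification, capacity reservation
lifetime_units = 100e6      # placeholder: lifetime hardware sales

print(f"design premium per console: ${extra_design_cost / lifetime_units:.2f}")

# If the smaller die on the newer node saves even a few dollars per chip
# in wafer and yield costs, the one-time premium is recovered quickly:
saving_per_chip = 4.0       # placeholder manufacturing saving per chip
break_even = extra_design_cost / saving_per_chip
print(f"break-even after {break_even / 1e6:.1f}M consoles")
```

With numbers in that ballpark, the premium amortizes to about $1.50 per console and pays for itself well within the product's lifetime.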

As an aside, I just found out that TSMC is still operating its Fab 2 from 1990, manufacturing chips on processes as old as 800 nm. I wonder what devices use its products; microcontrollers with no real power constraints, whose dies are already minuscule at that node and would be hard to handle if any smaller?
 
When did the Nvidia-customized 4N become available? How long before that would Nvidia have had to work with TSMC to get their very own special node? As close partners, would Nvidia have shared this info with Nintendo while designing T239, before even starting tapeout? Wouldn't it make sense for Nvidia to be like, "Hey, you're wanting a very efficient mobile-focused SoC again; it just so happens we're working on something that might be of interest to you"?
Nvidia's custom 4N is just TSMC's usual service to clients. The fab process is tuned to the product, so no two 5nm products are exactly alike. AMD's 5nm products also have special changes. Nvidia just applies marketing to the process to make their shit sound special.
 
Once product development begins, it can last a couple of years. You can't keep cancelling in the middle of it to swap in the latest technology; otherwise, nothing would ever get released.
This is true. But wasn't the rumor that Drake was taped out around the same time as the Ada Lovelace GPUs? If that's true, Nintendo could have opted for an Ada GPU, but from the Nvidia leak, Drake has some power-saving features from Ada.
 
This is true. But wasn't the rumor that Drake was taped out around the same time as the Ada Lovelace GPUs? If that's true, Nintendo could have opted for an Ada GPU, but from the Nvidia leak, Drake has some power-saving features from Ada.
What Ada GPU? There are no Ada SoCs yet, and Thor is likely still a few years out, on top of the R&D needed to turn it into a low-power gaming chip. For Nintendo's purposes, a tablet with a 2024 release, an Ampere SoC based on Orin was the best Nvidia could offer.
 
This is true. But wasn't the rumor that Drake was taped out around the same time as the Ada Lovelace GPUs? If that's true, Nintendo could have opted for an Ada GPU, but from the Nvidia leak, Drake has some power-saving features from Ada.
In short, the benefits a handheld gets from Ada are mainly in the realm of efficiency. Since T239 inherits those from Ada, along with, it seems, possibly a more efficient node, there's no real reason to worry about whether it is Ampere or Ada. In reality, we already know it's neither desktop Ampere nor desktop Ada; it's a version in between. That sounds bad, but it really isn't, as the additional features of Ada aren't relevant to the market segment and performance target of T239.

Plus, having a desktop GPU that's a superset of a device you're making a game for makes life easier.

They seem to have made a lot of practical choices.
 
I see. Well then, the leading argument against Samsung 8nm still seems to be that the alternative is both more efficient and cheaper to produce. No?

Unless this isn't true, what situation would have caused Nintendo to still opt for Samsung 8nm? I'm seeing some say it's "obvious", but I fail to see how the choice is obvious outside of Kopite7kimi tweeting about it.
And the sheer size of the chip. Before the Nvidia leak, we were all convinced it would be 8nm. We speculated it would have between 4 and 8 SMs, with 8 being a long shot.

On 8nm, it would be difficult to clock a chip that large at its efficiency sweet spot, and if you have to go below that point, a smaller chip is better.
 
I see. Well then, the leading argument against Samsung 8nm still seems to be that the alternative is both more efficient and cheaper to produce. No?

Unless this isn't true, what situation would have caused Nintendo to still opt for Samsung 8nm? I'm seeing some say it's "obvious", but I fail to see how the choice is obvious outside of Kopite7kimi tweeting about it.

Because the claim that 4N is cheaper than 8nm is bullshit.
 
I was researching the Legion Go's detachable controllers, and I'm impressed with their features. Do you think there will be new features for the Switch 2's Joy-Cons? The only thing I can think of is maybe analog triggers, but they'll likely want to fix the original Joy-Con drift from the Switch 1, probably with Hall effect joysticks. I can't see Nintendo incorporating a touchpad, especially with the touch screen rarely used in games (which makes sense for feature parity with docked mode).
 
Older nodes have higher yields
Aren't SEC8N's yields notoriously bad?

Anyway, yes, TSMC 4N is supposedly cheaper than Samsung 8N on a per-chip basis: 4N costs about 2.2x as much per wafer, but is about 2.7x as dense.

(Note that "4NM" isn't right either; TSMC 4N is actually a 5nm-class process.)
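Those two ratios are enough for a per-die estimate; a minimal sketch using the figures above (yield differences ignored here; see the posts that follow):

```python
# Per-die cost implied by wafer price vs. density, for a fixed design.
wafer_cost_ratio = 2.2   # 4N wafer price / 8N wafer price (figure above)
density_ratio = 2.7      # 4N density / 8N density (figure above)

# 2.7x density means the same chip occupies ~1/2.7 the area, so ~2.7x
# as many candidate dies fit on one wafer.
per_die_cost_ratio = wafer_cost_ratio / density_ratio
print(f"4N die ≈ {per_die_cost_ratio:.2f}x the cost of the 8N die "
      f"(~{(1 - per_die_cost_ratio) * 100:.0f}% cheaper)")
# -> ≈0.81x, i.e. roughly 19% cheaper per die before yields
```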
 
What Ada GPU? There are no Ada SoCs yet, and Thor is likely still a few years out, on top of the R&D needed to turn it into a low-power gaming chip. For Nintendo's purposes, a tablet with a 2024 release, an Ampere SoC based on Orin was the best Nvidia could offer.
Any of them. On LinkedIn, Drake was found to have been tested by the same folks who tested Lovelace GPUs.
 
Older nodes have higher yields
Not all the time and not in this case. Samsung 8 nm in particular is quite infamous for its low yields compared to the equivalent TSMC 7 nm node. Meanwhile, TSMC 4N is an exceptionally high-yield node for its type at around 80%, compared to 75% for Samsung's 4 nm processes. While that may not seem like a lot, when you're talking on the order of tens of millions of chips, that small difference adds up fast.
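A quick sense of scale for that 80% vs. 75% gap, using a hypothetical production run (the chip count is an assumption for illustration):

```python
# Wafer starts needed to hit a target number of good chips at each yield.
good_chips_needed = 30_000_000  # hypothetical multi-year production target

for node, yield_rate in (("TSMC 4N", 0.80), ("Samsung 4nm", 0.75)):
    dies_started = good_chips_needed / yield_rate
    print(f"{node}: {dies_started / 1e6:.1f}M dies started")
# TSMC 4N: 37.5M vs Samsung 4nm: 40.0M -- the 5-point yield gap costs
# 2.5M extra dies' worth of wafers over the run.
```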
 
Not all the time and not in this case. Samsung 8 nm in particular is quite infamous for its low yields compared to the equivalent TSMC 7 nm node. Meanwhile, TSMC 4N is an exceptionally high-yield node for its type at around 80%, compared to 75% for Samsung's 4 nm processes. While that may not seem like a lot, when you're talking on the order of tens of millions of chips, that small difference adds up fast.
Isn't that similar to the logic used for the Switch 1 being on 16nm, only for it to be on 20nm anyway?
 
Isn't that similar to the logic used for the Switch 1 being on 16nm, only for it to be on 20nm anyway?
No, 20nm was simply abandoned, but Nvidia had already ordered wafers that had to be used up. The question here is: does Nvidia still have wafers committed on Samsung 8nm? Because we know they do for TSMC 5nm.
 
Isn't that similar to the logic used for the Switch 1 being on 16nm, only for it to be on 20nm anyway?
That's not really comparable, since Nintendo needed something quick, and there were already X1 chips in 20 nm available. Nintendo can afford to be a lot more choosy about node now.
 
At the risk of sounding dumb: I recall somebody saying there's no obvious path for improvements (die shrinks) from Samsung 8nm? Something something dead-end process?

Wouldn't that be enough of a reason for it not to be considered? They'd have known these things well in advance and would have weighed them when planning a new product with likely 8+ years on the market. And it's not like they aren't taking their time making it, which is totally unlike the urgently released Switch (1).
Yes. SEC 8nm is a dead end that has already been squeezed for every optimization it could get. Further shrinks to SEC/SF 7/5/4 would require a design port, new verification and validation, etc. It's the usual work you do when you shrink a design; it's just that it has become more and more expensive on modern nodes. The biggest question a lot of people ask is: does it make sense for Nintendo to manufacture a big SoC on 8N to "save money", only to redesign and shrink it later and spend even more? Why not go with 4N from the start and stay there until Switch 3?
Older nodes have higher yields
TSMC 4N has better parametric yields and similar catastrophic yields as of 2022. It's even better now.
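To put the dead-end argument above in numbers, a toy comparison of the two paths; every figure is an invented placeholder, and only the structure of the trade-off comes from the posts above:

```python
# Path A: design on 8nm now, then pay again later for a port/shrink.
design_8nm = 50e6       # placeholder: cheaper initial design on a mature node
later_port = 80e6       # placeholder: re-port, new masks, re-verification
# Path B: pay the 4N premium once and stay there until Switch 3.
design_4n_once = 110e6  # placeholder: pricier cutting-edge design, done once

path_a = design_8nm + later_port
print(f"8nm + later shrink: ${path_a / 1e6:.0f}M vs 4N once: "
      f"${design_4n_once / 1e6:.0f}M")
# With numbers in this ballpark, paying for the advanced node up front
# wins, which is the thrust of the "why not 4N from the start" argument.
```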
 
You can't just stick a graphics card in it and call it a day. The whole point of an SoC is that you have the CPU and GPU all on one piece of silicon.
I think what Feet is saying is that the people who tested the Ada Lovelace GPUs also tested T239, lending credence to the idea that T239 was taped out around the same time and might be on the same TSMC 4N node as the Ada Lovelace cards, even though T239's GPU is not an Ada Lovelace GPU.
 
It's interesting to note that this wouldn't be the first time a Tegra used more advanced lithography than its base architecture. The Tegra X1 was on 20nm while even the best second-generation Maxwell chips were on 28nm, and the X1+ on 16nm used a lithography otherwise reserved for the Pascal series.
 
It's interesting to note that this wouldn't be the first time a Tegra used more advanced lithography than its base architecture. The Tegra X1 was on 20nm while even the best second-generation Maxwell chips were on 28nm, and the X1+ on 16nm used a lithography otherwise reserved for the Pascal series.

Yeah, this is a good note for when people assume it will use 8nm just because that's what Ampere and Orin used. It doesn't matter with these custom Tegras. Not to mention that by the time the Switch 2 launches, 8nm will be more out of date than 20nm was at the Switch launch.
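A trivial check on that last claim, using approximate node introduction years (the Switch 2 launch year is an assumption based on the release timing discussed earlier in the thread):

```python
# How stale each node is/was at its console's launch (years approximate).
node_intro = {"TSMC 20nm": 2014, "Samsung 8nm": 2018}
switch1_launch, switch2_launch = 2017, 2024  # 2024 launch is an assumption

print(f"20nm at Switch launch: {switch1_launch - node_intro['TSMC 20nm']} years old")
print(f"8nm at Switch 2 launch: {switch2_launch - node_intro['Samsung 8nm']} years old")
# ~3 years vs ~6 years: 8nm would be roughly twice as dated at launch.
```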
 
Oh snap, y'all! They may actually be taking docked mode up to home-console territory like I've been hoping! Fingers crossed we're getting a 20W+ home console mode pushing the boundaries of what Drake can achieve!


20W Switch 2 is real @Concernt

:D

In all seriousness, since this got patented, they obviously didn't go with it. So what have they chosen instead?
 
20W Switch 2 is real @Concernt

:D

In all seriousness, since this got patented, they obviously didn't go with it. So what have they chosen instead?
I believe this is not the type of patent we can discard under the pattern "if it was made public, then it will not be used". This time it is not a gimmick, a new form factor, or anything like that; this going public before the announcement in no way spoils what's to come. I don't think anyone is surprised that Nintendo cares about more efficient forms of cooling, even if the Switch 2 doesn't necessarily need it.
 