StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

TSMC 5nm is so bad that I wouldn’t be shocked if Nintendo just did TSMC 7nm and put in a bigger battery to compensate for the extra electricity costs.

There’s like almost no IO or cache shrinking from 7nm to 5nm.

The main issue with that speculation is that I don’t know if NVIDIA ever used TSMC 7nm
No, 5nm is not bad, it's a great node, and 4nm (TSMC 4N) is even better
 
one of the rationales given for 8nm is that the design was completed some time ago and Nvidia seemingly didn't expect Nintendo to sit on it this long. so if it's not that great in 2025, that's why. also less competition on the node.
 
one of the rationales given for 8nm is that the design was completed some time ago and Nvidia seemingly didn't expect Nintendo to sit on it this long. so if it's not that great in 2025, that's why. also less competition on the node.
If it taped out in H1 2022, that was right when other Lovelace GPUs taped out.
 
In fact, I don't think so. This generation has so far been marked by graphics that are not impressive, and a large part of the public would accept PS4-level graphics without any major problems.
I believe that a port of FFXVI for a machine with, say, 2.4 TFLOPs with DLSS, would be more beautiful today than The Witcher 3 was on Switch in 2019.
I was not talking about how impressive the graphics are, but more about how well games should run on this console. If it can run most graphically heavy games that were released for PS4 as well, with at least 1080p and stable 30fps, I also don't see much of a problem. DLSS will for sure help too.

What I meant was that if FFXVI or Elden Ring released in a similar state to Witcher 3 on Switch, people would for sure be way less accepting of that.
 
TSMC 5nm is so bad that I wouldn’t be shocked if Nintendo just did TSMC 7nm and put in a bigger battery to compensate for the extra electricity costs.

There’s like almost no IO or cache shrinking from 7nm to 5nm.

The main issue with that speculation is that I don’t know if NVIDIA ever used TSMC 7nm
I'm not sure why you think TSMC 5nm is bad. It's been great for Nvidia.

And their most successful product is on 7nm, as well as several others.
 
If it can run most graphically heavy games that were released for PS4 as well, with at least 1080p and stable 30fps, I also don't see much of a problem. DLSS will for sure help too.
That won't ever happen with 8nm, in fact... It might not happen with 4N even. Resolution and framerate are a terrible barometer to determine how powerful this system will be; all that matters is that it's powerful enough to run whatever current gen will throw at us, even if both of those things need to be compromised, just not the game. Miracle ports were heavily cut down beyond that, and often looked a generation below the proper versions of the game despite the 480p and unstable 20-ish fps; that's what you'd want to avoid.
 
If it taped out in H1 2022, that was right when other Lovelace GPUs taped out.
this & the clocks from the DLSS tests are the main reasons I believe the better node is more likely. plus the common sense deduction of T239 being designed around a smaller node.

the guy in the video makes some decent points though, and I don't think 8nm can be totally ruled out, especially with the bigger system. with the whole custom nature of the project it may be more likely to be on a better Samsung node than 8nm, if it's not TSMC.

some of his conclusions are veering into anti-Nintendo territory, so I'm not sure what he's saying is logical regardless.
 
I was not talking about how impressive the graphics are, but more about how well games should run on this console. If it can run most graphically heavy games that were released for PS4 as well, with at least 1080p and stable 30fps, I also don't see much of a problem. DLSS will for sure help too.

What I meant was that if FFXVI or Elden Ring released in a similar state to Witcher 3 on Switch, people would for sure be way less accepting of that.
But my point is precisely this: I think that an FFXVI on this hypothetical console would have an even better reception than Witcher 3 on the Switch, because PS5 games aren't that much prettier than PS4 games.
Elden Ring would be fully playable; many people played it on PS4 or Steam Deck and considered those good versions.
I see some people who don't like the Series S, for example, but I've never seen anyone saying that a game on the Series S is objectively ugly, and well-made ports are always well accepted by the community.
 
That won't ever happen with 8nm, in fact... It might not happen with 4N even. Resolution and framerate are a terrible barometer to determine how powerful this system will be; all that matters is that it's powerful enough to run whatever current gen will throw at us, even if both of those things need to be compromised, just not the game. Miracle ports were heavily cut down beyond that, and often looked a generation below the proper versions of the game despite the 480p and unstable 20-ish fps; that's what you'd want to avoid.
Yeah. I could see that being the goal, at least in the same way as it was for the Switch. Of course it will not turn out like that in reality, especially in the later life of the console, where devs port anything that is remotely possible.

FFXVI was a bad example because it's a recent title, though I still think it is quite important that at least last-gen titles run well enough, so that the next hardware is perceived as a current home console. It helped Switch a lot to have a huge backlog of possible Xbox 360 and PS3 games which comparatively didn't need too much work to port.
 
But my point is precisely this: I think that an FFXVI on this hypothetical console would have an even better reception than Witcher 3 on the Switch, because PS5 games aren't that much prettier than PS4 games.
Elden Ring would be fully playable; many people played it on PS4 or Steam Deck and considered those good versions.
I see some people who don't like the Series S, for example, but I've never seen anyone saying that a game on the Series S is objectively ugly, and well-made ports are always well accepted by the community.
I'm just not sure where the level of acceptance lies. People see some Switch ports as unplayable, or would at least never play them on a TV screen.

Though I think I'm just bringing up semantics from my side. I agree with you. It wouldn't need much for that hardware to already get most of the games that were impossible on Switch, even ones released as recently as this year.
 
Yeah. I could see that being the goal, at least in the same way as it was for the Switch. Of course it will not turn out like that in reality, especially in the later life of the console, where devs port anything that is remotely possible.

FFXVI was a bad example because it's a recent title, though I still think it is quite important that at least last-gen titles run well enough, so that the next hardware is perceived as a current home console. It helped Switch a lot to have a huge backlog of possible Xbox 360 and PS3 games which comparatively didn't need too much work to port.
The thing is... I wouldn't say that was the goal with the Switch: a quarter of an Xbox One (let alone a PS4) was never going to run the same games, even if you turned the resolution down to 480p and made the framerate behave like a rollercoaster, as we ended up seeing, and Nintendo knew this back in 2017. The OG Switch was way too far behind the curve to allow such a dream to happen, unfortunately, but now there is a chance. If a 4N T239 is capable of handling current gen with only resolution/capped-framerate cutbacks, that's an absolute win for everyone involved; better to play a blurrier version of a game with fewer options (retaining its graphical prowess) than to play a blurry demake of it.
 
Please Nintendo, save us from this 8nm talk.

It comes up every week and it literally comes from nowhere other than some random youtubers/twitter accounts that have no credibility.

I'd get used to it. The earliest this whole situation gets cleared up is if DF or someone similar gets their hands on an early retail unit.

Nintendo won't mention any such tech details.
 
Please Nintendo, save us from this 8nm talk.

It comes up every week and it literally comes from nowhere other than some random youtubers/twitter accounts that have no credibility.
They need to hold a press conference where they just whisper the node into the mic and disappear backstage. In fact, they can dictate their reveal process based on what Fami debates the most.
 
Please Nintendo, save us from this 8nm talk.

It comes up every week and it literally comes from nowhere other than some random youtubers/twitter accounts that have no credibility.
That won’t happen until Switch 2 is released and someone gets a die shot of the chip.
 
They need to hold a press conference where they just whisper the node into the mic and disappear backstage. In fact, they can dictate their reveal process based on what Fami debates the most.

It would be 2nm vs 8nm
Color theory
And why suing Yuzu is important for Switch 2
 
I'd get used to it. The earliest this whole situation gets cleared up is if DF or someone similar gets their hands on an early retail unit.

Nintendo won't mention any such tech details.
or if a dev leaks clock speeds sometime before then, we'll have a much better idea. could be the golden nugget at this stage.
 
Let me shut this down now - emulation of Switch 2 isn't going to be hard because of the GPU or its CPU, or because of DLSS, the OS, the decompression engine, or even its Extreme Power. If it's hard, it's hard because of anti-piracy measures. That's it.

A dedicated developer could start working on Switch 2 emulation now. Yuzu has the groundwork already in place. The ARM emulator in Yuzu needs a couple of extensions to go from ARMv8-A to ARMv8.2, but it's not like that isn't well documented, with plenty of example hardware in the wild to play with.

The GPU emulator will need to support Ampere microcode. But so will the Switch 2's emulator, presuming that's how they go with backward compat. They'll have to reverse engineer Ampere's microcode, but again, someone could have started on that already, with RTX 30 cards readily available for cheapish.

DLSS will need some games reverse engineered to figure out how to inject FSR2 in its place on machines that don't support it. But the actual wrapper to map one to the other is a solved problem.

The OS is a continuation of the existing OS, and so the existing work will be retained.

It will be entirely up to Nvidia and Nintendo's security teams to prevent this thing from seeing year 1 emulation.
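(To make the "wrapper" point above concrete: the sketch below is purely illustrative C++, and every type and function name in it is a hypothetical stand-in, not the real NGX or FidelityFX API. It only shows that a DLSS-style call and an FSR2-style dispatch consume roughly the same inputs, so a per-game shim is mostly parameter shuffling.)

```cpp
// Hypothetical sketch of the upscaler "wrapper" idea: every name here is a
// made-up stand-in rather than any real API. Both upscalers want roughly the
// same inputs (color, depth, motion vectors, jitter, exposure), so the shim
// mostly copies fields from one struct into another.
#include <cstdint>

struct DlssStyleInputs {            // what a game hands to a DLSS-like upscaler
    void*    color;                 // low-res color target
    void*    depth;                 // depth buffer
    void*    motionVectors;         // per-pixel motion vectors
    float    jitterX, jitterY;      // sub-pixel camera jitter for this frame
    float    exposure;              // scene exposure
    uint32_t renderWidth, renderHeight;
    bool     resetHistory;          // true on camera cuts
};

struct Fsr2StyleDispatch {          // what an FSR2-like upscaler wants back
    void*    color;
    void*    depth;
    void*    motionVectors;
    float    jitterOffset[2];
    float    motionVectorScale[2];
    float    preExposure;
    uint32_t renderSize[2];
    float    frameTimeDeltaMs;
    bool     reset;
};

// The shim itself: a field-by-field translation, plus whatever conventions
// differ between the two (e.g. motion-vector scaling).
Fsr2StyleDispatch TranslateUpscaleCall(const DlssStyleInputs& in, float frameDtMs) {
    Fsr2StyleDispatch out{};
    out.color                = in.color;
    out.depth                = in.depth;
    out.motionVectors        = in.motionVectors;
    out.jitterOffset[0]      = in.jitterX;
    out.jitterOffset[1]      = in.jitterY;
    // Assumption: the game encodes motion vectors in render-target pixels,
    // which is also what the replacement upscaler expects, so scale is 1:1.
    out.motionVectorScale[0] = 1.0f;
    out.motionVectorScale[1] = 1.0f;
    out.preExposure          = in.exposure;
    out.renderSize[0]        = in.renderWidth;
    out.renderSize[1]        = in.renderHeight;
    out.frameTimeDeltaMs     = frameDtMs;
    out.reset                = in.resetHistory;
    return out;
}
```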
 
8nm talk, the endless now
 
But 8nm isn't cheaper.

Probably, but I do not think we can say that definitively. Samsung could have offered a tremendous deal on 8nm knowing that it would help establish long-term demand for 8nm. I do not know what type of volume the Orin chips are produced at, but they will obviously be manufactured for years to come for the automotive market. Samsung could in theory only charge Nvidia/Nintendo for the fully functional chips, making the yields irrelevant to Nvidia/Nintendo. Wafer cost for 8nm could have also fallen sharply with it being a very mature node at this point. Bottom line is, if Samsung wanted to undercut TSMC for T239, they absolutely could have. They might not make much or any money on it, but there can be reasons that aren't always obvious for why a manufacturer will take on a contract for a deal that nets them little to no profit.

With all that said, the top reasons to believe T239 isn't 8nm have nothing to do with cost. Size, power consumption and peak performance are all significantly better on 4N. When you look at a custom SoC being developed for a device like the Switch, where things like SoC size, performance and especially power consumption are all very high priority, it gets tough to see how they could have settled on 8nm.
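(A back-of-the-envelope sketch of that pricing argument; every number below is invented purely for illustration and is not a real wafer price, die count, or yield:)

```cpp
// Back-of-the-envelope sketch: why a "pay only for fully functional chips"
// deal shifts the yield risk to the foundry. All numbers are invented.
#include <cstdio>

int main() {
    const double waferPrice   = 5000.0;  // assumed mature-node wafer price (USD)
    const double diesPerWafer = 400.0;   // assumed candidate dies per wafer
    const double yield        = 0.70;    // assumed fraction of fully working dies

    // Deal A: customer buys wafers and eats the yield loss itself.
    double costPerGoodDieWafer = waferPrice / (diesPerWafer * yield);

    // Deal B: foundry charges a flat unit price per known-good die, a price it
    // might accept with thin or no margin just to keep the line busy; the
    // yield loss is then the foundry's problem, not the customer's.
    double costPerGoodDieFlat  = waferPrice / diesPerWafer;

    std::printf("pay-per-wafer:    $%.2f per working chip\n", costPerGoodDieWafer); // ~$17.86
    std::printf("pay-per-good-die: $%.2f per working chip\n", costPerGoodDieFlat);  // ~$12.50
    return 0;
}
```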
 
Let me shut this down now - emulation of Switch 2 isn't going to be hard because of the GPU or its CPU, or because of DLSS, the OS, the decompression engine, or even its Extreme Power. If it's hard, it's hard because of anti-piracy measures. That's it.

A dedicated developer could start working on Switch 2 emulation now. Yuzu has the groundwork already in place. The ARM emulator in Yuzu needs a couple of extensions to go from ARMv8-A to ARMv8.2, but it's not like that isn't well documented, with plenty of example hardware in the wild to play with.

The GPU emulator will need to support Ampere microcode. But so will the Switch 2's emulator, presuming that's how they go with backward compat. They'll have to reverse engineer Ampere's microcode, but again, someone could have started on that already, with RTX 30 cards readily available for cheapish.

DLSS will need some games reverse engineered to figure out how to inject FSR2 in its place on machines that don't support it. But the actual wrapper to map one to the other is a solved problem.

The OS is a continuation of the existing OS, and so the existing work will be retained.

It will be entirely up to Nvidia and Nintendo's security teams to prevent this thing from seeing year 1 emulation.
Exactly what I thought. Sometimes I think a Mark Cerny is needed at Nintendo. Some small customizations to the Ampere CUDA instructions, and we could have easier backward compatibility and greater difficulty for emulation.
 
Frankly, why care about the node? Gamescom talk about the expected upgrades from Switch 1 (BotW) and how it fared in the Matrix demo benchmark (very well, especially in the RT department) is more than enough.

Worst case scenario, we get shorter battery life, but that can be improved in future iterations.
 
Man, Samsung plays too much. 🤣🤣🤣🤣 They have an imitation Switch Lite device using their SD cards. Idk if that's teasing or what.

I am actually using a Pro Plus 512GB for Linux and Android on a modded Switch.
 
I thought they were already in contract with Nvidia before the Tegra X1 and the Shield TV were released. Wasn't Nintendo involved in some last-minute modifications to the SoC, specifically the security?
EDIT: Kise Ryota found oldpuck's post. His write up is better than mine, but I'm gonna leave this here just in case people wanna save a click or somethin'.

Yep, though for the sake of the discussion and anybody unfamiliar I'll expand on this! Somebody please correct me if needed, but as far as I can remember the chain of events is:

-Nintendo and STMicroelectronics worked on a chip for a new handheld, initially codenamed "Mont Blanc." It was a quad-core ARM A53 CPU cluster with a custom "Decaf" GPU, which was essentially a cut-down version of the Wii U's "Cafe" GPU. So think the Wii U's GPU but cut in half, and with all the Wii and GameCube stuff ripped out. This was all in service of Project Indy, a project whose main goal was to unite the handheld and console software teams due to growing software development costs. The Wii U's GPU dates all the way back to the N64 days, and Nintendo's devs knew it like the back of their hand. So while the handheld teams would start from scratch, having the console teams work on stuff with prior knowledge of their development pipeline would allow them to still pump out games and make the onboarding approach for the handheld teams quicker and easier. Time is money, so costs go down too.

-Indy was initially envisioned as a handheld, though the scope of the project eventually expanded to it being a "hybrid" system. They toyed with wireless casting to the TV screen, though the latency was Really Fucking Bad™. Physically hooking up the system to the TV was an idea they also toyed with, which they found to work much better. Indy shifted gears, and hooking Indy up to the TV or "docking" it became the primary focus of the system. Thus, the Switch was born.

-Nintendo engineers were like "Hey wait a minute... Do we need Decaf?" With the system's hybrid nature now being the core component, it was not just the 3DS's successor but the Wii U's as well. It would still serve the initial purpose of Indy: unifying the console and handheld dev teams. They'd lose the 20+ years of knowledge and support, but the teams would also only have to worry about a single device. The problem is that if it was going to be a Wii U successor, it had to beat the Wii U.

-Enter Nvidia, stage left. Nintendo and Nvidia kinda already had a weird situationship due to Nintendo toying with the idea of using an older Tegra SoC in the 3DS. Nvidia were thirsty though. They had these (at the time) unreleased Tegra X1 chips, and needed a customer to use them. So Ninty's teams took a look at the X1 and compared it to Mont Blanc. There were security concerns that Nvidia quickly addressed (Captain Hindsight says "lol") to pass Nintendo's security tests, and talks began. Ninty was hesitant due to the potential loss of the aforementioned legacy support, but Nvidia also had a graphics API - that we now know as NVN - all ready to go. Details are admittedly scarce about the Nvidia and Ninty deal, but I think it's pretty easy to see why Nintendo went with the X1: The X1 outperformed Mont Blanc, and thanks to the SoC being finalized and NVN being already made, was finished and ready for use. And now we're here, 145+ million Switches sold later.
 
Frankly, why care about the node? Gamescom talk about the expected upgrades from Switch 1 (BotW) and how it fared in the Matrix demo benchmark (very well, especially in the RT department) is more than enough.

Worst case scenario, we get shorter battery life, but that can be improved in future iterations.
We care about the node because, if it's on 8nm, aside from being very power hungry, it would also be really huge.
 
Frankly, why care about the node? Gamescom talk about the expected upgrades from Switch 1 (BotW) and how it fared in the Matrix demo benchmark (very well, especially in the RT department) is more than enough.

Worst case scenario, we get shorter battery life, but that can be improved in future iterations.
If they're trying to keep battery life comparable to what we saw with the Switch, the node would contribute significantly to performance (TFLOPS), assuming Nintendo sees battery life as something not to toy with too much.

I have a hard time seeing Nintendo go for a significantly smaller battery if they're trying to push the hybrid form factor again.

Not to mention the clock speeds seem weird if it's SEC8N and also 12 SMs at the same time. Something has to "give" here.
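(Rough arithmetic for why the node shows up directly in battery life at fixed clocks: battery life is roughly battery energy divided by total system draw, and the SoC's share of that draw is the knob the node moves. All numbers below are invented for illustration only.)

```cpp
// Rough sketch of the battery-life trade-off, with invented numbers only:
// hours ≈ battery energy (Wh) / total system power draw (W).
// The SoC draw is what the node changes; screen, RAM, etc. stay fixed.
#include <cstdio>

double handheldHours(double batteryWh, double socWatts, double restOfSystemWatts) {
    return batteryWh / (socWatts + restOfSystemWatts);
}

int main() {
    const double batteryWh = 16.0;  // assumed battery, similar in size to the original Switch's
    const double otherW    = 2.5;   // assumed screen + RAM + storage + radios

    // Hypothetical SoC draws at the same clocks on two different nodes.
    std::printf("less efficient node: %.1f h\n", handheldHours(batteryWh, 7.0, otherW)); // ~1.7 h
    std::printf("more efficient node: %.1f h\n", handheldHours(batteryWh, 4.0, otherW)); // ~2.5 h
    return 0;
}
```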
 
Frankly, why care about the node? Gamescom talk about the expected upgrades from Switch 1 (BotW) and how it fared in the Matrix demo benchmark (very well, especially in the RT department) is more than enough.

Worst case scenario, we get shorter battery life, but that can be improved in future iterations.
it's a tech-oriented thread. little details are of interest to some of us. and 8nm does matter with things like battery life and the limits of performance. not to mention how cooling is designed, because we've seen what poor cooling design can do to a system
 
If they're trying to keep battery life comparable to what we saw with the Switch, the node would contribute significantly to performance (TFLOPS), assuming Nintendo sees battery life as something not to toy with too much.

I have a hard time seeing Nintendo go for a significantly smaller battery if they're trying to push the hybrid form factor again.

Not to mention the clock speeds seem weird if it's SEC8N and also 12 SMs at the same time. Something has to "give" here.
Your fingerprints, as they are burned off. It isn't an engineering issue, it is a feature. /s
 
It's so stupid that the only reason people believe it's Samsung 8nm is Kopite's word, because he thinks that since it's a custom Orin version it must be Samsung 8nm, and/or because it's a Nintendo thing.
 
It's so stupid that the only reason people believe it's Samsung 8nm is Kopite's word, because he thinks that since it's a custom Orin version it must be Samsung 8nm, and/or because it's a Nintendo thing.
Yeah, especially considering Maxwell GPUs were all fabbed on 28nm, while the T210, an SoC containing a Maxwell GPU, was fabbed on 20nm.

It's like assuming the original Switch's SoC (T210) would be fabbed on 28nm just because the rest of the Maxwell family was. That seems to be the basis of Kepler's assumption. Not sure why Kopite is assuming 8nm though (unless his basis of assumption is formed in the same way Kepler's was).
 
Re: 8nm vs 5nm price

In the past, each node shrink was cheaper per transistor. That sorta stopped with the transition to 5nm. The new processes are just so dang slow, and the machines in the world that can do them are so rare, that constrained capacity has driven the price per wafer up as fast as the chips per wafer have gone up.

10nm (the class that 8nm is in, confusingly enough) is still probably more expensive per transistor than a newer process node, just because 10nm is old enough. However, Samsung has really struggled to move to EUV technology, and they invested heavily in making 8nm mature.

Samsung is likely willing to give Nvidia a deal where they pay for only working chips. That's better than an obvious price cut - it means that Nvidia can afford to slap 12 SMs in their chip, without any extras to work around chip failures, because they don't have to pay for the defective chips.
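(For anyone curious, a first-order way to see why "pay only for working chips" pairs nicely with shipping all 12 SMs and no spares is the classic Poisson yield model, Y = exp(-D0 * A). The defect density and die area below are invented for illustration only:)

```cpp
// First-order Poisson yield model, Y = exp(-D0 * A): the fraction of dies
// with zero killer defects. D0 (defects per cm^2) and the die area are
// invented for illustration; the point is that with a pay-per-good-die deal
// on a mature node, the customer doesn't much care what Y actually is.
#include <cmath>
#include <cstdio>

double poissonYield(double defectsPerCm2, double dieAreaMm2) {
    const double areaCm2 = dieAreaMm2 / 100.0;
    return std::exp(-defectsPerCm2 * areaCm2);
}

int main() {
    const double d0     = 0.2;    // assumed mature-node defect density (defects/cm^2)
    const double dieMm2 = 200.0;  // assumed die size for a 12 SM SoC on an older node

    double y = poissonYield(d0, dieMm2);
    std::printf("fraction of fully working dies: %.0f%%\n", y * 100.0); // ~67%
    return 0;
}
```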

TSMC has zero motivation to offer Nvidia a deal here, and Nvidia has all the motivation to go with Samsung for any product they can, in order to keep TSMC's competition as healthy as possible.

Re: 8nm vs 5nm power consumption

I don't think the current design is viable on 8nm. As more and more GPU reviewers are starting to look at power consumption seriously, and as more and more Lovelace products come out, Thraktor's power predictions seem to be holding up.

A lot of smart folks are predicting 8nm, but when I talk to them, the argument sounds like this: "I know enough about the chip market to know 5nm is probably more expensive. I don't know enough about electrical engineering to know if there are significant power savings. I'll bet on Nintendo being cheap and Nvidia being brilliant."

Which isn't a bad analysis, but it's kinda shallow. It assumes that there are major power savings that Nvidia knows about that it is choosing to not ship in Lovelace, while ignoring Nintendo's ability to cut costs elsewhere (screen, cough cough).

But I'm not an electrical engineer either. My confidence that a bunch of forum nerds can reverse engineer the power consumption of an unknown chip is higher than it should be, but it's not that high.

Re: 8nm vs 5nm generally

I've got zero investment beyond wanting my predictions to be right. If Nintendo says "yeah, we could have gone to 5nm, but instead we worked closely with Nvidia to build a massive power saving architecture that let us get the performance of 5nm at the cost of 8nm," I fail to see how anyone could view that as a bad thing!

If the product is bad, it's bad - if it underperforms, if it's too big, if the battery life sucks - and if it's good, it's good. 8nm is one of a whole host of decisions Nintendo would have made that led them there, and I'm not going to put the whole thing at the foot of the dang process node.
 
Nintendo systems, regardless of power, have always at least had more RAM than all consoles of the previous gen, so 8GB isn't gonna happen. 10GB at least for Switch 2. Nintendo isn't stingy when it comes to RAM. Not since the N64 anyway, and they learned from that mistake and even corrected it mid-gen (Expansion Pak).
 
Nvidia has all the motivation to go with Samsung for any product they can, in order to keep TSMC's competition as healthy as possible.
Doesn't that not work if Samsung's nodes kinda suck? Samsung's 8 nm had notorious yield problems with the 30 series, and their 5 nm class nodes still lag behind TSMC's. While competition among fabs is good for fabless chipmakers, I imagine Nvidia doesn't want what is likely to be one of its most popular products to be saddled with a node with lower profitability and especially lower power efficiency. In consumer graphics cards and Orin, power efficiency is less paramount because they either have an external power supply where the cost is more or less invisible, or they're on devices with huge batteries. In high-end data center chips, power efficiency is much more important because their clientele can see exactly how much running a given chip costs them and compare to other chips, so Nvidia made the switch to TSMC N7. On the low end of a tablet, power efficiency starts to matter again since they're relying on a relatively small battery and need to satisfy Nintendo's requirements. No amount of ass-kissing on Samsung's part is gonna make a square peg fit in a round hole.
 
If, god forbid, they went with SEC8N, can they node shrink for a hypothetical Pro to TSMC4N? Or is that not an option?
Going from Samsung's 8N process node to TSMC's 4N process node isn't considered a process node shrink, but rather a full migration from one company's process node to another company's.

But having said that, the answer's yes, with a caveat. The caveat being that the SoC has to be redesigned. And here are the reasons why:
  • Samsung's IPs are different from TSMC's IPs.
  • Samsung's 8N process node uses DUV lithography whereas TSMC's 4N process node uses EUV lithography.
And I imagine redesigning the SoC for such a migration is neither inexpensive nor fast.
 
Going from Samsung's 8N process node to TSMC's 4N process node isn't considered a process node shrink, but rather a full migration from one company's process node to another company's.

But having said that, the answer's yes, with a caveat. The caveat being that the SoC has to be redesigned. And here are the reasons why:
  • Samsung's IPs are different from TSMC's IPs.
  • Samsung's 8N process node uses DUV lithography whereas TSMC's 4N process node uses EUV lithography.
And I imagine redesigning the SoC for such a migration is neither inexpensive nor fast.

The flip side here being that it would be easier/cheaper to shrink SEC8N to Samsung 5nm? Or am I remembering that SEC8N is a dead end node?
 