StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

But they don't want to.

120 GB/s is too much bandwidth for a low-clocked Ampere GPU. Since Ampere only needs about 25 GB/s per TFLOP, a 4 TFLOPS Switch 2 is possible on LPDDR5X. Below that, or much below that, it would be cheaper to use LPDDR5 or even LPDDR4X.

So, do you really think they found a way to have a Switch 2 at 4 TFLOPS (docked) / 2 TFLOPS (portable) on 8nm?

The high SM count and the high bandwidth of LPDDR5X point to high clocks, but 8nm doesn't. So who is wrong: Nintendo for using these, or the people who think they tried to put them on 8nm?
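As a rough sanity check on that math (just a sketch: the 1536 CUDA cores come from the leaked 12 SM T239 config, the 25 GB/s per TFLOP figure is the rule of thumb above, and the clock values are purely illustrative):

```python
# Back-of-the-envelope check using the leaked 12 SM / 1536 CUDA core T239 config
# and the ~25 GB/s-per-TFLOP Ampere rule of thumb quoted above.
CUDA_CORES = 12 * 128        # 12 SMs x 128 FP32 cores per Ampere SM
BW_PER_TFLOP = 25            # GB/s of memory bandwidth per TFLOP (rule of thumb)
LPDDR5X_BW = 120             # GB/s, the rumored maximum

def tflops(clock_ghz):
    """FP32 TFLOPS = 2 ops per FMA x cores x clock."""
    return 2 * CUDA_CORES * clock_ghz / 1000

for clock_ghz in (0.66, 1.0, 1.3):   # purely illustrative clock points
    t = tflops(clock_ghz)
    print(f"{clock_ghz:.2f} GHz -> {t:.2f} TFLOPS, "
          f"wants ~{t * BW_PER_TFLOP:.0f} of {LPDDR5X_BW} GB/s")
# Only around ~1.3 GHz does the GPU reach ~4 TFLOPS and actually need ~100 GB/s,
# which is the sense in which 120 GB/s of LPDDR5X implies high clocks.
```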
These were all some of the reasons I was 100% sold on 4nm before the motherboard leak happened.

If it's 8nm, 4 teraflops is out of the question imo.
Maybe there are benefits to excess memory bandwidth we don't know about (DLSS execution speed, RT, who knows).

Or maybe Z0mb1ie is right that it's 5nm.

But right now, my bet is on 8nm with a bit better power efficiency than Orin for unknown reasons.
 
Don’t forget the bandwidth needs of the CPU.
I didn't forget. 20 GB/s is more than enough. Then 100 GB/s for the GPU + 20 GB/s for the CPU, and the LPDDR5X makes sense.

People think Nintendo asked Nvidia to create an 8nm T239 because they want it cheap, but forget that 12 SMs and LPDDR5X are expensive. Switch 2, for the configs we actually know, is nothing cheap, so I doubt the node will end up being the weak link in the chain.
 
About the node size, I tried to find something about LPDDR5X on 8nm, but the largest node I could find it on is 7nm: https://www.design-reuse.com/news/55156/openedges-7nm-lpddr5x-phy-ip.html

The problem is... even that larger node only got it in 2023. That means Switch 2 can't use it, since it was taped out in 2022.

So, LPDDR5X is only for 5nm nodes or below.
That's a PHY.

Also, LPDDR5X isn't proof of anything regarding node. Nvidia can reuse their own controller across multiple nodes if desired.

Synopsys announced an expanded collaboration for ready-to-use external IP on Samsung Foundry, and that included LPDDR5X for processes like 8LPU:

 
These were all some of the reasons I was 100% sold on 4nm before the motherboard leak happened.

If it's 8nm, 4 teraflops is out of the question imo.
Maybe there are benefits to excess memory bandwidth we don't know about (DLSS execution speed, RT, who knows).

Or maybe Z0mb1ie is right that it's 5nm.

But right now, my bet is on 8nm with a bit better power efficiency than Orin for unknown reasons.
The problem here is not the 5nm, but the Samsung part.

The first leaker we had after the Nvidia hack said it was on "Samsung 5nm", which makes sense for the configs we have.

But people here discard the "Samsung" part, since Nvidia is working with TSMC for Ada.

If we had stuck to the leaker's word and debated the possibilities on a Samsung node until now, then when the motherboard was revealed, the SM part on the SoC would be just another proof the leaker was right.

Instead we went back to the "because Nintendo wants it cheap" debate, which is very wrong given what we know about the system.
 
But they don't want to.

120 GB/s is too much bandwidth for a low-clocked Ampere GPU. Since Ampere only needs about 25 GB/s per TFLOP, a 4 TFLOPS Switch 2 is possible on LPDDR5X. Below that, or much below that, it would be cheaper to use LPDDR5 or even LPDDR4X.

So, do you really think they found a way to have a Switch 2 at 4 TFLOPS (docked) / 2 TFLOPS (portable) on 8nm?

The high SM count and the high bandwidth of LPDDR5X point to high clocks, but 8nm doesn't. So who is wrong: Nintendo for using these, or the people who think they tried to put them on 8nm?

Well, Rich did mention the possibility that Nintendo will downclock the RAM speed as well.
 
That's a PHY.

Also, LPDDR5X isn't proof of anything regarding node. Nvidia can reuse their own controller across multiple nodes if desired.

Synopsys announced an expanded collaboration for ready-to-use external IP on Samsung Foundry, and that included LPDDR5X for processes like 8LPU:

This article doesn't specifically say that they are bringing LPDDR5X to 8LPU. You're the one who assumed that.

Plus, that is from 2023; again, a year after Switch 2's tapeout.
 
Downclock the LPDDR5X RAM or use cheaper LPDDR5: which makes more sense?

We don't actually know the price difference.

More advanced is not always more expensive, as supply and demand also matter a lot.

This is the main reason Nintendo changed from LPDDR4 to LPDDR4X with Mariko in 2019, not to make the already great battery life improvement slightly better.
 
I didn't forget. 20 GB/s is more than enough. Then 100 GB/s for the GPU + 20 GB/s for the CPU, and the LPDDR5X makes sense.

People think Nintendo asked Nvidia to create an 8nm T239 because they want it cheap, but forget that 12 SMs and LPDDR5X are expensive. Switch 2, for the configs we actually know, is nothing cheap, so I doubt the node will end up being the weak link in the chain.
That's what I thought too, but maybe they chose the strong components because the node is weak and it's custom-made with some efficiency gains. We know prices have skyrocketed; look at the PS5 Pro, on 4N I think. I still hope it's 5nm, think it will be 7nm, and am prepared for it being 8....
 
Kind of a tough question. I think Rockstar picks 30 FPS because they have certain design goals in mind, and I also don't think their games would be nearly as ambitious if the target was 60 FPS. If GTA5 had targeted 60 FPS on 7th gen, I can't imagine how compromised the game would have been compared to what we got, and look at the performance even when they targeted 30 (it had some performance issues on 8th gen consoles too, but not as pronounced):

[attached images: GTA5 performance screenshots on 7th-gen consoles]

Could we maybe see a 40 FPS mode? Possibly, but I wouldn't count on it.

edit: so tl;dr for my answer, the consoles could do 60 but it would be a completely different game

It’s quite interesting because the PS3 had its split pool of memory, which really hampered performance for multiplatform titles early on. Even if it were a unified pool like the 360 was, I think it would’ve resulted in better performance from the get go. That also said, I think in general 512MB just became not enough late into the system’s life more than anything, though I’m sure there are still some technical challenges developers could not overcome at the time with the CPUs, and GPUs.

Given how physics heavy GTAV was with the Euphoria engine, that is probably why the 360 faltered more than PS3 by this point, though I can’t say for certain.

I always find it interesting from the standpoint of “what if?” What if Developers had more time, or found even more interesting ways of optimizing games for either system? What if the PS3 had more ram on tap, or even a unified pool? How much more power was left untapped for the Cell processor? What if the Xbox’s CPU could do Out-of-Order Executions?

But it also makes me excited what modders, and Homebrew enthusiasts of the community will be able to accomplish in terms of unofficial ports, and such. Heck, recently it was announced that GTA3 was “ported” over to the Sega Dreamcast, though it is still in rough shape, and yet it’s possible. Now I want to see someone attempt it for GameCube…

I continue to harp on this point because I’m not interested when someone says, “can’t be done” in regards to games. A couple years ago, I saw a man make Super Mario 64 go vroom vroom on original hardware to make it run at 60fps using only optimizations. Not saying GTAV on PS3 would be able to run at 60fps with enough optimization, but more “what would it take to make it work?”

As a last point I’ll make, given what we’re seeing with Moore’s Law slowing down, what kinds of optimizations haven’t we discovered yet or figured out that’ll make us go, “Whoa!”
 
I always find it interesting from the standpoint of “what if?” What if Developers had more time, or found even more interesting ways of optimizing games for either system? What if the PS3 had more ram on tap, or even a unified pool? How much more power was left untapped for the Cell processor? What if the Xbox’s CPU could do Out-of-Order Executions?

The most interesting "what if" about the PS3 is: what if the Cell GPU had materialized?

Until very late in development, the PS3 was supposed to have a Cell-branded, completely in-house Sony GPU.

Because they couldn't make it work for unknown reasons (either performance or price), Sony had to rush a deal with Nvidia at the last minute.
 
The PS5 Pro is 5nm, isn't it? Which is basically the same node. Node names are confusing on purpose; Nvidia's modified 5nm = 4nm.

Anyway, Drake would be tiny on 4 (5) nm, so it's not at all comparable to the PS5 Pro.
Yes, 4N is a subnode and the PS5 Pro is a bigger chip. I think we will be surprised either way. The combination of a chip custom-made from the ground up with backported RTX 4000 features, paired with a highly optimized Nvidia/Nintendo OS, a powerful T239 with good RAM size and speed and no major bottleneck, plus DLSS with transformer technology... I think PS4 Pro visuals should be possible, not in raw power, but by today's standards that's not necessary anymore. Just look at the ugly, unsharp RDR2 PS4 Pro version with its 1440p checkerboard upscaling; it's so ugly I couldn't play it for five minutes straight, and I'm absolutely sure Switch 2 can do better with DLSS 4 technology.
 
Until very late in development, the PS3 was supposed to have a Cell-branded, completely in-house Sony GPU.

Actually, I believe the plan was to have only the Cell processor. I believe Sony even considered doubling up the Cell processor when they saw what the Xbox 360 was doing. They ultimately opted for a dedicated GPU from Nvidia when it became apparent that Cell by itself was not capable of matching the 360 on its own.
 
The most interesting "what if" about the PS3 is: what if the Cell GPU had materialized?

Until very late in development, the PS3 was supposed to have a Cell-branded, completely in-house Sony GPU.

Because they couldn't make it work for unknown reasons (either performance or price), Sony had to rush a deal with Nvidia at the last minute.

I thought it was going to be two identical Cell processors, and not a Cell CPU and a Cell GPU?

Regardless, I miss the age of unique, and exotic hardware. Such a different time full of possibilities. But I only say this as a gamer, and not a programmer. I’m sure from a programming standpoint, it would’ve been an even more colossal PITA.

I can dream though.
 
I thought it was going to be two identical Cell processors, and not a Cell CPU and a Cell GPU?

Regardless, I miss the age of unique, and exotic hardware. Such a different time full of possibilities. But I only say this as a gamer, and not a programmer. I’m sure from a programming standpoint, it would’ve been an even more colossal PITA.

I can dream though.
It was cool, but I also never want to see another console like the Saturn with most of its library trapped behind its novel architecture.
 
This is the main reason Nintendo changed from LPDDR4 to LPDDR4X with Mariko in 2019, not to make the already great battery life improvement slightly better.
Nvidia made the Tegra X1+, which used LPDDR4X, and they made use of that extra power of the SoC and RAM for the 2019 Nvidia Shield TV. It's not a custom design by Nintendo like the T239 is.

Nintendo may have urged Nvidia for something more efficient, but they were not in control over what components it used.
 
I thought it was going to be two identical Cell processors, and not a Cell CPU and a Cell GPU?

Regardless, I miss the age of unique, and exotic hardware. Such a different time full of possibilities. But I only say this as a gamer, and not a programmer. I’m sure from a programming standpoint, it would’ve been an even more colossal PITA.

I can dream though.

Actually, I believe the plan was to have only the Cell processor. I believe Sony even considered doubling up the Cell processor when they saw what the Xbox 360 was doing. They ultimately opted for a dedicated GPU from Nvidia when it became apparent that Cell by itself was not capable of matching the 360 on its own.

No, according to this source who seems to know what they're talking about, it was a separate chip. The first reply here.

"
This is a good question.

Originally they planned to have two separate Cell chips on the PS3: a Cell-based CPU, physics & geometry chip (the Cell chip we got), and a Cell-based pixel chip (GPU).

The Cell GPU would have had 4–6 of the Cell SPUs and then some fixed-function graphics hardware, like z-check (ROP) and texturing units (TMUs).

Having a similar architecture on all the number-crunching cores of the system would probably have given them some nice synergy advantages: the same code could execute in either the GPU chip or the CPU chip, and any improvements they made to their compilers would benefit both the physics/geometry side and the pixel side.

At some point they understood that they could not get the Cell-based GPU working, either on the schedule they had or at the performance they wanted. This happened when development of the console was already quite far along and there was not much time left until release.

So they quickly needed a different GPU for the PS3. They went shopping in a panic, and Nvidia offered them a slightly customized GeForce 7800. When the PS3 was released, the GPU was already obsolete feature-wise, and they also had no synergy advantages between the CPU and GPU because the GPU did not have those Cell SPUs anymore."

 
No, according to this source who seems to know what they're talking about, it was a separate chip. The first reply here.

"
This is a good question.

Originally they planned to have two separate Cell chips on the PS3: a Cell-based CPU, physics & geometry chip (the Cell chip we got), and a Cell-based pixel chip (GPU).

The Cell GPU would have had 4–6 of the Cell SPUs and then some fixed-function graphics hardware, like z-check (ROP) and texturing units (TMUs).

Having a similar architecture on all the number-crunching cores of the system would probably have given them some nice synergy advantages: the same code could execute in either the GPU chip or the CPU chip, and any improvements they made to their compilers would benefit both the physics/geometry side and the pixel side.

At some point they understood that they could not get the Cell-based GPU working, either on the schedule they had or at the performance they wanted. This happened when development of the console was already quite far along and there was not much time left until release.

So they quickly needed a different GPU for the PS3. They went shopping in a panic, and Nvidia offered them a slightly customized GeForce 7800. When the PS3 was released, the GPU was already obsolete feature-wise, and they also had no synergy advantages between the CPU and GPU because the GPU did not have those Cell SPUs anymore."

fascinating
 
It was cool, but I also never want to see another console like the Saturn with most of its library trapped behind its novel architecture.

That’s a fair point. Saturn emulation is still a pain to this day, and same with N64. From an engineering perspective though, I wonder about how things would’ve changed vs. standardizing hardware like we’ve been doing.

Though I will say, we’re starting to go back to exotic hardware, or at least more fixed-function silicon because as it turns out, it’s pretty freaking fast compared to more general purpose hardware. Looking at what Apple has been able to do with their silicon designs, it’s pretty cool. While I hate Apple as a company, I respect the engineering they accomplish with their chips.

No, according to this source who seems to know what they're talking about, it was a separate chip. The first reply here.

"
This is a good question.

Originally they planned to have two separate Cell chips on the PS3: a Cell-based CPU, physics & geometry chip (the Cell chip we got), and a Cell-based pixel chip (GPU).

The Cell GPU would have had 4–6 of the Cell SPUs and then some fixed-function graphics hardware, like z-check (ROP) and texturing units (TMUs).

Having a similar architecture on all the number-crunching cores of the system would probably have given them some nice synergy advantages: the same code could execute in either the GPU chip or the CPU chip, and any improvements they made to their compilers would benefit both the physics/geometry side and the pixel side.

At some point they understood that they could not get the Cell-based GPU working, either on the schedule they had or at the performance they wanted. This happened when development of the console was already quite far along and there was not much time left until release.

So they quickly needed a different GPU for the PS3. They went shopping in a panic, and Nvidia offered them a slightly customized GeForce 7800. When the PS3 was released, the GPU was already obsolete feature-wise, and they also had no synergy advantages between the CPU and GPU because the GPU did not have those Cell SPUs anymore."


Damn! That is quite the what if, especially with the description of the synergy advantages between both the CPU, and GPU.
 
I still wonder if it could have a very low (Switch-ish) base clock and then boost a lot higher when it's not fully utilised, like when not using RT for example.
 
A silly question: Are there any Switch games that use 32GB cartridges?
Which games use, or have had a print run with, 32GB cartridges?

-Bayonetta 3 (reprint 002, with version 1.2.0 on the cartridge; found in the USA but apparently worldwide)
-The Legend of Zelda: Tears of the Kingdom (worldwide)
-The Witcher 3: Wild Hunt Complete Edition (worldwide)
-Star Wars: KOTOR 2 (Limited Run release)
-Doom Eternal (Limited Run release)
-Alien: Isolation (Limited Run release)
-Final Fantasy X/X-2 Remastered (Japan and Asia only)
-Dragon Quest Heroes I & II (Japan/Asia exclusive)
-The Elder Scrolls V: Skyrim (reprint 002, includes version 1.1.143.3229919, the original Anniversary patch; EU and America)
-Wolfenstein II (Limited Run Games release)
 
That’s a fair point. Saturn emulation is still a pain to this day, and same with N64. From an engineering perspective though, I wonder about how things would’ve changed vs. standardizing hardware like we’ve been doing.

Though I will say, we’re starting to go back to exotic hardware, or at least more fixed-function silicon because as it turns out, it’s pretty freaking fast compared to more general purpose hardware. Looking at what Apple has been able to do with their silicon designs, it’s pretty cool. While I hate Apple as a company, I respect the engineering they accomplish with their chips.



Damn! That is quite the what if, especially with the description of the synergy advantages between both the CPU, and GPU.
I think it’s just that we’ve reached the end of the race to the common denominator in the basic form of hardware architecture. That doesn’t necessarily mean you’ve landed on the absolutely ideal architecture but that your odds of making something worse instead of better have gotten too high, too much risk for the possible reward. At that point it makes sense to unify around that common denominator and start focusing elsewhere. I think that is what happened here. The last few times someone tried to go off the common path all they got for it was difficult development, high cost, and poor backward compatibility and preservation of the software. Not worth it, so don’t do it anymore.

I don’t think that precludes the same cycle of experimentation and racing to a common denominator from taking place all over again around more peripheral or supporting systems. That’s like what we are seeing now with the fast SSD and FDE type stuff, as well as with machine learning cores, and new fixed function accelerator blocks within the GPU like RT cores. Won’t be surprised if those areas continue to evolve further for another decade or so.
 
Nvidia made the Tegra X1+, which used LPDDR4X, and they made use of that extra power of the SoC and RAM for the 2019 Nvidia Shield TV. It's not a custom design by Nintendo like the T239 is.

Nintendo may have urged Nvidia for something more efficient, but they were not in control over what components it used.
The 2019 Shield sold a small fraction of what the Switch did.

Mariko was likely built first and foremost for Nintendo. The Shield alone would come nowhere near justifying the development of an upgraded TX1.
 
I don't think definitive statements on what GTA6 will or won't do make any sense. I see a lot of "it's going to overtax the consoles", etc. We don't know what the game is going to do. One thing we do know is that things like NPCs are scalable, as are AI functions.

No one, unless they work for Rockstar itself and are working on GTA6, can speak definitively one way or the other.
From the leaks, it's going to be big on simulations: things like weather, water, physics, hair, etc.
 
I know we were all banking on TSMC 4N, but headlines like these push the narrative that things are REALLY bad for Samsung right now and they gave a deal that Nintendo/Nvidia simply couldn't refuse.

I really don't think the more recent situation of Samsung's foundry business would have any impact on a choice that was made somewhere in 2020/21, when the node was locked in.
 
How much would multi-core optimization help with bringing over CPU-heavy games? Even if I anticipate all of the CPU cores being clocked lower than the stationary machines', at least the workload can be distributed among 7-8 of them instead of the 3-4 on NS1 (I give a range as I assume one core is reserved for the OS?).
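As a very rough illustration of why more cores can offset lower clocks for the parallel part of a frame (a sketch only: the clocks and the 80% parallel fraction are made-up numbers, and per-core IPC differences aren't modeled):

```python
# Amdahl's-law sketch: relative CPU throughput when a workload is only partly parallel.
# All clocks and the parallel fraction below are placeholders, not confirmed specs.
def effective_throughput(cores, clock_ghz, parallel_fraction):
    serial = 1 - parallel_fraction
    speedup = 1 / (serial + parallel_fraction / cores)   # Amdahl's law
    return clock_ghz * speedup

ns1 = effective_throughput(cores=3, clock_ghz=1.0, parallel_fraction=0.8)  # ~3 game cores on NS1
ns2 = effective_throughput(cores=7, clock_ghz=1.1, parallel_fraction=0.8)  # hypothetical 7 game cores
print(f"NS1-ish: {ns1:.2f}  NS2-ish: {ns2:.2f}  ratio ~{ns2 / ns1:.1f}x")
# With an 80% parallel workload, going from 3 to 7 cores at similar clocks is worth
# roughly 1.6x on its own; better per-core IPC (A78C vs A57) would stack on top of that.
```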
 
So will the reveal just be a 20-second clip or a 2-to-5-minute video? Because I can't see it being more than 20 seconds unless they are going into the technical details. Or they might just fill it with fluff.


So let's say it is just a 20-30 second clip and they tell us the battery life; how good of a guess can we get from the video? Also, let's say they show a current-gen game with it?
 
How much would multi-core optimization help with bringing over CPU-heavy games? Even if I anticipate all of the CPU cores being clocked lower than the stationary machines', at least the workload can be distributed among 7-8 of them instead of the 3-4 on NS1 (I give a range as I assume one core is reserved for the OS?).
How much visual work will they put on the CPU to assist the GPU? How much physics and complex NPC behavior will we get? There are a lot of questions, but I don't think it will be that bad if they program it right.
 
The 2019 Shield sold a small fraction of what the Switch did.

Mariko was likely built first and foremost for Nintendo. The Shield alone would come nowhere near justifying the development of an upgraded TX1.
Even if that were the case, we can't compare the Switch 2 using LPDDR5X just to downclock it to Mariko using LPDDR4X. 5X is more or less an overclock, without the kind of efficiency benefit 4X had; therefore, it makes little sense that Nintendo, in a custom design, would opt for something they had no intention of using to its full extent.
 
So will the reveal just be a 20-second clip or a 2-to-5-minute video? Because I can't see it being more than 20 seconds unless they are going into the technical details. Or they might just fill it with fluff.


So let's say it is just a 20-30 second clip and they tell us the battery life; how good of a guess can we get from the video? Also, let's say they show a current-gen game with it?
I would expect a video similar to the Switch reveal. That was about 3 minutes 40 seconds, very much about showing off the hardware and its concepts while teasing game footage. Nintendo notoriously refused to state Skyrim was coming to Switch even though they literally showed it in the teaser for Switch.
 
I continue to harp on this point because I’m not interested when someone says, “can’t be done” in regards to games. A couple years ago, I saw a man make Super Mario 64 go vroom vroom on original hardware to make it run at 60fps using only optimizations. Not saying GTAV on PS3 would be able to run at 60fps with enough optimization, but more “what would it take to make it work?”
You're talking about the people's champion, Kaze Emanuar! The man uses every bit of every byte, folds his polynomials, and gives us 9 MB of RAM on the 64.
 
I would expect a video similar to the Switch reveal. That was about 3 minutes 40 seconds, very much about showing off the hardware and its concepts while teasing game footage. Nintendo notoriously refused to state Skyrim was coming to Switch even though they literally showed it in the teaser for Switch.
So we must be getting some new software feature if that's the case.
 
So we must be getting some new software feature if that's the case.
Voice/party chat overlay (Community/Chat) for the C-button. Brings up a UI over the game and pauses it. Kinda like the Xbox menu or PS menu when you press the home buttons.
 
I'm not going to speak on the software speculation, since this is not the thread for that. However, I will just share that I am way less worried about the node due to what has been promised in these last few days.
 
I'm not going to speak on the software speculation, since this is not the thread for that. However, I will just share that I am way less worried about the node due to what has been promised in these last few days.
If you can't love me at my 8nm, you don't deserve me at my 5nm.
 
I would expect a video similar to the Switch reveal. That was about 3 minutes 40 seconds, very much about showing off the hardware and its concepts while teasing game footage. Nintendo notoriously refused to state Skyrim was coming to Switch even though they literally showed it in the teaser for Switch.

Given that we already know the "Switch" concept, hopefully it will mean they spend all 4 minutes on the new features and POWAH!!
 
Just one last comment about PH Brazil's comment. He has scheduled a livestream to start at 7:45 EST to react to the Switch 2 announcement. So he seems very confident that not only will the announcement happen, but that it will happen in the morning.
 
The 2019 Shield sold a small fraction of what the Switch did.

Mariko was likely built first and foremost for Nintendo. The Shield alone would come nowhere near justifying the development of an upgraded TX1.
Just to add to what I already mentioned about this quote: when did one product selling more or less than another ever mean anything about which product the hardware was designed for? The Tegra X1 was "not" designed first and foremost for the Switch v1, yet the Switch v1 very likely sold the most units using it.

Besides, nothing was stopping Nintendo from making use of the added power of the TX1+ and LPDDR4X in docked mode, but they chose not to use it. Was it ever on the cards? If the TX1+ chip was indeed built first and foremost for Nintendo, where it was never going to go beyond what Switch v1 pushed, then it had no need for the potential to go beyond what the v2/Lite/OLED used, let alone what the original Tegra X1 was capable of. That makes it seem less likely that the Tegra X1+ was made specifically for Nintendo, and more likely that Nintendo was simply able to use it to their benefit.
 
Even if that were the case, we can't compare the Switch 2 using LPDDR5X just to downclock it to Mariko using LPDDR4X. 5X is more or less an overclock, without the kind of efficiency benefit 4X had; therefore, it makes little sense that Nintendo, in a custom design, would opt for something they had no intention of using to its full extent.

That doesn't even get into the SoC being entirely custom. I don't think Nintendo would pay extra for so much custom hardware if they only planned to downclock it to hell.
 
You were asked to not pretend to have insider information and to engage with the thread more productively before. For continuing your behavior, you have been threadbanned for a week - MarcelRguez, OctoSplattack, Biscuit, KilgoreWolfe
The fact that the memory of the Switch 2 can have a maximum bandwidth of 120 GB/s doesn't mean it will run at that speed. I don't know where you got that idea from. The memory will run at a MUCH lower frequency to consume less and dissipate less heat.
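For what it's worth, bandwidth scales linearly with the transfer rate, so a downclock is easy to reason about (a sketch: the 128-bit bus matches the commonly reported two-module LPDDR5X setup, and the lower data rates are just example downclocks, not confirmed figures):

```python
# Bandwidth (GB/s) = data rate (MT/s) x bus width (bits) / 8 bits-per-byte / 1000.
# The 128-bit bus is the commonly reported config; lower rates are illustrative downclocks.
BUS_WIDTH_BITS = 128

def bandwidth_gb_s(data_rate_mts):
    return data_rate_mts * BUS_WIDTH_BITS / 8 / 1000

for rate in (7500, 6400, 4800):
    print(f"{rate} MT/s -> {bandwidth_gb_s(rate):.1f} GB/s")
# 7500 MT/s is where the 120 GB/s maximum comes from; running the same chips
# at, say, 6400 MT/s would give 102.4 GB/s while saving power and heat.
```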
 