
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Again, the best source of information about the Switch 2 anywhere on the web is this thread's OP.

You don't need to go elsewhere. Period.
 
The SD8G3 GPU is rated as high as somewhere around 5 TFLOPS of FP32 compute performance, yet even with the highest score in the Geekbench 6 compute benchmark it is behind the A17 Pro GPU, which is believed to have around 2.1 TFLOPS FP32. Something doesn't add up here.
Some places say about 2.6 TFLOPS for the Adreno 750 (used by the SD8G3) at a clock of 903 MHz, while others say 5.2 TFLOPS. The Wiki page for Adreno bases the number on "Adreno ALUs", which a note says is "ALU * MP count". Dunno what the MP count is, but benchmarks really don't seem to show anything near that higher number compared to the Apple A17 Pro. Tests have been done showing the Adreno 750 doing only about 32% better than the A17 Pro, and the gap shrinks once throttling kicks in. Just another note, but the SD8G3 is limited on RAM bandwidth according to Qualcomm, using LPDDR5X at 4.8 GHz, which equates to roughly 77 GB/s, so that 5.2 TFLOP number just doesn't sound right.
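If anyone wants to sanity-check those figures, here's a minimal back-of-the-envelope sketch. The two ALU counts are purely hypothetical numbers picked to show how both quoted TFLOPS figures could arise from the same 903 MHz clock (e.g. depending on whether only FP32 lanes or FP32 plus dual-issue/FP16 lanes get counted); the bandwidth math just follows from a 64-bit LPDDR5X bus at 9600 MT/s (the "4.8 GHz" figure is the I/O clock, double data rate).

```python
# Back-of-the-envelope GPU math. The ALU counts below are illustrative
# assumptions, NOT confirmed Adreno 750 specs.

def fp32_tflops(alus: int, clock_ghz: float) -> float:
    # One FMA per ALU per clock = 2 FLOPs.
    return alus * 2 * clock_ghz / 1000

clock_ghz = 0.903  # the 903 MHz clock cited above

print(fp32_tflops(1440, clock_ghz))  # ~2.6 TFLOPS (hypothetical 1440 FP32 ALUs)
print(fp32_tflops(2880, clock_ghz))  # ~5.2 TFLOPS (hypothetical 2880 counted lanes)

# Memory bandwidth: 64-bit LPDDR5X bus, 4.8 GHz I/O clock, double data rate.
bytes_per_transfer = 64 / 8      # 8 bytes across the bus
transfers_per_sec = 4.8e9 * 2    # 9600 MT/s
print(bytes_per_transfer * transfers_per_sec / 1e9)  # 76.8 -> "roughly 77 GB/s"
```

Which of those counting conventions the Wiki's "Adreno ALUs" figure uses is the real question.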
 
To be fair, that was the last time we received positive news related to the hardware. It's all been downhill from there
Nothing to do with the actual hardware (depending on how much you believe the 8nm stuff), just the release timing. Until we hear more, I don't think there's any reason to assume the actual hardware will be disappointing based on what we're guessing so far.
 
My reading (which could be very wrong) of the Samsung PR is that their cards will be the first mass-produced microSD Express product, not that they are the first to adopt the new SD 9.1 standard. The product only supports SD 7.0 speeds (rated at 800 MB/s, below SD 7.0's theoretical limit of 985 MB/s). As for the Samsung engineer's LinkedIn profile, IMHO it seems more likely to be about the next-gen Game Card (eMMC protocol) than the microSD Express card (PCIe/NVMe).


The lowered max speed could be due to thermal/power considerations, or there could be some system bottleneck. The Switch itself doesn't go beyond 95 MB/s (officially; I've never seen a benchmark above 92 MB/s on a hacked Switch), even though the theoretical maximum is 104 MB/s.
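For reference, here's a minimal sketch of where those two theoretical ceilings come from, assuming the standard interface figures (UHS-I SDR104 for the original Switch's card slot, a single PCIe 3.x lane for SD Express 7.0); real-world throughput is always lower because of protocol and controller overhead.

```python
# Theoretical interface ceilings for the figures quoted above.

# UHS-I SDR104 (original Switch card slot): 4-bit bus at 208 MHz.
uhs1_mb_s = 208e6 * 4 / 8 / 1e6
print(uhs1_mb_s)  # 104.0 MB/s

# SD Express 7.0: one PCIe 3.x lane, 8 GT/s with 128b/130b encoding.
sd_express_mb_s = 8e9 * (128 / 130) / 8 / 1e6
print(sd_express_mb_s)  # ~984.6 MB/s, i.e. the ~985 MB/s figure
```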

I think both of these things can be true:
that Samsung could be the first to mass-produce these high-performance microSD Express cards, and that they came about because of a custom product they made for a customer. The wording in the PR blurb was pretty specific, just minus saying who it is they're working with...
 
People immediately assuming the worst when it comes to hardware from this extremely vague statement even though a common sentiment after Gamescom was the impressiveness of the tech demo. Have a little faith y'all.
Oh, I'm not worried about the performance, it's everything else that I'm worried about.
 
I think all worries are moot at this point. Except for some of the hardware specs, which were known very early on, everything we've speculated about the Switch 2 is based on unconfirmed rumors, so worrying in this case is pointless.
 
From a common-sense standpoint, if the T239 uses an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously rumored 7.9-inch Switch 2 screen makes me doubt that it's 8nm, unless Nintendo has obtained some ahead-of-its-time cooling technology from somewhere. Even if it's not 4N, it will be a process more advanced than 8nm.
 
I think all worries are moot at this point. Except for some of the hardware specs, which were known very early on, everything we've speculated about the Switch 2 is based on unconfirmed rumors, so worrying in this case is pointless.
I mean, we won't really know until more information comes in. We can only guess. I hear ray tracing takes a lot of memory, so maybe Nintendo will include more than 8 GB? Or maybe they'll be cocky like in the Wii U era and serve us an underpowered console.
Idk, but Nintendo has a habit of either underestimating a crucial tech that would benefit them at the time, or knowing they'll be deficient in that area and not caring.

I like to think Nintendo learned their lesson and realizes they can't come in underpowered. Would Nintendo repeat a slow OS, where they'd have to free up memory from the system software again because the reservation was too small?

Nintendo only has one system now instead of two; just think about that.
 
从常识来看,如果t239采用8nm工艺,switch2势必会更大、更厚,而之前传出的switch2屏幕可能是7.9英寸,让我怀疑是否是8nm,除非任天堂拿到了来自某处的超时代冷却技术,即使不是 4N,也将是比 8nm 更先进的工艺
 
Neat. Not that it means anything for this go-round, but it’s good that UFS keeps improving.

I disagree that Nintendo has no interest in photorealism: they use it when it suits the game, and even then, they will abbreviate some aspects of photorealism to avoid making the games too mechanically tedious (i.e., Red Dead Redemption 2 syndrome).

Case in point: Pikmin 4, which, incidentally, also runs in Unreal Engine 4.

I won't be surprised if EAD once again decides to use UE, or UE5, to make Pikmin 5, which would be the perfect test case for Nanite features like dynamic meshes (not to mention being able to zoom out of the map to see a photorealistic environment, albeit with small, cartoony characters). Instant environmental changes could make for some interesting gameplay scenarios; they've already toyed with the idea of environments changing mid-day, and could expand it further with the new Nanite features.
Unreal Engine is the tool of necessity. Eighting's deep involvement basically excluded the use of internal development tools. One of the notable downsides of external development partners is that you end up using the external partner's dev tools, be they their own custom engines or stuff like Unreal and Unity. Nintendo is incredibly secretive with their internal development tools.

Pikmin 5’s (or any 1st-party product’s) use of UE or anything like it is going to be entirely dependent on who’s involved in the project. This is also why NDcube, as an additional example, uses Bezel Engine, because of all the other companies involved in their game production.
I think Nintendo deserves credit for being the first to market with the idea, but I wholeheartedly think it’s where the industry will converge anyway, and I’m not gonna dog companies for making similar plays. So yes, please everybody copy.
Copying the hardware doesn’t mean much if they’re not doing the same with the business model. And I doubt Microsoft (or anyone else) is committed enough to do that.
 
Also important: we can expect 256-500 GB of storage on the Switch 2, which would be more than enough for first-party titles. I don't see Nintendo having 50-100 GB games; I believe the maximum will be 30 GB for first party, but it's third party that I'm slightly worried about, especially with games like COD.

So Nintendo skipping microSD Express at the beginning wouldn't be the end of the world, but I don't see why Nintendo wouldn't have microSD Express card support at launch, since the Switch 2 would have a bigger install base than the PC handhelds, and I can see Samsung trying to have it be compatible at launch, or convincing Nintendo.

But quote me if I'm wrong.
Isn't a 4x jump in file size expected? The next Zelda being a 60 GB game?
 
Please offer some level of translation, even if it's machine translation, if the content isn't in English.

Rough translation:

From a common-sense point of view, if T239 adopts an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously reported Switch 2 screen may be 7.9 inches, which makes me doubt whether it is 8nm, unless Nintendo got ahead-of-its-time cooling technology from somewhere. Even if it is not 4N, it will be a more advanced process than 8nm.

Basically what we have been saying since we learned about the T239 specs, Thraktor's tests, and the Nvidia Orin power tool: the T239 SoC design doesn't make sense, from a power point of view, to be manufactured on Samsung 8N. Unless there's something we aren't aware of about this process technology and significant DTCO (Design Technology Co-Optimization) gains can be leveraged from it.

The comment also says that, even if T239 isn't manufactured on TSMC 4N, they think it's using a more advanced node (TSMC N6, Samsung 4LPP, etc.) than Samsung 8N, as they reiterate that the SoC can't hit the power-efficiency and performance targets that Nintendo would want if manufactured on Samsung 8N.
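To make the power argument a bit more concrete, here's a minimal illustrative sketch of the standard CMOS dynamic-power relation (P ≈ C·V²·f). Every number in it is a made-up placeholder rather than a T239 figure; the point is just that a node which needs a higher supply voltage to reach the same clock pays a quadratic power penalty, which is what the Samsung 8N skepticism boils down to.

```python
# Illustrative only: dynamic power scales roughly as P ~ C * V^2 * f.
# Voltages, clock, and capacitance below are placeholders, not T239 specs.

def dynamic_power_w(c_eff_nf: float, v_volts: float, f_ghz: float) -> float:
    return (c_eff_nf * 1e-9) * (v_volts ** 2) * (f_ghz * 1e9)

f_ghz = 0.6        # hypothetical handheld GPU clock
c_eff = 2.0        # hypothetical effective switched capacitance (nF)
v_old_node = 0.85  # hypothetical voltage needed on a less advanced node
v_new_node = 0.65  # hypothetical voltage on a more advanced node

p_old = dynamic_power_w(c_eff, v_old_node, f_ghz)
p_new = dynamic_power_w(c_eff, v_new_node, f_ghz)
print(p_old, p_new, p_old / p_new)  # ~0.87 W vs ~0.51 W -> ~1.7x at the same clock
```

A real shrink also lowers the effective capacitance, so the actual gap between nodes would be larger than this voltage-only toy example suggests.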
 
From a common-sense standpoint, if the T239 uses an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously rumored 7.9-inch Switch 2 screen makes me doubt that it's 8nm, unless Nintendo has obtained some ahead-of-its-time cooling technology from somewhere. Even if it's not 4N, it will be a process more advanced than 8nm.

If we look at Maxwell, it started on TSMC 28 nm and went to TSMC 20 nm (Switch launch), before going to TSMC 16 nm (Switch revision).

If we look at Ampere, it started on TSMC 7 nm (professional) before going to Samsung 8 nm (consumer). Even the 3050 6 GB, released in February 2024, is still using Samsung 8 nm. If we look at the oldest Ada, it started on TSMC 5 nm.

My thinking is this: if the SoC is being put together by Samsung, it's probably an 8 nm process. But if it's being put together by TSMC, like the Switch's, it's probably using the "older" 5 nm process for the launch model.
 
As for the Samsung engineer’s LinkedIn profile, IMHO it seems more likely regarding the next gen Game Card (eMMC protocol) than the microSD Express card (PCIe/NVMe).
Probably more for the next-gen Game Card application-specific integrated circuit (ASIC) than the actual next-gen Game Card(s) to ensure backwards compatibility with Nintendo Switch Game Cards, considering Lotus3 uses the eMMC interface, whereas the actual Nintendo Switch Game Cards use a custom Serial Peripheral Interface (SPI).

I've also mentioned (here and here) that the mention of eMMC protocol on the Samsung engineer's LinkedIn profile could be referring to Samsung's custom microSD Express 7.0 controller since the eMMC protocol and the 4-bit SD card protocol are, for the most part, the same (here and here). And Linux classifies both eMMC and microSD cards as MMC (MultiMediaCard).
The lowered max speed could be due to thermal/power considerations, or there could be some system bottleneck.
I also wonder why AyaNeo doesn't use Samsung's performance metric (800 MB/s sequential read speed) when advertising the microSD Express card support, especially if AyaNeo is indeed one of Samsung's customers.
 
Random question:

Is there a possibility we might get a launch Switch 2 and then a revision a la Switch V2, with smaller node, or is that too impractical?
Probably not. 3 nm is just too expensive relative to performance, and 4 nm would already be a really good node for a chip like T239. If there's any power savings to be made, it'll probably be in other components rather than the SoC.
 
Random question:

Is there a possibility we might get a launch Switch 2 and then a revision a la Switch V2, with smaller node, or is that too impractical?
Unlikely this time around. If they want better battery life and a smaller device for a Lite model or whatever, they will most likely have to do the following instead:
1. The cooling will be overbuilt for handheld mode anyway to accommodate docked mode, so they could shrink it a bit for the Lite model.
2. For a V2 or a Lite to get viable battery life, they will have to go with the improved battery chemistry angle, like the Steam Deck OLED and most modern smartphones.
 
Hello everyone. It's been a while since I last accessed the forum, so forgive me if this question has already been answered: could we see the Nintendo Switch 2, or its V2, use Blackwell's 4NP node, allowing either slightly better clocks or, more realistically, better battery life?
 
Hello everyone. It's been a while since I last accessed the forum, so forgive me if this question has already been answered: could we see the Nintendo Switch 2, or its V2, use Blackwell's 4NP node, allowing either slightly better clocks or, more realistically, better battery life?
Could potentially, but at that point it might not even be particularly worth it. It's obvious that the main architectural innovation of Blackwell will be the use of chiplets rather than the node, and having to swap your entire fab production for what would likely be very marginal increases in efficiency wouldn't be a good financial decision.
 
From a common-sense standpoint, if the T239 uses an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously rumored 7.9-inch Switch 2 screen makes me doubt that it's 8nm, unless Nintendo has obtained some ahead-of-its-time cooling technology from somewhere. Even if it's not 4N, it will be a process more advanced than 8nm.
So if it is thicker, then it won't fit in the OG Switch dock? I would imagine that if we do get backwards compatibility we would have the same dock size, right? Wouldn't that be cheaper than making a new exclusive dock? So 4N would make sense?
 
So if it is thicker, then it won't fit in the OG Switch dock? I would imagine that if we do get backwards compatibility we would have the same dock size, right? Wouldn't that be cheaper than making a new exclusive dock? So 4N would make sense?
It's a new generation. In my opinion, making an exclusive dock is necessary and avoiding it will cause problems when the current generation is discontinued/abandoned.
 
Could potentially, but at that point it might not even be particularly worth it. It's obvious that the main architectural innovation of Blackwell will be the use of chiplets rather than the node, and having to swap your entire fab production for what would likely be very marginal increases in efficiency wouldn't be a good financial decision.
It allowed a 30% increase in transistor density, so it could be worth it, and maybe the transition from 4N to 4NP could be done around 1.5 years before a March 2025 launch.
 
Hello everyone. It's been a while since I last accessed the forum, so forgive me if this question has already been answered: could we see the Nintendo Switch 2, or its V2, use Blackwell's 4NP node, allowing either slightly better clocks or, more realistically, better battery life?
Assuming TSMC's 4N process node's comparable to TSMC's N4 process node, which seems to be the case, and assuming TSMC's 4NP process node's comparable to TSMC's N4P process node, which is unknown, then TSMC's 4NP process node can probably only offer slightly higher frequencies since TSMC only promised 6% higher performance with TSMC's N4P process node compared to TSMC's N4 process node.
 
Assuming TSMC's 4N process node's comparable to TSMC's N4 process node, which seems to be the case, and assuming TSMC's 4NP process node's comparable to TSMC's N4P process node, which is unknown, then TSMC's 4NP process node can probably only offer slightly higher frequencies since TSMC only promised 6% higher performance with TSMC's N4P process node compared to TSMC's N4 process node.
The N4P process was designed for an easy migration of 5nm platform-based products
the first products based on N4P technology are expected to tape out by the second half of 2022.
:unsure: Does this mean that something taped out on 4N could be migrated to this without a large redesign?
 
It allowed a 30% increase in transistor density, so it could be worth it, and maybe the transition from 4N to 4NP could be done around 1.5 years before a March 2025 launch.
AD102 has a transistor density of ~125.4 MTr/mm² [(76.3 billion transistors)/(608.44 mm²)], which is a ~27.6% higher transistor density than Hopper (GH100), which has a transistor density of ~98.28 MTr/mm² [(80 billion transistors)/(814 mm²)].

So Nvidia achieving a ~30% higher transistor density with Blackwell is probably thanks to the architectural changes with Blackwell rather than any difference(s) between TSMC's 4N process node and TSMC's 4NP process node, especially when comparing Blackwell with Hopper. (A single Blackwell die has 104 billion transistors (208 billion is with respect to two Blackwell dies), which is 30% more transistors than Hopper's 80 billion.)

So I doubt transitioning Drake from TSMC's 4N process node to TSMC's 4NP process node alone is going to offer any significant upgrades, especially since there are no architectural changes in that scenario.
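For anyone who wants to check the arithmetic, here's a minimal sketch using the die figures quoted above (the publicly reported transistor counts and die areas; nothing else is assumed):

```python
# Transistor-density arithmetic for the dies discussed above.

def density_mtr_per_mm2(transistors_billion: float, area_mm2: float) -> float:
    return transistors_billion * 1000 / area_mm2  # millions of transistors per mm^2

ad102 = density_mtr_per_mm2(76.3, 608.44)  # ~125.4 MTr/mm^2 (Ada, TSMC 4N)
gh100 = density_mtr_per_mm2(80.0, 814.0)   # ~98.3 MTr/mm^2 (Hopper, TSMC 4N)

print(ad102, gh100)
print(ad102 / gh100 - 1)  # ~0.276 -> AD102 is ~27.6% denser than GH100
print(104 / 80 - 1)       # 0.30  -> one Blackwell die has 30% more transistors than GH100
```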

:unsure: Does this mean that something taped out on 4N could be migrated to this without a large redesign?
Theoretically speaking, yes, since TSMC's N4 process node and TSMC's N4P process node share the same IP, assuming TSMC's 4N process node is comparable to TSMC's N4 process node, and TSMC's 4NP process node is comparable to TSMC's N4P process node.
 
So if it is thicker, then it won't fit in the OG Switch dock? I would imagine that if we do get backwards compatibility we would have the same dock size, right? Wouldn't that be cheaper than making a new exclusive dock? So 4N would make sense?
 
Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^

LISAN AL GAIB!

God, I had to laugh out so fucking loud in the showing I was at with my friends during that scene... people turned their heads at me.
 
Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^



God, I had to laugh out so fucking loud in the showing I was at with my friends during that scene... people turned their heads at me.
TSMC doesn't have an 8 nm node; they went straight from 10 nm to 7 nm without a half-shrink. From what I've gathered, Samsung's 4nm might be a bit better than TSMC N7 and probably a decent bit cheaper, but having to make it on a node that literally no other Nvidia products use would offset any cost savings.
 
Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^
The 8nm process is a Samsung node as well, a shrunk version of the 10nm process, not a TSMC one. Basically: 4nm TSMC > 4nm Samsung > 8nm Samsung.
 
The comment also says that, even if T239 isn't manufactured on TSMC 4N, they think it's using a more advanced node (TSMC N6, Samsung 4LPP, etc.) than Samsung 8N, as they reiterate that the SoC can't hit the power-efficiency and performance targets that Nintendo would want if manufactured on Samsung 8N.
If ever for some reason Nvidia and/or Nintendo absolutely want it to be Samsung and 8nm isn't a viable option, I wonder which would be the more likely option among those available.

Probably not. 3 nm is just too expensive relative to performance, and 4 nm would already be a really good node for a chip like T239. If there's any power savings to be made, it'll probably be in other components rather than the SoC.
The screen of a lite model will be smaller and there will be no need to change clock frequencies, but I don't know if that's enough to make big gains in battery life.
 
It's a new generation. In my opinion, making an exclusive dock is necessary and avoiding it will cause problems when the current generation is discontinued/abandoned.
I wouldn't be surprised if the dock has some sort of fan for cooling, since if our speculation is right, it'll be around Series S level in docked mode.

With that comes the worry of overheating in docked mode.

Having the dock be bigger and more technologically advanced than the vanilla dock is extremely important; plus, the OLED dock is technically able to output 4K. So if the dock is capable of 1080p-4K at 60-120 fps in certain games, then I can see them using that as a selling point.

Like the BotW demo running at 4K 60 fps. And I'm wondering if Nintendo will maybe have a 1080p performance mode in certain games for 120 fps, especially Switch 1 games like Mario Wonder and Bowser's Fury.

Also, I think Switch 2 games will only offer 30-60 fps. But having Switch 1 games get enhancements will be extremely important and can be a huge selling point for various people.
 
If ever for some reason Nvidia and/or Nintendo absolutely want it to be Samsung and 8nm isn't a viable option, I wonder which would be the more likely option among those available.
Assuming, for some reason (a multi-level deal with Samsung for DRAM, NAND, microcontroller, SoC manufacturing, etc.), Samsung is the only option and SEC 8N is unsuitable for T239, then Nintendo/Nvidia could opt for Samsung 4LPP, which is their current HVM node; it's cheaper than TSMC N4 while having similar density and good performance and power characteristics.
The screen of a lite model will be smaller and there will be no need to change clock frequencies, but I don't know if that's enough to make big gains in battery life.
A Lite model wouldn't necessarily need a shrink of the SoC for it to happen. There are gains to be made with a more efficient screen, memory, storage, etc. in the future.
 
If that PS5 optimization is something about a Radeon GPU feature? Yeah, I'd expect it would apply to PCs using Radeons as well. If something is optimized for a console I'd be thinking of it either being cut down to use less of something the console is poor on, or redesigned so part of the cost moves to something else it's stronger on. If something about DLSS could be cut down to require less tensor core use, that would also apply to PCs. Offloading the work elsewhere doesn't seem very likely, since DLSS is so much centered around the strengths of tensor cores. Simplest DLSS optimization for a weaker part: lower the target resolution.
Optimizing for a console means taking the best that specific hardware can offer rather than doing a generic GPU optimization: how to max out the use of 16 GB of RAM, how to make the game work well on that CPU, how to allocate resources here and there so the game runs better on that specific hardware spec alone. That kind of optimization, most of the time, isn't useful for the PC port, since a PC build can't target something that specific. PCs have more or less RAM and can be faster or slower; there will be better and worse GPUs; the game has to run on a lot of different configurations.

Running well on one piece of hardware alone, adapting the code to get the best that hardware can do, is different from making it run on a lot of different hardware. That's why some console optimizations aren't useful for PCs.
 
Yoshi-P is expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version. Assuming it has to run in handheld mode as well, I'd imagine it would be pretty tough; a lot of assets would have to be graphically compromised. FFXVI was originally meant to come to the PS4, but it was pushing the console to its limits. The Switch 2 CPU and GPU would be leaps and bounds over the old PS4's architecture, so hopefully it would be easier to port. I wonder whether FFXVI or FF7R would be easier to bring to the Switch successor.
 
Yoshi-P is expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version. Assuming it has to run in handheld mode as well, I'd imagine it would be pretty tough; a lot of assets would have to be graphically compromised. FFXVI was originally meant to come to the PS4, but it was pushing the console to its limits. The Switch 2 CPU and GPU would be leaps and bounds over the old PS4's architecture, so hopefully it would be easier to port. I wonder whether FFXVI or FF7R would be easier to bring to the Switch successor.
The thing is, we'll never know, since developers are able to do some crazy stuff just to be able to make a little bit of money.

Like Doom, Doom Eternal, The Witcher 3, and NieR: Automata were games people thought would never come to Switch because of how demanding they seemed, yet they run on the Switch and are surprisingly some of the best ports on the system.

I have confidence that we'll see some groundbreaking ports on the Switch 2, but we'll have to wait and see which developers are willing to make that happen.
 
Yoshi-P is expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version. Assuming it has to run in handheld mode as well, I'd imagine it would be pretty tough; a lot of assets would have to be graphically compromised. FFXVI was originally meant to come to the PS4, but it was pushing the console to its limits. The Switch 2 CPU and GPU would be leaps and bounds over the old PS4's architecture, so hopefully it would be easier to port. I wonder whether FFXVI or FF7R would be easier to bring to the Switch successor.
If they're planning an Xbox version, it'll have to be downported for Series S anyway.

I guess they'll have to go a bit further for Switch 2, but the sales potential I'm sure will make it viable.
 
Yoshi-P is expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version. Assuming it has to run in handheld mode as well, I'd imagine it would be pretty tough; a lot of assets would have to be graphically compromised. FFXVI was originally meant to come to the PS4, but it was pushing the console to its limits. The Switch 2 CPU and GPU would be leaps and bounds over the old PS4's architecture, so hopefully it would be easier to port. I wonder whether FFXVI or FF7R would be easier to bring to the Switch successor.

They could probably compress it a bit, but I think buyers are simply going to have to get used to the whole "only 16-32GB are on the cartridge, you have to download another 40GB at home" type thing.
 
One of the best parts of Switch 2, if it manages to be a big success, is that studios can go back to making PS4 scale games and having them sell reliably well.

Look at all the scrutiny Rise of the Ronin, a PS5 exclusive, has faced for its graphics. There are a lot of consumer expectations for what a PS5 game should look like, which I understand because of what was promised, in addition to the system's price point.

Inherently with the Switch 2 presumably costing less than a PS5, and it being a hybrid, the expectations are different for what is considered "acceptable".

Basically I think it's going to be really healthy for the industry, and it'll promote opportunities for more consistent releases that don't carry the baggage of inflated budgets.
 
Since the node is currently the topic again, i have a very layman question: Would Samsungs 4nm be worse or better than TSMC 8nm? I think i've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^

Yes, I think it would be better.
Samsung's 5nm-4nm LPE (EUV) node drew a great deal of criticism in the smartphone industry, as SoCs using that process suffered from high power usage and poor efficiency. Samsung's own SoC, the Exynos 2200, released in 2022, also suffered along with it and used an AMD GPU in its design. Thus that generation of phones made with that process node rather stagnated (or got worse) instead of improving the situation. Yes, you could push the SoC to high frequencies and get better performance compared to the previous generation, but that came at a great cost: power usage, and thus more heat.

Geekerwan also looked at the Exynos 2200 explicitly, and although a fully valid comparison can't be made, they looked at its GPU peak clock frequency relative to the AMD Radeon 660M (made on TSMC N6) found in laptops. Both GPUs had the same number of CUs. It was observed that one of the GPU designs had been more optimised for the lower end of the voltage/frequency curve, and thus lower power consumption, but again the chart should be taken with a massive grain of salt, as cross-platform comparisons aren't really possible.


However, the story does turn around a bit: the Exynos 2400 was released, and it has the more "refined" Samsung 4nm LPP+ process [1].
While its performance is mostly up to par with or below the competition, it did bring the necessary stride in power efficiency compared to TSMC's 5nm & 4nm [2].
There also hasn't been comprehensive in-depth testing of that SoC yet, apart from loose conclusions or short benchmarks, so ehhh, I also want more data on that.
Although, compared to Samsung's previous attempt with their own SoC, the RDNA3 GPU performance has practically doubled (synthetic), so that's an improvement, especially since we know that the gap between RDNA2 and RDNA3 isn't that large. However, other variables, such as battery size, thermal design changes, and display improvements, should be taken into account.

Other than that, I did make those graphs one time that looked at A78 single-thread (ST) scores from different phone chips; you could see that Samsung's smaller-node design did come out ahead of TSMC's 6nm at their respective peak clock frequencies. So yes, compared to 8nm, a smaller node process would be beneficial.



PS: nm sizes are marketing nomenclature.

[1] https://semiconductor.samsung.com/processor/mobile-processor/exynos-2400/
Those gains in power saving are made possible by a 3rd generation 4nm low-power process node. The Exynos 2400 is also the first Exynos processor to use a Fan-out Wafer Level Package (FOWLP) to boost thermal management so you can push games and apps further and longer.
[2] https://www.androidauthority.com/exynos-vs-snapdragon-galaxy-s24-3411235/
 
They could probably compress it a bit, but I think buyers are simply going to have to get used to the whole "only 16-32GB are on the cartridge, you have to download another 40GB at home" type thing.
Here's hoping 32-64 GB carts get standardized for future games. Nintendo couldn't get 64 GB cards to work for the original Switch, and only a handful of games used the 32 GB cards. I'd hope prices have finally gone down enough that those carts could be viable.

It's not likely as physical becomes less prioritized, but it would be nice to finally abandon the 8 GB carts.
 
One of the best parts of Switch 2, if it manages to be a big success, is that studios can go back to making PS4 scale games and having them sell reliably well.

Look at all the scrutiny Rise of the Ronin, a PS5 exclusive, has faced for its graphics. There are a lot of consumer expectations for what a PS5 game should look like, which I understand because of what was promised, in addition to the system's price point.

Inherently with the Switch 2 presumably costing less than a PS5, and it being a hybrid, the expectations are different for what is considered "acceptable".

Basically I think it's going to be really healthy for the industry, and it'll promote opportunities for more consistent releases that don't carry the baggage of inflated budgets.
I'm not sure that's gonna do what you think it will do. Switch 2 isn't supposed to get "PS4-scale" games; it's supposed to get downports of everything current-gen, which is finally the standard and designed around the specifications of the 2020 consoles. It should give AA/Unreal indie games more of a chance to shine for sure, and more platforms are always a bonus (which applies to all kinds of games, not just small ones)... But there's virtually no cost cutting for the publisher merely because of Switch 2's existence. The PS5 is the dominant platform this generation, and optimizing for more platforms actually has the opposite effect of cost cutting these days.
 
Yes, I think it would be better.
Samsung's 5nm-4nm LPE (EUV) node drew a great deal of criticism in the smartphone industry, as SoCs using that process suffered from high power usage and poor efficiency. Samsung's own SoC, the Exynos 2200, released in 2022, also suffered along with it and used an AMD GPU in its design. Thus that generation of phones made with that process node rather stagnated (or got worse) instead of improving the situation. Yes, you could push the SoC to high frequencies and get better performance compared to the previous generation, but that came at a great cost: power usage, and thus more heat.

Geekerwan also looked at the Exynos 2200 explicitly, and although a fully valid comparison can't be made, they looked at its GPU peak clock frequency relative to the AMD Radeon 660M (made on TSMC N6) found in laptops. Both GPUs had the same number of CUs. It was observed that one of the GPU designs had been more optimised for the lower end of the voltage/frequency curve, and thus lower power consumption, but again the chart should be taken with a massive grain of salt, as cross-platform comparisons aren't really possible.



However, the story does turn around a bit: the Exynos 2400 was released, and it has the more "refined" Samsung 4nm LPP+ process [1].
While its performance is mostly up to par with or below the competition, it did bring the necessary stride in power efficiency compared to TSMC's 5nm & 4nm [2].
There also hasn't been comprehensive in-depth testing of that SoC yet, apart from loose conclusions or short benchmarks, so ehhh, I also want more data on that.
Although, compared to Samsung's previous attempt with their own SoC, the RDNA3 GPU performance has practically doubled (synthetic), so that's an improvement, especially since we know that the gap between RDNA2 and RDNA3 isn't that large. However, other variables, such as battery size, thermal design changes, and display improvements, should be taken into account.

Other than that, I did make those graphs one time that looked at A78 single-thread (ST) scores from different phone chips; you could see that Samsung's smaller-node design did come out ahead of TSMC's 6nm at their respective peak clock frequencies. So yes, compared to 8nm, a smaller node process would be beneficial.




PS: nm sizes are marketing nomenclature.

[1] https://semiconductor.samsung.com/processor/mobile-processor/exynos-2400/

[2] https://www.androidauthority.com/exynos-vs-snapdragon-galaxy-s24-3411235/
Great post! While QCOM claimed there was a 30% deficiency going from SF 4LPX (5LPE renamed) to TSMC N4P, Samsung 4LPP+ (third gen) should be a bit closer to TSMC N4P. From the small amount of data available from Golden Reviewer and Galaxy S24 Exynos 2400 testing, it's much closer to the current QCOM flagship than the Exynos 2200 was.

If Nintendo and Nvidia want to stick with Samsung but 8N is impossible, 4LPP+ would be a good and efficient alternative. Plus, Samsung wafers are cheaper and plentiful too.
 
I do wonder about the Samsung 5nm or 4nm nodes. I know people will say "well, those can't be used for Switch 2 because Nvidia hasn't purchased any 5nm/4nm capacity from Samsung"... I mean, would we know if they had? Are they obligated to disclose that?
No, especially for a very small and secretive custom effort.

In the past, we knew about Nvidia buying capacity due to media rumors and the fact that such capacity was tied to the ramp-up of a new GPU generation. For a custom offshoot project like T239, the chances of us hearing about them buying capacity are slim to none. So we can't predict either way.
 
No, especially for a very small and secretive custom effort.

In the past, we knew about Nvidia buying capacity due to media rumors and the fact that such capacity was tied to the ramp-up of a new GPU generation. For a custom offshoot project like T239, the chances of us hearing about them buying capacity are slim to none. So we can't predict either way.

Yeah, basically this is what I thought. If Nintendo/Samsung/Nvidia had worked out a deal for, say, a different Samsung node, I don't think it's a given we would know about that.
 

