> The SD8G3 GPU is rated as high as somewhere around 5 TFLOPS of FP32 compute performance, yet even with the highest score in the Geekbench 6 compute benchmark it is behind the A17 Pro GPU, which is believed to have around 2.1 TFLOPS FP32. Something doesn't add up here

Some places say about 2.6 TFLOPS on the Adreno 750 (used by the SD8G3) at a clock of 903 MHz, while others say 5.2 TFLOPS. The Wikipedia page for Adreno bases the number on "Adreno ALUs", which a note says is "ALU * MP count". Dunno what the MP count is, but benchmarks really don't show anything near that higher number compared to the Apple A17 Pro. Tests have shown the Adreno 750 doing only about 32% better than the A17 Pro, and the gap shrinks once throttling kicks in. Just another note, but the SD8G3 is limited on RAM bandwidth according to Qualcomm, using LPDDR5X at 4.8 GHz, which equates to roughly 77 GB/s, so that 5.2 TFLOP number just doesn't sound right.
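For what it's worth, the arithmetic behind those figures is easy to sanity-check. A rough sketch; the 64-bit bus width and the dual-issue reading of the 5.2 TFLOPS number are my own assumptions, not anything from Qualcomm's spec sheet:

```python
# Back-of-the-envelope check of the bandwidth and TFLOPS figures above.
# Assumes a 64-bit LPDDR5X bus and that the 5.2 TFLOPS figure counts
# dual-issue FP32 lanes; neither assumption is from an official spec.

# LPDDR5X at a 4.8 GHz I/O clock, double data rate, 64-bit bus:
transfers_per_sec = 4.8e9 * 2            # DDR: two transfers per clock
bytes_per_transfer = 64 // 8             # 64-bit bus = 8 bytes
bandwidth_gbps = transfers_per_sec * bytes_per_transfer / 1e9
print(f"bandwidth ~ {bandwidth_gbps:.1f} GB/s")   # ~76.8, i.e. "roughly 77 GB/s"

# FP32 FLOPS = 2 (one FMA = 2 ops) * lanes * clock.
# 2.6 TFLOPS at 903 MHz implies ~1440 FP32 lanes; counting each lane
# as dual-issue simply doubles that to ~5.2 TFLOPS on paper.
lanes = 2.6e12 / (2 * 903e6)
print(f"implied FP32 lanes ~ {lanes:.0f}")        # ~1440
print(f"dual-issue figure ~ {2 * 2 * lanes * 903e6 / 1e12:.1f} TFLOPS")  # ~5.2
```

So both numbers can be "right" at the same time; the 5.2 figure is just a per-clock accounting trick that real workloads rarely reach.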
> To be fair, that was the last time we received positive news related to the hardware. It's all been downhill from there

Nothing to do with the actual hardware (depending on how much you believe the 8nm stuff), just the release timing. Until we hear more, I don't think there's any reason to assume the actual hardware will be disappointing based on what we're guessing so far.
> SoC with a 28 W TDP, 32 GB of RAM? Unless it has a battery forged in fucking Wakanda, then this thing is a disaster.

Funny to look back, now that the MSI Claw has released, at the stuff that was said when it was announced; I got it right on the money.
> Funny to look back, now that the MSI Claw has released, at the stuff that was said when it was announced; I got it right on the money.

You have to remember the power efficiency Arm has, though
My reading (which can be very wrong) of the Samsung PR is that their cards will be the first mass-produced microSD Express product, not that they are the first to adopt the new SD 9.1 standard. The product only supports SD 7.0 speeds (rated at 800 MB/s, below the 985 MB/s theoretical limit of SD 7.0). As for the Samsung engineer’s LinkedIn profile, IMHO it seems more likely regarding the next gen Game Card (eMMC protocol) than the microSD Express card (PCIe/NVMe).
The lowered max speed could be due to thermal/power considerations, or there could be some system bottleneck. Switch itself doesn’t go beyond 95MB/s (officially, but I never saw a benchmark above 92MB/s on a hacked Switch), even though the theoretical maximum is 104MB/s.
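Those theoretical ceilings fall straight out of the bus specs. My arithmetic, assuming SD Express (SD 7.0) uses a PCIe 3.0 x1 link with 128b/130b encoding and the Switch's UHS-I slot runs SDR104:

```python
# Theoretical bus limits behind the 985 MB/s and 104 MB/s figures above.

# SD Express (SD 7.0): PCIe 3.0 x1 = 8 GT/s with 128b/130b encoding.
pcie3_x1 = 8e9 * (128 / 130) / 8 / 1e6
print(f"SD Express ceiling ~ {pcie3_x1:.0f} MB/s")   # ~985

# UHS-I SDR104: 208 MHz clock on a 4-bit bus.
sdr104 = 208e6 * 4 / 8 / 1e6
print(f"UHS-I SDR104 ceiling ~ {sdr104:.0f} MB/s")   # 104
```

Real-world throughput sitting 10-20% under the bus ceiling, like the Switch's ~92 MB/s, is normal once protocol and controller overhead are accounted for.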
> People immediately assuming the worst when it comes to hardware from this extremely vague statement, even though a common sentiment after Gamescom was the impressiveness of the tech demo. Have a little faith, y'all.

Oh, I'm not worried about the performance, it's everything else that I'm worried about.
> I think all worries are redundant at this point. Except for some of the hardware specs, which were known very early on, everything we speculated about the Switch 2 was based on unconfirmed rumors, and worries in this case are pointless

I mean, we won't really know until more information comes in. We can guess. I hear ray tracing takes a lot of memory, so maybe Nintendo will have more than 8 GB? Maybe they will be cocky like in the Wii U era and serve us an underpowered console.
From a common-sense standpoint, if the T239 uses an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously rumored Switch 2 screen may be 7.9 inches, which makes me doubt it is 8nm, unless Nintendo got some ahead-of-its-time cooling technology from somewhere. Even if it is not 4N, it will be a more advanced process than 8nm.
Neat. Not that it means anything for this go-round, but it’s good that UFS keeps improving.
> I disagree that Nintendo has no interest in photorealism: they use it when it suits the game, and even then they will abbreviate some aspects of photorealism to avoid making the games too mechanically tedious (i.e. Red Dead Redemption 2 syndrome).
>
> Case in point: Pikmin 4, which, incidentally, also runs in Unreal Engine 4.
>
> I won't be surprised if EAD once again decides to use UE, or UE5, to make Pikmin V, which would be the perfect test case for Nanite features like dynamic meshes (not to mention being able to zoom out of the map to see a photorealistic environment, albeit with small, cartoony characters). Instant environmental changes could make for some interesting gameplay scenarios; they've already toyed with the idea of environments changing midday, but could expand it further with the new Nanite features.

Unreal Engine is the tool of necessity. Eighting's deep involvement basically excluded the use of internal development tools. It's one of the notable downsides of external development partners: you end up using the external partner's dev tools, be they their own custom engines or stuff like Unreal and Unity. Nintendo is incredibly secretive with their internal development tools.
> I think Nintendo deserves credit for being the first to market with the idea, but I wholeheartedly think it's where the industry will converge anyway, and I'm not gonna dog companies for making similar plays. So yes, please everybody copy.

Copying the hardware doesn't mean much if they're not doing the same with the business model. And I doubt Microsoft (or anyone else) is committed enough to do that.
> Also important that we can expect 256-500 GB of storage on the Switch 2, which would be more than enough for first-party titles. Like, I don't see Nintendo having 50-100 GB games. I'm of the belief the maximum will be 30 GB for first party, but it's third party that I'm slightly worried about, especially with games like COD.
>
> So Nintendo skipping microSD Express in the beginning would not be the end of the world, but I don't see why Nintendo won't support microSD Express cards at launch, since the Switch 2 would have a bigger install base than the PC handhelds, and I can see Samsung trying to have it be compatible at launch or convincing Nintendo.
>
> But quote me if wrong.

Isn't a 4x jump in file size expected? The next Zelda a 60 GB game?
From a common-sense point of view, if the T239 adopts an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously reported Switch 2 screen may be 7.9 inches, which makes me doubt whether it is 8nm, unless Nintendo got some ahead-of-its-time cooling technology from somewhere. Even if it is not 4N, it will be a more advanced process than 8nm.
I laughed at this more than I should have hahahahahah
> As for the Samsung engineer's LinkedIn profile, IMHO it seems more likely regarding the next gen Game Card (eMMC protocol) than the microSD Express card (PCIe/NVMe).

Probably more for the next-gen Game Card application-specific integrated circuit (ASIC) than the actual next-gen Game Card(s), to ensure backwards compatibility with Nintendo Switch Game Cards, considering Lotus3 uses the eMMC interface, whereas the actual Nintendo Switch Game Cards use a custom Serial Peripheral Interface (SPI).
> The lowered max speed could be due to thermal/power considerations, or there could be some system bottleneck.

I also wonder why AyaNeo doesn't use Samsung's performance metric (800 MB/s sequential read speed) when advertising the microSD Express card support, especially if AyaNeo is indeed one of Samsung's customers.
> Random question: Is there a possibility we might get a launch Switch 2 and then a revision a la Switch V2, with a smaller node, or is that too impractical?

Probably not. 3 nm is just too expensive relative to performance, and 4 nm would already be a really good node for a chip like the T239. If there are any power savings to be made, they'll probably be in other components rather than the SoC.
> Random question: Is there a possibility we might get a launch Switch 2 and then a revision a la Switch V2, with a smaller node, or is that too impractical?

Unlikely this time around. If they want better battery and a smaller device for a Lite model or whatever, they will most likely have to do the following instead:
> hello everyone. it's been a while since i've last accessed the forum, so forgive me if this question has already been answered: could we see the Nintendo Switch 2 or its V2 use Blackwell's 4NP node, allowing either slightly better clocks or, more realistically, better battery life?

Could potentially, but at that point it might not even be particularly worth it. It's obvious that the main architectural innovation of Blackwell will be the use of chiplets rather than the node, and having to swap your entire fab production for what would likely be very marginal increases in efficiency wouldn't be a good financial decision.
> From a common-sense point of view, if the T239 adopts an 8nm process, the Switch 2 will inevitably be larger and thicker, and the previously reported Switch 2 screen may be 7.9 inches, which makes me doubt whether it is 8nm, unless Nintendo got some ahead-of-its-time cooling technology from somewhere. Even if it is not 4N, it will be a more advanced process than 8nm.

So if it is thicker, then it won't fit in the OG Switch dock? I would imagine if we do get backwards compatibility that we would have the same dock size, right? Wouldn't that be cheaper than making a new exclusive dock? So 4N would make sense?
> So if it is thicker, then it won't fit in the OG Switch dock? I would imagine if we do get backwards compatibility that we would have the same dock size, right? Wouldn't that be cheaper than making a new exclusive dock? So 4N would make sense?

It's a new generation. In my opinion, making an exclusive dock is necessary, and avoiding it will cause problems when the current generation is discontinued/abandoned.
> Could potentially, but at that point it might not even be particularly worth it. It's obvious that the main architectural innovation of Blackwell will be the use of chiplets rather than the node, and having to swap your entire fab production for what would likely be very marginal increases in efficiency wouldn't be a good financial decision.

It allowed a 30% increase in transistor density, so it could be worth it, and maybe the transition from 4N to 4NP could be done around 1.5 years before a launch in March 2025.
> hello everyone. it's been a while since i've last accessed the forum, so forgive me if this question has already been answered: could we see the Nintendo Switch 2 or its V2 use Blackwell's 4NP node, allowing either slightly better clocks or, more realistically, better battery life?

Assuming TSMC's 4N process node is comparable to TSMC's N4 process node, which seems to be the case, and assuming TSMC's 4NP process node is comparable to TSMC's N4P process node, which is unknown, then TSMC's 4NP process node can probably only offer slightly higher frequencies, since TSMC only promised 6% higher performance with N4P compared to N4.
> The N4P process was designed for an easy migration of 5nm platform-based products
>
> the first products based on N4P technology are expected to tape out by the second half of 2022.

Does this mean that something taped out on 4N could be migrated to this without a large redesign?
> It allowed a 30% increase in transistor density, so it could be worth it, and maybe the transition from 4N to 4NP could be done around 1.5 years before a launch in March 2025.

AD102 has a transistor density of ~125.4 MTr/mm² [(76.3 billion transistors)/(608.44 mm²)], which is a ~27.6% higher transistor density than Hopper (GH100), which has a transistor density of ~98.28 MTr/mm² [(80 billion transistors)/(814 mm²)].
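The density comparison is just two divisions, if anyone wants to check the numbers quoted above for themselves:

```python
# Transistor density check using the die sizes and transistor counts quoted above.
ad102_density = 76.3e9 / 608.44 / 1e6   # AD102 (Ada):    ~125.4 MTr/mm^2
gh100_density = 80e9 / 814 / 1e6        # GH100 (Hopper): ~98.3 MTr/mm^2
uplift = ad102_density / gh100_density - 1

print(f"AD102: {ad102_density:.1f} MTr/mm^2")
print(f"GH100: {gh100_density:.1f} MTr/mm^2")
print(f"AD102 is {uplift:.1%} denser")   # ~27.6%
```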
> Does this mean that something taped out on 4N could be migrated to this without a large redesign?

Theoretically speaking, yes, since TSMC's N4 process node and TSMC's N4P process node share the same IP, assuming TSMC's 4N process node is comparable to TSMC's N4 process node and TSMC's 4NP process node is comparable to TSMC's N4P process node.
> Everyone in this thread is on pins and needles waiting for something to happen, so sarcasm about anything Switch 2 is not welcome here

Sure it is
LISAN AL GAIB!
> Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^

TSMC doesn't have an 8 nm node; they went straight from 10 nm to 7 nm without a half-shrink. From what I gathered, it might be a bit better than TSMC N7 and probably a decent bit cheaper, but having to make it on a node that literally no other Nvidia products use would offset any cost savings.
God, I laughed so fucking loud in the showing I was at with my friends during that scene... people turned their heads at me.
> Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^

The 8nm process is a Samsung node as well, a shrunk version of their 10nm process, not a TSMC one. Basically: 4nm TSMC > 4nm Samsung > 8nm Samsung.
> The comment also says that, even if the T239 isn't manufactured on TSMC 4N, they think it's using a more advanced node (TSMC N6, Samsung 4LPP, etc.) than Samsung 8N, as they reiterate that the SoC can't hit the power efficiency and target levels that Nintendo would want if manufactured on Samsung 8N.

If for some reason Nvidia and/or Nintendo absolutely want it to be Samsung and 8nm isn't a viable option, I wonder which would be the more likely option among those available.
> Probably not. 3 nm is just too expensive relative to performance, and 4 nm would already be a really good node for a chip like the T239. If there are any power savings to be made, they'll probably be in other components rather than the SoC.

The screen of a Lite model will be smaller and there will be no need to change clock frequencies, but I don't know if that's enough to make big gains in battery life.
> It's a new generation. In my opinion, making an exclusive dock is necessary, and avoiding it will cause problems when the current generation is discontinued/abandoned.

I wouldn't be surprised if the dock has some sort of fan for cooling, since if our speculation is right, then it'll be around Series S level in docked mode.
> If for some reason Nvidia and/or Nintendo absolutely want it to be Samsung and 8nm isn't a viable option, I wonder which would be the more likely option among those available.

Assuming, for some reason (a multi-level deal with Samsung for DRAM, NAND, microcontroller, SoC manufacturing, etc.), Samsung is the only option and SEC 8N is unsuitable for the T239, then Nintendo/Nvidia could opt for Samsung 4LPP, which is currently their HVM node; it's cheaper than TSMC N4 while having similar density and good performance and energy characteristics.
> The screen of a Lite model will be smaller and there will be no need to change clock frequencies, but I don't know if that's enough to make big gains in battery life.

A Lite model wouldn't necessarily need a shrink of the SoC for it to happen. There are gains to be made with a more efficient screen, memory, storage, etc. in the future.
> If that PS5 optimization is something about a Radeon GPU feature? Yeah, I'd expect it would apply to PCs using Radeons as well. If something is optimized for a console, I'd be thinking of it either being cut down to use less of something the console is poor on, or redesigned so part of the cost moves to something else it's stronger on. If something about DLSS could be cut down to require less tensor core use, that would also apply to PCs. Offloading the work elsewhere doesn't seem very likely, since DLSS is so much centered around the strengths of tensor cores. Simplest DLSS optimization for a weaker part: lower the target resolution.

Optimizing for a console means taking the best that specific hardware can offer, rather than making a generic GPU optimization: how to max out the use of 16 GB of RAM, how to make the game work well on that CPU, how to allocate resources here and there so the game runs better on that specific hardware spec alone. That kind of optimization, most of the time, can't be useful for the PC port, since a PC build can't target anything too specific. PCs have more or less RAM, can be faster or slower, and there will be better and worse GPUs; the game has to run on a lot of different configurations.
> Yoshi-P expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version.

The thing is, we'll never know, since developers are able to do some crazy stuff just to be able to make a little bit of money.
> Yoshi-P expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version.

If they're planning an Xbox version, it'll have to be downported for the Series S anyway.
Yoshi-P expressing interest in bringing FFXVI to other consoles after the PC version is finished. FFXVI is a pretty large game (90 GB), so I wonder how much it could be compressed for a potential Switch 2 version. Assuming it has to run in handheld mode as well, I'd imagine it would be pretty tough; a lot of assets would have to be graphically compromised. FFXVI was originally meant to come to the PS4, but it was pushing the console to its limits. The Switch 2 CPU and GPU would be leaps and bounds over the old PS4's architecture, so hopefully it would be easier to port. I wonder whether FFXVI or FF7R would be easier to bring to the Switch successor.
Since the node is currently the topic again, I have a very layman question: would Samsung's 4nm be worse or better than TSMC 8nm? I think I've grasped by now that TSMC 4/5nm would be the "ideal" scenario. ^^
[2] https://www.androidauthority.com/exynos-vs-snapdragon-galaxy-s24-3411235/

> Those gains in power saving are made possible by a 3rd generation 4nm low-power process node. The Exynos 2400 is also the first Exynos processor to use a Fan-out Wafer Level Package (FOWLP) to boost thermal management so you can push games and apps further and longer.
> They could probably compress it a bit, but I think buyers are simply going to have to get used to the whole "only 16-32 GB are on the cartridge, you have to download another 40 GB at home" type thing.

Here's hoping for 32-64 GB carts being standardized for future games. Nintendo couldn't get 64 GB cards to work for the original Switch, and only a handful of games used the 32 GB cards. I'd hope prices have finally gone down enough that the carts could be viable.
> One of the best parts of Switch 2, if it manages to be a big success, is that studios can go back to making PS4-scale games and having them sell reliably well.
>
> Look at all the scrutiny Rise of the Ronin, a PS5 exclusive, has faced for its graphics. There are a lot of consumer expectations for what a PS5 game should look like, which I understand because of what was promised, in addition to the system's price point.
>
> Inherently, with the Switch 2 presumably costing less than a PS5, and it being a hybrid, the expectations are different for what is considered "acceptable".
>
> Basically I think it's going to be really healthy for the industry, and it'll promote opportunities for more consistent releases that don't carry the baggage of inflated budgets.

I'm not sure that's gonna do what you think it will do. Switch 2 isn't supposed to get "PS4-scale" games; it's supposed to get downports of everything current-gen, which is finally the standard and designed around the specifications of the 2020 consoles. It should give AA/Unreal indie games more of a chance to shine for sure, and more platforms are always a bonus (which applies to all kinds of games, not just small ones)... But there's virtually no cost cutting for the publisher merely because of Switch 2's existence. PS5 is the dominant platform this generation; optimizing for more platforms actually has the opposite effect to cost cutting these days.
> Yes, I think it would be better.

Great post! While QCOM claimed there was a 30% deficiency from SF 4LPX (5LPE renamed) to TSMC N4P, Samsung 4LPP+ (third gen) should be a bit closer to TSMC N4P. From the small amount of data available from Golden Reviewer and Galaxy S24 Exynos 2400 testing, it's much closer to the current QCOM flagship than the Exynos 2200 was.
Samsung's 5nm-4nm LPE (EUV) nodes caused a great deal of criticism in the smartphone industry, as SoCs using that process suffered from high power usage and poor efficiency. Samsung's own SoC, the Exynos 2200, released in 2022, suffered along with them, and it used an AMD GPU in its design. Thus that generation of phones made on that process node rather stagnated (or got worse) instead of improving the situation. Yes, you could push the SoC to high frequencies and get better performance compared to the previous generation, but that came at a great cost: power usage, and thus more heat.
Geekerwan also looked at the Exynos 2200 explicitly, and although a fully valid comparison cannot be made, they looked at the GPU peak clock frequency relative to the AMD Radeon 660M (made on TSMC N6) found in laptops. Both GPUs had the same number of CUs. It was observed that the Exynos GPU design has been optimised more for the lower end of the voltage/frequency curve, and thus lower power consumption, but again the chart should be taken with a massive grain of salt, as cross-platform comparisons aren't really possible.
However, the story does turn around a bit: the Exynos 2400 was released, and it has the more "refined" Samsung 4nm LPP+ process [1].
While its performance is mostly up to par or below the competition, it did bring the necessary stride in power efficiency compared to TSMC's 5nm & 4nm [2].
There also hasn't been a comprehensive in-depth testing of that SoC yet, apart from loose conclusions or short benchmarks, so ehhh I also want more data on that.
Compared to Samsung's previous attempt with their own SoC, though, the RDNA3 GPU performance has practically doubled (synthetic), so that's an improvement, especially since we know that the gap between RDNA2 and RDNA3 isn't that large. However, other variables, such as battery size, thermal design changes, and display improvements, should be taken into account.
Other than that, I did make some graphs at one point looking at A78 ST scores from different phone chips; you could see that Samsung's small-node design did come out ahead of TSMC's 6nm at respective peak clock frequencies, so yes, compared to 8nm, a smaller process node would be beneficial.
PS: nm sizes are marketing nomenclature.
[1] https://semiconductor.samsung.com/processor/mobile-processor/exynos-2400/
[2] https://www.androidauthority.com/exynos-vs-snapdragon-galaxy-s24-3411235/
> I do wonder with the Samsung 5nm or 4nm nodes, I know people will say "well those can't be used for Switch 2 because Nvidia hasn't purchased any 5nm/4nm capacity from Samsung"... I mean, would we know if they had? Are they obligated to disclose that?

No. Especially for a very small and secretive custom effort.
In the past, we knew about Nvidia buying capacity due to media rumors and the fact that such capacity was tied to the ramp-up of a new GPU generation. For a custom offshoot project like the T239, the chances of us hearing about them buying capacity are slim to none. So we can't predict either way.