• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Would the 8nm process and chip shortage possibly push prices up?
Yes. 20nm was, I think, the cheapest node in recent history; prices have only gone up from there, and have risen further due to the shortages. But it's unknown by how much.
We have to bear in mind that Nvidia will probably not try to jack up the price, to keep a good relationship with one of the best partners they've ever had for the Tegra line, especially because they probably want to keep working together and because Nintendo is probably the one who will popularize DLSS, which may open a bigger market for it and Nvidia... or at least open it quicker.
The chip's price will also change depending on how much it's customized, which hopefully isn't a lot (we all want those juicy RT cores). Hopefully it won't be too expensive, to keep the price down and sales up.
I mean, it would be Nintendo paying for the order and Nvidia working with Samsung for the fab allocation. If Nintendo placed an order for 15M units the first year, then that's probably the most expensive order of the console's entire lifetime.

Though normally cost depreciates, this time it should depreciate slower than previous console cycles.
 

I don't think the point about Nvidia wanting to keep this relationship healthy can be highlighted enough when we're discussing the future of Nintendo hardware and how Nvidia is cemented in that next roll-out. Hardware features like DLSS and RT are Nvidia's big gamble going forward, given how much of their silicon they're dedicating to them as the future of graphics acceleration.

There are certain factors we can point to for this being another lightning-strikes-twice situation for Nintendo. One that immediately comes to mind is that Nvidia always highlights how successful the Switch is, and what that means to them, in their quarterly financial results.

Another is that AMD is breathing down Nvidia's neck when it comes to hardware performance, and given enough runway, the same thing that happened to Intel could happen to Nvidia in terms of market share, especially with the next generation almost upon us with Lovelace and RDNA3.

Lastly, on the Samsung front: the company continues to be far behind TSMC in the manufacturing marketplace (even though they're currently the 2nd largest fab). Nvidia going with TSMC for Lovelace is essentially a major loss for Samsung, so this is where I can see the company becoming more aggressive in trying to attract new business and keep as much of its current business as it can. The next Nintendo hardware being manufactured on a Samsung process would be such a major win, because they have never had a major console design win on their side...
 
The rumours so far only mention high-end consumer Lovelace GPUs (specifically AD102) using TSMC's N5 process node. And JPMorgan mentioned in a published research note that it expects Nvidia to use TSMC's N5 node for high-end consumer Lovelace GPUs. So there's a possibility that entry-level consumer Lovelace GPUs will use Samsung's 5LPP process node, similar to how entry-level Pascal GPUs used Samsung's 14LPP node whilst mid-range & high-end Pascal GPUs used TSMC's 16FF node.

Basically, no one knows for certain whether Nvidia will exclusively use TSMC's N5 node for consumer Lovelace GPUs, or will use TSMC's N5 for high-end and mid-range parts and Samsung's 5LPP for entry-level parts.
 
Do you think chip shortages are enough to push Dane onto a new node, even if that means redesign/no longer being Dane?
 
Probably not, since securing enough capacity for any process node is something that realistically requires companies to plan a couple of years in advance.
 

I definitely understand what you've highlighted, and that would be more back to business as usual for Nvidia with both TSMC and Samsung, but for Samsung it's still a major loss going from manufacturing all of Nvidia's desktop and mobile GPUs to just their lower-end variants.
 
Assuming that Dane isn’t taped out, how much work can Nintendo have theoretically put into game development, either potential patches for existing Switch games or brand new games? If I understand correctly, a lot can change not only with the physical customization but even after that point, frequencies, power draw, etc can all be tweaked. So without knowing exactly what they’ll be working with, it’s probably difficult to get too far along in actual development, isn’t it?
 
No, dev kits have been out since late 2020. They'll have a good idea of what they want to achieve, performance-wise, allowing them to make kits with analogous hardware or tell devs to build a system with xyz parts to mimic final performance.

we're long past the days where you need actual hardware to start development
 
Final Fantasy 7 seems to be CPU bound on a 1030 at 720p. getting this to run on switch doesn't seem like the hardest task to me. might be blurry as fuck though.




it also can't max out a 3090 at 4K/120fps. confirms my suspicions about the game not maxing out the PS4, just a lot of smart design

 


Yeah, wasn't KH3 a game that ran pretty well on a number of hardware configs? I forgot.
 
PS4 and Xbox One also happened to be three and a half years old when Switch launched, right before sales of both began to slow down (PS4 went from ~20mil sales in CY 2017 to ~13mil the next year and continued sliding until the PS5's release). That meant consumers were interested in new hardware, but Sony and Microsoft were in little hurry to offer anything. Meanwhile, when they finally did, the new consoles were rare as hen's teeth and prime targets for scalping through their entire first year, a trend likely to continue through 2023.
While a lot of kudos goes to Nintendo for an outstanding product, the conditions under which it entered the market, and the market conditions through its life, were also VERY favourable to them, and it's worth not taking those conditions for granted as always being there.

Market conditions have allowed them to take advantage of the situation and continue selling Switch at its current price. And y'know, business wise, I can't fault them for that, especially when they made it plain that Switch's market strategy was "let's use this hardware cycle to make up as much of the money we lost from our previous hardware cycle as possible", which wasn't exactly small potatoes, if anyone happens to recall. So it's doing precisely what it's meant to do in Nintendo's eyes.

FY2021 will mark the first decline in yearly hardware shipments, and while it's super-easy to blame ALL of that on production challenges, Nintendo will not be so short-sighted as to think that's the only possible reason. Nintendo's projections have sales for the second half of their FY at 15.72 million units. Coupled with the fact that Nintendo makes conservative estimates so they look like achievers to their investors, should they miss their forecast, it will mark a downturn in sales beyond what can be blamed on shortages. And despite a lot of bullishness from several people, hitting that number is not a guarantee, but I don't know if getting into the real nitty-gritty of sales data is what anyone wants from this thread any more than I've already provided.

And about pricing... well, to us in the enthusiast set, $100 is no big deal, but you don't sell 27 million units of hardware in a year to enthusiasts alone. Also, what is realistically or logically not a big difference in price and what is perceived to be a big difference are two different things. We live in a world where a penny off a price tag psychologically tricks us into thinking something's cheaper than it is; one should not discount the power of perception. I've been setting retail prices for the store I help operate for the better part of three years now and... yeah, that shit isn't as simple as some think it is.

The object with new hardware is to keep and expand your market share, not take it for granted. And with the next hardware lacking some of the attendant advantages Switch had during its cycle, being all "oh yeah, $400, the market will absolutely and unquestionably accept that" isn't being mindful of everything that goes into price determination and consumer price perception.
So what are you saying? Nintendo is gonna have a $50 price difference between the OLED and Switch 2? I think you underestimate how arrogant Nintendo is. I really don't see a drastic price drop or a small price difference between the OLED and Switch 2. Again, this is the company that hasn't budged on the Switch's price point, and sells Wii and Wii U remasters for $60 on Switch. Nintendo is greedy and will milk the Switch for all it's worth unless it really tanks in sales.
On that FF7 test: the GTX 1030 is about 1.1 Pascal TFLOPs, something that should surpass base Xbox One performance. But how does that Ryzen 3100 CPU compare to the Switch's A57s?
 
about what and what

the 3300x is better, of course
Based on the video posted, it barely stressed the CPU. Like, at all.

I see what you mean that the Switch can probably run it; it hardly taxed the system at all on PC. It has a lot of cutscenes too, and the Switch has hardware that would bring those down to size if it were to come.
 
I assume those 2.54% are primarily Switch X1 chips?
I imagine the 2.54% also takes into account the RTX 2060 12 GB and the RTX 2050, which are fabricated using TSMC's 12FFN process node, and the datacentre chips (A30, A100, etc.), which are fabricated using TSMC's N7 process node.
 
So today, Nvidia officially announced the RTX 2050, the MX550, and the MX570.

And here's where Nvidia stands amongst TSMC's other customers, courtesy of Bloomberg and DigiTimes (via Tom's Hardware).
Interesting. It doesn't say anything about the MX550 and MX570 having RTX or DLSS tech, so I'm assuming they won't have it? The RTX 2050 will support both though. Shocked the RTX 2050 is a thing, mostly...
the 2050 has 2048 CUDA cores and 4GB of GDDR6 on a 64-bit bus, at 112GB/s

shit's hella memory bound. it's outright crippling for a laptop
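That 112GB/s figure checks out from the bus width alone. A quick sketch, assuming 14 Gbps GDDR6 (a common per-pin rate; Nvidia hasn't stated the exact memory speed here):

```python
# Peak bandwidth = bus width (pins) * per-pin data rate / 8 bits per byte.
# Assumption: 14 Gbps GDDR6, not confirmed by Nvidia for the 2050.
def gddr_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

print(gddr_bandwidth_gbs(64, 14))   # 112.0 GB/s on the 2050's 64-bit bus
print(gddr_bandwidth_gbs(128, 14))  # 224.0 GB/s on a 128-bit bus, for comparison
```

Which is one way to see how much the narrow bus is choking the part.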
I wonder how it fares vs the 3050.
 
the 3050 has 2x the bandwidth, but since the 3050 already falls behind the 1660 because of its memory, this will be hella disappointing

kopite7kimi mentions that the MX550 uses the GA107S, so I assume the MX550 and the MX570 both do. But real-world performance is probably not ideal, which is probably why Nvidia explicitly mentioned that "for supercharged gaming and creative performance, we recommend stepping up to GeForce RTX".
Dane doomed confirmed!
 
kopite7kimi has said that the MX550 and the MX570 use GA107S.


Edit #1: Apparently, Nvidia has confirmed to ComputerBase that the RTX 2050 & the MX570 use GA107, and the MX550 uses the TU117. And kopite7kimi didn't know about this. Seriously, what's up with Nvidia?!


And the MX570 supports DLSS and ray tracing, although ray tracing is limited, according to Nvidia. So I think the MX570 could give a rough idea of Dane's ray tracing capabilities.


Edit #2: So Qualcomm apparently talked about scaling trends during an IEDM 2021 presentation.


I think this is more proof that the advantages of quickly adopting cutting-edge process nodes are nowadays not as obvious as in the past.
 
Can't wait to see the 570's on-paper specs...

edit: oh wait, we already have the specs


A 4 TFLOP GPU using a 25 watt TDP board design... vs the 2 TFLOP MX550 that also uses a 25 watt board design.
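Those TFLOP figures line up with the standard cores-times-clock arithmetic. A rough sketch, assuming 2048 CUDA cores and the clocks TPU lists (1.0 GHz base, 1.5 GHz boost — neither confirmed by Nvidia):

```python
# FP32 throughput in TFLOPs = CUDA cores * 2 ops per clock (FMA) * clock in GHz / 1000.
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000

print(round(fp32_tflops(2048, 1.0), 1))  # ~4.1 TFLOPs at base clock (the quoted "4 TFLOP")
print(round(fp32_tflops(2048, 1.5), 1))  # ~6.1 TFLOPs at the listed boost clock
```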

 
Is the thought that the MX570 has any particular relation to Dane? Given how that GPU uses 25W by itself, even an underclocked, die-shrunk version isn’t going to fall within a 15W total SoC power limit. Is the assumption that Dane will be half an MX570?
 
it supposedly has RT and DLSS, so it's a good way to see just how low those scale. Though RT testing might be useless, because RT is very memory bound and this has pretty bad memory for a laptop.
 

Nvidia apparently confirmed to ComputerBase that the MX550 is based on TU117, not GA107.


Dane's GPU probably won't be based on GA107, which the MX570 apparently is. But the MX570 is probably the closest comparison to Dane's GPU, since it apparently supports DLSS and ray tracing, although its ray tracing applications are apparently limited, according to Nvidia.
 


I agree that this is definitely the closest thing we will see to Dane's GPU, since it has none of the automotive AI hardware attached like Orin does.
The other determining factors are that it's a 10SM part with higher clocks and GDDR6 RAM, which all take up a good portion of that 25w budget.
 

Epic itself has already stated that the demo typically requires 10MB of data per frame rendered, suggesting a 300MB/s data throughput rate at 30 frames per second. While we cannot confirm this independently on a console owing to its closed nature, we did take the Western Digital SN750 SE 250GB - the slowest PCIe Gen 4.0 x4 SSD on the market - and then limited its capabilities still further by taping up most of its pins on the PCIe interface, effectively limiting it to PCIe Gen 4.0 x1 spec, with the PS5 only rating it for 1.7GB/s of read performance - far lower than the base spec 5.5GB/s Sony recommends and the 7GB/s mooted by Mark Cerny. The result? As expected, performance and pop-in is exactly the same because the demo is not inherently storage-bound.

There is a PC UE5 demo available - Valley of the Ancient - which I tested at 1080p resolution (with temporal upscaling to 2160p) and capped at 30fps. I noticed a peak throughput of 200MB/s on a scene change, but otherwise the speeds were in the region of 80MB/s on fast traversal across the scene. Additionally, UE5's data caching systems saw throughput drop tremendously when accessing previously accessed data. What this tells us is that Nanite streaming in Unreal Engine 5 is remarkably efficient - with the Nanite visualisation within The Matrix Awaken showing you how it's broken down.

Series S obviously runs at a lower resolution (533p to 648p in the scenarios we've checked), using Epic's impressive Temporal Super Resolution technique to resolve a 1080p output. Due to how motion blur resolution scales on consoles, this effect fares relatively poorly here, often presenting like video compression macroblocks. Additionally, due to a sub-720p native resolution, the ray count on RT effects is also reined in, producing very different reflective effects, for example. Objects within reflections also appear to be using a pared back detail level, while geometric detail and texture quality is also reduced. Particle effects and lighting can also be subject to some cuts compared to the Series X and PS5 versions. What we're looking at seems to be the result of a lot of fine-tuned optimisation work but the overall effect is still impressive bearing in mind the power of the hardware. Lumen and Nanite are taxing even on the top-end consoles, but now we know that Series S can handle it - and also, what the trades may be in making that happen.
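The 300MB/s figure at the top of the quote is just frame cost times frame rate, for what it's worth:

```python
# Epic's stated worst case: ~10 MB of streamed data per rendered frame.
mb_per_frame = 10
target_fps = 30
peak_mb_per_s = mb_per_frame * target_fps
print(peak_mb_per_s)  # 300 MB/s -- well under even the throttled PCIe Gen 4.0 x1 drive's ~1.7 GB/s
```

Which is why taping over the SSD's pins changed nothing: the demo never comes close to being storage-bound.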
 
So, provided the next SoC supports UE5 with all of its next-gen features (Lumen, Nanite, etc.), it should do pretty well even at lower resolutions? Could DLSS be used to improve image stability?
Theoretically yes to both questions.

Nvidia has confirmed to Anandtech that the RTX 2050 and the MX570 are based on GA107.
 
The biggest issue for Lumen games being on Dane (theoretically, anyway) is the change in lighting due to the lower internal resolution.
 
I guess that depends on whether they're going to use Nvidia's own integrated hardware ray tracing solution versus the one in the Matrix demo / AMD's.
 
My understanding is that UE5 is programmed to use hardware ray tracing on both Nvidia and AMD graphics hardware when available (the spec sheet confirms that hardware ray tracing is part of Lumen and, on Nvidia, is only available for cards Turing or greater with RT cores), and I have no doubt that Epic will configure UE5 to accommodate the hardware in Dane as much as they can within the scope of their new technologies, just as they have for PS5 and Xbox Series. So long as it's got RT cores, we're probably good to go, albeit maybe on the lower end of what Lumen can do in the absolute worst case.
 
So does this new information change anything in regards to speculation and expectations?
I think the discussion is about the MX570 being the closest GPU to Dane. But the spec sheet says 4GB of VRAM; I assume the Switch 2 GPU will have access to more than that, even in a worst-case 8 GB configuration.

Two things that stand out:
It has 1024 cores; I'm not sure we're expecting that many cores for Switch 2(?)
25w TDP, which is probably too high, so I assume we're expecting further reductions in performance to hit the 15w target.


 
I don't think so.



TechPowerUp's specs are not totally correct: the MX570 has 2 GB of GDDR6, not 4 GB, and 2048 CUDA cores, not 1024.
 
I'm dying to know the MX570's die size. Knowing the TDP is of course important, but I'd expect die size, along with core count, to be the key constraint in determining how Dane would compare to this SoC. The reason: TDP can be adjusted by changing clock speed, and I think there's lots of wiggle room for that with Dane, given how high the MX570 is clocked according to TPU (1.0 GHz base, 1.5 GHz boost).
 
Since the MX570 is based on GA107, the MX570's die size should be in the range of 190 - 199 mm².
 
Question: are we settled on the 4310 mAh battery being what we can expect to physically fit into the form factor for the Switch 2, given it wasn't upgraded for the OLED?

Are we expecting the Switch 2 to perhaps run a bit higher TDP-wise than the 11w docked for Switch? Maybe closer to 15w?
 
There can be larger capacity batteries that fit in the Switch, but they'll come at a weight and cost premium; denser batteries are heavier and more expensive. There are phones with 6000mAh batteries, after all.

[image: ASUS ROG Phone 5 battery]
 
I think there's a possibility that Nintendo could use a higher capacity battery that's roughly the same size and thickness as the 4310 mAh battery on the OLED model (e.g. the Samsung Galaxy S21 Ultra's battery). The OLED model doesn't need a higher capacity battery since the OLED model still uses the Tegra X1+.

I highly doubt Nintendo's going to use dual batteries like the ROG Phone 5's.

There are also smartphones that use a single 6000 mAh battery that's roughly the same size, but is also considerably thicker, compared to the OLED model's 4310 mAh battery (e.g. Samsung Galaxy M31).
 
Not bad. A 5,000 mAh battery would be a decent 16% capacity upgrade and likely wouldn't cost more than the 4310 mAh part did at the 2017 Switch launch.
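For what it's worth, that 16% figure checks out:

```python
# Capacity gain from the OLED model's 4310 mAh pack to a hypothetical 5000 mAh one.
old_mah, new_mah = 4310, 5000
gain_pct = (new_mah - old_mah) / old_mah * 100
print(round(gain_pct, 1))  # 16.0 -- i.e. a ~16% capacity upgrade
```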
 
The MX570 isn’t an SoC. It’s just a GPU. The multi core CPU, IO, and other system controllers are going to use a huge chunk of Dane’s die. IIRC, the GPU only accounted for a third of the X1’s size.
 
I'm aware, but at least the information gives us a ballpark, since we have the TX1 die shot (not sure about later Tegra SoCs) and can extrapolate from that the area allocated to GPU logic.
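As a back-of-the-envelope illustration of that extrapolation — every number here is a thread estimate, not a confirmed spec: GA107 at ~195 mm² (midpoint of the range mentioned earlier), a TX1-like one-third GPU share of the die, and the earlier "half an MX570" guess for Dane's GPU:

```python
# Hypothetical ballpark only: none of these figures are confirmed for Dane.
ga107_mm2 = 195        # midpoint of the ~190-199 mm^2 GA107 estimate above
soc_to_gpu_ratio = 3   # TX1 die shot: GPU was roughly a third of the die

dane_gpu_mm2 = ga107_mm2 / 2                      # if Dane's GPU were ~half an MX570
implied_soc_mm2 = dane_gpu_mm2 * soc_to_gpu_ratio # scale GPU area up to a full SoC
print(implied_soc_mm2)  # 292.5 mm^2 implied total die under these guesses
```

That would be a very large die for a handheld SoC, which is one way of seeing why "half an MX570" is probably an upper bound rather than a prediction.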
 