• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Those units are mostly independent of clocks; the quantity and theoretical TOPS performance of each one matters much more.

Would there be any way to direct more power to them? Designing the chip so that the shader cores only use as much power as they can take advantage of within bandwidth limits, but the RT and tensor cores use more?
 
BOTW probably looks a lot harder to run than it actually is. A Wii U and a power-sipping ARM chip from a decade ago could handle it at 720p30.
 
Someone can correct me if I'm wrong, but I believe Horizon is a purely static-link OS. So an NG game patch would require a totally new binary, though assets obviously could be reused.
My understanding is that nro files are dynamic libraries. Pretty sure it's less that it doesn't allow dynamic linking and more that the system doesn't provide anything for software to link with. If a game wanted to segment itself, for whatever reason, it probably could.

Obviously, one would need to replace the existing binary to go from Switch 1-only to running natively on both systems, but what I'm saying is that the new binary could probably be shared between the two systems if Nintendo designed things to allow it.
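To make the dynamic-linking point above concrete, here's a minimal sketch of how a game could split optional functionality into a runtime-loaded module. POSIX dlopen() is used purely as a stand-in for whatever loader mechanism Horizon actually exposes, and the module/symbol names are invented:
C++:
// Hypothetical sketch only: POSIX dlopen() stands in for the OS's real
// module loader; "libextra.so" and "extra_init" are made-up names.
#include <dlfcn.h>
#include <cstdio>

int main() {
    void* module = dlopen("./libextra.so", RTLD_NOW);  // load module at runtime
    if (!module) {
        std::printf("module unavailable: %s\n", dlerror());
        return 1;
    }
    using InitFn = void (*)();
    auto init = reinterpret_cast<InitFn>(dlsym(module, "extra_init"));
    if (init) init();  // call the module's entry point if present
    dlclose(module);
    return 0;
}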
OK, we talked about BC, but what about forwards compatibility? Do you think Nintendo will still support Switch 1 or 2 software on the Switch 3?
Likely much of their work on BC will be reusable.
Sure, but that's not what I'm asking; I think you know what I mean.

I'm referring to having an NG patch update for a Switch 1 game (one that updates a game originally written to use NVN so that it uses NVN2 instead).

Would that typically require downloading close to the size of the game itself? Asking because it was said that in order for a Switch 1 game to be able to use NVN2, "recompilation" is required. That sounds almost like one would have to download a patch roughly equivalent to the size of the game itself.

Using Witcher 3 as an example, which comes on a 32 GB game card: if the developer decides to offer an NG patch for the game (read: to use NVN2 and potentially DLSS), would that typically require a 32+ GB download because the whole thing is "recompiled"?
Game file sizes are typically dominated by non-code assets. Replacing some of those (mainly textures and videos) with higher quality versions may be desirable and incur a file size cost on the patch, but the bare minimum code that would need patching would probably be well under a gigabyte in most, if not all, cases.
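As a toy illustration of that proportion (both numbers below are assumptions, not measurements):
C++:
// Toy numbers only: if executable code is well under 1 GB of a 32 GB game,
// a code-only "recompiled" patch is a tiny slice of the full download.
#include <cstdio>

int main() {
    const double gameGB = 32.0;  // e.g. a 32 GB game card
    const double codeGB = 0.8;   // assumed upper bound on code size
    std::printf("code-only patch: ~%.1f%% of the game\n",
                100.0 * codeGB / gameGB);  // ~2.5%
    return 0;
}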
 
Would there be any way to direct more power to them? Designing the chip so that the shader cores only use as much power as they can take advantage of within bandwidth limits, but the RT and tensor cores use more?
We'd need more, bigger, and newer RT/tensor cores to increase their performance; memory bandwidth mostly affects the final phase of actually drawing the results on screen. These cores only do calculations (with a huge asterisk on DLSS), so they might not end up bandwidth-bottlenecked as a result.
 
But even with the CPU leap we're talking about here, it would have no problem running decompression on the CPU for a patched Switch 1 game.

For a next-gen game taking advantage of fast streaming tech, the FDE would be super handy. But for a patched Switch 1 game, 8 A78 cores would likely be more than sufficient.
The A78 will improve loading times, especially so if it's native and using all cores. It just won't make 30+ second loading times instantaneous.

Take the PS5 running BC, for example. It also had a huge jump in CPU and has a faster SSD than the NG realistically will have. It reduced loading times by 30~50% in those tests, from what I can see.

I'm not that familiar with PS5's Boost Mode, so I'm not sure if those games are running at base PS4, Pro, or PS5 clocks. But even assuming base PS4, native PS5 should reach a 70~80% reduction without hardware decompression.

Let's say native NG gets 80% too: 30s would become 6s, and 2 minutes would become 24s. Many people would be satisfied with that, me included, but it would not be surprising if publishers decide to patch assets sometimes.
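A quick sketch of that arithmetic, where the 80% reduction is the assumption from above rather than a measured figure:
C++:
// new_time = old_time * (1 - reduction); 80% is the assumed reduction above.
#include <cstdio>

int main() {
    const double reduction = 0.80;
    const double oldTimes[] = {30.0, 120.0};  // seconds
    for (double t : oldTimes)
        std::printf("%.0fs -> %.0fs\n", t, t * (1.0 - reduction));
    // Prints: 30s -> 6s and 120s -> 24s
    return 0;
}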
 
Fourth option: even a weak PS4 can run Zelda at 4K 60 fps.
That... is not what this video shows at all? The game runs at 20fps, and it's running on a GPU that is 60% more powerful than the PS4? On a more modern architecture?
 
We'd need more, bigger, and newer RT/tensor cores to increase their performance; memory bandwidth mostly affects the final phase of actually drawing the results on screen. These cores only do calculations (with a huge asterisk on DLSS), so they might not end up bandwidth-bottlenecked as a result.

Really? So the RT and tensor core performance is essentially hard-locked, and you can't make them run faster by providing more power to them the way you can with shader cores?
 
Really? So the RT and tensor core performance is essentially hard-locked, and you can't make them run faster by providing more power to them the way you can with shader cores?
Not separately from the shader cores. The whole GPU gets faster; you can't do it piecemeal.
 
Now that has me thinking... because UHS-II V90 already exists in microSD.

We might have found our one, because with adequate compression, the 300 MB/s maximum speed of UHS-II could scrape 1000 MB/s of uncompressed data.

This lines up with existing findings pointing to the Game Card continuing to use eMMC, which can also go up to... 300 MB/s.

Bringing the minimum speed across expandable storage, Game Card, and internal storage to 300 MB/s raw, while targeting a high compression ratio, could be the sweet spot between price and performance Nintendo wants. Plus, it would mean NG Switch games could run without installs.

Interesting. Extremely interesting. Encouraging, even.
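For what it's worth, here's the napkin math behind that idea, treating ~300 MB/s as the UHS-II ceiling and ~1000 MB/s as the target effective throughput:
C++:
// Effective read speed = raw card speed x average compression ratio, so
// behaving like ~1000 MB/s of uncompressed data from ~300 MB/s raw needs
// roughly a 3.3:1 average ratio.
#include <cstdio>

int main() {
    const double rawMBps = 300.0;      // approx. UHS-II sustained read
    const double targetMBps = 1000.0;  // desired effective throughput
    std::printf("required compression ratio: %.2f:1\n",
                targetMBps / rawMBps);  // ~3.33:1
    return 0;
}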
I really wish Samsung UFS 3.0 cards were still a thing, because what you are suggesting is not remotely future-proof enough IMO. Perhaps it works for the expandable storage, if the idea is that we manage it alongside a smaller amount of fast internal storage; but for comparison, the Xbox Series S has 2.4 GB/s uncompressed and 4.8 GB/s compressed read speeds.

I totally get that we are talking a handheld vs. a console, but phones have been using UFS 4.0 internal storage since early this year. If this is a device that potentially launches in late 2024, is expected to last 6+ years, and is hoping to have 1/5 the raw read speed of the lowest common denominator from 2020, this could become a huge issue for NG ports in time.

I hope we get 3D NAND carts and UFS 4.0 internal storage... I'm less concerned about the expandable storage speed.
 
C++:
// Baseline Switch 1 path (hypothetical engine calls, not a real API):
frameCap(30);
resMax("720p");
With
C++:
try {
    // Hypothetical call: always throws on NX (Switch 1), succeeds on NG.
    HOS::setBcMode(ENHANCED);
    frameCap(60);
    resMax("1080p");
}
catch (const std::exception&) {
    // Enhanced mode unavailable: fall back to the Switch 1 settings.
    frameCap(30);
    resMax("720p");
}
It wasn't obvious to me that the time and effort required to update a Switch game to take advantage of better hardware exposed through a translation layer would be significantly lower than updating a game to use NVN2. I've updated plenty of core libraries in large projects to new major (not backwards compatible) versions. Sometimes it's a massive PITA and sometimes it's incredibly straightforward. I have almost no experience with graphics APIs though, so maybe the difference is obvious to those more knowledgeable than me. If this code is at all indicative of the actual changes needed for most games, then yeah, I can see Nintendo making this an option.

If they do, I wonder if Nintendo will allow developers to port games to NVN2 for free, because I don't see them not somehow monetizing full 4K60 DLSS NVN2 ports of their own titles, and if 3rd-party devs do it for free, it would make Nintendo look bad by comparison.
 
It wasn't obvious to me that the time and effort required to update a Switch game to take advantage of better hardware exposed through a translation layer would be significantly lower than updating a game to use NVN2. I've updated plenty of core libraries in large projects to new major (not backwards compatible) versions. Sometimes it's a massive PITA and sometimes it's incredibly straightforward. I have almost no experience with graphics APIs though, so maybe the difference is obvious to those more knowledgeable than me. If this code is at all indicative of the actual changes needed for most games, then yeah, I can see Nintendo making this an option.

If they do, I wonder if Nintendo will allow developers to port games to NVN2 for free, because I don't see them not somehow monetizing full 4K60 DLSS NVN2 ports of their own titles, and if 3rd-party devs do it for free, it would make Nintendo look bad by comparison.

If Nintendo puts effort into the patches, I still fully expect them to charge for it, or have it be part of NSO.

Maybe the basic settings stuff is free, but if you want the extra goodies like DLSS, out comes the wallet?
 
Regarding the discussions we had about memory some... pages ago (? God, this thread started moving again, hahaha): it's fair to remember that DRAM and NAND prices are starting to rise once again. This might impact the chances of Nintendo making any kind of "last minute" change to storage size or RAM amount/type. Ideally, Nintendo already locked down the contracts for both components some quarters ago.
 
I really wish Samsung UFS 3.0 cards were still a thing, because what you are suggesting is not remotely future-proof enough IMO. Perhaps it works for the expandable storage, if the idea is that we manage it alongside a smaller amount of fast internal storage; but for comparison, the Xbox Series S has 2.4 GB/s uncompressed and 4.8 GB/s compressed read speeds.

I totally get that we are talking a handheld vs. a console, but phones have been using UFS 4.0 internal storage since early this year. If this is a device that potentially launches in late 2024, is expected to last 6+ years, and is hoping to have 1/5 the raw read speed of the lowest common denominator from 2020, this could become a huge issue for NG ports in time.

I hope we get 3D NAND carts and UFS 4.0 internal storage... I'm less concerned about the expandable storage speed.
As you said, it's a handheld; it's power-limited. There are going to be limits to how fast the read speeds can get. Given you mention 1/5th of 2.4 GB/s, UHS-II is just under that, so I don't see it being too much of a problem.
 
Has this already been shared?
法人表示,旺宏目前48L、96L 3D NAND均已量產,並出貨給日本客戶,192L可能用於客戶下一代主機卡匣,滿足新遊戲對容量需求
The legal representative stated that Macronix’s 48L and 96L 3D NAND are currently in mass production and shipped to Japanese customer. The 192L may be used in customers’ next-generation console cartridges to meet the capacity needs of new games.
Edit: Sorry, I corrected the url.
 
The text that Google is translating as "legal representative" (法人) seems like it might actually mean "corporation?" So is the article saying that Macronix confirmed they're shipping to a Japanese customer? We already know mass production of 3D NAND has been established, and the second sentence looks like it's just speculation, so the relevant thing is just whether that customer part is real info or more expectations/analysis/speculation.
 
The issue for UFS cards is that there's no real use for super-fast expandable mobile storage except in gaming, and well… not sure the Switch 2 could carry this market alone.
 
Beyond capacity, what other benefits come from this 3D NAND versus what Nintendo is currently using?

Speed? Price? If a game is 64GB or more, do we expect a company to bother putting the entire thing on a card?
 
Has this already been shared?
The legal representative stated that Macronix’s 48L and 96L 3D NAND are currently in mass production and shipped to Japanese customer. The 192L may be used in customers’ next-generation console cartridges to meet the capacity needs of new games.
Thanks for sharing. Please note that the machine translation is incorrect: it isn't “legal representative” but “institutional investor(s)”. I also think that “may be used” is too strong; it's more like “may possibly be used”. It isn't clear how the quoted institutional investor(s) reached that conclusion; it could be either an industry rumor or their own supposition.

Also note that the very next paragraph is a warrant issuer suggesting people buy warrants on Macronix stock to “increase upside of [investment] return”. As I've warned a few times before, Taiwanese financial reports are at times thinly veiled stock-pumping pieces.
 
I somehow have resisted this thread since Tuesday. I’m assuming nothing much new in the way of credible rumors - or any further comments from Nate on when we might expect that report he’s working on?
 
Thanks for sharing. Please note that the machine translation is incorrect: it isn't “legal representative” but “institutional investor(s)”. I also think that “may be used” is too strong; it's more like “may possibly be used”. It isn't clear how the quoted institutional investor(s) reached that conclusion; it could be either an industry rumor or their own supposition.

Also note that the very next paragraph is a warrant issuer suggesting people buy warrants on Macronix stock to “increase upside of [investment] return”. As I've warned a few times before, Taiwanese financial reports are at times thinly veiled stock-pumping pieces.
More expectations/analysis/speculation, then.

There's like a 95% chance that the new game cards will use 3D NAND anyway. And I don't think confirming that Nintendo is a 3D NAND customer would tell us much about the production timeline or anything, either. And personally, I'd be a bit disappointed to hear Nintendo was using the "basic" 3D NAND when we know Macronix is going to start producing the SGVC variant, which seems basically better in every regard (density, cost, and longevity), although I think the technology will meet Nintendo's needs either way.
 
Has there been any word on the read speed of Macronix's 3D NAND? Would hope that it's fast enough to where games won't require installation, assuming Switch 2's game cards use that.
 
I somehow have resisted this thread since Tuesday. I’m assuming nothing much new in the way of credible rumors - or any further comments from Nate on when we might expect that report he’s working on?
Nope, we're just going through the regurgitation cycle of the same old topics, since we have nothing new to discuss.
 
The A78 will improve loading times, especially so if it's native and using all cores. It just won't make 30+ second loading times instantaneous.

Take the PS5 running BC, for example. It also had a huge jump in CPU and has a faster SSD than the NG realistically will have. It reduced loading times by 30~50% in those tests, from what I can see.

I'm not that familiar with PS5's Boost Mode, so I'm not sure if those games are running at base PS4, Pro, or PS5 clocks. But even assuming base PS4, native PS5 should reach a 70~80% reduction without hardware decompression.

Let's say native NG gets 80% too: 30s would become 6s, and 2 minutes would become 24s. Many people would be satisfied with that, me included, but it would not be surprising if publishers decide to patch assets sometimes.
The Xbox Series loads pretty dang fast in BC mode. Not sure what it's doing differently.
 
It wasn't obvious to me that the time and effort required to update a Switch game to take advantage of better hardware exposed through a translation layer would be significantly lower than updating a game to use NVN2. I've updated plenty of core libraries in large projects to new major (not backwards compatible) versions. Sometimes it's a massive PITA and sometimes it's incredibly straightforward. I have almost no experience with graphics APIs though, so maybe the difference is obvious to those more knowledgeable than me. If this code is at all indicative of the actual changes needed for most games, then yeah, I can see Nintendo making this an option.

If they do, I wonder if Nintendo will allow developers to port games to NVN2 for free, because I don't see them not somehow monetizing full 4K60 DLSS NVN2 ports of their own titles, and if 3rd-party devs do it for free, it would make Nintendo look bad by comparison.
I wonder if Nintendo asks developers/publishers for money for these patches.
 
Although the article doesn't confirm anything, it did get me thinking about comparisons between 3D NAND and Switch game card memory in general. I'm putting this in a spoiler since it's mostly just an unorganized info dump.

There doesn't seem to be any information on the layer count, density, or capacity of anything Macronix is making, but this TechInsights article from January has the specs for other manufacturers. Looks like 128 GB was the highest capacity being manufactured at that time, using 232 layers. All four listed manufacturers have 64 GB chips using 128 layers, and three of them have chips with more than 128 layers but still 64 GB capacity, which would decrease die size and cost.

The density at 128 layers ranges from 6.96 to 8.47 Gb/mm² (average 7.83), and from 10.27 to 11.01 Gb/mm² at 176 layers (average 10.72). The 232-layer chip has a density of 15.03 Gb/mm². This probably isn't a very sound extrapolation (assumes linear scaling, which is wrong), but the first two averages imply a factor of about 0.061 between the layer count and the density, so if we apply that to Macronix's apparent 3D NAND offerings, that would be 2.93 Gb/mm² for 48 layers, 5.86 Gb/mm² for 96 layers, and 11.71 Gb/mm² for 192 layers (or 12.7 Gb/mm² if we extrapolate from the 232-layer chip).
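Written out, the extrapolation above is just a linear fit through the origin, where ρ(L) is density at L layers:

$$\frac{7.83}{128} \approx 0.0612, \qquad \frac{10.72}{176} \approx 0.0609 \;\Rightarrow\; \rho(L) \approx 0.061\,L\ \text{Gb/mm}^2,$$
$$\rho(48) \approx 2.9, \qquad \rho(96) \approx 5.9, \qquad \rho(192) \approx 11.7\ \text{Gb/mm}^2.$$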

Macronix's SGVC paper from 2017 provided a chart of estimated (or confirmed in the case of 16 layers) densities, which I transcribe as follows:

Layers | MLC Density (Gb/mm²) | TLC Density (Gb/mm²)
16 | 1.6 | 2.4
24 | 2.5 | 3.3
32 | 3.1 | 4.2
48 | 4.6 | 6.2

Although we know the scaling isn't linear, I will note that MLC shows a close to 0.1 factor between layers and density, and TLC around 0.13, notably better than the 0.06-ish from the competing non-SGVC 3D NAND.

Based on the Switch cartridge's dimensions of 21 mm x 31 mm and this image, it appears that the current Macronix chip package is around 18 mm x 14.5 mm (261 mm²), and the memory die itself is around 14 mm x 9.5 mm (133 mm²). The Breath of the Wild cartridge pictured here has 16 GB capacity, so the memory density of the die in this MX23K128GL0 is around 0.96 Gb/mm². Note that this and other density estimates are not accounting for the peripherals/connectors and assuming everything is die size-only, which may not be the case for other figures (which I believe would lead to slightly conservative estimates of density/capacity in my comparisons, not overestimations).

I don't know anything about the defect rate, die size limitations, etc. in 3D NAND processes, but we can assume that something around the size of the Switch game card memory is doable (despite the fact that all the dies listed in the TechInsights article are only between 46.5 and 73.6 mm²). I say this mostly because in the SGVC paper, Macronix claimed 48 TLC layers could provide around 6.2 Gb/mm² density and deliver a 1 Tb single-chip solution, which implies a die area of about 165 mm², which is larger than the BotW cartridge die, but still within the package area of 261 mm² and well within the total cartridge area of 651 mm². So we can assume such a die area is feasible from a manufacturing perspective.
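That die-area figure follows directly from the claimed density (A is die area, taking 1 Tb = 1024 Gb):

$$A \approx \frac{1024\ \text{Gb}}{6.2\ \text{Gb/mm}^2} \approx 165\ \text{mm}^2.$$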

Turning back to the regular 3D NAND from Macronix, let's assume the same 133 mm² die size as BotW and take the extrapolated density numbers from above (bearing in mind the amount of napkin math that went into this). That would coincidentally give us capacities of 48 GB for 48 layers, 96 GB for 96 layers, and 192 GB for 192 layers. In short, if the scaling is close enough to linear, and Macronix's density is about as good as the competition's, then Switch 2 cartridge sizes up to 96 GB should be covered by the 3D NAND Macronix is already mass producing, and sizes up to 192 GB would be covered by the 192-layer chips which were supposed to enter mass production sometime this year.
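For reference, the capacity arithmetic behind those numbers is just density times die area, divided by 8 to convert Gb to GB:

$$C = \frac{\rho \cdot A}{8}: \qquad \frac{2.93 \times 133}{8} \approx 49\ \text{GB}, \quad \frac{5.86 \times 133}{8} \approx 97\ \text{GB}, \quad \frac{11.71 \times 133}{8} \approx 195\ \text{GB},$$

which is where the rough 48/96/192 GB correspondence comes from.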

Note that I'm not saying we'll see this exact correspondence between layer count and capacity in practice. For one thing, 128 GB is probably the maximum capacity we'll ever see, and obviously it saves money to manufacture capacities only as large as needed. Moreover, to account for the roughness of the estimate, we can assume that the die size can be increased or decreased to an extent to achieve desired capacities.

On the other hand, if SGVC NAND were used, BotW's die size would yield capacities approximately like this:

Layers | MLC Capacity (GB) | TLC Capacity (GB)
16 | 26 | 40
24 | 40 | 54
32 | 50 | 70
48 | 76 | 103

At the time the SGVC paper was published in 2017, 48 layers was probably considered a lot, but with current chips using 128-176 and as high as 232 layers, scaling this up to capacities of 128 GB should be perfectly doable. The die size could also be increased from the 133 mm² assumption I'm making here. As a reminder, SGVC NAND is said to be cheaper and (importantly for Nintendo) long-lasting and wear-resistant without the need for self-refreshing. The only things that ought to stop Nintendo from using it would be the tech not panning out, or not being available in time.

One thing to note here is that the lower layer counts are only relevant if manufacturing isn't available for higher layer chips. The more layers, the cheaper the chip is per bit! If 192-layer chips are available, or whatever the maximum is for SGVC, it would be the most cost-effective to manufacture every capacity with that many layers, and simply adjust the die area to achieve the desired capacity.
 
Although the article doesn't confirm anything, it did get me thinking about comparisons between 3D NAND and Switch game card memory in general. (snip)
Very cool. How fast are these?
 
To be fair, they would only have to include the TX1 GPU, 'cause the ARM A78 is already compatible.

But I agree there are far better ways to solve the problem.
This is only true if Nvidia were to bake Maxwell SMs into Drake, which, given the leaks, does not appear to be the case.

If we're talking about Nintendo adding an extra chip to the motherboard to enable BC (which is the only way the XboxEra rumour makes any sense), then they'd have to include an entire Tegra X1, which is ridiculous.
 
The ongoing discussion about the stability of Nintendo's roster, especially in regards to third-party characters, is of genuine interest. It stands to reason that license agreements can make it difficult to consistently use the same third-party characters. Replacing some characters with new ones could be a novel approach. On a slightly different note, I'm writing a paper on the evolution of game characters and their impact on player engagement. I'm thinking of enlisting the services of a consulting company to help polish the work. What can you say about the coding homework help service? Has anyone used similar reliable platforms for writing papers on similar topics?

Mod edit: Removed the spam link.
 
The ongoing discussion about the stability of Nintendo's roster, especially in regards to third-party characters, is of genuine interest. It stands to reason that license agreements can make it difficult to consistently use the same third-party characters. Replacing some characters with new ones could be a novel approach. On a slightly different note, I'm writing a paper on the evolution of game characters and their impact on player engagement. I'm thinking of enlisting the services of a consulting company to help polish the work. What can you say about the coding homework help service? Has anyone used similar reliable platforms for writing papers on similar topics?
Is this a spam bot? Joined 14 minutes ago and this post sounds like something that came out of AI.
 
Is this a spam bot? Joined 14 minutes ago and this post sounds like something that came out of AI.
I understand why you might think that, but I'm not a bot. I just recently registered on this forum and thought I'd share my opinion. I apologize if it sounded strange. Maybe I just spent too much time reading scientific articles and my expression became a bit formal. 😅
 
Not separately from the shader cores. The whole GPU gets faster; you can't do it piecemeal.

That's a shame. It would be cool if you could apply power to different cores as needed, like how the Switch can underclock the GPU to overclock the CPU and improve loading.
 
Man, I hope, for both me and Nintendo, that this 3D NAND stuff, if it's indeed used for ReDraketed game carts, is going to bring the cost of higher-capacity carts down.

But dunno, it somehow sounds expensive to me. ^^
 
Man, I hope, for both me and Nintendo, that this 3D NAND stuff, if it's indeed used for ReDraketed game carts, is going to bring the cost of higher-capacity carts down.

But dunno, it somehow sounds expensive to me. ^^
As I remember, that type of cartridge is cheaper than the current cartridges.
 
Man, I hope, for both me and Nintendo, that this 3D NAND stuff, if it's indeed used for ReDraketed game carts, is going to bring the cost of higher-capacity carts down.

But dunno, it somehow sounds expensive to me. ^^
Would like to know more about how 3D NAND scales.

If there will be new pricing models, such as a fast 16 GB card costing the same as a 32 GB card with half the speed, for example.
 
As I remember, that type of cartridge is cheaper than the current cartridges.

Well, I hope you remember right, then!

I don't think Nintendo can get very far with 16 GB/32 GB; maybe for their own games, but not if they want more 3rd-party ports.

Would like to know more about how 3D NAND scales.

If there will be new pricing models, such as a fast 16 GB card costing the same as a 32 GB card with half the speed, for example.

Honestly, I could see offering two "speed" options backfiring hard.

And really, such sizes should only be offered to indie devs and pubs like Limited Run. Especially if the quote above ends up being correct and this new type of storage is cheaper.

Nintendo shouldn't give third parties more chances to skimp on cart size and have insane required downloads again. 'Cause built-in storage size is one area where I can see Nintendo cheaping out.
Not on the speed, mind you, but the size.

It's such an "easy" area to cheap out on...
 
All Nintendo home consoles and handhelds this century have had backwards compatibility, except the Switch, because it was a radical shift and BC wasn't possible. All the current competitors in the market have BC. The hardware will be a direct successor to the Switch, with a similar form factor, using a chip from the same family from the same company.

These are strong reasons to believe that the Switch 2 will have BC. But, you know what? It doesn't even matter, because there is a much more important reason: Nintendo has signalled many times that the next hardware will be BC. Sure, not in those words or with a direct confirmation, but they've been leaving hints one after another since 2015 when explaining their new focus, the NSO, the transition, etc.

So, speculating about the technical and practical aspects of BC is a very interesting discussion. But I don't understand why there are SO many doubts about its existence when not a single thing points in the opposite direction. Maybe someone could make an elaborate argument, but what I've seen until now is just pure "because Nintendo" nonsense.
It mostly stems from people thinking that Nintendo will want to sell the same games twice both on Switch 1 and Switch 2.
 
Man, I hope, for both me and Nintendo, that this 3D NAND stuff, if it's indeed used for ReDraketed game carts, is going to bring the cost of higher-capacity carts down.

But dunno, it somehow sounds expensive to me. ^^
So the way most semiconductor production works is that you have a base silicon wafer (usually 300 mm, about 12 inches, in diameter), which undergoes a whole bunch of sequential steps that form layers of different materials in different patterns, and then square/rectangular dies are cut from the wafer and packaged into the final product.

You can only get X number of usable dies from any wafer, since they are square regions on a circular wafer. And because each wafer undergoes a whole bunch of processes which all take time, equipment, and precursor materials, the number of dies you can get from a single wafer is the single most important factor in determining the economics of the final chip cost.

So let's say for current 16 GB game cards we have a 300 mm wafer, and we can get 8 dies from each wafer. I'm just pulling this number out of my ass; I'm sure in reality it's different. When switching to 3D NAND, you still start with the same 300 mm wafer, but since you can now add 48+ layers to increase your storage, the amount of die space needed is drastically smaller. Again, out of my ass, let's say you can now get 32 dies per wafer when you use 48 layers, getting even more storage on each die due to the increased total volume, despite the lower area.

So that's how this tech winds up being much, much cheaper. For 1,000,000 wafers you wind up with 32,000,000 dies, versus the 8,000,000 dies you'd have had previously. That means you need to buy 4x fewer wafers and use your time, equipment, and precursor materials ~4x less than before.
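A minimal sketch of that cost logic, reusing the made-up die counts (8 and 32) with a normalized wafer cost, since none of these are real figures:
C++:
// Toy model: if a processed wafer costs roughly the same either way,
// cost per die scales inversely with dies per wafer. 8 and 32 are the
// made-up die counts from above, not real data.
#include <cstdio>

int main() {
    const double waferCost = 1.0;  // normalized cost of one processed wafer
    const int diesPlanar = 8;      // hypothetical planar NAND dies/wafer
    const int dies3D = 32;         // hypothetical 48-layer 3D NAND dies/wafer
    std::printf("planar:  %.3f per die\n", waferCost / diesPlanar);  // 0.125
    std::printf("3D NAND: %.3f per die\n", waferCost / dies3D);      // 0.031, ~4x cheaper
    return 0;
}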
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.