• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I wonder if with this piece of news TOTK might be the last Nintendo AAA title under the $69.99 price tag:

TotK is the same €69 as BotW in Europe and just a bit higher in JP (¥7678 -> ¥7900), but still below Xenoblade 2/3 (¥8700).

Then we have Metroid Prime for $40, Switch Sports for $50 and Pikmin will be $60.

IMO, instead of bumping the standard price to $70, they're getting rid of the "standard price" for games in America and pricing their games on a case-by-case basis like they already do in the EU and JP. So I'm expecting more $70 games ahead, but also more $40 and $50 games.
 
@Thraktor Speaking of SD cards, I'm thinking of buying a surveillance-camera-grade SD card for my Switch, because I had bad experiences with multiple regular ones in the past and I'm worried that mine may get corrupted too quickly.

Are those surveillance grade ones any better?
 
T - 9 Days to go
By the way:



Guess 2H isn't gonna be barren. We'll definitely get a June Direct announcing all software. I presume they're focusing on TOTK for now, to then unveil the rest of the 2023 lineup later.

But whether or not this next Direct will have any ties to the successor, who knows

That they might be porting the Lara Croft: Guardian of Light spin-offs instead of Tomb Raider, Rise of the Tomb Raider (which has an X360 version), and Shadow is a mind-boggling decision. Sigh..... I'd double-dip on some portable TR goodness. These games, the Batman Arkham trilogy/tetralogy, and the FF XIII saga are my top 3 picks that I want ported to Switch.
I think that depends on how well Tears of the Kingdom sells, because unlike Sony and Microsoft games, Nintendo games very rarely have price cuts.

Anyway, speaking of the type of RAM, I think the absolute best case scenario is LPDDR5X-7500, which is 120 GB/s of bandwidth. And the worst case scenario is LPDDR5-6400, which is 102.4 GB/s of bandwidth. (I think Nintendo's probably going to choose the worst case scenario.)
Even if they do take the worst case scenario, that's still ~4x the bandwidth of the current Switch. Even 88 GB/s (5500 MT/s) is still an option, but they need to mitigate the bandwidth issue as much as possible, as that's one of the biggest bottlenecks of the current Switch.
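For anyone who wants to sanity-check those figures, peak bandwidth is just transfer rate times bus width. A minimal sketch, assuming a 128-bit bus (the bus width is my assumption, not a confirmed spec):

```python
# Peak LPDDR bandwidth = transfers per second * bytes per transfer.
# The 128-bit bus width is an assumption, not a confirmed Drake spec.
def bandwidth_gb_s(transfer_rate_mt_s: int, bus_width_bits: int = 128) -> float:
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

for rate in (5500, 6400, 7500):
    print(f"LPDDR5(X)-{rate}: {bandwidth_gb_s(rate):5.1f} GB/s")
# LPDDR5(X)-5500:  88.0 GB/s
# LPDDR5(X)-6400: 102.4 GB/s
# LPDDR5(X)-7500: 120.0 GB/s
```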
 
To be extremely clear:

128GB or 256GB is a decision I think NINTENDO will make. I don't think they'll ever offer storage variations that aren't separate, visually distinct redesigns. Simplicity of production, simplicity of marketing. And avoiding anything akin to the Wii U.
I think they'll release a single 64 or 128GB unit first to test the waters for demand, and then a 256GB one. Maybe the second one could be a mid-generation update (like the v2 Switch was).
 
there's still 88GB/s as an option. but bandwidth is something they will want to mitigate as much as possible
I think that depends on how well Tears of the Kingdom sells, because unlike Sony and Microsoft games, Nintendo games very rarely have price cuts.

Anyway, speaking of the type of RAM, I think the absolute best case scenario is LPDDR5X-7500, which is 120 GB/s of bandwidth. And the worst case scenario is LPDDR5-6400, which is 102.4 GB/s of bandwidth. (I think Nintendo's probably going to choose the worst case scenario.)
I'm making peace with either scenario. I was initially hoping for LPDDR5X, but I just keep telling myself that the PS4 had 7x the bandwidth of the Switch, and the Switch still got miracle ports. Even at 88 GB/s, that gap narrows to the PS5 having 5x the bandwidth of Switch 2. So miracle ports could still happen (maybe even more often), as far as bandwidth is concerned. Finally. Inner peace. 🙏😇
 
........... No?

To render the upscaled textures, you still need to load them into RAM.

The entire benefit of this would be as a form of lossy compression that had minimal loss.
You’re inserting what you want to believe I’m referring to, what you are referring to isn’t what I’m referring to. It’s a different concept.
 
You’re inserting what you want to believe I’m referring to, what you are referring to isn’t what I’m referring to. It’s a different concept.

What you are referring to seems to be a completely incoherent thing. You cannot upscale an asset while not increasing its memory footprint.

Using machine learning to more easily reduce the memory footprint of assets while minimizing the amount of quality lost is very possible, but this is all production side stuff.

The only point of running ML algorithms on native hardware is if you need extremely low latency and need to do something dynamically. Almost everything related to ML should be production-side, with most other applications done cloud-side.
 
I wonder if with this piece of news TOTK might be the last Nintendo AAA title under the $69.99 price tag:

No, unless you mean they'll just skip straight to $79.99. Maintaining at $60 forever as the value of the dollar continually gets lower is just impossible.
You’re inserting what you want to believe I’m referring to, what you are referring to isn’t what I’m referring to. It’s a different concept.
What you are referring to seems to be a completely incoherent thing. You cannot upscale an asset while not increasing its memory footprint.
I understand the kind of thing ReddDread means. I remember when it was kind of a big deal that the GameCube added a form of texture compression that only decompressed at the final step, which made for more efficient use of RAM than the PS2. But if something like that were at all feasible in real time with Redacted's hardware, I'd have absolutely expected it to already be happening on the last few years of NVIDIA PC GPUs.
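For a rough sense of why that mattered: block compression in the S3TC/DXT1 family stores each 4x4 texel block in 8 bytes, and the GPU samples the texture while it's still compressed, so the saving holds in RAM and not just on disc. A back-of-envelope sketch (DXT1 numbers; GameCube's CMPR format was essentially this):

```python
# DXT1/S3TC: 8 bytes per 4x4 block = 0.5 bytes/texel, vs 4 bytes/texel
# for raw RGBA8. Decompression happens at sampling time, so the RAM
# footprint stays at the compressed size.
def texture_mib(width: int, height: int, bytes_per_texel: float) -> float:
    return width * height * bytes_per_texel / 2**20

w, h = 1024, 1024
print(f"raw RGBA8: {texture_mib(w, h, 4):.1f} MiB")    # 4.0 MiB
print(f"DXT1:      {texture_mib(w, h, 0.5):.1f} MiB")  # 0.5 MiB (8:1)
```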
 
No, unless you mean they'll just skip straight to $79.99. Maintaining at $60 forever as the value of the dollar continually gets lower is just impossible.


I understand the kind of thing ReddDread means. I remember when it was kind of a big deal that the GameCube added a form of texture compression that only decompressed at the final step, which made for more efficient use of RAM than the PS2. But if something like that were at all feasible in real time with Redacted's hardware, I'd have absolutely expected it to already be happening on the last few years of NVIDIA PC GPUs.
Microsoft is actually doing research on it. One of their first party studios is working on something akin to what I’m referring to.




The only thing is that the HQ textures aren't stored in memory; the system has to redo the upscale repeatedly to make the texture appear high quality. But just like how DLSS isn't perfect and has faults that can be observed, a "DLSS" for textures would too: if you pause the frame, you can make out what the low-res texture is.
 
Breath of the Wild also uses physically based rendering. If you rewatch that clip, what's astounding isn't "PBR on Switch", it's "PBR with Metroid Prime's level of material complexity at 60fps".

"Classic" rendering starts with a texture that has been carefully created by the artist to look good, with shadows and detail baked in. Then you might add various passes to improve the realism of the way the light hits the object - the clip you refer to mentions a "roughness map". This is a greyscale image that says how much light scatters when it hits it. A smooth surface will reflect a lot of light and look "glossy", a rough surface will scatter light and look more "matte." You can add more passes too, like a bump map, that helps create shadows on little details of an object, without adding more geometry.

PBR works in a similar way. You start with a texture, and you use additional maps to tell the lighting engine how light might interact with that surface. But unlike classical rendering, instead of starting with a texture that is artistically lit to look good, you use a flat, bland image. And instead of using a small number of passes to optionally add semi-realistic light to that texture, you use a large number of mandatory passes to build up to a final look.

Because PBR standardizes these passes, and they're based on the physical properties of light rather than retrofitting light on top of a pre-lit texture, all of these passes interact coherently with each other. In classical rendering, your normal map could create shadows that point in the opposite direction of the shadows baked into your original texture, or your specular lighting pass might result in an object that reflects more light than actually hits it.

PBR does require a lot of passes though (the standard is 10). By contrast, the original Metroid Prime was a 2 pass renderer. Not only does this take more time, it's a lot of data that needs to be touched - you can think of each map as an additional greyscale copy of the original texture. The Switch is particularly bandwidth limited, and this is what makes PBR so impressive on Metroid Prime.
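To put a very rough number on "a lot of data": every extra map is another texture the shader has to sample. A toy sketch using the pass counts above, but with made-up sizes, single-channel maps, and no compression (all simplifying assumptions):

```python
# Toy comparison of per-material texture data. Resolution, channel count,
# and the lack of block compression are illustrative assumptions only.
def material_mib(width: int, height: int, num_maps: int,
                 bytes_per_texel: float = 1.0) -> float:
    return width * height * num_maps * bytes_per_texel / 2**20

print(f"2-pass classic material: {material_mib(1024, 1024, 2):.1f} MiB")   # 2.0 MiB
print(f"10-map PBR material:     {material_mib(1024, 1024, 10):.1f} MiB")  # 10.0 MiB
```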
 
I wonder if with this piece of news TOTK might be the last Nintendo AAA title under the $69.99 price tag:


I know that at least in AU, Nintendo titles were typically landing at $69 and first party PS5 titles would be $109. I’m not sure why it’s ended up like that but it’s put me off buying games / a PS5.

Even Tears of the Kingdom is only hitting $75-79 here. I won’t be happy if all Nintendo games hit that mark, but it’s a far cry from what’s happening on PlayStation.
 
Microsoft is actually doing research on it. One of their first party studios is working on something akin to what I’m referring to.




The only thing is that the HQ textures aren't stored in memory; the system has to redo the upscale repeatedly to make the texture appear high quality. But just like how DLSS isn't perfect and has faults that can be observed, a "DLSS" for textures would too: if you pause the frame, you can make out what the low-res texture is.

Even in these hype interviews they literally just talk about it as a form of compression instead of a RAM-saving breakthrough, because the textures have to eventually end up in RAM to be visible to the player.

It's possible to massively lower the size of assets while minimizing the quality loss as a way to save RAM, but (again) this would be a production step and there would be no point whatsoever to do this in real time.

An application for running ML natively would be something like a Hey You Pikachu 2, if Nintendo didn't want to use a cloud solution because the native hardware is good enough (which RTX 2050 and above hardware is at this point).
 
It's possible to massively lower the size of assets while minimizing the quality loss as a way to save RAM, but (again) this would be a production step and there would be no point whatsoever to do this in real time.
if you're doing this in production and not in real-time, then you're not actually reaping benefits. in fact, AI upscaling is pointless here. just author your textures in a higher resolution. the benefit for upscaling in real-time is to save storage space. instead of saving a 4MB texture file, you can instead save a 1MB texture file, use real-time upscaling, and mimic the quality of the 4MB file. it'll consume more ram space, but you save on storage space
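to put rough numbers on that tradeoff (illustrative figures only):

```python
# storage vs RAM tradeoff of real-time texture upscaling (made-up numbers)
stored_mb = 1.0   # low-res texture shipped on the card
scale = 2         # upscale 2x in each dimension at runtime
in_ram_mb = stored_mb * scale * scale  # texel count quadruples

print(f"on storage: {stored_mb:.0f} MB, in RAM after upscale: {in_ram_mb:.0f} MB")
# on storage: 1 MB, in RAM after upscale: 4 MB
```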
 
I wonder if with this piece of news TOTK might be the last Nintendo AAA title under the $69.99 price tag:

To be honest, both Sony and MS have been duking it out on game prices at least since the PS3/360 days, when they shared the majority of 3rd-party games, opting for early/deep discounts as a means to outdo the other. Gamers were trained on this: if they wanted a game on the cheap, they didn't have to wait long. Now it's coming back to bite them in the butt when they feel they can't discount their stuff as soon.
 
if you're doing this in production and not in real-time, then you're not actually reaping benefits. in fact, AI upscaling is pointless here. just author your textures in a higher resolution. the benefit for upscaling in real-time is to save storage space. instead of saving a 4MB texture file, you can instead save a 1MB texture file, use real-time upscaling, and mimic the quality of the 4MB file. it'll consume more ram space, but you save on storage space

I'm talking about another potential application of ML instead of what we were talking about earlier.

But this compression tech is functionally just way too computationally expensive for devs to care about right now.
 
Even in these hype interviews they literally just talk about it as a form of compression instead of a RAM-saving breakthrough, because the textures have to eventually end up in RAM to be visible to the player.

It's possible to massively lower the size of assets while minimizing the quality loss as a way to save RAM, but (again) this would be a production step and there would be no point whatsoever to do this in real time.

An application for running ML natively would be something like a Hey You Pikachu 2, if Nintendo didn't want to use a cloud solution because the native hardware is good enough (which RTX 2050 and above hardware is at this point).

OK, I'm going to be clear with you, and this is probably the only time I'm going to be clear with you in the most direct format possible. This is all speculation. None of this means it's going to happen; the point is to theorycraft and come up with ideas. I simply stated that a feature which does this in real time would be beneficial to a low-power system.

It doesn't end up in RAM as a higher-quality texture to be visible to the player, because the GPU is the one that displays it. The GPU is the one doing the upscaling in real time. If it goes back into RAM as a higher-quality texture, that is not what I'm talking about, and I beg you not to read this as me saying it is decompression, because it is not. This is very clear; you are looking for a way to make this more complicated than it needs to be.


And in any case, I'm not even sure why you're taking this as if I'm saying it's gonna happen. The original question I was asked was about how this would be beneficial for a low-power system beyond file size reduction, and someone asked me "would it?". I responded with how it would be beneficial to a low-power/low-spec system… and you somehow have an issue with it for X, Y, Z reasons that have nothing to do with what JoshuaJStone inquired about.

Which is, again, “this would have more benefits than simply lowering file size”

Why? Because the low-res texture remains low-res in memory; the GPU and the algorithm are upscaling it in real time to appear as though it is a higher-resolution texture, when it is still a low-res texture. Just like how DLSS Super Resolution is not actually native 4K, it is a lower resolution being upsampled to 4K; that does not mean that it is 4K. DLSS to 4K ends up taking less VRAM space than doing native 4K.
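To put rough numbers on the DLSS VRAM point (assuming 4 bytes per pixel and a single render target; real pipelines keep several targets, and DLSS still writes a full-resolution output buffer, so the savings are on the intermediate buffers):

```python
# Render-target size scales with pixel count, so rendering internally at
# 1080p and upscaling to 4K keeps the intermediate buffers small.
# 4 bytes/pixel and a single buffer are simplifying assumptions.
def buffer_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    return width * height * bytes_per_pixel / 2**20

print(f"native 4K target: {buffer_mib(3840, 2160):.1f} MiB")  # 31.6 MiB
print(f"1080p internal (DLSS Performance at 4K): {buffer_mib(1920, 1080):.1f} MiB")  # 7.9 MiB
```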


And I will repeat myself once again, because I feel like somehow you're going to misunderstand what I'm saying and assume I am saying that this is what is going to happen: this is all speculative discussion. It is speculative discussion about something I was asked about. The idea is not decompression; the idea is that the GPU, using its tensor cores, will upscale the texture in real time, using an algorithm, to appear as though it is a higher-resolution texture. That does not mean that it is a high-res texture. It is still a low-res texture internally; it is being displayed to the user as what they perceive as a higher-res texture. The eye is perceiving it as a higher-resolution texture.


This means that DLSS IS NOT USED IN THESE SCENARIOS.


This means using the Tensor cores for a different purpose
 
OK, I'm going to be clear with you, and this is probably the only time I'm going to be clear with you in the most direct format possible. This is all speculation. None of this means it's going to happen; the point is to theorycraft and come up with ideas. I simply stated that a feature which does this in real time would be beneficial to a low-power system.

It doesn't end up in RAM as a higher-quality texture to be visible to the player, because the GPU is the one that displays it. The GPU is the one doing the upscaling in real time. If it goes back into RAM as a higher-quality texture, that is not what I'm talking about, and I beg you not to read this as me saying it is decompression, because it is not. This is very clear; you are looking for a way to make this more complicated than it needs to be.


And in any case, I'm not even sure why you're taking this as if I'm saying it's gonna happen. The original question I was asked was about how this would be beneficial for a low-power system beyond file size reduction, and someone asked me "would it?". I responded with how it would be beneficial to a low-power/low-spec system… and you somehow have an issue with it for X, Y, Z reasons that have nothing to do with what JoshuaJStone inquired about.

Which is, again, “this would have more benefits than simply lowering file size”

Why? Because the low-res texture remains low-res in memory; the GPU and the algorithm are upscaling it in real time to appear as though it is a higher-resolution texture, when it is still a low-res texture. Just like how DLSS Super Resolution is not actually native 4K, it is a lower resolution being upsampled to 4K; that does not mean that it is 4K. DLSS to 4K ends up taking less VRAM space than doing native 4K.


And I will repeat myself once again, because I feel like somehow you're going to misunderstand what I'm saying and assume I am saying that this is what is going to happen: this is all speculative discussion. It is speculative discussion about something I was asked about. The idea is not decompression; the idea is that the GPU, using its tensor cores, will upscale the texture in real time, using an algorithm, to appear as though it is a higher-resolution texture. That does not mean that it is a high-res texture. It is still a low-res texture internally; it is being displayed to the user as what they perceive as a higher-res texture. The eye is perceiving it as a higher-resolution texture.


This means that DLSS IS NOT USED IN THESE SCENARIOS.

Yeah, I have no idea why you would need to do any of this in real time instead of production-side.

What mechanism would cause this to need to be done in real time (with hyper-low latency) instead of production-side?
 
Yeah, I have no idea why you would need to do any of this in real time instead of production-side.

What mechanism would cause this to need to be done in real time (with hyper-low latency) instead of production-side?
bruh. I posted the very pdf that explains why you would. it outlines the entire theory behind it and its practical use case

this whole time you made it clear you haven't clicked a damn thing I posted and instead you'd rather be stupid
 
bruh. I posted the very pdf that explains why you would. it outlines the entire theory behind it and its practical use case

this whole time you made it clear you haven't clicked a damn thing I posted and instead you'd rather be stupid

I read the entire presentation and it was just about reducing the file size instead of saving RAM.

You are malding so badly you seem to be unable to read Redd's (very odd) posts.
 
No, I am reading pretty clearly; you just seem extremely confused and don't want to back down from a misunderstanding, while Feet just dislikes me and wants to argue because I think it's really unlikely the Switch 2 launches this year.
Are you actually trying to gaslight me into believing I'm "confused" when you can't read what I said more than once, "reducing footprint for low-spec/low-power devices", and don't understand a single iota of it?
 
I'll read the literature later if I'm wrong but I don't want to spoil the answer

isn't it storage?
 
Are you actually trying to gaslight me into believing I'm "confused" when you can't read what I said more than once, "reducing footprint for low-spec/low-power devices", and don't understand a single iota of it?

I understand what you're trying to say.

I'm saying you're fractally wrong and this is not how any of this works.

You can make a lower quality asset look like a higher quality asset through optimization.

There is no benefit whatsoever to doing this in real time.

Up-resing assets in real time could be very useful for minimal-loss lossy compression, but I don't think this is relevant to devs, as devs do not care at all about file size. They will only do it when it's super computationally cheap, and it looks (based on my look into up-resing HD textures in general) computationally expensive.
 
I know we keep discussing the possibilities of the production node Drake could land on, but ever since Lovelace launched we have been hearing stories of oversaturation of both Ampere and now Lovelace. With Nvidia's pre-purchase of capacity on TSMC's 4N and the predicament they find themselves in, it makes sense to manufacture a product that can bring in revenue by being sold to Nintendo for the Switch 2.

 

it's not impossible for switch to have RT, but given how slow Unity is with adding stuff (like TAA for URP), I doubt they already implemented a software RT function. and why add it for NVN before PC? NVN2 will support hardware RT so just save it for that

How many Nintendo devs care that much about file size, though, or need to, to the point of implementing bespoke and expensive processes?

Like, is it vital for more than 3 teams?

Splatoon 3 and Mario Odyssey clock in at less than 6 GB; sequels with 4K assets will not be breaking the fridge.
you keep saying "bespoke" and "expensive" processes but you've yet to prove the latter. being bespoke is irrelevant since it can be used across all of their teams. neural network inference can and has been added to many things
 
How many Nintendo devs care that much about file size, though, or need to, to the point of implementing bespoke and expensive processes?

Like, is it vital for more than 3 teams?

Splatoon 3 and Mario Odyssey clock in at less than 6 GB; sequels with 4K assets will not be breaking the fridge.
Who knows, isn't that the point of a speculation thread? Maybe Miyamoto really wants their games to stay under 20GB.
 
Although true, Valve seems to be using LPDDR5-6400 modules for the Steam Deck (here and here) and reducing the RAM frequency from 3200 MHz to 2750 MHz to have 88 GB/s of bandwidth.
I've recently got a scalding hot take on this, that I don't think I've seen yet on the internet:
I propose the possibility that at the time of AMD working on Van Gogh, their memory controller wasn't good enough to guarantee beyond 5500 MT/s yet.

The reason that this possibility crossed my mind is that we can see right now in desktop space that AMD is behind Intel in the memory controller game (both OC and not OC, but more noticeable in OC space). Sticking with official support, Intel's currently up to DDR5-5600 MT/s at best for Raptor Lake, while AMD's at 5200 MT/s for desktop Zen 4. Yes, that's DDR5 in particular, but if AMD is behind Intel in one type of memory, I'm inclined to think that maybe it's also the case in other types.
As for LPDDR5, Intel themselves in 2022 with mobile Alder Lake only officially supported up to 5200 MT/s. It wasn't until mobile Raptor Lake this year that Intel started backing LPDDR5-6400 MT/s.
Mobile Zen 4 does go up to DDR5-5600/LPDDR5X-7500 MT/s (bizarrely, AMD doesn't list the supported rate for regular LPDDR5), but those would be this year's products; more than a year later than Steam Deck's Van Gogh.

Corollary to this is: Nvidia is flexing their memory controller superiority with Orin achieving LPDDR5-6400 MT/s last year and Grace running LPDDR5X-8533 MT/s (which should've been shipping this half of the year, but got delayed to next half for undisclosed reasons).

---

CFexpress power draw is... holy shit, it's relatively straightforward, kinda. A few results from casually googling "cfexpress power consumption"...
Delkin Devices lists 2.5 to 3 watts for read. Exascend lists power consumption for their Essential series as "Active < 4.5W". And for one of Innodisk's cards, I see a max of 3.3 watts. So eh, one can probably say 'a few watts' for CFexpress.
...actually, it kinda makes sense, huh? As far as the sequential read speed:watt ratio goes, they're kinda in the same ballpark as random PCIe Gen 3 NVMe drives.
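Putting made-up-but-typical spec-sheet numbers on that ballpark claim (none of these figures are measurements; both rows are assumptions for illustration):

```python
# Sequential-read MB/s per watt, using assumed spec-sheet-ish figures.
devices = {
    "CFexpress card (assumed)": (1700, 3.0),  # MB/s read, watts
    "PCIe Gen3 NVMe (assumed)": (3500, 6.0),
}
for name, (mb_s, watts) in devices.items():
    print(f"{name}: {mb_s / watts:.0f} MB/s per watt")
# ~570 vs ~580 MB/s per watt -- same ballpark
```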
 
I wonder if with this piece of news TOTK might be the last Nintendo AAA title under the $69.99 price tag:


I don't think this is a particularly well-done analysis. There are multiple factors as to why software sales are down, like the overall state of the economy, for example. For Modern Warfare 2 specifically, it looks like the reception to the game hasn't been as strong as they hoped, and players are dropping the game faster than with previous COD titles.
 
I don't think this is a particularly well-done analysis. There are multiple factors as to why software sales are down, like the overall state of the economy, for example. For Modern Warfare 2 specifically, it looks like the reception to the game hasn't been as strong as they hoped, and players are dropping the game faster than with previous COD titles.
Yeah software sales have slipped across the board. We saw it with Nintendo during this FY as well.
 
Yeah software sales have slipped across the board. We saw it with Nintendo during this FY as well.

[Image: Sony chart suggesting the $69.99 price tag on PS5 games has led to fewer unit sales]


This is Sony's own graph, which they used in the article as proof that $70 is resulting in fewer game sales. Just looking at this alone, you can easily pinpoint the main reason why game sales were up in 2020 and have been dropping since. I'm sure a $70 price tag plays some role, but it's definitely not the first major factor I would suspect.
 
PBR does require a lot of passes though (the standard is 10). By contrast, the original Metroid Prime was a 2 pass renderer. Not only does this take more time, it's a lot of data that needs to be touched - you can think of each map as an additional greyscale copy of the original texture. The Switch is particularly bandwidth limited, and this is what makes PBR so impressive on Metroid Prime.
Helps with bandwidth that the Remaster uses the same trick the original game does of only rendering two rooms at a time.
 
Helps with bandwidth that the Remaster uses the same trick the original game does of only rendering two rooms at a time.
Oh absolutely, but the sheer amount of detail in those rooms is actually insane. I know a lot of the lighting in the Metroid Prime remaster is pre-baked, which certainly helps, but the texture and material work is nuts when you consider that it's a Switch game that runs at 60 FPS.

Maybe it's a hot take but I'm willing to say that what they've done with the remaster on Switch is more impressive than what the original achieved on the GameCube. The GameCube was one of the stronger systems of its time, and wasn't working with a mobile chipset and power budget. I'd expect a game made for the system from the ground up to look and run like Prime (although I'd argue that part of Prime's visual appeal comes from the art and the attention to detail, even if it's also technically impressive to boot.) I don't expect a game for Switch - let alone a 60 FPS one - to look like Metroid Prime Remastered does.
 
$70 games being priced that way may be part of the reason why there have been fewer sales but, at the same time, they're probably making more money than before, considering the decline in units isn't as big as the growth in how much they're making from each unit now.

It really just depends on the game. Something like Zelda or God of War doesn't have to worry about this too much.
 
Anyway, speaking of the type of RAM, I think the absolute best case scenario is LPDDR5X-7500, which is 120 GB/s of bandwidth. And the worst case scenario is LPDDR5-6400, which is 102.4 GB/s of bandwidth. (I think Nintendo's probably going to choose the worst case scenario.)
This is more a question for Team2023 (but Team2024 can of course join in) ….
Would LPDDR5X-7500 RAM make a release in late 2024 worth the wait?
 
This is more a question for Team2023 (but Team2024 can of course join in) ….
Would LPDDR5X-7500 RAM make a release in late 2024 worth the wait?
I don't think it's gonna be transformative. And a delay just to add that wouldn't be worth the design costs, IMO
 
it's not impossible for switch to have RT, but given how slow Unity is with adding stuff (like TAA for URP), I doubt they already implemented a software RT function. and why add it for NVN before PC? NVN2 will support hardware RT so just save it for that


you keep saying "bespoke" and "expensive" processes but you've yet to prove the latter. being bespoke is irrelevant since it can be used across all of their teams. neural network inference can and has been added to many things


Oh my god. Nintendo Switch ray tracing.


As this is hardware speculation, I must point out that the next Switch will have hardware RT, and I wonder if this isn't a slip-up from Unity, where they mean the "Nintendo Switch platform", which includes a new, RT-capable unit. Or if it's giving developers the tools to prepare a game for Switch to take advantage of Switch 2 features.

Who knows.
 
Why don't you guys put each other on ignore if you can't stand each other?
@ItWasMeantToBe19
@ReddDreadtheLead
@ILikeFeet

I went back to lurking as my humour wasn't wanted here (according to Mods anyway) but the bickering is something that isn't warranted when reading through this thread. People don't need to be at each other over something relatively trivial like this. People have opinions, they have their speculations (some more wrong than others, #team2023 mwahuahahaha) but they don't need to dig their heels in over this stuff.

Going through these, nothing is changing; the conversations go around in circles and don't add much overall to this thread, with the back and forth repeating the same points.

Keep it light when discussing things; it's not like Nintendo is watching a debate team and deciding that whoever "wins" determines exactly what the Switch 2 is going to be and when it's going to release.

Oh and speculation is speculation, it's kind of anything goes in terms of speculation on future Nintendo hardware. Nothing is off limits in terms of what can be speculated even if it's the prospect of a device that will project holographic displays for your gaming.

...It won't happen, yet, but it can be speculated. It's in the OP.
 
Who knows, isn't that the point of a speculation thread? Maybe Miyamoto really wants their games to stay under 20GB.
There is IMMENSE financial incentive for Nintendo to keep their games under the 16GB mark.

Game Card limitations, so they can opt for something cheaper than the 32GB Game Card; internal storage limitations, since most users can only be assumed to have 25GB to spare; and network limitations, so they can ease pressure on their own network during major launches.
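A sketch of the thresholds involved (the card sizes are the capacities Switch Game Cards actually come in; the 25GB free-space figure is the assumption above):

```python
# Which Game Card a build needs, and whether it fits the assumed 25GB of
# free internal storage. Thresholds only; per-card costs are not modeled.
CARD_SIZES_GB = (1, 2, 4, 8, 16, 32)
FREE_STORAGE_GB = 25  # assumed typical free space on a user's system

def smallest_card(game_gb: float) -> int:
    return next(c for c in CARD_SIZES_GB if c >= game_gb)

for game_gb in (6, 14, 18, 30):
    print(f"{game_gb}GB game -> {smallest_card(game_gb)}GB card, "
          f"fits in free storage: {game_gb <= FREE_STORAGE_GB}")
# crossing 16GB forces the jump to the bigger 32GB card
```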
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

