• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

Rather than what Nintendo is saying when asked about the unannounced new console, which can often only be interpreted in hindsight except for statements such as "No new console, please enjoy and go suck on some ice while you're at it", I think that Nintendo's release schedule and marketing strategy, combined with game development cycles, are a much better indicator of what they have in store.
We currently have no visibility whatsoever after summer; this is indeed uncharted territory, and it tends to indicate that Nintendo is preparing something unusual, or they are incompetent and put themselves in this uncharted territory with no preparation. I trust Nintendo not to do that; they like money too much. I do not believe, for example, that Nintendo is holding back the 3D Mario marketing to give Zelda (or Animal Crossing) breathing space. I also do not believe that Nintendo doesn't have a 3D Mario nearing completion after 6 years and zero DLC besides Luigi Balloons. Thus, Nintendo probably has a showable 3D Mario yet keeps it under wraps. My opinion was, and still is nearly a year later, that it's because it's a launch title of the new console. It was also one of the reasons (among others) why I did not believe that Zelda would be a launch title.
Combined with the knowledge that Nvidia has a Nintendo-targeted chip at an advanced stage of development, and considering the fact that the Switch is 6+ years old with declining sales (which Nintendo probably forecasted; it's their job), it is probable in my opinion that Nintendo will release their new console within the next 8 to 18 months, alongside Mario. And since I like Gaussian curves (we all have our kinks, don't judge), I place the maximum probability at around March 2024. Late 2023 wouldn't surprise me too much. Neither would late 2024, though it would disappoint me because I'm ready for a new console after 2023.
 
And yes, I'm pretty damn sure. DVD sales were still high at the advent of HD televisions. Netflix's 1080p plan is still the most popular despite the fact that you can get a 4K television for as little as $200. Image quality just isn't a selling point to the wider audience.
And even in the gaming world, PS2 sales were still strong in the first two years of the PS3's life... not everyone cares about the latest and greatest technology or is willing to spend more to get it.
 
If you know about things and wanna talk, you can always PM me.

I wouldn't leak a thing. I'm "Hidden Content" in living human form.

But I would let everyone know that I now know things, but won't ever tell them. All while laughing evilly like a little imp from hell.

I'm also good at ignoring pleas directed at me; after all, I have three kids.
 
Rather than what Nintendo is saying when asked about the unannounced new console, which can often only be interpreted in hindsight except for statements such as "No new console, please enjoy and go suck on some ice while you're at it", I think that Nintendo's release schedule and marketing strategy, combined with game development cycles, are a much better indicator of what they have in store.
We currently have no visibility whatsoever after summer; this is indeed uncharted territory, and it tends to indicate that Nintendo is preparing something unusual, or they are incompetent and put themselves in this uncharted territory with no preparation. I trust Nintendo not to do that; they like money too much. I do not believe, for example, that Nintendo is holding back the 3D Mario marketing to give Zelda (or Animal Crossing) breathing space. I also do not believe that Nintendo doesn't have a 3D Mario nearing completion after 6 years and zero DLC besides Luigi Balloons. Thus, Nintendo probably has a showable 3D Mario yet keeps it under wraps. My opinion was, and still is nearly a year later, that it's because it's a launch title of the new console. It was also one of the reasons (among others) why I did not believe that Zelda would be a launch title.
Combined with the knowledge that Nvidia has a Nintendo-targeted chip at an advanced stage of development, and considering the fact that the Switch is 6+ years old with declining sales (which Nintendo probably forecasted; it's their job), it is probable in my opinion that Nintendo will release their new console within the next 8 to 18 months, alongside Mario. And since I like Gaussian curves (we all have our kinks, don't judge), I place the maximum probability at around March 2024. Late 2023 wouldn't surprise me too much. Neither would late 2024, though it would disappoint me because I'm ready for a new console after 2023.
I agree. My take is that the Direct not showing any games past July is a big deal.

And while they could still announce a Mario game for Q3 to piggyback on the movie, not having a game for the movie in the spring or summer does check another box: whatever Mario game they have is likely a Switch 2 or cross-gen title.
 
This feels like it was based around a vague comment and some patents
If Sony is doing a PS5 Pro, that would mean they feel confident there is a market for a $700 console. Sony has increased the sale price of the PS5 in many territories and is unlikely to have a price drop anywhere any time soon. I'm skeptical that a PS5 Pro is coming, but if it does, it's going to be expensive and will do modest numbers compared to the base models.

We currently have no visibility whatsoever after summer; this is indeed uncharted territory, and it tends to indicate that Nintendo is preparing something unusual, or they are incompetent and put themselves in this uncharted territory with no preparation.
Part of Nintendo's strategy with Switch was to combine their software development onto one platform to eliminate software droughts, and this has been successful for them since launch. Here we are in 2023: the Switch is six years old, and we have a blockbuster title launching in May and a solid title in July, but then the road map ends. If Nintendo is trying to convince everyone, including investors, that they are committed to the Switch as their platform for the foreseeable future, it seems like you would want a road map that extends beyond July.

It is a popular opinion that many early titles will be cross-gen for Switch Redacted, and I believe this will end up being the case. However, Nintendo will refrain from showing these titles until they can show them on the new hardware. When you see a commercial for a cross-gen PS5/PS4 title, the footage in the commercial will be the PS5 build. Who knows, though: maybe Zelda and Pikmin are the final push with the OG Switch, and when Switch Redacted launches it will have a new 3D Mario as an exclusive. Nintendo may feel that the OG Switch can continue to sell, probably at a reduced price, for years to come based on its existing library of games, and use their new first-party titles to persuade people to move to Switch Redacted.
 
Nintendo could say that the Switch is going to do well for years to come, and still launch a new console tomorrow; PR talk is meaningless for us enthusiasts.
That is very true. Release new hardware, drop the price of the original Switch, and count your money.
 
I mean, then why would Nintendo do anything except release a Switch 2 for $400 that is exactly the same as the Switch 1 but has another 4 GB of RAM, lol.

You kind of have to put together the argument of "here's why people care significantly more about improved lighting (though not close to RT level) than improved image quality on the Switch 2" to argue that Nintendo would intentionally refuse to use DLSS 2.xx while also creating a significantly stronger piece of hardware.
I had no idea @Raccoon had a sidekick.
 
While I know none of you can give me an official answer, how likely do you think it is that the next model will use microSD cards? I need to buy a new one after my SanDisk Switch-edition card perished, but I'm scared of buying a high-capacity one when a new Switch successor could be on the horizon.
We can't know for certain, but SD card compatibility is extremely likely.
 
While I know none of you can give me an official answer, how likely do you think it is that the next model will use microSD cards? I need to buy a new one after my SanDisk Switch-edition card perished, but I'm scared of buying a high-capacity one when a new Switch successor could be on the horizon.
Storage remains a significant question mark. Even if microSD remains supported, functionality may be limited.
 
Rather than what Nintendo is saying when asked about the unannounced new console, which can often only be interpreted in hindsight except for statements such as "No new console, please enjoy and go suck on some ice while you're at it", I think that Nintendo's release schedule and marketing strategy, combined with game development cycles, are a much better indicator of what they have in store.
We currently have no visibility whatsoever after summer; this is indeed uncharted territory, and it tends to indicate that Nintendo is preparing something unusual, or they are incompetent and put themselves in this uncharted territory with no preparation. I trust Nintendo not to do that; they like money too much. I do not believe, for example, that Nintendo is holding back the 3D Mario marketing to give Zelda (or Animal Crossing) breathing space. I also do not believe that Nintendo doesn't have a 3D Mario nearing completion after 6 years and zero DLC besides Luigi Balloons. Thus, Nintendo probably has a showable 3D Mario yet keeps it under wraps. My opinion was, and still is nearly a year later, that it's because it's a launch title of the new console. It was also one of the reasons (among others) why I did not believe that Zelda would be a launch title.
Combined with the knowledge that Nvidia has a Nintendo-targeted chip at an advanced stage of development, and considering the fact that the Switch is 6+ years old with declining sales (which Nintendo probably forecasted; it's their job), it is probable in my opinion that Nintendo will release their new console within the next 8 to 18 months, alongside Mario. And since I like Gaussian curves (we all have our kinks, don't judge), I place the maximum probability at around March 2024. Late 2023 wouldn't surprise me too much. Neither would late 2024, though it would disappoint me because I'm ready for a new console after 2023.
I think that "Gaussian curve" thing is a joke, right?

As for the release window: even 8 months from now seems pessimistic, and it remains that there is no evidence pointing to early 2024, but some evidence pointing to late this year. On the balance of probability, from what we know, the most likely window is the August to December period.
 
Knowing absolutely nothing about the potential implications of AI-implementation, I just wanted to ask if there's a chance to have a hardware-level auto-translate option to make imported games more accessible and the whole region-free nature of the Switch more useful?
 
In practice, what will happen is that the tensor cores will simply not be used at all in handheld mode as using DLSS to push the handheld from 480p to 720p is a massive waste.

These will be a docked only thing for like 99.99% of software released.
Even with a fairly conservative GPU speed guess it would probably take less than 1/8 of a 60fps frame or 1/16 of a 30fps frame to DLSS up to 720p. Not taking advantage of it seems more wasteful.
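For anyone who wants to put numbers on those fractions, here is a minimal sketch; the implied ~2 ms DLSS cost is just the claim above restated as arithmetic, not a measured figure for any real hardware.

```python
# Back-of-envelope check of the frame-budget claim above. Nothing here is a
# measured DLSS cost; it only converts the stated fractions into milliseconds.

def frame_time_ms(fps: float) -> float:
    """Length of one frame in milliseconds at a given frame rate."""
    return 1000.0 / fps

budget_60 = frame_time_ms(60) / 8    # "less than 1/8 of a 60fps frame"
budget_30 = frame_time_ms(30) / 16   # "1/16 of a 30fps frame"

print(f"1/8 of a 60fps frame:  {budget_60:.2f} ms")   # ~2.08 ms
print(f"1/16 of a 30fps frame: {budget_30:.2f} ms")   # ~2.08 ms
```

In other words, the claim is that upscaling to 720p would cost roughly 2 ms of frame time either way, which is why leaving the tensor cores idle in handheld mode looks like the bigger waste.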
 
I was thinking of smarter npcs, more security in games (cheaters and hackers) and maybe something related to automatic content generation (items in Animal Crossing?).

But maybe I'm getting confused about what the AI could do besides improving the appearance.
Knowing absolutely nothing about the potential implications of AI-implementation, I just wanted to ask if there's a chance to have a hardware-level auto-translate option to make imported games more accessible and the whole region-free nature of the Switch more useful?
this would be amazing
 
I was thinking of smarter npcs, more security in games (cheaters and hackers) and maybe something related to automatic content generation (items in Animal Crossing?).

But maybe I'm getting confused about what the AI could do besides improving the appearance.

this would be amazing
I'm genuinely terrified by the prospect of Animal Crossing neighbours all given individual chatbot AI personalities 😅 The amount of shame they'd lay on you for disappearing for months at a time, feeling so lonely...I couldn't do it, no thank you.
 
I'm genuinely terrified by the prospect of Animal Crossing neighbours all given individual chatbot AI personalities 😅 The amount of shame they'd lay on you for disappearing for months at a time, feeling so lonely...I couldn't do it, no thank you.
they might even plot against you to ban you from the island or worse.... 👀
 
I was thinking of smarter npcs, more security in games (cheaters and hackers) and maybe something related to automatic content generation (items in Animal Crossing?).

But maybe I'm getting confused about what the AI could do besides improving the appearance.

this would be amazing
Smarter NPCs aren't even a hardware limitation; you don't need AI for that.



Digital Foundry reviewed the Aya Neo 2. Turns out there's a limit on gains when you're power-starved. Who'd have thunk. But it's also running PC games, so it's not an absolute measure of performance at 15W. There is ray tracing testing and it's pretty impressive, though it's tested at 22W and the UE5 testing is pretty weird.

 
Digital Foundry reviewed the Aya Neo 2. Turns out there's a limit on gains when you're power-starved. Who'd have thunk. But it's also running PC games, so it's not an absolute measure of performance at 15W. There is ray tracing testing and it's pretty impressive, though it's tested at 22W and the UE5 testing is pretty weird.


Yeah, this was an interesting video. A couple of things that I teased out.
  • Steam Deck has some performance advantages because of software. Another hint at how much of the Windows/DirectX penalty these devices are paying that REDACTED won't.
  • Bandwidth limitations aren't just a Switch thing. If REDACTED drops the memory clocks in handheld mode, which I expect, they'll be below these handhelds. Ampere's efficiency advantage had better hold up.
  • Obviously the Aya Neo 2 is running higher clocks, but its RT core count is the same as Drake's. Some hints of what might be doable, considering Nvidia's general superiority in this space.
 
Smarter NPCs aren't even a hardware limitation; you don't need AI for that.



Digital Foundry reviewed the Aya Neo 2. Turns out there's a limit on gains when you're power-starved. Who'd have thunk. But it's also running PC games, so it's not an absolute measure of performance at 15W. There is ray tracing testing and it's pretty impressive, though it's tested at 22W and the UE5 testing is pretty weird.


I'm not the biggest fan of Aya Neo products conceptually but can I just say the Neo 2 is beautifully designed?

I don't expect many, if any, visual changes to the new device; I expect the Joy-Con, screen size, even the dock to stay almost exactly the same as the OLED model. But gosh, the things they could do with a ground-up redesign. It can't happen, just for practical reasons like having to dock it, the console coming with wireless controllers, having a kickstand, etc.

But maybe the Nintendo Switch [REDACTED] LITE will have a killer design? Personally I currently think the Lite is the best designed of the Switch models, visually.
 
So I've been going back to the RAM bandwidth discussion and I saw many folks mention that 'Ampere is very bandwidth efficient.' Can someone explain why Ampere is so efficient, efficient compared to what, and how much more efficient? I guess especially compared to XSS, since that is the most relevant point of comparison.

Thanks!
 
Hi, I'm a long-time lurker (going back to ResetEra) with a first question. Is it possible that T239 could be produced on an Intel fab?
I've heard that the Intel 4 node is all ready for production now (it's just Intel's design teams that are slow).
Highly unlikely. Was preproduction in good enough shape that Nvidia could have designed for it 2-3 years ago? Probably not. And if it was, it's a 7nm node, and Nvidia's GPUs are already on 5nm. Intel would have to offer a hell of a deal to make it affordable to not only spin up a significant chip on an unknown, untried node, but also to shoulder the costs of shipping SoCs from Arizona to South Asia for assembly.
 
And since I like Gaussian curves (we all have our kinks, don't judge), I place the maximum probability at around March 2024. Late 2023 wouldn't surprise me too much. Neither would late 2024, though it would disappoint me because I'm ready for a new console after 2023.
I prefer distributions with a... long tail 😋

What were we talking about again? Oh right, I'd go with a right-skewed distribution centered on November 2023 and with a tail stretching out towards early 2024 and then late 2024. Very low probability for 2025 or later.
 
So I've been going back to the RAM bandwidth discussion and I saw many folks mention that 'Ampere is very bandwidth efficient.' Can someone explain why Ampere is so efficient, efficient compared to what, and how much more efficient? I guess especially compared to XSS, since that is the most relevant point of comparison.

Thanks!
*Relative to GCN.

It's still more efficient than RDNA, mind you; AMD aimed to alleviate this with the Infinity Cache, which the consoles don't have…


However, this section of this article by Chips and Cheese provides nice tests:

Infinity Cache bandwidth scales very well too, and actually closely matches RDNA 1’s L2 bandwidth. It can’t match L2 bandwidth on Nvidia’s 3090, but it doesn’t need to because the 4 MB L2 in front of it should catch a lot of accesses. So far, AMD looks pretty good in terms of cache bandwidth. VRAM however, is another story. Nvidia has a massive VRAM bandwidth advantage. With large workloads that don’t fit in cache, Ampere is far less likely to run out of VRAM bandwidth. However, both RDNA generations are better at making use of the VRAM bandwidth they do have. They don’t need as much work in flight to make good use of their available bandwidth.


 
I prefer distributions with a... long tail 😋

What were we talking about again? Oh right, I'd go with a right-skewed distribution centered on November 2023 and with a tail stretching out towards early 2024 and then late 2024. Very low probability for 2025 or later.
Assuming this distribution is normal or with a single peak is naïve. I think I'd put a high peak on October-November with a sharp drop off to zero at July 2023, a long tail with a moderate peak in March 2024 and a small peak in November 2024.
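If you wanted to actually write that prior down, a toy sketch might look like the following; the windows and weights here are made up purely to mirror the shape described above, not a real forecast.

```python
import random

# Toy model of the lumpy, non-normal release-window prior described above.
# Windows and weights are arbitrary illustrations, not real probabilities.
windows = [
    ("Oct-Nov 2023", 0.55),  # high peak, sharp drop-off before July 2023
    ("Mar 2024",     0.28),  # moderate peak in the tail
    ("Nov 2024",     0.12),  # small late peak
    ("2025+",        0.05),  # very low probability
]

labels, weights = zip(*windows)
draws = random.choices(labels, weights=weights, k=10_000)
for label in labels:
    print(f"{label:>12}: {draws.count(label) / len(draws):.1%}")
```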
 
Highly unlikely. Was preproduction in good enough shape that Nvidia could have designed for it 2-3 years ago? Probably not. And if it was, it's a 7nm node, and Nvidia's GPUs are already on 5nm. Intel would have to offer a hell of a deal to make it affordable to not only spin up a significant chip on an unknown, untried node, but also to shoulder the costs of shipping SoCs from Arizona to South Asia for assembly.
Or even worse, Ireland to SEA!
 
So I've been going back to the RAM bandwidth discussion and I saw many folks mention that 'Ampere is very bandwidth efficient.' Can someone explain why Ampere is so efficient, efficient compared to what, and how much more efficient? I guess especially compared to XSS, since that is the most relevant point of comparison.
Compared to other GPU architectures, primarily the one in the PS4/Xbone consoles, but also, to a lesser extent, RDNA* which is in modern consoles.

In Maxwell, Nvidia's 2014 architecture, Nvidia switched from immediate-mode rendering to a tile-based approach. The amount of data that tile-based approaches have to move is the same, but they move it in pieces over a longer period of time. That doesn't actually make it slower either, as the GPU can begin working on the first set of tiles before moving on to the next, which eliminates idle time on the GPU.

AMD's RDNA adopted a similar strategy, but again, Nvidia's strategy simply seems better, or at least more mature.

Bandwidth is expensive, both in terms of electricity and manufacturing costs. RDNA 2 added something called Infinity Cache, which uses Big Fat Caches to get away with Less Bandwidth. The PS5 and the Xbox Series consoles skipped the Infinity Cache and went with Big Fat Bandwidth instead. There really is no clear performance winner between Infinity Cache and Big Bandwidth. Some workloads are accelerated by one or the other, but most of the time it comes out in the wash.

The reason the chatter comes up at all is trying to game out REDACTED's performance against the other consoles. Both PS4 and Series S seem to have huge bandwidth advantages over Drake. The PS4 advantage is irrelevant because it's dealing with that older GCN architecture. The Series S has a hybrid RDNA 1.5, so it seems like its huge bandwidth is a win. But that huge amount of bandwidth is partially offsetting the lack of Infinity Cache in the GPU's design. So instead of calling the bandwidth a win, we call it a wash - the bandwidth in the XSS/XSX/PS5 probably doesn't add GPU performance relative to the desktop cards.

Drake's bandwidth is known both from the Lapsus$ hack and from comments in the open-source Nvidia Linux driver. With that knowledge we can compare Drake's bandwidth to various desktop Ampere cards and see how it would perform. And the answer is that as long as you stay in that sub-3 TF range on Drake, you should have plenty of bandwidth for Drake to perform like a comparably sized RTX 30 card. Once you get past 3TF, you start eating into the CPU's bandwidth - on game consoles, the CPU and GPU share memory bandwidth, unlike desktops. So for each FLOP past 3TF, you're getting less and less bang for your buck.
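To make that "less bang for your buck" point concrete, here's a rough sketch. The ~102 GB/s total and the 20 GB/s CPU slice are assumptions (they match figures discussed elsewhere in the thread), and ~25 GB/s per TFLOP stands in for the desktop Ampere ratio.

```python
# Rough sketch of the shared-bandwidth argument above. All inputs are
# assumptions for illustration, not confirmed Drake specs.
TOTAL_BW_GBPS = 102.4     # assumed total LPDDR5 bandwidth shared by CPU and GPU
CPU_SLICE_GBPS = 20.0     # assumed slice reserved for the CPU
DESKTOP_RATIO = 25.0      # rough GB/s per TFLOP on comparable RTX 30 cards

gpu_bw = TOTAL_BW_GBPS - CPU_SLICE_GBPS

for tflops in (2.5, 3.0, 3.5, 4.0):
    ratio = gpu_bw / tflops
    note = "comfortable" if ratio >= DESKTOP_RATIO else "starved vs. desktop Ampere"
    print(f"{tflops:.1f} TF -> {ratio:4.1f} GB/s per TF ({note})")
```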
 
Maybe we move the bigger secrets/leaks to discord?
A user sharing information that they themselves say to take with a grain of salt shouldn't need to be hidden or moved. Outlets/personalities should take the warning and maybe practice caution in reporting unvetted information. Too many use the excuse, "I'm not the source, I'm just reporting the info" to cover themselves.

And this isn't a dig at the info posted. The user can share what they choose. It's a dig at any outlet/personality willing to report on something that they themselves cannot verify or find an outside means to back it up. Because, unfortunately, if the info doesn't pan out, the user is the one who gets roasted. Not the irresponsible individuals who reported it.
 
And the answer is that as long as you stay in that sub-3 TF range on Drake, you should have plenty of bandwidth for Drake to perform like a comparably sized RTX 30 card. Once you get past 3TF, you start eating into the CPU's bandwidth - on game consoles, the CPU and GPU share memory bandwidth, unlike desktops. So for each FLOP past 3TF, you're getting less and less bang for your buck.
*3.2TF, not sub-3TF. After 3.2TF you start eating it. At 3.2TF you have the same bandwidth while giving the CPU enough as well.

Edit 1: Unless, you know, you're aiming to give it a different variable from the desktop cards :p


Edit 2: Well, 3.3 if you push it. But we don't need that. Ceiling of 3.2 is fine.



Edit 3: To expand on what I mean, we are assuming that this is gonna have 102.5GB/s, right? ARM generally works well with low bandwidth and doesn't necessarily require high bandwidth. But let's leave 17.5GB/s for the CPU.
[Image: ARM-v9-architecture-6_videocardz.jpg - ARMv9 slide on CPU performance vs. memory bandwidth]

Here we see that even 60GB/s would only offer 8% more performance. But 20GB/s seems "ok" enough to work with.

Desktop Ampere averages around 25-26.5GB/s per TFLOP if I remember right, with one card having, I think, 28 or 29GB/s and another card as low as 22 or 23GB/s. The others were around the 25-26.5GB/s range.

So, let’s assume that we have 17.5-20GB/s out of the 102.5GB/s, so 85GB/s for “just” the GPU, which would be around 3.2-3.4TF.

However, that's probably unrealistic compared to how they'll actually use it; let's say it's 20GB/s for the CPU (so a bit more). Then the TF for the rest, without going into the "too much GPU for the BW" zone (to me), becomes 3.1-3.3TF.


Unless they are using all 60GB/s for the CPU, I don’t see sub-3TF as the ceiling, but a bit above that.


Edit 4: Also, I don't think anyone should see 4TF as on the table, as it severely exacerbates the bandwidth limit. It's why I didn't entertain those ideas; I thought it was unrealistic.
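Reproducing that arithmetic end to end, under the poster's own assumptions (102.5 GB/s total, 17.5-20 GB/s for the CPU, 25-26.5 GB/s per TFLOP on desktop Ampere):

```python
# Back-of-envelope TF ceiling from the post above. Every input is the poster's
# assumption, not a confirmed spec.
TOTAL_BW = 102.5  # GB/s

for cpu_share in (17.5, 20.0):
    gpu_bw = TOTAL_BW - cpu_share
    low = gpu_bw / 26.5   # conservative GB/s-per-TFLOP ratio
    high = gpu_bw / 25.0  # generous ratio
    print(f"CPU {cpu_share:4.1f} GB/s -> GPU {gpu_bw:5.1f} GB/s -> ~{low:.1f}-{high:.1f} TF")
```

With 17.5GB/s reserved you get roughly 3.2-3.4TF, and with 20GB/s reserved roughly 3.1-3.3TF, which is where the ranges above come from.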
 
*3.2TF, not sub-3TF. After 3.2TF you start eating it. At 3.2TF you have the same bandwidth while giving the CPU enough as well.

Unless, you know, you’re aiming to give it a different variable from the desktop cards :p

Well, 3.3 if you push it. But we don’t need that. Ceiling of 3.2 is fine.
Eh, being conservative. The high end desktop cards are pretty consistent about that ~25GB/s/TF, but the lower end cards are actually all over the place, with a lot of them higher. That might be just a floor effect where there is simply a minimum bus size (128bit) that means there is only so low the bandwidth can go. But I can also imagine that texture demands have a ceiling effect the opposite direction, so a slower GPU might need a little more bandwidth to compensate.

But yeah, generally, anything in that range is probably fine. It's not a hard limit, and we don't know where the CPU is going to be clocked either, so it's a guess on the CPU's consumption.
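For reference, that 128-bit "floor" maps to bandwidth through a simple formula: bus width in bits divided by 8, times the per-pin data rate. The LPDDR5 configuration below is an assumption that happens to line up with the ~102 GB/s figure used in this thread, not confirmed hardware.

```python
# Peak memory bandwidth from bus width and data rate. Configurations below
# are assumptions for illustration only.
def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """bus_width_bits / 8 gives bytes per transfer; multiply by transfers per second (GT/s)."""
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbps(128, 6.4))   # 102.4 GB/s - assumed 128-bit LPDDR5-6400
print(peak_bandwidth_gbps(128, 14.0))  # 224.0 GB/s - a 128-bit GDDR6 desktop card at 14 Gbps
```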
 
Eh, being conservative. The high end desktop cards are pretty consistent about that ~25GB/s/TF, but the lower end cards are actually all over the place, with a lot of them higher. That might be just a floor effect where there is simply a minimum bus size (128bit) that means there is only so low the bandwidth can go. But I can also imagine that texture demands have a ceiling effect the opposite direction, so a slower GPU might need a little more bandwidth to compensate.

But yeah, generally, anything in that range is probably fine. It's not a hard limit, and we don't know where the CPU is going to be clocked either, so it's a guess on the CPU's consumption.
I expanded on what I meant post-edit.
 
A user sharing information that they themselves say to take with a grain of salt shouldn't need to be hidden or moved. Outlets/personalities should take the warning and maybe practice caution in reporting unvetted information. Too many use the excuse, "I'm not the source, I'm just reporting the info" to cover themselves.

And this isn't a dig at the info posted. The user can share what they choose. It's a dig at any outlet/personality willing to report on something that they themselves cannot verify or find an outside means to back it up. Because, unfortunately, if the info doesn't pan out, the user is the one who gets roasted. Not the irresponsible individuals who reported it.
People are hungry for more information than is available. The audience wants anyone who might have a nugget of information to have a mountain instead. There is currently no "inside source" that seems to have the meat and potatoes of what is going on with the next Nintendo hardware. So whenever somebody claims to have information they open themselves up to scrutiny, and the whole "take this with a grain of salt" is really just a shield that is no better than the YouTubers who run with rumors/speculation as real insider info on the daily. If you have to say "take this with a grain of salt," it's probably best to just put it out there as your own personal speculation rather than potential inside info. Having limited inside info is a curse really, as you are very aware. You open yourself up to accusations of being a fraud who doesn't really have inside info, or the flip of that, you have inside info that you withhold from everyone and it seems selfish. It's a dirty game. LOL
 
I expanded on what I meant post-edit.
I saw - good stuff!

At the bottom end of the Ampere line we see cards in the 30-35GB/s/TF range. Like I said, this could be a side effect of not being able to drop the bandwidth further, or it could be a side effect of needing a little extra bandwidth at the low end, because once asset res and polygon count are maxed out, additional post-processing effects eat GPU power, or even VRAM, but don't eat bandwidth.

As for the CPU slide - the only source I can find for that is the ARMv9 architecture announcements. Do we have data on the ARMv8 CPU that Drake uses?
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

