• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

I was re-watching the DF video with Rich testing the 2050M, and I didn't even remember that he had tested Control with RT (medium RT = ray-traced reflections and transparent reflections).

Then I went to techpowerup to see the specs of the 2050M once again, and this time I noticed that it's said to have 32 RT cores, so 2 cores per SM. That must be a mistake on their part, right?
 
had a wild dream about the announcement and it was launching in two colours - orange & black was the main one and it looked amazing (lol), the other was a light turquoise/black. the joycons were very cool looking too.
 
It's a perfectly valid question. The answer is probably not very different than the A78 cores we're expecting. The A710 is a tweaked version of the A78 core, where the main update is support for ARMv9, the most relevant feature of which is SVE2. SVE2 is a new type of vector extension where the vector width used in software doesn't have to be the same as the vector width implemented in hardware, which makes it more flexible. The actual hardware vector width in A710 is unchanged over A78, but one claimed advantage of SVE2 is that it's much easier for compilers to optimise for, particularly for autovectorised code. Unfortunately there don't seem to be many public tests of this, and as far as I can tell standard benchmarks like Geekbench haven't been recompiled for ARMv9 yet, so it's hard to say how much of a benefit you'd get in real-world code.
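(For anyone curious what "the software doesn't have to know the hardware vector width" actually looks like, here's a minimal sketch using the standard Arm SVE intrinsics that SVE2 builds on. This is just an illustrative example I'm adding, not anything from a Switch SDK; the same compiled code runs on 128-bit, 256-bit or wider SVE hardware, the loop simply takes fewer iterations on wider units.)

Code:
/* Vector-length-agnostic add: c[i] = a[i] + b[i].
 * Hypothetical build: gcc -O2 -march=armv9-a sve_add.c
 * svcntw() reports how many 32-bit floats fit in one hardware vector,
 * which is only known at run time -- that's the SVE/SVE2 trick. */
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

void add_arrays(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {
        /* predicate masks off the tail elements past n */
        svbool_t pg = svwhilelt_b32_u64((uint64_t)i, (uint64_t)n);
        svfloat32_t va = svld1_f32(pg, a + i);  /* predicated loads */
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, c + i, svadd_f32_x(pg, va, vb));  /* add + store */
    }
}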
Thanks for the information.
I honestly wanted to ask another question, if you don't mind.
Is it possible that in the future games will be more graphically advanced on Switch 2 thanks to DLSS?
 
The source is homebrew developer GRAnimated, who has worked on the custom theme implementation, payloads, decompilations, etc.
They've also given other insights about Switch firmware 0.8.5, like the original name for the news applet: "Nintendo Switchboard".

[Image: Nintendo_Switch_Switchboard-3.jpg]


There's a possibility that GRAnimated misinterpreted what the constants meant in relation to the OS options, but I doubt it.
It's just because "btn" is a widely used term in UI programming for buttons. But who knows, maybe I'm misinterpreting something.
 
Don't hurt my feelings 😭😭😭

I did hear about that. Do you mind sending me a link so I can look it up?
Let us be hurt together


Chrome: 64.73%
Safari: 18.56%
Edge: 4.97%
Firefox: 3.36%
Opera: 2.86%
Samsung Internet: 2.59%

(Samsung Internet!?)

===

I'm dipping in here after mostly avoiding the thread for the past couple weeks. I keep seeing Spawnwave videos pop up with the latest shady Switch 2 rumour.

It feels like the rumor mill is heating up?

That said, I would have expected some of the well-known industry insiders (Grubb, Zhuge, Nate, Imran, etc) to be loudly whispering hints if we were really close.
 
You know what? All of that makes sense, even Samsung Internet. There are a lot of people who aren't tech-savvy who just use Samsung Internet, like my mom and my friend.


And yeah, things are heating up a bit; however, there's a lot of fake information and recycled news.
 
The better question is: will Nintendo improve the system software? I remember hearing a story about how the 360 cut its memory footprint down. Anyone feel free to correct me, but they managed to fit their background processes, invites, and system-level voice chat into 32 MB.
Edit for clarity:
Of course, I'm not saying they have to bring it down to 32 MB, but they should cut down on the memory footprint as best they can.

That's a different question, covering RAM quantity rather than bandwidth. I expect the amount of RAM used by the OS to increase from the Switch's 800MB, but hopefully not by too much. If they can give the system 12GB total with 1.5GB dedicated to the OS, that would mean it has more RAM available for games than the Series S or the Xbox One X.
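To put numbers on that (just a back-of-the-envelope; the 1.5GB OS reservation is hypothetical and the console figures are the commonly cited ones, not official):

Code:
/* Rough comparison of RAM left for games under the assumptions above. */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double total_gb, game_gb; } sys[] = {
        { "Switch (current)",       4.0,  4.0 - 0.8 },  /* ~800MB OS reservation     */
        { "Hypothetical Switch 2", 12.0, 12.0 - 1.5 },  /* 1.5GB OS is an assumption */
        { "Xbox Series S",         10.0,  8.0 },        /* ~8GB commonly cited       */
        { "Xbox One X",            12.0,  9.0 },        /* ~9GB commonly cited       */
    };
    for (int i = 0; i < 4; i++)
        printf("%-22s total %4.1f GB, available for games %4.1f GB\n",
               sys[i].name, sys[i].total_gb, sys[i].game_gb);
    return 0;
}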
 
16GB would be great, but 12 is enough. I'm guessing we'll use 2-3 for OS. I'm more worried about bandwidth.

I get what you're saying. There's some optimization that needs to be done.

It's interesting to note that the Steam Deck OLED's ~15% boost in bandwidth (102 GB/s vs 88 GB/s) has been giving it roughly a 10% boost in framerate over the original, according to Eurogamer/Digital Foundry.
That's not too shabby. Now, if Switch 2 got the max LPDDR5X bandwidth of 136 GB/s, perhaps it could get a 20-25% boost in framerate? I don't know.
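As a sanity check on that guess, here's a naive scaling of the Deck OLED result (pure assumption that framerate gains scale with bandwidth gains at the same ratio; real games obviously won't behave this neatly):

Code:
/* Naive scaling of the Deck OLED bandwidth/framerate ratio to 136 GB/s. */
#include <stdio.h>

int main(void)
{
    double deck_bw_gain  = 102.0 / 88.0 - 1.0;            /* ~15.9% more bandwidth   */
    double deck_fps_gain = 0.10;                          /* ~10% more fps (DF test) */
    double sensitivity   = deck_fps_gain / deck_bw_gain;  /* ~0.63 fps% per bw%      */

    double sw2_bw_gain = 136.0 / 102.0 - 1.0;             /* ~33% more bandwidth     */
    printf("Implied framerate gain: ~%.0f%%\n", sensitivity * sw2_bw_gain * 100.0);
    return 0;
}

That lands at roughly +21%, so the 20-25% ballpark doesn't seem crazy, at least for bandwidth-bound cases.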


If we look at Switch games in which the RAM has been overclocked (Mariko models with their LPDDR4X in particular) from 25.6 GB/s to 30-34 GB/s, the boost in framerate was actually quite significant in games like BOTW and TOTK; it made a bigger impact there than overclocking the GPU or CPU in those games. I heard it was a near-solid 30fps with the LPDDR4X RAM speeds even in the worst areas. Of course, it's game dependent.



Correct me if I'm wrong, but can't more bandwidth take some of the load off the GPU that's spent holding the framerate, so the GPU could instead work on a higher resolution or more graphical detail?

I think the main reason Bayonetta 1 and 2 on Switch stayed at 720p docked, the same as handheld mode (but with a better framerate), is the number of alpha effects on screen, which were bottlenecked by a lack of bandwidth. The Switch only got about 2x the bandwidth of the Wii U, which is a real shame. If it had LPDDR4X memory speeds, maybe it could have run at 1080p? I don't know. Obviously the GPU headroom was spent elsewhere (framerate) as a result of the RAM bottleneck, but maybe we could have gotten the same framerate stability at a 900-1080p resolution?

reference:

I'm not opposed to 16GB RAM at all, and it would definitely last us for 7 years. But I still think bandwidth is a more important and immediate bottleneck to take on. Especially for 3rd parties.

You brought up the benefit of higher-resolution textures, but does the Switch 2 really need to match the 4K textures of current-gen consoles, considering how much weaker it will be, and that it will have fewer tensor cores than the desktop Ampere graphics cards?

How many 4K games do you think we'll get outside of first-party Nintendo? Perhaps taking a PS4-quality game from native 1080p to 4K might not even be possible with DLSS on Switch 2; it might not be fast enough to render it. We really only need to compete with the Series S.

Of course, whatever decision there was to make on the RAM modules and bandwidth has already been made anyway.

12GB LPDDR5X and an LCD screen... Final offer.

I don't think we'll get 2nm. Mass production is scheduled for 2025. I can't say the latest iPhones will get it in the fall, but who knows.
3nm is more likely for a 2026 revision. It should be mature enough by then to be affordable as well. I don't remember the efficiency increase going from TSMC 4nm to TSMC 3nm off the top of my head, but it's not huge.
Nintendo could increase the battery life by 30-50% for the revision by doing something similar to what happened with the Steam Deck OLED.
Based on some testing, overclocking the Switch's RAM totally fixes the alpha-effect slowdown in Zelda TOTK. When you use Ultrahand, the alpha effects tank the framerate to 15-20 FPS. When you overclock the RAM, the game runs at a locked 30 FPS even after using Ultrahand, and it stays locked to 30 FPS throughout the game.

Edit: Overclocking the RAM, not the entire Switch.

Edit 2: Proof from MVG
 
There are a lot of people who aren't tech-savvy who just use Samsung Internet, like my mom and my friend.
I'm tech savvy and I use Samsung Internet. I like it a lot more than Chrome mobile. In particular, the forced dark mode for web sites was so good I even installed it in a Pixel phone, when I had one.
 
I'm tech savvy and I use Samsung Internet. I like it a lot more than Chrome mobile. In particular, the forced dark mode for web sites was so good I even installed it in a Pixel phone, when I had one.
Really? I tried it. Couldn't get into it.
Even if the Switch 2 had 8GB RAM, it would still have more RAM available for games than the PS4, since the OS is probably less than 2GB.
One thing I think we're all expecting is increased RAM usage for background processes, and I think the framebuffer would grow as well if they want to go above 1080p.
 
I was re-watching the DF video with Rich testing the 2050M, and I didn't even remember that he had tested Control with RT (medium RT = ray-traced reflections and transparent reflections).

Then I went to techpowerup to see the specs of the 2050M once again, and this time I noticed that it's said to have 32 RT cores, so 2 cores per SM. That must be a mistake on their part, right?
It's the same chip as the 3050 mobile, which TPU says has 16 cores. So yes, a mistake. Not sure where it comes from because I've seen it repeated elsewhere, but no source.
 
Originally I was gonna ask if 12GB was gonna be enough to last the entire gen, but I realize I don't really know what the next big push in video game tech (like, over the next 10 years) is gonna be. So what do you all think it's gonna be? Just more refinement of current RT techniques?
maybe path tracing if AMD ever gets basic RT right.
 
Really? I tried it. Couldn't get into it.
Really. Changing browsers does feel weird for a while, but it was totally worth it.

Having the functions I use often be one tap away (e.g. reader mode and find-in-page), forced dark mode, locking tabs so I can't close them by mistake, and other small things which Chrome doesn't (didn't?) have make quite a difference to me when added together.
 
Originally I was gonna ask if 12GB was gonna be enough to last the entire gen, but I realize I don't really know what the next big push in video game tech (like, over the next 10 years) is gonna be. So what do you all think it's gonna be? Just more refinement of current RT techniques?
NGL, if I had to put my money on it... I'd say the sort of solid-state loading shenanigans that games like Rift Apart and Spider-Man 2 pull off.
 
Originally I was gonna ask if 12GB was gonna be enough to last the entire gen, but I realize I don't really know what the next big push in video game tech (like, over the next 10 years) is gonna be. So what do you all think it's gonna be? Just more refinement of current RT techniques?
12 is the best we should expect.
 
12 is the best we should expect.
I'm not knowledgeable about all this stuff at all, but based on spending a lot of time reading this thread and, like, vibes I guess, I expect that 12GB will be in the retail model, and the 16GB we've been hearing about could be coming from devkits having extra.

And if I'm wrong and the 16GB is what retail models are equipped with, then woohoo!
 
Really. Changing browsers does feel weird for a while, but it was totally worth it.

Having the functions I use often be one tap away (e.g. reader mode and find-in-page), forced dark mode, locking tabs so I can't close them by mistake, and other small things which Chrome doesn't (didn't?) have make quite a difference to me when added together.
My ears perked up hard at “find on page being 1 tap away”
 
I can see Nintendo eventually evolving the Switch to do something like the Ayaneo Slide, but with a second screen instead of a keyboard.
Maybe by Switch 3 they can get everything refined enough to still be able to dock, have detachable controllers, and deliver great performance...

 
Well, T239 should already be running up alongside the Series S if the DLSS tester clocks are accurate (especially the 1.3+ GHz value).

And considering the ROG Ally (Z1E) can actually match the Series S in some scenarios despite a TFLOP deficit and only running at 30W (e.g. Cyberpunk's quality mode), and the Z1E is probably the closest relative to T239 we have in terms of processing output: I do think that, if future iterations of DLSS can eliminate 99% of the cost of using it by offloading to the tensor cores, then at the very least Switch 3/Drake-Next could probably match outputs with, and even shoot ahead of, the PS5/Series X.
This has always fascinated me: we have the NVIDIA white paper that states that overlapping frames is possible between native rendering of the base image and DLSS upscaling, but as recently as late last year, DF has shown that the DLSS impact remains significant on the RTX2050 (which is an Ampere card, confusingly). I wonder if there are software/implementation issues that have disallowed its use so far or if it turned out not to be feasible in realistic applications...

It may be the case that building a stepwise rendering pipeline consisting of these three components (base image rendering, DLSS upscaling, higher-res post-processing, where step 1 of frame 2 can be launched while steps 2 and 3 of frame 1 have not yet finished) requires changes to the game engines that so far have not been identified as worthwhile investments considering the games run well without this significant optimisation. Perhaps with the T239 Nintendo might do it (assuming DF is right and the rumours are as well, then this would be necessary because BOTW 4k 60fps cannot exist if the DLSS actually takes anywhere close to 18ms for going to 4K).

Other than that, we still have post-processing on the higher-res image that will make it such that the effective DLSS overhead (i.e. when you include the overhead from things that now happen at a higher resolution) will probably never be quite that close to zero. DLSS will never be free in the strict sense because of these two factors, however, if the overhead can be reduced to something like 6ms for a 4K output, then that would be a massive boost if you can natively render at 1080p or 720p. DF showed that DLSS to 4K could take an 18ms overhead (including post-processing, which they did not delve into further), but if that can be reduced to 6ms on a T239-level device, then that would be more viable.
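(To make the overlap idea concrete, here's a toy model I'm adding, not anything from the whitepaper: if frame N+1's base render can start while frame N is still being upscaled and post-processed, the frame interval is set by the slowest stage instead of the sum of the stages. All the millisecond numbers are made up for illustration.)

Code:
/* Toy throughput model: serial vs overlapped render + DLSS + post. */
#include <stdio.h>

static double max2(double a, double b) { return a > b ? a : b; }

int main(void)
{
    double render_ms = 12.0;  /* base-resolution rendering (made-up number) */
    double dlss_ms   =  4.0;  /* DLSS upscale on the tensor cores (made up) */
    double post_ms   =  2.0;  /* post-processing at output resolution       */

    double serial     = render_ms + dlss_ms + post_ms;      /* back-to-back       */
    double overlapped = max2(render_ms, dlss_ms + post_ms); /* stages in parallel */

    printf("serial:     %.1f ms/frame (%.0f fps)\n", serial,     1000.0 / serial);
    printf("overlapped: %.1f ms/frame (%.0f fps), plus one extra frame of latency\n",
           overlapped, 1000.0 / overlapped);
    return 0;
}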
 
I could see them releasing a Switch Lite OLED in the near future just as a final hurrah like they did with the New 2DS
Nah. The 2DS was about a very low barrier of entry to the entire 3DS library ($99?). A Lite OLED would be a lot more expensive than the Lite, and at this point in the lifecycle not a very appealing product to many people.

If anything they would release a docked only model with a pro controller for very cheap.
 
I'm not oldpuck, but my choice is 12 GB of LPDDR5X-8533. I think increasing the amount of RAM only requires buying and installing higher-capacity RAM modules, which I think is a straightforward process. But I think increasing the RAM bandwidth requires changing the memory controller inside the SoC and then doing another tape-out of the SoC afterwards, which is practically re-designing the SoC, and I don't think that is a straightforward process.

I recently found a very interesting article from Semiconductor Engineering about how glitch power issues increase as process nodes become more advanced, which is especially problematic for AI accelerators. And I think this will be an issue for Nintendo in the future, especially if Nintendo continues to partner with Nvidia.

Yeah, the decision on the RAM modules, and probably the bandwidth, has already been made. I would gladly take 16GB for free. But in a world where I could choose between 12GB with 33% more RAM bandwidth or 33% more RAM (16GB), I'd take the former personally.
My understanding is that more bandwidth allows the CPU and GPU to work properly and use their full power, while less bandwidth is a bottleneck that hamstrings the chip. I think the docked Switch 2 could really use 134GB/s because it's going to be targeting much higher resolutions than the Deck does, partly through DLSS, but still. If 102GB/s is apparently about right for the Deck targeting 800p, maybe a little more than needed, then the Switch 2 targeting 1080p and bringing that to 1440p or 2160p seems like it must need more.
Yep, and higher GPU clocks could then be sustained. 4 TFLOPs would actually be feasible with that much bandwidth without diminishing returns; it's been talked about on this board. Anything more than 3-3.4 TFLOPs would hit diminishing returns with just 102GB/s.

Handheld and docked performance would also be more consistent and stable. 88 or 102GB/s in handheld and up to 136GB/s in docked. We would get more games with 2x resolution differences, and not some weird 720p and 900p Frankenstein resolutions between modes, with more stable frames to boot.
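A quick ratio check on the diminishing-returns point, using the figures above (the idea that bandwidth per TFLOP needs to stay roughly level is my reading of the argument, not a hard rule):

Code:
/* GB/s of memory bandwidth per TFLOP for the configurations discussed. */
#include <stdio.h>

int main(void)
{
    struct { const char *config; double gbps, tflops; } cfg[] = {
        { "102 GB/s @ 3.0 TFLOPs", 102.0, 3.0 },
        { "102 GB/s @ 3.4 TFLOPs", 102.0, 3.4 },
        { "102 GB/s @ 4.0 TFLOPs", 102.0, 4.0 },
        { "136 GB/s @ 4.0 TFLOPs", 136.0, 4.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-24s -> %4.1f GB/s per TFLOP\n",
               cfg[i].config, cfg[i].gbps / cfg[i].tflops);
    return 0;
}

136 GB/s at 4 TFLOPs keeps roughly the same bandwidth per TFLOP as 102 GB/s does at ~3 TFLOPs, which is the whole argument in one number.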
Based on some testing, overclocking the Switch's RAM totally fixes the alpha-effect slowdown in Zelda TOTK. When you use Ultrahand, the alpha effects tank the framerate to 15-20 FPS. When you overclock the RAM, the game runs at a locked 30 FPS even after using Ultrahand, and it stays locked to 30 FPS throughout the game.

Edit: Overclocking the RAM, not the entire Switch.

Edit 2: Proof from MVG

Nice find! That's pretty significant.

Even if the Switch 2 had 8GB RAM, it would still have more RAM available for games than the PS4, since the OS is probably less than 2GB.
Considering Switch 2 could be twice as powerful as the base PS4 in GPU performance alone, even without DLSS, it's gonna need more. Also, devs are complaining about the Series S's lack of RAM. I think this time around, 2GB is a reasonable minimum to expect for the OS, but with 12GB of RAM total.

16GB would be fantastic of course.
 
This has always fascinated me: we have the NVIDIA white paper that states that overlapping frames is possible between native rendering of the base image and DLSS upscaling, but as recently as late last year, DF has shown that the DLSS impact remains significant on the RTX2050 (which is an Ampere card, confusingly). I wonder if there are software/implementation issues that have disallowed its use so far or if it turned out not to be feasible in realistic applications...

It may be the case that building a stepwise rendering pipeline consisting of these three components (base image rendering, DLSS upscaling, higher-res post-processing, where step 1 of frame 2 can be launched while steps 2 and 3 of frame 1 have not yet finished) requires changes to the game engines that so far have not been identified as worthwhile investments considering the games run well without this significant optimisation. Perhaps with the T239 Nintendo might do it (assuming DF is right and the rumours are as well, then this would be necessary because BOTW 4k 60fps cannot exist if the DLSS actually takes anywhere close to 18ms for going to 4K).

Other than that, we still have post-processing on the higher-res image that will make it such that the effective DLSS overhead (i.e. when you include the overhead from things that now happen at a higher resolution) will probably never be quite that close to zero. DLSS will never be free in the strict sense because of these two factors, however, if the overhead can be reduced to something like 6ms for a 4K output, then that would be a massive boost if you can natively render at 1080p or 720p. DF showed that DLSS to 4K could take an 18ms overhead (including post-processing, which they did not delve into further), but if that can be reduced to 6ms on a T239-level device, then that would be more viable.

For what it's worth, I remember that when Rich's video dropped, someone here did some testing of Death Stranding's DLSS costs and found that there seems to be something wrong with them, specifically I think the cost of a 1440p frame over a 1080p frame was more than it had any reason to be (and more than going from 1440p to 2160p), and that the cost multiplier from 1080p to 2160p was something like 5.5x. I don't think we should be taking the DS DLSS cost data at face value, it seems like Rich may have picked the wrong game.

Yeah, the decision on the RAM modules, and probably the bandwidth, has already been made. I would gladly take 16GB for free. But in a world where I could choose between 12GB with 33% more RAM bandwidth or 33% more RAM (16GB), I'd take the former personally.

Yep, and higher GPU clocks could then be sustained. 4 TFLOPs would actually be feasible with that much bandwidth without diminishing returns; it's been talked about on this board. Anything more than 3-3.4 TFLOPs would hit diminishing returns with just 102GB/s.

Handheld and docked performance would also be more consistent and stable. 88 or 102GB/s in handheld and up to 136GB/s in docked. We would get more games with 2x resolution differences, and not some weird 720p and 900p Frankenstein resolutions between modes, with more stable frames to boot.

Imagine a docked mode with 134GB/s bandwidth and PS5-style fixed power/dynamic clocks and a 25W power draw. The result would be insane and could punch way above its weight.
 
For what it's worth, I remember that when Rich's video dropped, someone here did some testing of Death Stranding's DLSS costs and found that there seems to be something wrong with them, specifically I think the cost of a 1440p frame over a 1080p frame was more than it had any reason to be (and more than going from 1440p to 2160p), and that the cost multiplier from 1080p to 2160p was something like 5.5x. I don't think we should be taking the DS DLSS cost data at face value, it seems like Rich may have picked the wrong game.
I think at least part of the issue was that post-processing was done at the post-DLSS resolution. So what Rich was measuring was the frametime cost of DLSS plus the frametime cost of higher-resolution post-processing.
 
For what it's worth, I remember that when Rich's video dropped, someone here did some testing of Death Stranding's DLSS costs and found that there seems to be something wrong with them, specifically I think the cost of a 1440p frame over a 1080p frame was more than it had any reason to be (and more than going from 1440p to 2160p), and that the cost multiplier from 1080p to 2160p was something like 5.5x. I don't think we should be taking the DS DLSS cost data at face value, it seems like Rich may have picked the wrong game.
Interesting, thanks for mentioning this. I went back to Alex's analysis from before this, where he studied Doom, and he found that DLSS 1080p -> 4K took about 1.9 ms on the RTX 2060, which has 110 INT TOPS. T239 would be half of that or more in docked mode, so less than 4 ms by that standard. While there is a large uncertainty, it differs from the DS derivation by a factor of 4.5, which is suspicious. We'll see how this plays out...
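(Spelling out that first-order estimate, assuming DLSS cost scales inversely with tensor throughput; the T239 figure here is just the "about half the 2060 when docked" assumption, not a known spec.)

Code:
/* First-order DLSS cost estimate: cost scales with 1 / tensor throughput. */
#include <stdio.h>

int main(void)
{
    double rtx2060_ms   = 1.9;    /* measured 1080p -> 4K DLSS cost (Doom, DF)  */
    double rtx2060_tops = 110.0;  /* tensor INT throughput figure from the post */
    double t239_tops    = 55.0;   /* assumed: roughly half of the 2060, docked  */

    printf("Estimated DLSS 1080p -> 4K cost on T239: ~%.1f ms\n",
           rtx2060_ms * rtx2060_tops / t239_tops);
    return 0;
}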
 
I think at least part of the issue was that post-processing was done at the post-DLSS resolution. So what Rich was measuring was the frametime cost of DLSS plus the frametime cost of higher-resolution post-processing.

I do recall this, but I could swear I remember that being brought up as an additional wrinkle in the question of DLSS costs, rather than an explanation for why going from 1080p to 1440p increased the cost by a greater multiplier than 1440p to 2160p. Post-processing costs should logically affect those jumps more consistently.

But yeah, moving the post-processing to before DLSS would apparently improve performance, though it would also come at a cost to visuals, as can be seen in Alan Wake 2.

Interesting, thanks for mentioning this. I went back to Alex's analysis from before this, where he studied Doom, and he found that DLSS 1080p -> 4K took about 1.9 ms on the RTX 2060, which has 110 INT TOPS. T239 would be half of that or more in docked mode, so less than 4 ms by that standard. While there is a large uncertainty, it differs from the DS derivation by a factor of 4.5, which is suspicious. We'll see how this plays out...

Wow, that is a major difference. Do we have any more data on the DLSS upscaling speed on other cards, so it can be made clear just how directly it correlates to INT OPS?
 
For what it's worth, I remember that when Rich's video dropped, someone here did some testing of Death Stranding's DLSS costs and found that there seems to be something wrong with them, specifically I think the cost of a 1440p frame over a 1080p frame was more than it had any reason to be (and more than going from 1440p to 2160p), and that the cost multiplier from 1080p to 2160p was something like 5.5x. I don't think we should be taking the DS DLSS cost data at face value, it seems like Rich may have picked the wrong game.



Imagine a docked mode with 134GB/s bandwidth and PS5-style fixed power/dynamic clocks and a 25W power draw. The result would be insane and could punch way above its weight.
You mean this post from Thraktor, I assume:

https://famiboards.com/threads/futu...-staff-posts-before-commenting.55/post-889301

It's good you remembered it. It's an excellent post indeed.
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

