• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

I mean, at this point does it really matter what funcle and jungle and dungle have said? Be happy knowing next-gen Nintendo hardware will be in your hands in 2024. Dev kits are out. More non-obscure leaks will come from big companies. We don't need a factory for leaks.
 
Why the panic?

* Hidden text: cannot be quoted. *
Indeed, it could have been the other device, but I assume the changes in the left controller would have warranted a call-out. It is a distinctly visible change in the controller layout.
 
So? Witcher 3 exists on Switch.

What people have to understand about these types of things is that the developers are prioritizing pushing the limits. That doesn't mean the Switch or a less capable PC couldn't run this game; it's up to the developers. In this day and age, for anyone to believe x game can't run on a less powerful machine is beyond me. If the developer wants to make the game and optimize it for certain specs, they can.

you-dumbass.gif


Let's look at what was said and compare what the video addresses:
"But storage issues aside the game will very likely be able to run on the Switch 2, it's not as resource intensive as many might think. The game is slow, the camera usually on a top-down view (atleast on PC) so not much to render at once. The combat is turn based, so not much CPU resources needed for AI and such"

I did not at any point say, one way or another, that a Switch 2 could not run Baldur's Gate 3 in some form. Those are your words, and that is called, *checks notes*... a strawman fallacy. All I am saying is that, just as the video shows, BG3 has performance issues later on in the game. If that was not clear to start with, I am sorry.
 
I thought your takes were already addressed many pages ago, by multiple people in fact... First of all, you can't compare the upgrade from PS4 to PS5 to the upgrade from Switch 1 to Switch NG; that's irrelevant and disingenuous. PS4 was already a 1.8 TFLOPS machine capable of achieving outstanding visuals, good enough that most developers have proved unable to make better-looking games due to a variety of factors. Switch is a 0.397 TFLOPS machine (when docked) that's already severely held back in both resolution and asset fidelity, therefore the jump for its successor is going to be infinitely more noticeable and much easier to achieve on a regular basis.

Ok. Yea. Sure.

And my flippant response is to say my wife and my son won’t really see that much of a difference, I assure you.

Second, changed game design hasn't been the point of hardware upgrades since the PS3 era and won't ever be again. I don't get why you're so fixated on game design when anyone who's been around since 2006 knows the point of better hardware is better fidelity (and everything it encompasses); that may have been the point before the 7th generation, but not anymore. That said, better fidelity will never be the same as better IQ; asset quality is a thing as well, and Nintendo has yet to reach the average of what can be achieved in a non-power-constrained, modern environment, regardless of resolution and performance. Let's get all this through before you keep dragging this strange take all over the ground.

Good. We agree then. The new hardware will essentially be used to take the Lite/OLED-designed games and use DLSS and the SoC to make them run at 4K/60fps with increased graphics sliders.

Nothing more, nothing less.

Third, Nintendo is not going to keep making games for the original Switch, since all of their major studios have clearly moved on; they always do. It's definitely going to keep getting what it should be able to run, but the hardware jump is literally so huge that anything targeting Switch 2 from the start is never going to run on the original, period. Regardless of how much of a jump the current PS5 exclusives may seem to you, many of those would never run on PS4 no matter what you do, because of a CPU, GPU or storage bottleneck... Like it or not, those are next-gen games by definition. You might call those "diminishing returns" in PS5's case, but a console leaving the X360 baseline to embrace PS4 Pro's is never going to suffer from such a thing. Arguing otherwise would mean you considered the 8th gen a minor jump over the 7th... which I don't see many agreeing with, unless you're purposely using this arbitrary "game design" metric of yours.

lol

You just went from "everyone knows that since 2006 new hardware is all about fidelity increases and not about major design changes…" directly, and without any hint of irony, to "the PS5 and new Switch hardware will be utilized to design games impossible on previous hardware!!"

Which is it?
 
Sony/Microsoft could barely achieve true 4K resolution on their consoles; 4K on the Switch successor will be an upscaled 1080p, not true 4K as many hope.
That's what pretty much everyone expects, and the image quality will still be perfectly adequate.
DLSS is currently the best upscaler available to devs. Its only downside is that it's Nvidia-exclusive due to being hardware-accelerated, but that results in much better image quality and higher performance gains than the competition.
Switch 2 games upscaling from 1080p to 4K via DLSS will look comparable to PS5/XSX upscaling 1440p to 4K via AMD's FSR2.
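For a sense of scale, here's the pixel-count arithmetic those two paths imply; a simple sketch, assuming DLSS Performance mode's 4x upscale and a 1440p FSR2 input as the configurations:

```python
# Internal render resolution drives most of the GPU cost; the upscaler
# reconstructs the rest.

def megapixels(w, h):
    return w * h / 1e6

native_4k = megapixels(3840, 2160)  # 8.29 MP output
dlss_in   = megapixels(1920, 1080)  # 2.07 MP (DLSS Performance, 4x upscale)
fsr2_in   = megapixels(2560, 1440)  # 3.69 MP (typical PS5/XSX FSR2 input)

print(f"1080p DLSS input: {dlss_in / native_4k:.0%} of native 4K")  # 25%
print(f"1440p FSR2 input: {fsr2_in / native_4k:.0%} of native 4K")  # 44%
```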
 
you-dumbass.gif


Let's look at what was said and compare what the video addresses:
"But storage issues aside the game will very likely be able to run on the Switch 2, it's not as resource intensive as many might think. The game is slow, the camera usually on a top-down view (atleast on PC) so not much to render at once. The combat is turn based, so not much CPU resources needed for AI and such"

I did not at any point say, one way or another, that a Switch 2 could not run Baldur's Gate 3 in some form. Those are your words, and that is called, *checks notes*... a strawman fallacy. All I am saying is that, just as the video shows, BG3 has performance issues later on in the game. If that was not clear to start with, I am sorry.
To be fair, all you did was link a DF video with no comment. Not sure what exactly I am supposed to gather from that.
But it's fine, don't worry about it. Speculation is fun!
 
Native 4K is a pointless target to reach; it's much more efficient to use image reconstruction/upscaling to get there and spend the available resources on something else. With Drake, Nintendo will probably try to go from 1080p to 1440p/4K depending on the game, which on a TV will be more than adequate.
 
Ok. Yea. Sure.

And my flippant response is to say my wife and my son won’t really see that much of a difference, I assure you.



Good. We agree then. The new hardware will essentially be used to take the Lite/OLED-designed games and use DLSS and the SoC to make them run at 4K/60fps with increased graphics sliders.

Nothing more, nothing less.



lol

You just went from "everyone knows that since 2006 new hardware is all about fidelity increases and not about major design changes…" directly, and without any hint of irony, to "the PS5 and new Switch hardware will be utilized to design games impossible on previous hardware!!"

Which is it?
Let's just look at the PS5 and Switch 2 from the specs we have; it's simple to figure out whether this is a next-gen product or a "pro" model.

PS4 uses GCN from 2011 as its GPU architecture, and it has 1.84 TFLOPs available to it. (We will use GCN as a base.) The performance of GCN is a factor of 1.

Switch uses Maxwell v3 from 2015, and it has 0.393 TFLOPs available to it. The performance of Maxwell v3 is a factor of 1.4, plus mixed precision for games that use it... This means that when docked, Switch is capable of 550 GFLOPs to 825 GFLOPs GCN-equivalent, still a little less than half the GPU performance of PS4. This doesn't factor in far lower bandwidth, RAM amount or CPU performance, all of which sit around 30-33% of the PS4, with the GPU somewhere around 45% when completely optimized.

PS5 uses RDNA 1.X, customized in part by Sony and introduced with the PS5 in 2020, and it has up to 10.2 TFLOPs available to it. The performance of RDNA 1.X is a factor of 1.2, plus mixed precision (though this is limited to AI use cases; developers just don't use mixed precision atm for console or PC gaming, though it's used heavily in mobile and in Switch development). This means ultimately that the PS5's GPU is about 6.64 times as powerful as the PS4's, and around 3 times the PS4 Pro's.

Switch 2 uses Ampere, specifically GA10F, a custom GPU architecture that will be introduced with the Switch 2 in 2024 (hopefully), and it has 3.456 TFLOPs available to it. The performance of Ampere is a factor of 1.2, plus mixed precision* (this uses the tensor cores and is independent of the shader cores). Mixed precision offers 5.2 TFLOPs to 6 TFLOPs. It also reserves 1/4th of the tensor cores for DLSS according to our estimates; much like the PS5 using FSR2, this allows the device to render the scene at 1/4th the resolution of the output with minimal loss to image quality, greatly boosting available GPU performance and allowing the device to force a 4K image.

When comparing these numbers to PS4 GCN, Switch 2 has 4.14 TFLOPs to 7.2 TFLOPs, and PS5 12.24 TFLOPs GCN-equivalent, meaning that Switch 2 will do somewhere between 34% to 40% of PS5. It should also manage RT performance, and while PS5 will use some of that 10.2 TFLOPs to do FSR2, Switch 2 can freely use the 1/4th of its tensor cores reserved for DLSS. Ultimately there are other bottlenecks: the CPU is only going to be about 2/3 as fast as the PS5's, and bandwidth, with respect to their architectures, will only be about half as much, though it could offer 10GB+ for games, which is pretty standard atm for current-gen games.

Switch 2 is going to manage current gen much better than Switch did with last-gen games. The jump is bigger, the technology is a lot newer, and the addition of DLSS has leveled the playing field a lot, not to mention Nvidia's edge in RT playing a factor. I'd suggest that Switch 2 when docked, if using mixed precision, will be noticeably better than the Series S, but noticeably behind current-gen consoles.
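If anyone wants to sanity-check that arithmetic, here's a sketch; the per-architecture factors and the mixed-precision figures are the assumptions from the post above, not measured data:

```python
# GCN-normalised comparison using the poster's assumed efficiency factors.
ARCH_FACTOR = {"GCN": 1.0, "Maxwell": 1.4, "RDNA": 1.2, "Ampere": 1.2}

def gcn_equiv_tflops(tflops, arch):
    """Raw TFLOPs scaled to a rough GCN-equivalent figure."""
    return tflops * ARCH_FACTOR[arch]

ps4     = gcn_equiv_tflops(1.84,  "GCN")      # 1.84
switch  = gcn_equiv_tflops(0.393, "Maxwell")  # ~0.55 (docked, FP32)
ps5     = gcn_equiv_tflops(10.2,  "RDNA")     # ~12.24
drake   = gcn_equiv_tflops(3.456, "Ampere")   # ~4.15 (FP32)
drake16 = gcn_equiv_tflops(6.0,   "Ampere")   # ~7.2 (with mixed precision)

print(f"PS5 vs PS4:      {ps5 / ps4:.2f}x")   # ~6.65x
print(f"Switch 2 vs PS5: {drake / ps5:.0%} FP32, "
      f"{drake16 / ps5:.0%} with FP16")       # ~34% / ~59%
```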
 
Side note: I dislike the use of TFLOPs to draw a comparison between two different products that do things differently because they are architecturally different.

A) it confuses others and can alienate them from attempting to speculate
B) people will roll with it and take it as fact.


I'd rather go with actual hardware resources to give an estimated idea of how it'll perform, relatively speaking. Let's see, 12 SMs vs 2 SMs? So it is at the very least 6 times the hardware performance, relatively speaking, and that assumes the same clock speed.



that seems better, no? More manageable to envision than worrying about "this is ackshually 200 GIGAFLOPS, and with this other thing at a 33% increase in speed on top of a 6-times GPU at 1536 shaders you get 2 TFLOPs, which means it's 10 times the performance, and then include these other features that skew the performance more in favor of X, Y and Z because it doesn't have it and you get that it's actually-" I think my point is clear.


It's a small change, but its goal is to be more digestible and easier to envision than trying to envision GFLOPs or TFLOPs.


Edit: well, actually, this assumes people even know what an SM is. But for this purpose, just picture it as a "core".

12 GPU "cores" vs the 2 GPU "cores" in the Switch
 
I mean, at this point does it really matter what funcle and jungle and dungle have said? Be happy knowing next-gen Nintendo hardware will be in your hands in 2024. Dev kits are out. More non-obscure leaks will come from big companies. We don't need a factory for leaks.
my funcle who works at nintnendo says to expect the console next year.
 
On the conversation about the gains Nintendo will see from new hardware:
On the visual fidelity point, the Mario movie should be a clear indicator that if a ceiling exists on Mario-style visuals, modern consumer hardware isn't anywhere close to hitting it.

On the game mechanics point... This is such a baffling take to me. Open Air Zelda is probably the AAA series that is most limited by hardware. The design philosophy can scale up to and beyond any consumer hardware we have. Faster streaming to enable faster traversal (vehicles in Tears of the Kingdom have a speed limit at which they eject Link), greater memory for a more persistent world, greater processing power for fluid simulation and destructible environments, ray tracing for complex, emergent light and reflection puzzles... And the possibility of putting this all together to have very large-scale physics puzzles that occur at vast distances, like dams being broken to drain lakes and create rivers.
Tears of the Kingdom is a game that would have been limited by hardware even if it was released on the PS5.

Also, I really don't understand why Rift Apart's portals are glossed over, to the point where some people are looking at the stuttery, pausey demonstrations of it running on HDDs on PC and saying, "this is fine". It honestly feels like, for some people, visuals are the only thing that exists.
 
Side note: I dislike the use of TFLOPs to draw a comparison between two different products that do things differently because they are architecturally different.

A) it confuses others and can alienate them from attempting to speculate
B) people will roll with it and take it as fact.


I'd rather go with actual hardware resources to give an estimated idea of how it'll perform, relatively speaking. Let's see, 12 SMs vs 2 SMs? So it is at the very least 6 times the hardware performance, relatively speaking, and that assumes the same clock speed.



that seems better, no? More manageable to envision than worrying about "this is ackshually 200 GIGAFLOPS, and with this other thing at a 33% increase in speed on top of a 6-times GPU at 1536 shaders you get 2 TFLOPs, which means it's 10 times the performance, and then include these other features that skew the performance more in favor of X, Y and Z because it doesn't have it and you get that it's actually-" I think my point is clear.


It's a small change, but its goal is to be more digestible and easier to envision than trying to envision GFLOPs or TFLOPs.


Edit: well, actually, this assumes people even know what an SM is. But for this purpose, just picture it as a "core".

12 GPU "cores" vs the 2 GPU "cores" in the Switch
You do realize that's all TFLOPs is, right? It's just a hardware resource, specifically how many ops are theoretically available per second... You even literally describe FLOPs in your example lol. I understand not wanting to give numbers, but if you do the math above, you come to 393 GFLOPs for Switch and 2359 GFLOPs for Drake at the same clock.

Having said that, there are architectural differences and bottlenecks to consider, so you are ultimately right, but the problem is there just isn't a better performance metric for GPUs outside of actual benchmarks, which we won't get for a while.
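For anyone who wants to reproduce those figures, the standard formula is cores × 2 ops per clock (FMA) × clock speed; a minimal sketch, using Switch's 768 MHz docked GPU clock for both chips per the "same clock" assumption above:

```python
# FLOPs from SM counts: SMs x cores_per_SM x 2 ops (FMA) x clock.
CORES_PER_SM = 128   # both the TX1's Maxwell and Drake's Ampere

def gflops(sm_count, clock_ghz=0.768):
    return sm_count * CORES_PER_SM * 2 * clock_ghz

print(f"Switch (2 SMs):  {gflops(2):.0f} GFLOPs")   # ~393
print(f"Drake (12 SMs): {gflops(12):.0f} GFLOPs")   # ~2359
```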
 
I thought DLSS 3 was exclusive to the 4000 series? I have a 3070 in my laptop, and as far as I'm aware I can only go up to whatever the latest version of DLSS 2 is.
Well, 3.0 basically just added Frame Generation, which is exclusive to the 4000 series. It seems like Ray Reconstruction doesn't have a hardware dependency outside the tensor cores.
 
I thought DLSS 3 was exclusive to the 4000 series? I have a 3070 in my laptop, and as far as I'm aware I can only go up to whatever the latest version of DLSS 2 is.
The DLSS 3 SDK includes both the upscaler and the frame generator. (At least if I'm reading this programming guide correctly.) But on the marketing/consumer side, DLSS 2.x = upscaler and DLSS 3.x = frame generator.
 
Not relevant to Switch 2, but it doesn't hurt to post it here:


Very relevant, actually, imo. Any new DLSS feature Nvidia develops that can work at some level on the Switch 2 hardware will likely be brought over at some point; why wouldn't it be? This new one looks absolutely perfect for Switch 2, might have been developed with it in mind, and of course they are going to bring it over to the main DLSS codebase.

Also keep in mind that devs developing for Switch 2 are designing for a console and not for PC, which has the huge advantage of knowing exactly how it will perform. A DLSS feature that may be iffy on PC ("this only works on 20% of rigs", etc.) can be optimized to work 100% the way they want on a console.
 
Regarding DLSS, I'm wondering if there is any possibility for Nintendo and Nvidia to implement in their backend a way to automatically update the DLL files for any game whenever a newer version of DLSS is out. Obviously it'd be something developers could opt in or out of if they decided.

Probably more work than it's worth, but that'd be an interesting way to essentially have games improve their IQ over time without the developers' direct involvement (though yes, I realize that it sometimes breaks things, so it wouldn't always be ideal).
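Mechanically, on PC this kind of update is just a file swap, since the upscaler ships as a per-game nvngx_dlss.dll; a minimal sketch of the opt-in idea, where the staging path and the opt-in flag are hypothetical:

```python
import shutil
from pathlib import Path

LATEST_DLSS = Path("/updates/nvngx_dlss_latest.dll")  # hypothetical staging copy

def update_game_dlss(game_dir: Path, developer_opted_in: bool) -> bool:
    """Overwrite a game's DLSS DLL with the latest version if the
    developer opted in; return True when an update was applied."""
    target = game_dir / "nvngx_dlss.dll"
    if not developer_opted_in or not target.exists():
        return False
    shutil.copy2(LATEST_DLSS, target)  # game picks up the new upscaler on next launch
    return True
```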
 
Just to be clear, Orin and Drake both use a different OFA engine than other Ampere GPUs, and it isn't clear how it compares to Ada's OFA engine. More importantly, unlike Turing's, Ampere's OFA engine works at 1:1 pixel resolution, so it can generate frames via DLSS 3.0-like technology; the issue is how fast. Here is the answer I'd like to offer: even if Drake's OFA is half as fast as Ada's, it should have no problem generating frames at 1440p. This doesn't mean it will adopt this feature, just that the hardware limitation doesn't exist. Ray Reconstruction is actually something Turing GPUs are capable of, so with Drake being Ampere v3, it shouldn't have any issue with DLSS 3.5's feature set. Whether it adopts these features is anyone's guess, but it is capable of them on a hardware level.
 
Not relevant to Switch 2, but it doesn't hurt to post it here:


On the contrary, this is very relevant to Switch 2. In fact, it was pretty much on top of my list of "things that would make RT more viable on Switch 2".

To explain, if you're using both ray tracing and DLSS in a game, there are effectively three relevant steps involved:
  • Ray traced graphics pass, which produces a noisy image
  • Denoising pass, which uses spatial data (ie info from neighbouring pixels) and temporal data (ie info from the same pixel in previous frames) to produce a smoother image
  • DLSS pass, which uses spatial and temporal data to reconstruct a higher resolution image
The issue is that the denoising and DLSS steps interfere with each other to an extent. The denoising pass doesn't leave DLSS with enough spatial or temporal information to reconstruct a good image, and the DLSS pass is a bit of a mess without denoising (it's not expecting noisy input data). Given that both denoising and DLSS are effectively two different approaches to the same problem, creating a good output image from incomplete data using spatial and temporal sampling, the obvious solution is to combine them into a single pass, which is what DLSS ray reconstruction is.

DLSS ray reconstruction should absolutely produce better image quality than the old denoiser plus DLSS approach, which means better quality output for a given level of RT performance, or the ability to hit the same quality output with reduced RT performance (ie lower ray count). It's also unrelated to DLSS 3's frame generation feature, so should be usable on the next Switch (so long as performance isn't significantly worse than regular DLSS).
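To make the pass ordering concrete, here's a schematic sketch of the two pipelines described above; every function is a stub standing in for a real GPU pass, not any actual API:

```python
# Stub passes: only the ordering is the point.
def trace_rays(scene):                  return f"noisy({scene})"
def denoise(img, hist):                 return f"denoised({img})"
def dlss_upscale(img, hist):            return f"4k({img})"
def dlss_ray_reconstruction(img, hist): return f"4k_rr({img})"

def frame_old(scene, history):
    noisy = trace_rays(scene)            # RT pass: sparse, noisy image
    clean = denoise(noisy, history)      # spatial + temporal filtering
    return dlss_upscale(clean, history)  # reconstruction to output resolution

def frame_rr(scene, history):
    noisy = trace_rays(scene)
    # One network replaces denoise + upscale, keeping the raw samples
    # that a separate denoiser would have smoothed away:
    return dlss_ray_reconstruction(noisy, history)
```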
 
Regarding DLSS, I'm wondering if there is any possibility for Nintendo and Nvidia to implement in their backend a way to automatically update the DLL files for any game whenever a newer version of DLSS is out. Obviously it'd be something developers could opt in or out of if they decided.

Probably more work than it's worth, but that'd be an interesting way to essentially have games improve their IQ over time without the developers' direct involvement (though yes, I realize that it sometimes breaks things, so it wouldn't always be ideal).

Well ... nVidia still owes Nintendo one for that catastrophe that was the security problems with the chips in the launch model, soooo ...

Dunno how feasible and realizable that is, though. ^^
 
On the contrary, this is very relevant to Switch 2. In fact, it was pretty much on top of my list of "things that would make RT more viable on Switch 2".

To explain, if you're using both ray tracing and DLSS in a game, there are effectively three relevant steps involved:
  • Ray traced graphics pass, which produces a noisy image
  • Denoising pass, which uses spatial data (ie info from neighbouring pixels) and temporal data (ie info from the same pixel in previous frames) to produce a smoother image
  • DLSS pass, which uses spatial and temporal data to reconstruct a higher resolution image
The issue is that the denoising and DLSS steps interfere with each other to an extent. The denoising pass doesn't leave DLSS with enough spatial or temporal information to reconstruct a good image, and the DLSS pass is a bit of a mess without denoising (it's not expecting noisy input data). Given that both denoising and DLSS are effectively two different approaches to the same problem, creating a good output image from incomplete data using spatial and temporal sampling, the obvious solution is to combine them into a single pass, which is what DLSS ray reconstruction is.

DLSS ray reconstruction should absolutely produce better image quality than the old denoiser plus DLSS approach, which means better quality output for a given level of RT performance, or the ability to hit the same quality output with reduced RT performance (ie lower ray count). It's also unrelated to DLSS 3's frame generation feature, so should be usable on the next Switch (so long as performance isn't significantly worse than regular DLSS).
Based on the sample FPS counts they provided, it seems to offer a slight (~10%) performance improvement in addition to an image quality improvement, but who can say if that's an across-the-board improvement or an edge case.
 
Based on the sample FPS counts they provided, it seems to offer a slight (~10%) performance improvement in addition to an image quality improvement, but who can say if that's an across-the-board improvement or an edge case.

It's quite possible that the savings from removing the denoising pass (which takes a non-trivial amount of time itself) may result in performance improvements overall even if the actual DLSS pass takes more time. Although without knowing whether all the other variables are the same in the FPS counts they provided, it's something I'd want to test myself.
 
Successful Nintendo systems do usually have cross-gen with their successor to some extent. Kirby's Adventure came out on the NES in 1993, DKC3 released after the N64 did, and the SNES was still getting occasional first-party games into the late '90s in Japan (Thracia 776, Wrecking Crew '98). And handhelds did cross-gen more, due to their success and backward compatibility.

That said, it's unlikely Switch will get PS4-style cross-gen, due to the hardware leap and the successor coming late; MP4 is probably its last marquee release. Switch sales have been given a boost by TotK and the Mario movie, but the general trend is still decline, and they're going to want to push the upgrade. But I expect remasters and low-to-mid-budget games will keep coming to Switch for a while.
 
It's quite possible that the savings from removing the denoising pass (which takes a non-trivial amount of time itself) may result in performance improvements overall even if the actual DLSS pass takes more time. Although without knowing whether all the other variables are the same in the FPS counts they provided, it's something I'd want to test myself.



"In general, frame rates with DLSS RR will be about the same as frame rates without." - so it looks like it's a bit of a free lunch situation.
 
"In general, frame rates with DLSS RR will be about the same as frame rates without." - so it looks like it's a bit of a free lunch situation.

In the article, they state:

Note that games with multiple ray-traced effects may have several denoisers that are replaced by the single Ray Reconstruction neural network. In these cases, Ray Reconstruction can also offer a performance boost. In titles with less intensive ray tracing and fewer denoisers, Ray Reconstruction improves image quality though may have a slight performance cost.

So yeah, sometimes a bit better, sometimes a bit worse. I'd say it's a pretty obvious choice to leave it on regardless, given the image quality improvements.
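As a toy illustration of why it can swing either way, consider a made-up frame-time budget (all numbers invented purely for illustration):

```python
# Ray Reconstruction deletes the denoiser passes but costs a bit more
# than plain DLSS, so the net effect depends on how many denoisers a
# game was running.
heavy_denoisers_ms = 2.5   # e.g. separate GI + reflection + shadow denoisers
light_denoiser_ms  = 0.8   # a single cheap denoiser
dlss_ms            = 1.5   # plain DLSS upscale pass
dlss_rr_ms         = 2.8   # combined denoise + upscale pass (hypothetical)

print(f"heavy RT: old {heavy_denoisers_ms + dlss_ms:.1f} ms vs RR {dlss_rr_ms:.1f} ms")  # RR wins
print(f"light RT: old {light_denoiser_ms + dlss_ms:.1f} ms vs RR {dlss_rr_ms:.1f} ms")   # RR costs a bit
```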
 
wouldn't it be nice if the resolution goalposts stopped moving?
I legitimately think 4K is enough

No question. Honestly, for gaming we might have been better off if flat-panel displays hadn't become the standard. If CRT TVs were still the standard, resolution would have probably stayed put at 1080p, and we would have better image quality than we do now even with 4K LCD TVs. There are numerous videos online showcasing just how good those HD CRT TVs were, and despite the lower resolution, games tend to look better on them. There are certainly some drawbacks to this; mainly, your TV would likely still be no larger than 42". Compared to twenty years ago, most people are buying TVs 2-3x the size they were buying in the CRT days, but for gaming this actually exacerbated the drawbacks of fixed-pixel displays: TVs got bigger, and that requires higher resolutions to still look good. You could sit a 42" 1080p TV and a 4K TV side by side and be hard-pressed to see the difference unless you get very close, but on a big 85" TV it's very easy to see the difference even at moderate viewing distances. Hopefully 4K stays the standard for a long time, but TV manufacturers are always looking to sell you a new TV, and it's easy to market an increase in resolution.
 
No question. Honestly, for gaming we might have been better off if flat-panel displays hadn't become the standard. If CRT TVs were still the standard, resolution would have probably stayed put at 1080p, and we would have better image quality than we do now even with 4K LCD TVs. There are numerous videos online showcasing just how good those HD CRT TVs were, and despite the lower resolution, games tend to look better on them. There are certainly some drawbacks to this; mainly, your TV would likely still be no larger than 42". Compared to twenty years ago, most people are buying TVs 2-3x the size they were buying in the CRT days, but for gaming this actually exacerbated the drawbacks of fixed-pixel displays: TVs got bigger, and that requires higher resolutions to still look good. You could sit a 42" 1080p TV and a 4K TV side by side and be hard-pressed to see the difference unless you get very close, but on a big 85" TV it's very easy to see the difference even at moderate viewing distances. Hopefully 4K stays the standard for a long time, but TV manufacturers are always looking to sell you a new TV, and it's easy to market an increase in resolution.
I posted this on the other page, but I imagine they will for consoles later in the cycle, just because devs want to push the hardware more fidelity-wise. I think we will see a lot more 1080p and 1440p games on PS5/Xbox Series once multiplatform games are done with last gen.

In regards to something like 8K: would it really even be feasible on a PS5 Pro, a console with a theoretical 2-3x more GPU than the current PS5 model? I'd forgotten that 8K has 4x as many pixels as 4K, not 2x as many.

I mean, I'm sure Sony would love to advertise their 8K TVs, but they're still super expensive anyway. Maybe PS6 🤔. But yeah, it does feel like diminishing returns after 4K. Why play at 8K 30fps when one can play at 4K 60fps and/or with better fidelity?
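For concreteness, the pixel counts behind that 4x point (a quick sketch):

```python
# Pixel counts per resolution tier.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
               "4K": (3840, 2160), "8K": (7680, 4320)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
# 8K (33.2 MP) is 4x the pixels of 4K (8.3 MP), which is itself 4x 1080p.
```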
 
Let's just look at the PS5 and Switch 2 from the specs we have; it's simple to figure out whether this is a next-gen product or a "pro" model.

PS4 uses GCN from 2011 as its GPU architecture, and it has 1.84 TFLOPs available to it. (We will use GCN as a base.) The performance of GCN is a factor of 1.

Switch uses Maxwell v3 from 2015, and it has 0.393 TFLOPs available to it. The performance of Maxwell v3 is a factor of 1.4, plus mixed precision for games that use it... This means that when docked, Switch is capable of 550 GFLOPs to 825 GFLOPs GCN-equivalent, still a little less than half the GPU performance of PS4. This doesn't factor in far lower bandwidth, RAM amount or CPU performance, all of which sit around 30-33% of the PS4, with the GPU somewhere around 45% when completely optimized.

PS5 uses RDNA 1.X, customized in part by Sony and introduced with the PS5 in 2020, and it has up to 10.2 TFLOPs available to it. The performance of RDNA 1.X is a factor of 1.2, plus mixed precision (though this is limited to AI use cases; developers just don't use mixed precision atm for console or PC gaming, though it's used heavily in mobile and in Switch development). This means ultimately that the PS5's GPU is about 6.64 times as powerful as the PS4's, and around 3 times the PS4 Pro's.

Switch 2 uses Ampere, specifically GA10F, a custom GPU architecture that will be introduced with the Switch 2 in 2024 (hopefully), and it has 3.456 TFLOPs available to it. The performance of Ampere is a factor of 1.2, plus mixed precision* (this uses the tensor cores and is independent of the shader cores). Mixed precision offers 5.2 TFLOPs to 6 TFLOPs. It also reserves 1/4th of the tensor cores for DLSS according to our estimates; much like the PS5 using FSR2, this allows the device to render the scene at 1/4th the resolution of the output with minimal loss to image quality, greatly boosting available GPU performance and allowing the device to force a 4K image.

When comparing these numbers to PS4 GCN, Switch 2 has 4.14 TFLOPs to 7.2 TFLOPs, and PS5 12.24 TFLOPs GCN-equivalent, meaning that Switch 2 will do somewhere between 34% to 40% of PS5. It should also manage RT performance, and while PS5 will use some of that 10.2 TFLOPs to do FSR2, Switch 2 can freely use the 1/4th of its tensor cores reserved for DLSS. Ultimately there are other bottlenecks: the CPU is only going to be about 2/3 as fast as the PS5's, and bandwidth, with respect to their architectures, will only be about half as much, though it could offer 10GB+ for games, which is pretty standard atm for current-gen games.

Switch 2 is going to manage current gen much better than Switch did with last-gen games. The jump is bigger, the technology is a lot newer, and the addition of DLSS has leveled the playing field a lot, not to mention Nvidia's edge in RT playing a factor. I'd suggest that Switch 2 when docked, if using mixed precision, will be noticeably better than the Series S, but noticeably behind current-gen consoles.
This reminds me of the famous Scott Steiner TNA math promo


I promise I'm not being nasty, I just read it in his voice lol.

Also, as a side note, how can you possibly know that Switch 2 will be 3.4 TFLOPs? Let's face it, there's a good chance it will be closer to 2 TFLOPs to maximize battery life and cut down on active cooling inside the device. DLSS was probably only ever considered because it lets them clock the GPU waaaay lower than they would have had to without it, imo.
 
wouldn't it be nice if the resolution goalposts stopped moving?
I legitimately think 4K is enough

There comes a point where resolution within the consumer space does hit a limit, and I think 4K is that limit within the <120” TV space.

But even then, we go to the cinema and watch a film on the big screen, and that is typically a 4K projector these days. I think only an IMAX screen is technically higher resolution, but it has a much larger screen to cope with it.

We know that for actual film reels there's a theoretical limit to how much detail can be resolved within the image. The same is largely true for a digital image at a specific size, with some caveats.

What I could see, though, is "Retina" displays still being a push, at least for the folks at Apple, and we know they already have 5K displays that are only 27". Dell even has an 8K 32" monitor meant for professional work.

I do see the draw of a screen where the individual pixels are not visible from your viewing distance. But the further away you sit from a TV, the lower the required resolution. It then only really makes sense to have that kind of display on a computer screen such as a desktop or laptop, or on your smartphone or tablet. I think it matters less in the gaming realm, though.
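To put rough numbers on the pixels-not-visible point, here's a sketch using the common one-arcminute visual acuity threshold (assuming 16:9 panels; the threshold itself is a simplification):

```python
# Distance at which one pixel subtends one arcminute, i.e. where pixels
# stop being individually resolvable for typical vision.
import math

def retina_distance_in(diagonal_in, horizontal_px):
    width_in = diagonal_in * 16 / math.hypot(16, 9)   # 16:9 panel width
    pixel_pitch = width_in / horizontal_px
    one_arcmin = math.radians(1 / 60)
    return pixel_pitch / math.tan(one_arcmin)

for diag in (42, 65, 85):
    d = retina_distance_in(diag, 3840)
    print(f'{diag}" 4K: pixels blend beyond ~{d / 12:.1f} ft')
# ~2.7 ft at 42", ~4.2 ft at 65", ~5.5 ft at 85": the bigger the TV,
# the more a resolution bump actually shows at normal viewing distances.
```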

I just had an idle thought about an old 'leak'

* Hidden text: cannot be quoted. *

Given what we've seen of it so far, I can totally get behind that idea: that someone was simply mistaken and took it to be Nintendo-related rather than from a different company.

When I look at the two, there are definitely similarities, but enough differences that someone might look at it and think, "that's the new system!"
 
Also, as a side note, how can you possibly know that Switch 2 will be 3.4 TFLOPs? Let's face it, there's a good chance it will be closer to 2 TFLOPs to maximize battery life and cut down on active cooling inside the device. DLSS was probably only ever considered because it lets them clock the GPU waaaay lower than they would have had to without it, imo.

The 3.4 TFLOPs figure would be for docked mode, so battery life is not a concern. I will admit that I am cautious with my expectations, because Nintendo tends to be conservative with the thermals of their tech. I expect power draw to be very similar to Switch, but without knowing what process Drake is being manufactured on, it's hard to lock down the clock speeds. An extremely pessimistic outcome would be matching the clock speeds of the Switch, and even then it is 2.4 TFLOPs. My guess is that it will be right around 3 TFLOPs.
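For reference, here's the clock-to-TFLOPs arithmetic under the 12 SM / 1536 CUDA core configuration discussed in this thread; the clocks are just sample points, not leaks:

```python
# Drake FP32 TFLOPs as a function of GPU clock: cores x 2 ops (FMA) x GHz.
CORES = 1536

def tflops(clock_ghz):
    return CORES * 2 * clock_ghz / 1000.0

for mhz in (768, 1000, 1125):
    print(f"{mhz} MHz -> {tflops(mhz / 1000):.2f} TFLOPs")
# 768 MHz (Switch's docked clock) -> ~2.36 TFLOPs, the pessimistic case;
# ~1125 MHz is what the 3.456 TFLOPs figure implies.
```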
 
Also, as a side note, how can you possibly know that Switch 2 will be 3.4 TFLOPs? Let's face it, there's a good chance it will be closer to 2 TFLOPs to maximize battery life and cut down on active cooling inside the device.
He doesn't "know" that it will be such... it's his educated guess and if you've been paying attention you'd know that it's a very well educated decent guess.
 
Let's just look at the PS5 and Switch 2 from the specs we have; it's simple to figure out whether this is a next-gen product or a "pro" model.

PS4 uses GCN from 2011 as its GPU architecture, and it has 1.84 TFLOPs available to it. (We will use GCN as a base.) The performance of GCN is a factor of 1.

Switch uses Maxwell v3 from 2015, and it has 0.393 TFLOPs available to it. The performance of Maxwell v3 is a factor of 1.4, plus mixed precision for games that use it... This means that when docked, Switch is capable of 550 GFLOPs to 825 GFLOPs GCN-equivalent, still a little less than half the GPU performance of PS4. This doesn't factor in far lower bandwidth, RAM amount or CPU performance, all of which sit around 30-33% of the PS4, with the GPU somewhere around 45% when completely optimized.

PS5 uses RDNA 1.X, customized in part by Sony and introduced with the PS5 in 2020, and it has up to 10.2 TFLOPs available to it. The performance of RDNA 1.X is a factor of 1.2, plus mixed precision (though this is limited to AI use cases; developers just don't use mixed precision atm for console or PC gaming, though it's used heavily in mobile and in Switch development). This means ultimately that the PS5's GPU is about 6.64 times as powerful as the PS4's, and around 3 times the PS4 Pro's.

Switch 2 uses Ampere, specifically GA10F, a custom GPU architecture that will be introduced with the Switch 2 in 2024 (hopefully), and it has 3.456 TFLOPs available to it. The performance of Ampere is a factor of 1.2, plus mixed precision* (this uses the tensor cores and is independent of the shader cores). Mixed precision offers 5.2 TFLOPs to 6 TFLOPs. It also reserves 1/4th of the tensor cores for DLSS according to our estimates; much like the PS5 using FSR2, this allows the device to render the scene at 1/4th the resolution of the output with minimal loss to image quality, greatly boosting available GPU performance and allowing the device to force a 4K image.

When comparing these numbers to PS4 GCN, Switch 2 has 4.14 TFLOPs to 7.2 TFLOPs, and PS5 12.24 TFLOPs GCN-equivalent, meaning that Switch 2 will do somewhere between 34% to 40% of PS5. It should also manage RT performance, and while PS5 will use some of that 10.2 TFLOPs to do FSR2, Switch 2 can freely use the 1/4th of its tensor cores reserved for DLSS. Ultimately there are other bottlenecks: the CPU is only going to be about 2/3 as fast as the PS5's, and bandwidth, with respect to their architectures, will only be about half as much, though it could offer 10GB+ for games, which is pretty standard atm for current-gen games.

Switch 2 is going to manage current gen much better than Switch did with last-gen games. The jump is bigger, the technology is a lot newer, and the addition of DLSS has leveled the playing field a lot, not to mention Nvidia's edge in RT playing a factor. I'd suggest that Switch 2 when docked, if using mixed precision, will be noticeably better than the Series S, but noticeably behind current-gen consoles.
Interesting... I knew mixed precision was a factor, but how do we know Switch games already leverage it a decent amount? It's admittedly a lot more useful for the Drake chip. That said, are current-gen consoles doing mixed precision in any game, period? Those 10.2 TFLOPS are coming up shorter than they should, then...
 
you-dumbass.gif


Let's look at what was said and compare what the video addresses:
"But storage issues aside the game will very likely be able to run on the Switch 2, it's not as resource intensive as many might think. The game is slow, the camera usually on a top-down view (atleast on PC) so not much to render at once. The combat is turn based, so not much CPU resources needed for AI and such"

I did not at any point say, one way or another, that a Switch 2 could not run Baldur's Gate 3 in some form. Those are your words, and that is called, *checks notes*... a strawman fallacy. All I am saying is that, just as the video shows, BG3 has performance issues later on in the game. If that was not clear to start with, I am sorry.
No issue here. My statement was not necessarily addressed to you, even though I quoted your post. It was more of a general statement aimed at people who say x game can't run on Switch. The biggest issue for Switch would be storage. Optimization goes a long way. I remember people saying there was no way Arkham Knight could run on Switch, and here we are.
 
Interesting... I knew mixed precision was a factor, but how do we know Switch games already leverage it a decent amount? It's admittedly a lot more useful for the Drake chip. That said, are current-gen consoles doing mixed precision in any game, period? Those 10.2 TFLOPS are coming up shorter than they should, then...

From what I can remember, a developer over on Beyond3D told me that the half-precision shaders on the Tegra X1 just kind of work. Essentially, during the compiling process, all shaders that can be done in half precision will be done in half precision. It didn't sound like there was much work on the developer's end to make use of this feature.
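As a rough illustration of why blanket FP16 demotion is usually safe, here's a numpy sketch standing in for shader math (not actual TX1 shader code):

```python
import numpy as np

# Typical 0-1 shading values survive an FP16 round trip almost exactly.
colors = np.linspace(0.0, 1.0, 7, dtype=np.float32)
roundtrip = colors.astype(np.float16).astype(np.float32)
print(np.abs(colors - roundtrip).max())  # ~1e-4: invisible in 8-bit output

# ...but FP16 tops out around 65504, which is why some shaders
# (e.g. large world-space coordinates) must stay FP32.
print(np.float16(np.float32(70000.0)))  # inf (overflow)
```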
 
Outside of using tensor cores for DLSS, how much of the CPU gap between Switch 2 and PS5/Series X could be made up by the tensor cores for CPU tasks like AI and animations?
Probably zero. No one has demonstrated that such a hypothesis is viable.

Interesting... I knew mixed precision was a factor, but how do we know Switch games already leverage it a decent amount? It's admittedly a lot more useful for the Drake chip. That said, are current-gen consoles doing mixed precision in any game, period? Those 10.2 TFLOPS are coming up shorter than they should, then...
Ubisoft said all of their games make use of mixed precision, and I assume every major engine does as much as it can.
 
Not irrelevant, actually; as per the Nvidia subreddit, it seems to be for all RTX cards.
Well, if Nintendo and Nvidia put in a chip that's DLSS 3.0-ready, sure. And if you're willing to pay around 600 bucks for it.

Reminder: DLSS 3.x is only available for GeForce 40xx
 
Well, if Nintendo and Nvidia put in a chip that's DLSS 3.0-ready, sure. And if you're willing to pay around 600 bucks for it.

Reminder: DLSS 3.x is only available for GeForce 40xx
Not exactly correct.

DLSS 3 is:
  • Super Resolution - on all RTX cards
  • Reflex - on all RTX cards
  • Frame Generation - on 4000-series cards
  • Ray Reconstruction - on all RTX cards
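That breakdown as a lookup table; the support sets here just restate the post above, not an official Nvidia compatibility matrix:

```python
# DLSS 3.x feature support by GPU generation, per the list above.
DLSS3_FEATURES = {
    "Super Resolution":   {"RTX 20", "RTX 30", "RTX 40"},
    "Reflex":             {"RTX 20", "RTX 30", "RTX 40"},
    "Frame Generation":   {"RTX 40"},
    "Ray Reconstruction": {"RTX 20", "RTX 30", "RTX 40"},
}

def supports(feature: str, gpu_series: str) -> bool:
    return gpu_series in DLSS3_FEATURES[feature]

print(supports("Frame Generation", "RTX 30"))    # False
print(supports("Ray Reconstruction", "RTX 30"))  # True
```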
 