StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

About off-topic posts
Staff Communication
Hello, everyone.

After the reveal of the Switch 2, we have noticed a change in the culture of the thread. Namely, that the amount of off-topic talk has seemingly increased, despite the thread having slowed down considerably. We do not see the need to action this, since it reflects a wish by the community to talk about matters that are only tangentially related to hardware more broadly. From the moderation side of things, this thread has also been a taxing one to regulate, and our consensus is that removing any special treatment is warranted at this point.

Considering this, we have agreed that a slightly more lax approach to the moderation of this thread is in order. Going forward and until this thread is sunset with the release of the Switch 2, we will treat it like any other thread on this forum, and we will not action posts just for being off-topic.

-MarcelRguez, Biscuit, PixelKnight, DecoReturns, Phendrift, Lord Azrael, WonderLuigi, IsisStormDragon, KilgoreWolfe, Zellia, Tangerine_Cookie, Brofield, OctoSplattack
 
i'm guessing nintendo will give it their own quirky name like "nintendo super resolution"
Honestly I'd be shocked if they call it anything at all! I don't think they've named something like that since the SuperFX era. My suspicion is that they'll just have it in the games and never draw attention to it if they can help it
 
I think if the CPU clocks were equal in handheld and docked, or if the docked clocks were higher (which would make more intuitive sense), folks would more readily accept it. The handheld clocks being higher is an interesting quirk; it's been speculated this may be due to a handheld-only feature, or some kind of compensatory measure.

There's also this exchange that hasn't left my brain:

Matt:


The way I interpreted this is that the presented clocks are inaccurate (i.e. 'there's a problem with the given data'), but I can also see a reading of this as 'there was a problem that required faster handheld CPU clocks'.

I'm not worried about the CPU anyways, Matt clarified that this wasn't a major impediment. But I do find this an interesting mystery.
The second reading doesn't really gel for me in context with everything else. What we've heard is that there's nothing to worry about re: hardware performance, and that doesn't seem consistent with them being forced to run higher handheld clocks because of a problem. If that were true, it feels like we'd be hearing way more pessimism, or at least some level of concern.

But yeah, regardless, not anything to actually be worried about. Just a puzzle to help us pass the time.
 
🫡
 
About Reflex 2 and a possible future tech of frame-extrapolation based on it, I think this article is very interesting.
It discusses how these technologies have evolved in the context of VR gaming over the last decade.

 
Honestly I'd be shocked if they call it anything at all! I don't think they've named something like that since the SuperFX era. My suspicion is that they'll just have it in the games and never draw attention to it if they can help it
I agree. Ultimately, upscaling is "cheating". Wouldn't Nintendo rather advertise their titles as "Ultra High Definition!" than as "Upscaled 1900p"? I think the branding will be VERY similar to how they describe the current Nintendo Switch's technical specifications (Ultra HD gaming at home, Full HD gaming on the go, with "HDR10" and "Enhanced Nvidia Custom Tegra Processor" down the page): "Ultra HD with enhanced colour compared to Nintendo Switch.", "Play in higher resolutions (compared to Nintendo Switch).", etc.

On the technical side, I sort of think we might get more than one neural upscaling solution. We MIGHT get a Nintendo-specific one as well as DLSS, and perhaps platform-specific optimisations for DLSS. But what I HOPE is that Nintendo worked with Nvidia on a flexible, optimised DLSS implementation specific to the platform that can seamlessly inherit the advantages of updates on the PC side.
 
Hypothetically speaking, yes, of course they could go all-out on memory and every other hardware feature. They could even jump ship, reach an agreement with Samsung, and go straight to LPDDR6.
Will they, though? No.
That's the point. Everything we know so far points to LPDDR5 memory controllers, despite the chips Nintendo decided to source being capable of 5X speeds.
Could they overclock the chips to 5X speeds? Yes. Will they? No.

I don't particularly like to shade other posts or be part of the doom and gloom cycle, but some of you really need to keep your feet on the ground and your expectations realistic.
I'm not just talking about this memory discussion, but also DLSS 4 @ 4K, ray reconstruction, etc.
Think of it like this: you're not only setting yourself up for disappointment, but a lot of people read this forum and take the info they read for granted.
So let's all be a bit more responsible, stop daydreaming about hypotheticals, and focus on what we do know, then speculate and extrapolate from there instead.
I think you take your estimates too seriously, and simply discard any possibility that they overestimate DLSS frametime cost (which is what they are most likely doing).

First, you calculate the cost based on the T239's tensor performance, ignoring that DLSS has never scaled linearly with TOPS, in neither the CNN nor the transformer model; DLSS has always been more efficient on weaker GPUs (efficiency here meaning a shorter time for the same tensor throughput).

You have also ignored any form of DLSS implementation other than the PC one. The leaked NVN2 documentation points to advantages from implementing DLSS at the API level, and we had a developer here on Fami itself pointing out that the DLSS implementation on Switch 2 has advantages compared to the PC.

You can't be upset just because some people don't agree with your calculations and think it's possible to see DLSS used viably at high resolutions or high frame rates.

Edit: typo
 
Last edited:
🙏🏾
 
It's true, you've made some good points.
I'd already accounted for linear scaling estimates being inaccurate by design, but the potential NVN2 API optimizations (which are pretty much a given) are something to consider. If they shave off 1 ms, that's already a huge impact for 60 FPS games.
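To illustrate why the scaling assumption matters so much here, a toy comparison of a linear-in-TOPS extrapolation versus a sublinear one. Every number below (the reference GPU figures, the T239-class figure, the 0.8 exponent) is an illustrative assumption, not a measurement:

```python
# Toy comparison: estimating DLSS cost on a weaker GPU by linear TOPS scaling
# vs. a sublinear power law. All numbers here are illustrative assumptions.

REF_TOPS = 200.0     # hypothetical reference GPU tensor throughput
REF_COST_MS = 1.0    # hypothetical measured DLSS cost on that GPU

def linear_estimate(target_tops: float) -> float:
    """Assumes cost scales inversely with tensor TOPS."""
    return REF_COST_MS * (REF_TOPS / target_tops)

def sublinear_estimate(target_tops: float, exponent: float = 0.8) -> float:
    """Assumes weaker GPUs lose less than linearly (illustrative exponent)."""
    return REF_COST_MS * (REF_TOPS / target_tops) ** exponent

small_gpu = 50.0  # hypothetical T239-class tensor throughput
print(f"linear:    {linear_estimate(small_gpu):.2f} ms")    # 4.00 ms
print(f"sublinear: {sublinear_estimate(small_gpu):.2f} ms")  # ~3.03 ms
```

Even a modest sublinearity shaves a meaningful fraction off the estimate, which is exactly the gap the posts above are arguing over.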
 
Thank you for bringing some clarity into this discussion. The amount of people here breathing hopium for guaranteed DLSS 4 @ 4K60 is downright toxic and just setting everyone up for disappointment. Inb4 games target sub-4K resolutions and everyone calls the devs lazy, as if DLSS were a simple slider you can turn up to max with no compromises.

There'll definitely be 4K 60FPS games, just generally the least demanding ones. Like I'd fully expect Famicom Detective Club 4 to hit that because of how little animation is required on each frame. It might not even need to be DLSSed at all, or just by the 2.25x from 1440p to 4K.

4K 30FPS and 1440p 60FPS will be way more common but it's not like 4K 60FPS is off limits for some new games.
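For anyone wondering where the 2.25x figure comes from, it's just the ratio of pixel counts between the two resolutions:

```python
# Pixel-count math behind the "2.25x from 1440p to 4K" figure quoted above.

def pixels(width: int, height: int) -> int:
    """Total pixels in a frame at the given resolution."""
    return width * height

scale = pixels(3840, 2160) / pixels(2560, 1440)  # 4K vs 1440p
print(scale)  # 2.25
```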




Unrelated question: is it even realistically possible Switch 2's encryption copy protection on games lasts the whole generation? I know the PC crowd is out saying Switch 2 will be emulated within a year or two but the only reason Switch emulation went from theoretical to practical is because the encryption keys got out there. Otherwise an emulator would have no software to run. Is it a realistic possibility that NVIDIA could do a good enough job on protecting Switch 2 that its games' copy protection still hasn't been broken by like 2031 or 2032 or should we expect a repeat of the Switch on this front?
 
The Switch 1 had multiple 1080p 60fps games (like Smash) and some big 720p 60fps games (like Odyssey). I have an incredibly hard time imagining they’ve waited all these years to release a new system that will be focusing on 4K 30fps or that only gets to 60fps but at 1080p. We’ve also heard for years they’ve been telling devs to prepare for 4K. So although I’d be perfectly happy with a system hyper focused on 1080p at 60fps, if rumors are to be believed, Nintendo has bigger plans and I can’t imagine it’s for a world where most games come in at 30fps.

I get setting expectations based on specs, but there’s also a little bit of common sense that should be applied here. Why would they tell people to prepare assets for 4K if they didn’t think their system could handle it at a good framerate?
 
Napkin math disclaimer: DLSS 4 @ 1440p60 is 7.22 ms, which leaves 9.45 ms for rendering and post (16.67 ms total).
In Ultra-Performance mode the native resolution would be 480p.

So yeah, we could see demanding 2D games, and simpler/smaller-scoped 3D games with DLSS-SR 4 (Ultra-Performance) @ 1440p60
7.22 ms is really not bad for 30 fps though. I could imagine games that really don't prioritize 60 fps going all in on rendering a few very pretty pixels and then leveraging the transformer model to compensate, e.g. Zelda, Xenoblade. It's not hard to imagine these games leaning into a lot more RT and then rendering at 480p with a DLSS 4 Ultra Performance mode. By contrast, I would expect games where latency is more critical (shooters, platformers, fighters) to target 60 fps and either target 1080p or fall back to the CNN model to shave off milliseconds.
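The budget arithmetic in these posts can be sketched in a few lines. Note the 7.22 ms cost is the thread's estimate, not a measured figure, and the 1/3-per-axis input resolution is the standard DLSS Ultra Performance ratio:

```python
# Napkin-math sketch of the DLSS frame budget discussed above.
# The 7.22 ms DLSS cost is the thread's estimate, not a measurement.

DLSS_COST_MS = 7.22  # estimated DLSS 4 (transformer) cost at 1440p output

def render_budget_ms(target_fps: float, dlss_cost_ms: float = DLSS_COST_MS) -> float:
    """Milliseconds left for rendering + post after paying the DLSS cost."""
    frame_ms = 1000.0 / target_fps
    return frame_ms - dlss_cost_ms

def ultra_performance_input(output_height: int) -> int:
    """DLSS Ultra Performance renders at 1/3 of the output resolution per axis."""
    return output_height // 3

print(f"60 fps leaves {render_budget_ms(60):.2f} ms for rendering")  # 9.45 ms
print(f"30 fps leaves {render_budget_ms(30):.2f} ms for rendering")  # 26.11 ms
print(f"1440p Ultra Performance input: {ultra_performance_input(1440)}p")  # 480p
```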
 
Absolutely agree that people are underestimating how many titles will hit 4K. I get it's a really demanding target, but for, say, the kinds of games which hit 1080p or 900p on Switch 1, should hitting 4K really be that much of an issue on Switch 2 at all for games of that scale? Natively, I mean.
 
I expect a lot of games will pull the Persona 5 move of having the maximum resolution UI assets while the 3D visuals dynamically scale, which actually has a big impact on perceived sharpness. Persona 5 Royal on Switch has a 720p / 1080p UI handheld / docked layered on top of a 540p / 720p scene, which is actually a brilliant move considering how much of the game is really just a visual novel with a lot of 2D assets dancing about.

The human eye is already not great at discerning the minute differences of 1440p+ resolutions at a living room viewing distance, so a game can have a 4K UI, which will look nice and sharp, overlaid on top of 1440p-1800p 3D visuals (DLSS, native, etc.), but the overall impression will be '4K', which has basically become its own buzzword for 'very sharp and pretty on an ultra-high-def screen'. And who's gonna complain? I won't.

Anyways, even a GTX 1650 can natively run a game like Hades II at 4K High at 80 FPS: a Turing-based 2019 GPU with 4 GB of VRAM and 896 cores at 1.9 GHz, ~3.4 TFLOPs.
 
Last edited:
Unless FDC4 goes 3D, it wouldn't use DLSS anyway, since DLSS isn't made for 2D games.

That said, there is a way to use ML upscaling for 2D games, but it's just not needed unless you do something crazy
 
With the right optimisations, it's probably possible, but the question is: why? If you can get a better overall image by increasing scene complexity, reducing rendering resolution, and leaning on upscaling, upscaling could often end up as the better option. Games at the complexity of Nintendo Switch should absolutely have the ability to reach 4K on Switch 2 without upscaling, but:

It's possible that even at the same scene complexity, there's image quality advantages to upscaling.

Upscaling rather than native could stabilise the framerate.

And many games on Switch 2 will probably be much MORE complicated than Switch games.

However, there is a certain subset of software that probably could push native resolutions - Nintendo's own casual titles. Mario Party, Worldwide Games, 1-2-Switch, etc. could all benefit from higher resolutions without upscaling.

I kind of agree, though, that 4K will be reasonably "present" on the successor. Nintendo Switch had a fraction of the performance of PS4 and still had plenty of first-party titles targeting 1080p. Now that the successor is a LARGER fraction of PS5, and a SUBSTANTIAL fraction of PS4 Pro, 4K is on the table if that's what developers want.

Here's a question for everyone - do you think it would be more reasonable for Nintendo to patch 51 Worldwide Games with higher resolutions, improved networking and screen size aware Mosaic Mode, or for them to just outright make a Switch 2 exclusive, richly detailed 65 Worldwide Games? I think option one, keep it evergreen!
 
I don't consider it a pessimistic attitude, because I never said anything is completely impossible.
What I mean is that, if our calculations estimate that 1080p30 for DLSS 4 and 4K30 for DLSS 3 is what the hardware can safely reach, that's what people should expect. Period.

Of course it's always possible to get more out of the hardware, given the right use case.
But 80% of games won't push beyond that, because it won't be worth it for their use case and their resources.
For example, FSR 2 wasn't viable on Switch 1, and yet No Man's Sky implemented it because they saw an opportunity, calculated the costs, and went for it.
Every other game on Switch 1 used FSR 1, even TotK, because the extra runtime cost of FSR 2, or the engineering cost of integrating FSR 2 into Nintendo's graphics engine, wasn't worth it.
So FSR 2 wasn't viable on Switch 1, but that doesn't mean it was completely impossible.

The same goes for DLSS on Switch 2: 1080p30 for DLSS 4 and 4K30 for DLSS 3 are safe targets; that doesn't mean the others are impossible. But don't expect many games to use them.

Edit: Nintendo didn't choose to include the Tensor cores in the GPU. They're an integral part of the GPU architecture. I imagine it's more a case of "if they're there, we might as well use them where we can".
But the calculations that have been done take it for granted that the transformer model is four times heavier to run! From the tests carried out, that's absolutely not true, so those calculations shouldn't be taken into consideration at all. Also, 12 RT cores are good for Switch 2; you can definitely see a bit of ray tracing.
 

I'll be honest, I'm one of those people who REALLY notices the difference between 1440p and 2160p - but I also don't think the jump from a clean 1080p to 2160p is all that much, visually.

I think Nintendo's marketing wording will be specific, accurate, and in doing so, a little vague - I know, I know, I mean I think they'll choose "Ultra HD" over "2160p" in marketing, like how Nintendo Switch is 'Full HD at home, HD gaming on the go'.

For me, an image that "looks" 1080p but doesn't need scaling by the system is pretty much visually perfect at living room distances - which is part of why I often bring up the idea of layered upscaling. DLSS (Trans?) -> 1080p, FSR1 -> 2160p, and that's a genuinely delicious presentation that looks good on a modern display even with low input resolutions.

With that said, developers will also have the option of native 4K - I think games like Smash could benefit from this. I think the next Splatoon with an upscaler (DLSS, FSR 1 or 2) targeting 4K, with a DRS up to 4K, is probably the best option for that series.

Something I'm very hopeful about is how many OPTIONS developers have to get good IQ out of the successor.

Small anecdote, I got an Xbox Series X and played Forza Horizon 4 within a week of playing Pikmin 3 Deluxe - and I went back and forth between them nearly every day. That's 720p30SDR Vs 2160p60HDR. A noticeable difference? Absolutely. But a big deal? Ugly Vs. good looking? Playable Vs. unplayable? Absolutely not. Same TV, same viewing distance, 720p30 still looked good. Pikmin 3 looks INCREDIBLE. Maybe it's my eyes, but the jump to HD felt bigger than the jump to 4K. Diminishing returns, times nine.
 
GTA V isn't on Switch because you'd have to do the most miraculous of ports JUST for the base game to function, with no hope of including the literal decade of content added via GTA Online. The compromises that would need to be made to actual game content, while still having no guarantee of reaching Witcher 3-port stability, make it not worth it at all.
Please, don't post these lies. The Switch would run GTA 5 with both hands tied behind its back.
 
I saw a lot of recent-ish posts referring to super resolution, ray reconstruction, and frame generation, assuming that they can all be conflated (or perhaps I assumed people were assuming that), all while we're talking about how DLSS-SR 4 alone is already pushing our estimates for 1080p30, never mind RR and FG.
I do also want a healthy discussion, and don't take my words as shade; speculate away, that's what we're here for.
I'd just prefer, and think it would be healthier for the community, to keep that speculation grounded in the facts we already have.

The issue with discussing DLSS performance is that we don't really have facts, we have estimates, and those are only very roughly extrapolated from a very small number of data points measuring much more powerful GPUs. I've done similar estimates myself all the way back in 2023, but they're still only rough extrapolations from the same limited data, and there's no guarantee that this represents how DLSS performance scales to a much smaller GPU like T239's. This means any of our estimates could be off by large margins, in either direction. I genuinely wouldn't be surprised if my estimates end up being off by as much as 2x one way or the other.

The other big unknown is how tensor core concurrency plays into things. We know that Ampere can concurrently execute both tensor core and FP32/INT32 code, which opens up the possibility of running DLSS on one frame while also rendering the next. Effectively hiding the performance impact of DLSS at the cost of a little bit of extra latency. Of course DLSS and rendering would be competing for non-compute resources such as registers and bandwidth (and a little bit of compute resources, for activation layers, etc.), but we have no idea the extent to which this would impact performance.

In the theoretical best case scenario for concurrency in Ampere you could take as much as 16ms to run DLSS and still hit 60fps by running it fully concurrently with the rendering of the next frame, and in the worst case the two workloads compete so heavily that there's no speedup at all over running them sequentially. The reality is somewhere between these two extremes, but we have no way of knowing whether it's in the middle, or nearer one end or the other.
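The two bounds described above amount to a sum versus a max. A toy sketch (the 14 ms / 16 ms figures are made up purely for illustration):

```python
# Sketch of the concurrency bounds described above: sequential execution
# (no overlap) vs. fully concurrent DLSS on the previous frame.

def sequential_frame_ms(render_ms: float, dlss_ms: float) -> float:
    """Worst case: the workloads contend so heavily there is no overlap."""
    return render_ms + dlss_ms

def concurrent_frame_ms(render_ms: float, dlss_ms: float) -> float:
    """Best case: DLSS runs fully in parallel with the next frame's rendering,
    so throughput is limited by whichever is longer (at +1 frame of latency)."""
    return max(render_ms, dlss_ms)

# e.g. a hypothetical 14 ms render and a 16 ms DLSS pass:
print(sequential_frame_ms(14, 16))  # 30.0 ms -> ~33 fps
print(concurrent_frame_ms(14, 16))  # 16.0 ms -> 60+ fps, one frame behind
```

Real hardware lands somewhere between the two functions, which is exactly the unknown the post describes.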

These two factors are completely independent of each other, which means we have an even wider spread of possibilities. We could have a situation where DLSS execution times on T239 are a lot quicker than expected, but concurrent execution isn't really viable, or we could end up in a place where execution times are slower than expected, but it's very efficient to run DLSS concurrently with graphics workloads.

I also expect that, regardless of hardware performance, we'll see a wide range of approaches taken with DLSS by different developers depending on what works best for their games. Trading off between quality and performance is a core part of console development, and devs will have the ability to tweak input resolution, output resolution and choose between the CNN and transformer models, all of which are going to impact image quality and performance to some extent, so it will be up to developers to figure out what works best for their games.

In some cases this may mean using the transformer model, perhaps with a lower internal resolution, in other cases this may be the CNN model, and in some cases developers may skip DLSS altogether, particularly if they're already able to hit native res without it.

This is why I'm very hesitant to say that Switch 2 will or won't do X, Y, or Z when it comes to DLSS. We have a range of unknowns on the performance side, but even then there's going to be a lot of variance between how different developers manage the tradeoffs that come with using DLSS in a console environment. I wouldn't be surprised if you took four different launch-window titles for the system and came back with four completely different impressions of what the system could do in terms of DLSS.
 
This is gonna be the longest two months.
 
running DLSS on one frame while also rendering the next. Effectively hiding the performance impact of DLSS at the cost of a little bit of extra latency.

Before the idea of added latency worries anyone, this is a pretty insignificant penalty. Almost all Switch games are double buffered, but on Wii U almost all games were triple buffered. If running DLSS concurrently is a real thing, it's essentially the same latency cost as for triple-buffered games. We also see wild variance in input delay across controllers, and many games have significantly different input delays despite running at the same framerate. So actual input latency could end up the same despite this added frame of latency.
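Putting that comparison in numbers, a rough sketch at 60 fps (counting only frames of buffering; real input latency also depends on the game, display, and controller):

```python
# Rough frames-of-latency sketch at 60 fps for the buffering setups mentioned.
# Only the buffering contribution is counted; controller/display delay varies.

FRAME_MS = 1000.0 / 60  # ~16.67 ms per frame at 60 fps

def latency_ms(frames_of_buffering: int) -> float:
    """Latency contributed by N frames of buffering at 60 fps."""
    return frames_of_buffering * FRAME_MS

print(f"double buffered:                 {latency_ms(2):.1f} ms")  # 33.3 ms
print(f"triple buffered:                 {latency_ms(3):.1f} ms")  # 50.0 ms
print(f"double buffered + concurrent DLSS: {latency_ms(3):.1f} ms")  # same 50.0 ms
```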
 
Doesn't it depend on how Nintendo develops Switch 2 games? If they aim to make games that look roughly like their Switch 1 games, it's doable to use the added power of the Switch 2 to run Switch 1-looking games at 4K. But if they make games that look more like PS4 games in terms of fidelity and complexity, it may be hard to get many of them running at 4K on Switch 2.
 
Personally, it makes no sense to me, because assuming the higher clock is for portable-mode features assumes the cores used by the OS are nearly maxed out otherwise. Not even Switch pushes the OS that hard; it typically stays below 50%.
 
I'm not surprised; we know almost everything about Switch 2 by now. It will take years to return to Switch Pro levels of speculation.
 
With the right optimisations, it's probably possible, but the question is: why? If you can get a better overall image by increasing scene complexity, reducing rendering resolution, and leaning on reconstruction, upscaling will often end up as the better option. Games at the complexity of Nintendo Switch titles should absolutely be able to reach 4K on Switch 2 without upscaling, but:

It's possible that even at the same scene complexity, there are image-quality advantages to upscaling.

Upscaling rather than native could stabilise the framerate.

And many games on Switch 2 will probably be much MORE complicated than Switch games.

However, there is a certain subset of software that probably could push native resolutions - Nintendo's own casual titles. Mario Party, Worldwide Games, 1-2-Switch, etc. could all benefit from higher resolutions without upscaling.

I kind of agree though that 4K will be reasonably "present" on the successor. Nintendo Switch had a fraction of the performance of PS4 and still had plenty of first party titles targeting 1080p. Now the successor is a LARGER fraction of PS5, and a SUBSTANTIAL fraction of PS4 Pro, so 4K is on the table if that's what developers want.

Here's a question for everyone - do you think it would be more reasonable for Nintendo to patch 51 Worldwide Games with higher resolutions, improved networking, and a screen-size-aware Mosaic Mode, or for them to just outright make a Switch 2-exclusive, richly detailed 65 Worldwide Games? I think option one; keep it evergreen!
Oh yeah, absolutely agree they should leverage other things like updates and other details instead. Should've been clearer; I was more so talking about the idea that it wouldn't be doable at all, which seems crazy to me.
 
I expect a lot of games will pull the Persona 5 move of having the maximum resolution UI assets while the 3D visuals dynamically scale, which actually has a big impact on perceived sharpness. Persona 5 Royal on Switch has a 720p / 1080p UI handheld / docked layered on top of a 540p / 720p scene, which is actually a brilliant move considering how much of the game is really just a visual novel with a lot of 2D assets dancing about.

The human eye is already not great at discerning the minute differences of 1440p+ resolutions at a living room viewing distance, so a game can have a 4K UI, which will look nice and sharp, overlaid on top of 1440p-1800p 3D visuals (DLSS, native, etc.), but the overall impression will be '4K', which has basically become its own buzzword for 'very sharp and pretty on an ultra-high-def screen'. And who's gonna complain? I won't.

Anyways, even a GTX 1650 can natively run a game like Hades II at 4K High at 80 FPS. That's a Turing-based 2019 GPU with 4 GB of VRAM and 896 cores at 1.9 GHz (~3.4 TFLOPs).
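That ~3.4 TFLOPs figure follows from the usual napkin math; here's a quick sketch, assuming the standard convention of counting one fused multiply-add as two floating-point operations:

```python
# Rough FP32 throughput estimate for a GTX 1650 (Turing).
# Convention: one fused multiply-add (FMA) = 2 floating-point ops per core per cycle.
cores = 896
flops_per_core_per_cycle = 2
clock_hz = 1.9e9  # ~1.9 GHz boost clock

tflops = cores * flops_per_core_per_cycle * clock_hz / 1e12
print(f"{tflops:.2f} TFLOPs")  # 3.40 TFLOPs
```

Same arithmetic applies to any GPU spec sheet, which is handy for eyeballing how the successor's GPU stacks up against PC parts.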

Wait, is persona 5 on switch really only 720p? Guess that proves your point then, because I'd always sort of assumed it was about 900 or so docked. Brilliant port btw
 
Absolutely agree; people are underestimating how many titles will hit 4K. I get that it's a really demanding target, but for, say, the kinds of games which hit 1080p or 900p on Switch 1, should hitting 4K really be that much of an issue on Switch 2 for games of that scale? Natively, I mean.
This is the most anticipation I've had for new Nintendo games since the GameCube. Let's just say (for conversation's sake) we are looking at bare-minimum PS4 specs. I can't imagine what Nintendo games will look like. That footage of MK did nothing for me, honestly; it was just too short for me to do any real analysis of the visuals and what they are doing. I'm nervously anticipating what their games will deliver. I really wish we'd get a new IP with a distinct art style that could show off the hardware. 🤞🏾
 
One of the most interesting things regarding ordinary people's expectations is that they don't realize just how many really demanding games were already running on PS4/Xbox One hardware, and that something stronger and more modern than that will do the same.

Specifically, I've seen people making fun of the idea that Red Dead Redemption 2 (2018) or Elden Ring (2022) will run acceptably on Switch 2, saying the versions will be utter garbage if they're even capable of getting the games onto the system in the first place.

Both games ran at 900p/30 on Xbox One and 1080p/30 on PS4, with the PS4 Pro and Xbox One X coming close to 4K (still at 30 FPS for RDR2; 60 with drops for Elden Ring).

Just like people couldn't really picture a portable running Skyrim even when they knew Switch was going to be stronger than an Xbox 360 or PS3, I think some people are going to be in for a shock when Switch 2 not only runs both of those titles way better docked, but hits 1080p in handheld (I don't think RDR2 will shoot for 60 FPS, but 1080p/30 in handheld for a title like that could be pretty damned stunning).
 
Wait, is persona 5 on switch really only 720p? Guess that proves your point then, because I'd always sort of assumed it was about 900 or so docked. Brilliant port btw
I spoke in error; it's actually 1440x810 docked, 960x540 handheld. The Xbox One version has a 900p UI with 900p 3D visuals.

Ace Combat 7 is another game with 720p / 504p visuals but a 1080p UI docked / 720p UI handheld, again exceeding the XBO's UI resolution.
 
I know there hasn't been much estimation work done on this, but much like how some '4K' games on the stationary consoles actually veer closer to a 3200x1800 target, I can imagine that being a reasonable DLSS target, with a spatial upscale bringing it the rest of the way to 3840x2160. Performance-mode DLSS at 1800p would mean a 900p internal resolution.

1800p is 69% of 2160p and pretty much an imperceptible difference at a typical viewing distance. While I think 4K 60 DLSS3 is feasible on the console, I wonder if 1800p60 could be targeted and have some headroom left over.

I'm also feeling a little goofy and wondering about a dynamic DLSS input and output that scale based on both GPU and tensor-core load, so the input range is 720p-900p and the output is 1440p-2160p. But I can imagine this introducing too much complexity.
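As a sketch of the mode arithmetic above, using the commonly cited per-axis scale factors for the DLSS 2 quality presets (these are Nvidia's defaults; games can and do override them):

```python
# Internal render resolution implied by each DLSS 2 quality mode
# (per-axis scale factors; Nvidia's standard presets).
modes = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_res(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    """Axis-wise downscale of the output resolution."""
    return round(out_w * scale), round(out_h * scale)

# Performance mode targeting 3200x1800 renders internally at 900p:
print(internal_res(3200, 1800, modes["Performance"]))  # (1600, 900)

# And 1800p is ~69% of the pixel count of native 2160p:
print(f"{3200 * 1800 / (3840 * 2160):.0%}")  # 69%
```

That 69% figure is a pixel-count ratio, which is why the perceptual difference at couch distance is so small despite the "1800p vs 4K" framing.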
Nice.

Sorry I don’t have much to contribute, but the addition of DRS also needs to be taken into account.

Also we’ve seen comments from insiders about how they are content with the system.

So the concerns about DLSS being able to run at an acceptable level may be misguided. With comments from Matt about Nintendo Switch 2 being factored into Nvidia’s marketing plans, do you believe they’d make a system that can’t operate with the evolving DLSS?

I think not.
 
Nice.

Sorry I don’t have much to contribute, but the addition of DRS also needs to be taken into account.

Also we’ve seen comments from insiders about how they are content with the system.

So the concerns about DLSS being able to run at an acceptable level may be misguided. With comments from Matt about Nintendo Switch 2 being factored into Nvidia’s marketing plans, do you believe they’d make a system that can’t operate with the evolving DLSS?

I think not.
As far as I've seen, no one has disputed DLSS being able to run at an acceptable level. Unless you only find 4K 60 Hz to be acceptable.

Consoles (all of them) are cost-optimized devices. Yes, I can fully believe they would make a system that isn’t built with the transformer model in mind, as it was ultimately Nintendo’s choice that the Ampere architecture fits their requirements.
 
So the concerns about DLSS being able to run at an acceptable level may be misguided. With comments from Matt about Nintendo Switch 2 being factored into Nvidia’s marketing plans, do you believe they’d make a system that can’t operate with the evolving DLSS?

I think not.
I think there's only so much Nvidia can do about it. Are they going to spend the next 7+ years avoiding anything with DLSS impressive enough that it can't be done well on a slower version of their 2020 PC GPUs?
 
Honestly I'd be shocked if they call it anything at all! I don't think they've named something like that since the SuperFX era. My suspicion is that they'll just have it in the games and never draw attention to it if they can help it
Pretty much. There's no need for consumers to know about these things at all. It's not dissimilar to Xenoblade 3 or Mario Odyssey natively rendering at 640x360 / 640x720 while using temporal upscaling/jittering to present a much richer, higher-quality image.

Every modern game does upsampling/upscale of some sort, be it for visual output or for lower effects costs. Marketing this to consumers is bound to confuse folks.
I expect a lot of games will pull the Persona 5 move of having the maximum resolution UI assets while the 3D visuals dynamically scale, which actually has a big impact on perceived sharpness. Persona 5 Royal on Switch has a 720p / 1080p UI handheld / docked layered on top of a 540p / 720p scene, which is actually a brilliant move considering how much of the game is really just a visual novel with a lot of 2D assets dancing about.
Hopefully games going forward do more decoupling of the main render from the UI render. The fact that Persona 5R does that on Switch but not for the Xbox One port is ??? It benefits everyone who isn't playing the game at the standard display resolution.
 
Doesn't it depend on how Nintendo will develop Switch 2 games? If they are aiming to make games that look roughly like their Switch 1 games, it's doable to use the added power of the Switch 2 to make Switch 1-looking games run at 4K. But if they make games that look more like PS4 games in terms of fidelity and complexity, it may be hard to get many of their games running at 4K on Switch 2.
With DLSS, it wouldn't be hard at all. But I would actually prefer a focus on detail and performance at 1440p after DLSS; 4K is overrated. Thankfully, I think 4K on Switch 2 will only be a little more common than 1080p was on Switch, while their heavy hitters will use 1440p.
 
With DLSS, it wouldn't be hard at all. But I would actually prefer a focus on detail and performance at 1440p after DLSS; 4K is overrated. Thankfully, I think 4K on Switch 2 will only be a little more common than 1080p was on Switch, while their heavy hitters will use 1440p.
The thing with upscaling is, you don't necessarily have to choose. You could do a 1080p render and a 4K output rather than a 1440p render. Or even a 4K output with a 1440p render if everything lines up. They don't need to be "wasting" resources on getting to 4K when they can focus on getting enough details out of a scene on balance and then upscale it to the target.
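To put rough numbers on that trade-off (shaded-pixel counts only; this ignores the fixed cost of the upscaling pass itself):

```python
# Shaded-pixel budgets: native 1440p vs a 1080p render upscaled to a 4K output.
native_1440p = 2560 * 1440   # 3,686,400 pixels shaded per frame
render_1080p = 1920 * 1080   # 2,073,600 pixels shaded per frame
output_4k = 3840 * 2160      # 8,294,400 pixels displayed

print(render_1080p / native_1440p)  # 0.5625 -> ~56% of the native-1440p shading work
print(output_4k / render_1080p)     # 4.0    -> the upscaler fills 4x the pixels it was given
```

So a 1080p-to-4K pipeline actually shades fewer pixels than native 1440p while still filling a full 4K framebuffer, which is the sense in which you "don't have to choose".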
 
As far as I've seen, no one has disputed DLSS being able to run at an acceptable level. Unless you only find 4K 60 Hz to be acceptable.

Consoles (all of them) are cost-optimized devices. Yes, I can fully believe they would make a system that isn’t built with the transformer model in mind, as it was ultimately Nintendo’s choice that the Ampere architecture fits their requirements.
I'm not sure. If you intend your device to output 4K60, you want to have the technology to make that possible even in larger titles, like how Nintendo Switch can do 1080p60 in Splatoon 3 (DRS) or Smash Ultimate (locked). Meanwhile, developing a platform alongside a hardware and software partner like Nvidia, they'd know what Nvidia's goals and long-term technologies are, and would want to integrate them if they think it would help. I think it's premature to assume the transformer model absolutely positively cannot do 4K60 on the successor - it MIGHT; even with some napkin math based on PC frame times, with concurrency it could squeeze in without further optimisations. That's a big "could", but it's not a "won't".

From my perspective, Switch 2 seems to be a device where a balance of "looks good on a 4K TV", "plays modern games", "low power consumption" and "mass market price" was struck - 4K60 won't be impossible, but it also won't be as simple as brute-forcing it. Something has to give, like latency, scene complexity, upscaling quality or some combination of the three. But if a developer WANTS to do it, it certainly seems like it CAN.
 
I expect a lot of games will pull the Persona 5 move of having the maximum resolution UI assets while the 3D visuals dynamically scale, which actually has a big impact on perceived sharpness. Persona 5 Royal on Switch has a 720p / 1080p UI handheld / docked layered on top of a 540p / 720p scene, which is actually a brilliant move considering how much of the game is really just a visual novel with a lot of 2D assets dancing about.

The human eye is already not great at discerning the minute differences of 1440p+ resolutions at a living room viewing distance, so a game can have a 4K UI, which will look nice and sharp, overlaid on top of 1440p-1800p 3D visuals (DLSS, native, etc.), but the overall impression will be '4K', which has basically become its own buzzword for 'very sharp and pretty on an ultra-high-def screen'. And who's gonna complain? I won't.

Anyways, even a GTX 1650 can natively run a game like Hades II at 4K High at 80 FPS. That's a Turing-based 2019 GPU with 4 GB of VRAM and 896 cores at 1.9 GHz (~3.4 TFLOPs).

Especially since one of the easiest things to train AI upscalers for is UI.
 
Especially since one of the easiest things to train AI upscalers for is UI.
When in motion, and/or with dynamic and moving UIs, less so. I believe Serif means that the system should be fully able to render the UI at the full output resolution (4K in TV mode) without much trouble, and that it would be wise for developers to do so.
 