• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Why do they need everyone to move over? I can’t think of any good reason.
Why would they need to move people from the Gameboy to the Gameboy Advance?
Why would they need to move people from the DS to the 3DS?
Why would they need to move people from the Wii to the Wii U?
Like, even putting aside all the graphical power discussion for a moment, the good reason is simply because Switch sales are steadily declining because of market saturation and will eventually flatline. You know, just like every videogame console ever?
It doesn't matter how successful a console is, you need to move on from it when its life cycle is about to end.
Nintendo will stop making first party games for the Switch by the end of 2025, and even then it will only be small cross-gen titles and small remakes/remasters.
Switch, Switch Lite and Switch OLED will all be out of production by 2026 max.
Why wouldn’t it be?
For the same reason Mario Galaxy/New Super Mario Bros Wii didn't come out on the GameCube and Mario 3D World/New Super Mario Bros U didn't come out on the Wii even though they could have: because they need to push people to buy the new hardware.
 
Sony/Microsoft could barely achieve true 4K resolution on their consoles; 4K on the Switch successor will be an upscaled 1080p resolution, not true 4K as many hope.
native 4K is a pointless target to reach, it's much more efficient to use image reconstruction/image upscaling to get there and use the available resources on something else
Are there any "true" native 4k games on consoles at all? I'm not aware of any, but it's not something I pay a lot of attention to.
 
Let's just look at the PS5 and Switch 2 from the specs we have; it's simple to figure out whether this is a next gen product or a "pro" model.

PS4 uses GCN from 2011 as its GPU architecture and has 1.84TFLOPs available to it. (We will use GCN as a base.) The performance of GCN is a factor of 1.

Switch uses Maxwell v3 from 2015 and has 0.393TFLOPs available to it. The performance of Maxwell v3 is a factor of 1.4, plus mixed precision for games that use it. This means that when docked, Switch is capable of 550GFLOPs to 825GFLOPs GCN-equivalent, still a little less than half the GPU performance of PS4. This doesn't factor in far lower bandwidth, RAM amount or CPU performance, all of which sit around 30-33% of the PS4, with the GPU somewhere around 45% when completely optimized.

PS5 uses RDNA 1.X, customized in part by Sony and introduced with the PS5 in 2020, and has up to 10.2TFLOPs available to it. The performance of RDNA 1.X is a factor of 1.2, plus mixed precision (though this is limited to AI use cases; developers just don't use mixed precision at the moment for console or PC gaming, although it's used heavily in mobile and in Switch development). This ultimately means that the PS5's GPU is about 6.64 times as powerful as the PS4's, and around 3 times the PS4 Pro's.

Switch 2 uses Ampere, specifically GA10F, a custom GPU architecture that will be introduced with the Switch 2 in 2024 (hopefully), and has 3.456TFLOPs available to it. The performance of Ampere is a factor of 1.2, plus mixed precision* (this uses the tensor cores and is independent of the shader cores). Mixed precision offers 5.2TFLOPs to 6TFLOPs. It also reserves a quarter of the tensor cores for DLSS according to our estimates; much like the PS5 using FSR2, this allows the device to render the scene at a quarter of the output resolution with minimal loss to image quality, greatly boosting available GPU performance and allowing the device to output a 4K image.

When comparing these numbers to PS4 GCN, Switch 2 has 4.14TFLOPs to 7.2TFLOPs and PS5 has 12.24TFLOPs GCN-equivalent, meaning that Switch 2 will land somewhere between 34% and 40% of PS5. It should also manage RT performance, and while PS5 will use some of that 10.2TFLOPs to do FSR2, Switch 2 can freely use the remaining quarter of its tensor cores to handle DLSS. Ultimately there are other bottlenecks: the CPU is only going to be about two-thirds as fast as the PS5's, and bandwidth, relative to their respective architectures, will only be about half as much, though it could offer 10GB+ for games, which is pretty standard at the moment for current gen games.
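If anyone wants to sanity-check those conversions, here's the same back-of-the-envelope math as a quick Python sketch. The efficiency factors are the rough per-architecture estimates quoted above, not official figures:

```python
# Rough GCN-equivalent comparison using the figures quoted above.
# The per-architecture "efficiency factors" are informal estimates, not official specs.

def gcn_equivalent(tflops, factor):
    """Scale raw TFLOPs by an assumed efficiency factor relative to GCN."""
    return tflops * factor

ps4      = gcn_equivalent(1.84, 1.0)    # GCN baseline
switch   = gcn_equivalent(0.393, 1.4)   # Maxwell v3, docked, FP32 only (~0.55 TFLOPs GCN)
ps5      = gcn_equivalent(10.2, 1.2)    # RDNA 1.X (~12.24 TFLOPs GCN)
drake    = gcn_equivalent(3.456, 1.2)   # Ampere GA10F, FP32 only (~4.15 TFLOPs GCN)
drake_mp = gcn_equivalent(6.0, 1.2)     # upper end of the quoted mixed-precision range (~7.2)

print(f"PS5 vs PS4:             {ps5 / ps4:.2f}x")   # ~6.6x, in line with the figure above
print(f"Switch 2 vs PS5 (FP32): {drake / ps5:.0%}")  # ~34%, the low end of the range above
```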

Switch 2 is going to manage current gen much better than Switch did with last gen games. The jump is bigger, the technology is a lot newer, and the addition of DLSS has leveled the playing field a lot, not to mention Nvidia's edge in RT playing a factor. I'd suggest that Switch 2 when docked, if using mixed precision, will be noticeably better than the Series S, but noticeably behind current gen consoles.
"3.456TFLOPs available to it" - at what wattage?
 
Are there any "true" native 4k games on consoles at all? I'm not aware of any, but it's not something I pay a lot of attention to.

I believe most Xbox One and PS4 titles run on Xbox Series X and PS5, respectively, at native 4K.

As far as current gen goes, they're few and far between, though many indie games run at 4K, some even at 4K120 I believe.

I know The Touryst famously runs at an internal 8K resolution at 60fps on PS5, but is output to the screen at 4K.
 
It's still relevant in terms of potential upgrades for Switch 3
Indeed, it means better RT quality with virtually no performance hit. It's especially useful for devices like Drake because of their inherent limitations in this department; more mileage out of limited implementations is always a bonus.
 
Sony/Microsoft could barely achieve true 4K resolution on their consoles; 4K on the Switch successor will be an upscaled 1080p resolution, not true 4K as many hope.
True schmue. The history of real-time graphics is the history of finding an increasing number of alternate ways to accomplish things that come close to the same result but cheaper, and that's what things like DLSS2/FSR2 are.
You mean RR can be used without FG?

Unless 3.5 is meant to work on all cards? (doubts)
It's just confusing number naming they should stop bothering with. Each higher number contains all the technologies they're lumping into "DLSS" to date, regardless of whether a new feature has higher or lower requirements. 3.5 means DLSS Super Resolution + DLSS Frame Generation + DLSS Ray Reconstruction, and older cards just can't use the middle part.
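If a quick table helps untangle the bundle, here's a rough feature-by-feature summary as I understand Nvidia's support matrix (a sketch, not gospel):

```python
# "DLSS 3.5" is a bundle label, not a hardware cutoff. Rough breakdown of which
# feature needs which hardware, as I understand Nvidia's support chart.

DLSS_FEATURES = {
    "Super Resolution (the DLSS 2 upscaler)": "all RTX GPUs, 20-series and up",
    "Frame Generation":                       "RTX 40-series only",
    "Ray Reconstruction":                     "all RTX GPUs, 20-series and up",
}

for feature, hardware in DLSS_FEATURES.items():
    print(f"{feature}: {hardware}")
```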
 
Why would they need to move people from the Gameboy to the Gameboy Advance?
Why would they need to move people from the DS to the 3DS?
Why would they need to move people from the Wii to the Wii U?
Like, even putting aside all the graphical power discussion for a moment, the good reason is simply because Switch sales are steadily declining because of market saturation and will eventually flatline. You know, just like every videogame console ever?
It doesn't matter how successful a console is, you need to move on from it when its life cycle is about to end.
Nintendo will stop making first party games for the Switch by the end of 2025, and even then it will only be small cross-gen titles and small remakes/remasters.
Switch, Switch Lite and Switch OLED will all be out of production by 2026 max.

For the same reason Mario Galaxy/New Super Mario Bros Wii didn't come out on the GameCube and Mario 3D World/New Super Mario Bros U didn't come out on the Wii even though they could have: because they need to push people to buy the new hardware.

You’re wasting your time.

He still thinks this new console is no different to a Switch Lite or Switch OLED being introduced to the Switch family line.
 
Re: DLSS 3.5. Fucking Finally. To be a bit of a diva and quote myself from earlier this year (emphasis added)

When you cast rays backwards from the camera, it seems like jittering the camera should give you the extra data DLSS needs. But at multiple steps of the process this goes wrong.

The biggest one is just that RT already introduces a kind of noise that looks identical to the behavior of a jittering camera, so the RT denoiser actually prevents the data from getting to DLSS. The other is that ray tracing introduces unpredictability in which pixels are effectively being sampled.

The inevitable endgame here is to get DLSS to replace the denoiser, but mixing raster and RT effects makes this tricky.

Here is a simplified way of looking at the problem and what DLSS 3.5 does.

Imagine you're in an empty room with a single light source - a 40W light bulb. That light bulb shoots out rays of light (you know, photons), and those bounce against you and the walls over and over again, changing frequency - i.e. color - at each bounce. Then those rays hit your eyes, boom, you see an image.

Ray tracing replicates that basic idea when rendering. The problem is that real life has a shitload of rays - that 40W light bulb is pumping out ~148,000,000,000,000,000,000 (148 quintillion) photons a second. RT uses lots of clever optimizations, but the basic one is to just not use that many rays.
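(If anyone wants to check that quintillion-scale figure, it falls straight out of photon-energy math. Quick sketch below; it assumes every watt comes out as visible light, which a real bulb doesn't actually do, so treat it as order-of-magnitude only.)

```python
# Back-of-the-envelope photon count for a 40W light source.
# Assumes all 40W is emitted as ~650nm visible light; a real incandescent bulb
# loses most of its power as heat, so this is order-of-magnitude only.

PLANCK_H = 6.626e-34   # J*s
LIGHT_C  = 3.0e8       # m/s

def photons_per_second(watts, wavelength_m):
    energy_per_photon = PLANCK_H * LIGHT_C / wavelength_m   # E = h*c / lambda
    return watts / energy_per_photon

# ~1.3e20 photons/s, the same ballpark as the ~148 quintillion quoted above
print(f"{photons_per_second(40, 650e-9):.1e} photons per second")
```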

In fact, RT uses so few rays that the raw image generated looks worse than a 2001 flip phone camera taking a picture at night. It doesn't even have connected lines, just semi-random dots. The job of the denoiser is to go and connect those dots into a coherent image. You can think of a denoiser kinda like anti-aliasing on steroids - antialiasing takes jagged lines, figures out what the artistic intent of those lines was supposed to be, and smoothes it out.

The problem with anti-aliasing is that it can be blurry - you're deleting "real" pixels, and replacing them with higher res guesses. It's cleaner, but deletes real detail to get a smoother output. That's why DLSS 2 is also an anti-aliaser - DLSS 2 needs that raw, "real" pixel data to feed its AI model, with the goal of producing an image that keeps all the detail and smoothes the output.

And that's why DLSS 2 has not always interacted well with RT effects. The RT denoiser runs before DLSS 2 does, deleting useful information that DLSS would normally use to show you a higher resolution image. DLSS 3.5 now replaces the denoiser with its own upscaler, just as it replaced anti-aliasing tech before it. This has four big advantages.

The first and obvious one is that DLSS's upscaler now has higher quality information about the RT effects underneath. This should allow upscaled images using RT to look better.

The second is that, hopefully, DLSS 3.5 produces higher quality RT images even before the upscaling.

The third is that DLSS 3.5 is generic for all RT effects. Traditionally, you needed to write a separate denoiser for each RT effect. One for shadows, one for ambient occlusion, one for reflections, and so on. By using a single, adaptive tech for all of these, performance should improve in scenes that use multiple effects. This is especially good for memory bandwidth, where multiple denoisers effectively required extra buffers and passes.

And finally, this reduces development costs for engine developers. Before this, each RT effect required the engine developer to write a hand-tuned denoiser for their implementation. Short term, as long as software solutions and cross-platform engines exist, that will still be required. But in the case of Nintendo specifically, first party development won't need to write their own RT implementations, instead leveraging a massive investment from Nvidia.

There is a downside - DLSS 3.5 isn't a "free" upgrade from DLSS 2. It requires the developer to remove their existing denoisers from the rendering path and provide new inputs to the DLSS model. In the case of NVN2, this would require an API update, not just a library version update. I haven't seen an integration guide for 3.5 yet, so I have no idea how extensive the new API is, but there will be some delay before this is in the hands of Nintendo devs.
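To make the "replace the denoiser" point a bit more concrete, here's a purely illustrative sketch of the structural change in a frame. None of these function names are real DLSS or NVN2 API calls; they're stand-ins for the shape of the pipeline:

```python
# Purely illustrative: the structural change Ray Reconstruction implies.
# Every function here is a made-up placeholder, not a real DLSS/NVN2 call.

def trace_rays(scene):
    # Pretend RT pass: sparse, noisy samples (a tiny fraction of real-world photon counts).
    return {"noisy_samples": scene["rays"]}

def denoise(effect, rt):
    # Stand-in for one hand-tuned denoiser per RT effect.
    return f"{effect} denoised from {len(rt['noisy_samples'])} samples"

def frame_dlss2(scene):
    rt = trace_rays(scene)
    # DLSS 2 era: each effect gets its own denoiser first, then DLSS upscales the
    # composite and never sees the raw samples.
    effects = [denoise(e, rt) for e in ("shadows", "reflections", "ambient occlusion")]
    return f"DLSS-SR upscale of composite({effects})"

def frame_dlss35(scene):
    rt = trace_rays(scene)
    # DLSS 3.5 era: one learned pass denoises and upscales together, so it gets the
    # raw samples plus extra guide buffers (motion vectors, materials, and so on).
    return f"DLSS-RR output from {len(rt['noisy_samples'])} raw samples + guide buffers"

scene = {"rays": list(range(1_000))}
print(frame_dlss2(scene))
print(frame_dlss35(scene))
```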
 
Be wary of posts proclaiming ‘better than Series S’. Just expect PS4 with some resolution boosts since anything else is based on extreme best case scenarios which are unlikely to come to fruition.
 
sequels are never as good as the original

Hmmm... :unsure:

Empire Strikes Back
Aliens
Wrath of Khan
Godfather Part 2
Back to the Future Part 2
The Two Towers
Metroid Prime 2
Super Metroid
Majora's Mask
Uncharted 2
Super NES
PlayStation 2
Xbox 360

That was purely off the top of my head btw


[GIF: "Começem os jogos" ("let the games begin")]
 
Be wary of posts proclaiming ‘better than Series S’. Just expect PS4 with some resolution boosts since anything else is based on extreme best case scenarios which are unlikely to come to fruition.
All rumors I've read point to PS4 levels (undocked) and around PS4 Pro (docked). I would assume this is a reasonable expectation, or close at least?

PS: I don't know $*** about hardware, lol
 
Well if Nintendo and nVidia put a chip that's DLSS 3.0 ready, sure. And if you're willing to pay around 600 bucks for it.

Reminder: DLSS 3.x is only available for GeForce 40xx
Uh...Actually you're wrong...

[Image: Nvidia chart of DLSS 3.5 feature support across GeForce RTX GPUs]


Only Frame Generation is available to 40xx (and up?) cards. Ray reconstruction is available for older RTX cards, including Turing.
 
I believe most Xbox One and PS4 titles run on Xbox Series X and PS5, respectively, at native 4K.
None of them do; they all use checkerboarding. There isn't a native 4k game on that gen that I am aware of.

As far as current gen goes, they're few and far between, though many indie games run at 4K, some even at 4K120 I believe.

I know The Touryst famously runs at an internal 8K resolution at 60fps on PS5, but is output to the screen at 4K.
Going through the PS5's list of 120fps games, they appear to be mostly "native" 4k, though almost all of them use DRS and some form of TAA (and Unreal's TAA is reconstructive). But for the most part, yeah, these do "native" 4k.

I don't think native 4k is actually all it's cracked up to be, but there is this meme that DLSS's 4k output is "fake" 4k because it's temporally reconstructed, but PS4 Pro's 4k is "real" 4k despite the fact that it is temporally reconstructed.
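To put rough numbers on that, here's how many pixels each "4K" approach actually shades per frame, using typical internal resolutions (illustrative, not measurements of any specific game):

```python
# Pixels natively shaded per frame by different "4K" approaches.
# Internal resolutions are typical examples, not measurements of any specific game.

def pixels(w, h):
    return w * h

TARGET = pixels(3840, 2160)   # 4K output: ~8.3 million pixels

approaches = {
    "Native 4K":                          pixels(3840, 2160),
    "Checkerboard 4K (PS4 Pro style)":    pixels(3840, 2160) // 2,   # half the pixels each frame
    "DLSS Performance mode (1080p base)": pixels(1920, 1080),
}

for name, shaded in approaches.items():
    print(f"{name}: {shaded / 1e6:.1f}M pixels ({shaded / TARGET:.0%} of the 4K target)")
```

Both of the last two are temporal reconstruction; the main difference is how clever the reconstruction is.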
 
Hmmm... :unsure:

Empire Strikes Back
Aliens
Wrath of Khan
Godfather Part 2
Back to the Future Part 2
The Two Towers
Metroid Prime 2
Super Metroid
Majora's Mask
Uncharted 2
Super NES
PlayStation 2
Xbox 360

That was purely off the top of my head btw


[GIF: "Começem os jogos" ("let the games begin")]
I love how you listed a bunch of sequels that were better than the original followed by a gif from a movie whose sequel absolutely sucked :ROFLMAO:
 
All rumors I've read point to PS4 levels (undocked) and around PS4 Pro (docked). I would assume this is a reasonable expectation, or close at least?

PS: I don't know $*** about hardware, lol
If you don't know shirt about hardware, then yes, I would call that a reasonable assumption.

A slightly more nuanced expectation - less "powerful" than both in both modes, in terms of raw numbers, but a more modern feature set. Good ports and exclusives will look as-good-or-better, but low-effort ports may fall behind the PS4/Pro versions.
 
Uh...Actually you're wrong...

[Image: Nvidia chart of DLSS 3.5 feature support across GeForce RTX GPUs]


Only Frame Generation is available to 40xx (and up?) cards. Ray reconstruction is available for older RTX cards, including Turing.
To play devil's advocate, Nvidia's at fault for the confusing marketing.
 
I work in software development but my hardware knowledge is terrible (I know this doesn't make sense lol), but from my understanding, the Switch 2 is very likely going to be less capable than the Ally but far more efficient, correct?

I just messed around with my buddy's Ally (installing Baldur's Gate 3 mods for our current 4-player co-op run), and while it's impressive, this thing just rips through its battery and gets quite hot. It does manage to push out some impressive visuals and hits 120fps on some games. I'd probably choose it over the Steam Deck if I had to pick one, and I like it, but not enough to drop $600 on it. I am curious how the ARM CPU will stack up while using far less electricity, and the same goes for Drake's GPU.

Not sure how much credit the Switch deserves, but I do think it earned some, because I am loving all the options in the current high-end handheld gaming space.
 
[Image: Nvidia chart of DLSS 3.5 games and apps coming this fall]

So with DLSS 3.5 (sans frame generation), how likely is it that we get Portal RTX on the Switch NG?

Nvidia has a lot to gain if the Next Gen Switch is successful, so maybe they create a custom version of DLSS 3.0 but call it NLSS, so we can see Portal run in its full RTX glory with FG at 4K 30fps...
 
If you don't know shirt about hardware, then yes, I would call that a reasonable assumption.

A slightly more nuanced expectation - less "powerful" than both in both modes, in terms of raw numbers, but a more modern feature set. Good ports and exclusives will look as-good-or-better, but low-effort ports may fall behind the PS4/Pro versions.
If this doesn't reach better-than-PS4 Pro specs, then going by the outcry from developers over the Series S, it should be interesting to see.
 
Uh...Actually you're wrong...

[Image: Nvidia chart of DLSS 3.5 feature support across GeForce RTX GPUs]


Only Frame Generation is available to 40xx (and up?) cards. Ray reconstruction is available for older RTX cards, including Turing.
Yep, I'm aware of that now ^^

 
If you don't know shirt about hardware, then yes, I would call that a reasonable assumption.

A slightly more nuanced expectation - less "powerful" than both in both modes, in terms of raw numbers, but a more modern feature set. Good ports and exclusives will look as-good-or-better, but low-effort ports may fall behind the PS4/Pro versions.
Sorry to ask and only if you don't mind answering, could you ELI5 for us non-tech people how modern feature sets would offset less raw power to produce on-par or better results, and would you be able to take a shot from that at how the scale and presentation of the next Zelda game may compare to Tears of the Kingdom?
 
If this doesn't reach better-than-PS4 Pro specs, then going by the outcry from developers over the Series S, it should be interesting to see.
Different situations.

Devs can't delay or skip Series S without doing the same for Series X. Apparently they can't cut features present on other platforms either.

NG Switch will be just like the Switch instead.

Not to mention that in this specific case (BG3) their complaint is total RAM, which is the one spec where the NG Switch might end up ahead of the Series S.
 
Sorry to ask and only if you don't mind answering, could you ELI5 for us non-tech people how modern feature sets would offset less raw power to produce on-par or better results, and would you be able to take a shot from that at how the scale and presentation of the next Zelda game may compare to Tears of the Kingdom?
newer architectures can fix problems with older architectures (like adding more on-die cache) and can have changes that allow them to do things concurrently with other work, speeding up the pipeline.

for example, if Switch had to do ray tracing (it technically can), it would have to do some important steps on the same cores it uses for the non-ray tracing tasks. in Drake (let's assume it's exactly the same TFLOPs as Switch), those ray tracing tasks are done on separate pieces of silicon, allowing the main set of cores to do all the non-ray tracing tasks at the same time. so while Switch might have a low frame rate, Drake will have a higher frame rate

this image is a visual version of that paragraph

[Image: Nvidia breakdown of a single Metro Exodus DXR frame on GeForce RTX vs GTX hardware]
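and a toy numbers version of the same idea, with made-up millisecond figures just to show the shape of it:

```python
# toy frame-time model of the paragraph above: without dedicated RT hardware the
# ray tracing work shares the shader cores, so it adds to the frame time; with
# dedicated RT/tensor silicon much of it overlaps instead.
# all millisecond figures here are invented purely for illustration.

RASTER_MS = 12.0   # shading / raster work on the main cores
RT_MS     = 10.0   # ray traversal + shading of RT effects

def frame_time_shared_cores():
    # no RT cores: the RT work competes with everything else on the same silicon
    return RASTER_MS + RT_MS

def frame_time_dedicated_cores(overlap=0.8):
    # RT cores: most of the RT work runs alongside the raster work
    return RASTER_MS + RT_MS * (1 - overlap)

for label, ms in [("shared cores", frame_time_shared_cores()),
                  ("dedicated cores", frame_time_dedicated_cores())]:
    print(f"{label}: {ms:.0f} ms/frame -> ~{1000 / ms:.0f} fps")
```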
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.