
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

I'm leaning towards a $399 launch price for the base model, and a $450 upgraded model with larger on-board storage.

Switch 2 Lite at $329 for Holiday 2025.
 
Points at PlayStation's numbered consoles, all the major phones from Apple, Samsung, and Google... or operating systems like Windows.

We have 5 PlayStation consoles with a numbering system. Sure, it's boring, but everyone gets the point: each number marks a successor to the last.
I'm NOT saying numbering doesn't work. I'm saying it doesn't convey what enthusiasts think it does.

An enthusiast will see the 2 and think "it will have exclusive games and higher resolutions, it should be able to run PS4 AAA games, and it will probably have faster loading too", etc.

The general public will see the 2 and think "it's new, so it should be better in some way". All the selling points need to be properly conveyed by the marketing.

People are really concerned about the name because of the Wii U mess, but... the name wasn't the cause of it. "Wii U" doesn't suggest at all that it is a tablet accessory; anyone who thought so got that impression from the terrible marketing campaign.

"Wii 2" would have cleared the confusion, but that's thinking backwards. The marketing should be clearing any confusion/doubt about the console, not creating them.
 
The price is such a dangerous thing for this new device. Ultimately one of the reasons the Switch is successful is because it is seen as being affordable and a comparatively inexpensive product.

If Nintendo make the new console much more expensive because the last one was so successful, then they run the risk of forgetting one of the reasons it was so successful in the first place.

We’ve seen this mistake by Nintendo before with the launch of the 3DS. They thought that because the DS was so successful they could charge a premium price and it ended up biting them on the arse.
I don't think this is true. Consumers are willing to pay what they deem worth the price. Phones wouldn't have ballooned to over $1000 if price were such a huge factor. There are still limits, of course, depending on what it is, but my point is there's no reason for Nintendo to make a cheap product for the sake of it.

Most people simply weren't convinced the 3DS was worth its initial launch price. The Switch was not at all cheap; there were complaints about the price at launch. The Switch already set a precedent for a pricier dedicated gaming handheld.

At the time, the Vita was the most expensive handheld at $249. The Switch eclipsed that. Also, the OLED Switch is outselling the regular model despite being more expensive, and it is FAR outselling the even cheaper Switch Lite.
 
When Nvidia showed the Tegra X1 (the Switch GPU) at CES 2015 (basically its big blow-out), they demoed it with the Unreal Engine Elemental demo:



So The Matrix Awakens Unreal Engine 5 demo for the Switch 2 chipset would make a lot of sense, as Nvidia and Epic already have a history of using an Unreal Engine demo to show off the newest Tegra tech.
 
[image: jaodk_1024px.png]


Showing up with something like this to show off some demos makes a lot of sense if Nintendo is trying to keep some key details a secret. We don't know what Nintendo was telling these developers along with showing them the demos. For example, Nintendo could have shown them the demos, explained what to expect in terms of performance, and said it would still be very similar to the Switch in form factor. That is enough for most developers to start preparing games and ports even without having dev kits.
Good point. If anything, that bodes well for the gimmick(s). If the day comes when somebody sees the real thing in action and says there is no gimmick, then the system will just be more powerful (which is a win, as it opens up a whole new set of experiences, but I was hoping to be surprised yet again by something very fresh).
 
Was this already shared?

VGC reports Switch 2 was demoed with "The Matrix Awakens Unreal Engine 5 tech demo" featuring DLSS, advanced ray tracing, and "visuals comparable to Sony and Microsoft's current-gen consoles"




Also found this:

Universo Nintendo says the final Switch 2 hardware will feature 12 GB of RAM and ray tracing; the Matrix demo used DLSS 3.1, not 3.5 as initially reported by VGC

And

Nintendo rumored to be collaborating with Google on a standalone VR Headset
 
Okay I think this is how it goes:

DLSS 2.x - This is Super Resolution only. The last update was DLSS 2.5.1, released in December 2022.
DLSS 3.x - When DLSS 3.0 was announced, IIRC it was only Frame Generation, which was exclusive to the RTX 40 series. Later, NVIDIA released DLSS 3.1.1 in February 2023, which also encompassed Super Resolution and is compatible with all RTX cards. It has continued to get updates up to DLSS 3.5, which now also includes Ray Reconstruction, also available on all RTX cards.
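To make that timeline concrete, here's a minimal sketch of the version-to-feature mapping as laid out above (plain Python, purely illustrative; the pairings come from this post, not from an official Nvidia changelog):

```python
# Which headline features each DLSS SDK line is associated with, per the
# timeline above. Illustrative only -- not an official Nvidia changelog.
DLSS_MILESTONES = {
    "2.x": ["Super Resolution"],                      # upscaling only
    "3.0": ["Frame Generation"],                      # RTX 40 series only
    "3.1": ["Super Resolution", "Frame Generation"],  # SR folded in, all RTX cards
    "3.5": ["Super Resolution", "Frame Generation",
            "Ray Reconstruction"],                    # RR on all RTX cards
}

for version, features in DLSS_MILESTONES.items():
    print(f"DLSS {version}: {', '.join(features)}")
```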

Edit: I forgot what point I was making here
 
A few more thoughts on the UE5 Matrix demo:

The more I think about this, the more it makes sense as a demo for Nintendo to use. Obviously they want to show off UE5 itself to show that third party games can run well on the hardware, but there are a few reasons the Matrix demo itself works well. The first one is obvious; it was originally used to show off the PS5 and XBSX/S, so it's a statement of intent from Nintendo that the new Switch hardware should be considered in a similar league to those. Secondly, it really plays to Switch 2's strengths.

The new hardware won't match the PS5 or XBSX (or even the XBSS) in raw horsepower on either the CPU or GPU, but it has a much better upscaling solution (DLSS), much better relative RT performance than the RDNA architecture, and potentially much better RT denoising integrated into their upscaling solution if they're using DLSS-RR. So, if you want a demo that plays to Switch 2 strengths, you want to go heavy on the RT and the upscaling. The UE5 Matrix demo is pretty much the most RT-heavy thing on consoles, and I think it's the only UE5 software on consoles which actually uses hardware RT Lumen (Immortals of Aveum might, but either way it's not a great showcase). Switch 2's better RT hardware means it's, relatively, taking a much smaller hit to run Lumen with full hardware RT, doubly so if it's using DLSS-RR and can get away with lower ray counts. Then, on top of that, DLSS itself will produce much better results than Epic's TSR. Digital Foundry noted that XBSS had "very chunky artefacts" on the demo (at approx 4x scaling), and you can push DLSS pretty hard without it getting "very chunky".

Put Switch 2 next to the PS5 or XBSX in a pure native res benchmark with no RT and it will obviously look a lot worse. Crank up the RT (which PS5 and XBSX are relatively bad at, and Switch 2 is relatively good at) and temporal upscaling (again, benefitting Switch 2), and you can close the gap a lot. I have no doubt the PS5 version of the demo looks better side-by-side than Switch 2's, but if they can get results which are anywhere near the ballpark of the far more power-hungry home consoles, then that's a win.

A lot of people think that because Switch 2 is a hybrid it's necessarily in Nintendo's interests to hold back on ray tracing (or even disable it in handheld mode, as I've heard suggested a few times), but the reality is the opposite. Nintendo now has the best RT hardware in the console space, and will do for the rest of the generation, and it's in their interest to push that as hard as possible, particularly when selling the hardware to devs. Relative to its performance in purely rasterised graphics, RT is much cheaper on Switch 2, so the harder you're pushing RT, the better the Switch 2 looks compared to the competition.

As a point of reference, let's compare the Nvidia RTX 3070 (about 20 TFLOPS, same Ampere arch as Switch 2) and the AMD RX 6800 XT (about 20 TFLOPS, full RDNA2, which is better in some ways than the architectures used in PS5 and XBSS/X). The RTX 3070 launched at $499, and the RX 6800 XT launched at $649 about a month later. In benchmarks of Cyberpunk 2077 without RT, the 6800 XT beats the 3070 by about 25%, which is about what you'd expect from the price difference.

However, in the game's recently added RT Overdrive mode (which makes it the most RT-intensive game around by some margin), the results are very different. At native 1080p, the RX 6800 XT hits an average of 7.9 fps, where the RTX 3070 hits 21.3 fps. Now, obviously neither of these is playable (and Switch 2 definitely won't be running this), but by moving from a pure rasterisation test to an extremely heavy RT test, we move from RDNA2 beating Ampere by about 25% at a similar TFLOPS count to Ampere being 2.7x faster than RDNA2. Furthermore, these tests are without upscaling. Cyberpunk's RT Overdrive mode looks far better with the new DLSS Ray Reconstruction, so if we were comparing Ampere with DLSS-RR vs RDNA2 with traditional denoising and FSR2, the win for Ampere becomes even bigger.
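To sanity-check that swing, you can run the quoted numbers directly (a quick sketch; the fps figures and the 25% raster gap are the ones cited above):

```python
# Relative performance swing between pure raster and heavy RT, using the
# Cyberpunk 2077 figures quoted above (native 1080p, RT Overdrive mode).
raster_advantage_rdna2 = 1.25   # 6800 XT ahead of 3070 by ~25% without RT

fps_6800xt = 7.9                # RDNA2, RT Overdrive
fps_3070 = 21.3                 # Ampere, RT Overdrive
rt_advantage_ampere = fps_3070 / fps_6800xt

print(f"Ampere advantage in RT Overdrive: {rt_advantage_ampere:.1f}x")  # ~2.7x
# Total swing from "RDNA2 +25%" to "Ampere 2.7x ahead" is roughly:
print(f"Overall swing: {rt_advantage_ampere * raster_advantage_rdna2:.1f}x")  # ~3.4x
```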
 
The price is such a dangerous thing for this new device. Ultimately one of the reasons the Switch is successful is because it is seen as being affordable and a comparatively inexpensive product.

If Nintendo make the new console much more expensive because the last one was so successful, then they run the risk of forgetting one of the reasons it was so successful in the first place.

We’ve seen this mistake by Nintendo before with the launch of the 3DS. They thought that because the DS was so successful they could charge a premium price and it ended up biting them on the arse.
That's why I was wondering if Nintendo will subsidize the sale of the new Switch to some extent, i.e. selling it at a loss solely to build a giant userbase for the next Switch and then earning money on subscription services and software sales. It seems like a good strategy, but it's not the usual Nintendo strategy of selling hardware for profit, which they've always aimed for previously.
 
Okay I think this is how it goes:

DLSS 2.x - This is Super Resolution only. The last update was DLSS 2.5.1, released in December 2022.
DLSS 3.x - When DLSS 3.0 was announced, IIRC it was only Frame Generation, which was exclusive to the RTX 40 series. Later, NVIDIA released DLSS 3.1.1 in February 2023, which also encompassed Super Resolution and is compatible with all RTX cards. It has continued to get updates up to DLSS 3.5, which now also includes Ray Reconstruction, also available on all RTX cards.
DLSS 3.x always encompassed Super Resolution. DLSS 2 uses the 3.1.x version of Super Resolution.

Don't pay attention to the numbers; they don't tell you what you think they're telling you.
 
Okay I think this is how it goes:

DLSS 2.x - This is Super Resolution only. The last update was DLSS 2.5.1, released in December 2022.
DLSS 3.x - When DLSS 3.0 was announced, IIRC it was only Frame Generation, which was exclusive to the RTX 40 series. Later, NVIDIA released DLSS 3.1.1 in February 2023, which also encompassed Super Resolution and is compatible with all RTX cards. It has continued to get updates up to DLSS 3.5, which now also includes Ray Reconstruction, also available on all RTX cards.

This:
A Quick Deconfuser on DLSS 3.5 - it's not your fault you're confused. It's Nvidia's.

DLSS is a tool for using AI to improve the visual quality of video games. DLSS 2, 3, and 3.5 each introduced major new features. Because of that, gamers tend to use the version number to refer to the feature it added.

But using a given version doesn't mean you use every feature in it.

There are four features we care about in DLSS.

DLSS Upscaling - this was the only real feature in DLSS 1, and DLSS 2 radically changed how it worked, vastly improving it. So most of the time when people say "DLSS" they mean "DLSS 2 Upscaling". It lets you render a game at a low resolution, where it runs at a higher frame rate, and then keep that good frame rate while upscaling the image to a higher resolution.

DLAA - a high-quality anti-aliasing technique. DLSS Upscaling always includes AA; DLAA just lets you use the anti-aliasing by itself.

DLSS-G is the official name of DLSS Frame Generation. This uses AI to make new frames between the frames the game draws directly, increasing smoothness. It was introduced in DLSS 3 so it is sometimes called DLSS 3, which is confusing, and we're all trying to stop. It is very cool, but it has lots of non-obvious limitations.

DLSS-RR - short for DLSS Ray Reconstruction. This replaces part of the ray tracing pipeline with the same AI that Upscaling uses. It can vastly increase RT quality, and sometimes increases RT performance too. It was introduced in DLSS 3.5, so it is sometimes called DLSS 3.5, but I think at this point you can see how that is super fucking confusing.

These DLSS features can be combined in lots of different combinations. Just because you have DLSS 3.5 in your game, doesn't mean you are using every feature.

Just to add to the complications: every version of DLSS has brought improvements to all of these features, not just added new ones. So in general, you want the latest version, even if you don't use any of the new features.

TL;DR: Just because the New Switch has the latest DLSS version doesn't mean every feature is on in every game, or that they will all work on the new hardware.

DLSS upscaling and DLAA will definitely work on the New Switch.

DLSS-G probably will not, though there are some smart people who think otherwise; even those folks would recognize there are serious caveats.

DLSS-RR probably will, but the tech is very early, so there isn't the kind of data out there to be super sure. Yet.
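Or, restating the TL;DR as code: the SDK version a game ships with and the features it actually turns on are independent axes. A hypothetical sketch (the flag names here are invented for illustration; they are not the real NGX/Streamline API):

```python
from dataclasses import dataclass

# Hypothetical per-game DLSS configuration. Field names are invented for
# illustration and are NOT the real NGX/Streamline API.
@dataclass
class DlssConfig:
    sdk_version: str                  # which DLSS DLL the game ships
    upscaling: bool = False           # Super Resolution
    dlaa: bool = False                # anti-aliasing only, no upscale
    frame_gen: bool = False           # DLSS-G (hardware-gated on PC)
    ray_reconstruction: bool = False  # DLSS-RR

# A game can ship the 3.5 SDK and still enable only upscaling:
game = DlssConfig(sdk_version="3.5", upscaling=True)
print(game)
```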
 
Question about DLSS:

Can you just create lower-res assets and use DLSS to bump them to higher resolutions?

Or do you need to create 4K assets first, so the algorithm knows what to reference when doing the up-res?

I am wondering if DLSS helps reduce the development time spent creating tons of super-high-resolution assets, or if creating them is still technically necessary to train the model.
 
Question about DLSS:

Can you just create lower-res assets and use DLSS to bump them to higher resolutions?

Or do you need to create 4K assets first, so the algorithm knows what to reference when doing the up-res?

I am wondering if DLSS helps reduce the development time spent creating tons of super-high-resolution assets, or if creating them is still technically necessary to train the model.
The best practice is to tune your assets for your output resolution. DLSS doesn't actually affect the assets; it samples what's given. So if the texture resolution is low, you get a blurrier texture in the output, because there is no data there. A high-resolution texture gives DLSS data that actually exists to sample.
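To be clear about what DLSS does change: it reduces how many pixels the GPU shades each frame, not what's in the textures those pixels sample. A quick sketch of the commonly cited internal-resolution ratios (treat the exact numbers as approximate):

```python
# Commonly cited DLSS internal-resolution ratios, per axis (approximate).
DLSS_MODES = {
    "Quality":           1 / 1.5,  # ~67% per axis
    "Balanced":          0.58,     # ~58%
    "Performance":       0.50,     # 50%
    "Ultra Performance": 1 / 3.0,  # ~33%
}

target_w, target_h = 3840, 2160  # 4K output

for mode, ratio in DLSS_MODES.items():
    w, h = round(target_w * ratio), round(target_h * ratio)
    frac = (w * h) / (target_w * target_h)
    print(f"{mode:17s}: renders {w}x{h} (~{frac:.0%} of output pixels shaded)")
```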
 
Question about DLSS:

Can you just create lower-res assets and use DLSS to bump them to higher resolutions?

Or do you need to create 4K assets first, so the algorithm knows what to reference when doing the up-res?

I am wondering if DLSS helps reduce the development time spent creating tons of super-high-resolution assets, or if creating them is still technically necessary to train the model.
IIRC DLSS does not by default improve texture resolution, if that's what you're referring to here.

I think there are some other methods for upscaling textures with AI using tensor cores, probably experimental ones. I vaguely remember an Nvidia demo where Morrowind was redone this way (just as a tech demo). So there may be something bespoke they can do about textures, but as I understand it, it's not a current DLSS feature.
 
Question about DLSS:

Can you just create lower-res assets and use DLSS to bump them to higher resolutions?

Or do you need to create 4K assets first, so the algorithm knows what to reference when doing the up-res?

I am wondering if DLSS helps reduce the development time spent creating tons of super-high-resolution assets, or if creating them is still technically necessary to train the model.
You don't NEED to do either... but the recommendation is, if your target output is 4K, you should use 4K assets. You just don't HAVE to. There are other solutions that could be more effective, like increased compression on textures.
 
The best practice is to tune your assets for your output resolution. DLSS doesn't actually affect the assets; it samples what's given. So if the texture resolution is low, you get a blurrier texture in the output, because there is no data there. A high-resolution texture gives DLSS data that actually exists to sample.

IIRC DLSS does not by default improve texture resolution, if that's what you're referring to here.

I think there are some other methods for upscaling textures with AI using tensor cores, probably experimental ones. I vaguely remember an Nvidia demo where Morrowind was redone this way (just as a tech demo). So there may be something bespoke they can do about textures, but as I understand it, it's not a current DLSS feature.

You don't NEED to do either... but the recommendation is, if your target output is 4K, you should use 4K assets. You just don't HAVE to. There are other solutions that could be more effective, like increased compression on textures.

Thank you all, very helpful.
 
[image: jaodk_1024px.png]


Showing up with something like this to show off some demos makes a lot of sense if Nintendo is trying to keep some key details a secret. We don't know what Nintendo was telling these developers along with showing them the demos. For example, Nintendo could have shown them the demos, explained what to expect in terms of performance, and said it would still be very similar to the Switch in form factor. That is enough for most developers to start preparing games and ports even without having dev kits.
My one big worry is that it would be good to know what the handheld performance will look like, as that's the mode I'll mainly be playing in.
 
Am I correct in assuming your hesitation is at least in part because you're already expecting to do pre-Direct and post-Direct episodes soon, so it would be better to just add the Switch 2 discussion to one of those?

Probably more of what he said a few times already: anything that comes out right now is very dangerous for any kind of source (the smaller the pool of devs who know, the higher the chance Nintendo can find out who leaked). He just said a few posts ago that his contact didn't like that Eurogamer is publishing this info right now.
 
What improvements will Nintendo be able to make with 12 GB of RAM instead of 4 GB? I get better framerates and load times, but what else will that big increase in RAM enable?
High-resolution textures, less loading, higher-resolution buffers, that kind of thing.
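To put rough numbers on "higher resolution buffers": a full-screen render target scales linearly with pixel count, so 4K targets cost four times what 1080p ones do. A back-of-the-envelope sketch (RGBA16F chosen as a representative format, not a claim about what Switch 2 games will use):

```python
# Rough memory cost of one full-screen render target:
# bytes = width * height * bytes_per_pixel.
def target_mib(width, height, bytes_per_pixel=8):  # RGBA16F = 8 B/pixel
    return width * height * bytes_per_pixel / 2**20

for name, (w, h) in {"720p": (1280, 720),
                     "1080p": (1920, 1080),
                     "4K": (3840, 2160)}.items():
    print(f"{name:5s}: {target_mib(w, h):5.1f} MiB per RGBA16F target")

# A deferred G-buffer uses several such targets, and texture pools scale
# similarly, which is where a jump from 4 GB to 12 GB gets spent.
```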
 
I gotta say, with the info given today, this thread has food for weeks!
I was getting a little full of the endless cycle of trying to name this thing.
 
2D game from Shin'en: "Native 4K Challenge accepted!"

Pffft. Shin’en will pull a Touryst on PS5, and give us dat 8K god tier goodness.

There's no reason to do native anything when it has DLSS baked in (unless the game doesn't support it, obviously).

Sure there is. It ultimately depends on what the developers are going for. That, and just because they can.




This Raymond Tracing fella sounds pretty important to have on board

Sounds like Dick Tracy’s brother, or something.


[gif: dicktracy-fight.gif]


Yeah. I’m onboard with this. :p
 
Nintendo Switch Advance news? On a Thursday in September. What a time to be alive.
 
I still believe in 16 GB of RAM, because the Nvidia tweet I shared, written this past January, post-tapeout, was from the horse's mouth. There are other reasons, but I prefer to lend an official account more credence.
Nintendo Switch was able to have very impressive games with just 4 GB of RAM; imagine what they could do with 12 GB. The next 3D Mario will be amazing.
 
Keep the floodgates going, I need 10 new pages spawned in every day lol. Can't wait till we get a slew of rumoured ports 0.0
 
A few more thoughts on the UE5 Matrix demo:

The more I think about this, the more it makes sense as a demo for Nintendo to use. Obviously they want to show off UE5 itself to show that third party games can run well on the hardware, but there are a few reasons the Matrix demo itself works well. The first one is obvious; it was originally used to show off the PS5 and XBSX/S, so it's a statement of intent from Nintendo that the new Switch hardware should be considered in a similar league to those. Secondly, it really plays to Switch 2's strengths.

The new hardware won't match the PS5 or XBSX (or even the XBSS) in raw horsepower on either the CPU or GPU, but it has a much better upscaling solution (DLSS), much better relative RT performance than the RDNA architecture, and potentially much better RT denoising integrated into their upscaling solution if they're using DLSS-RR. So, if you want a demo that plays to Switch 2 strengths, you want to go heavy on the RT and the upscaling. The UE5 Matrix demo is pretty much the most RT-heavy thing on consoles, and I think it's the only UE5 software on consoles which actually uses hardware RT Lumen (Immortals of Aveum might, but either way it's not a great showcase). Switch 2's better RT hardware means it's, relatively, taking a much smaller hit to run Lumen with full hardware RT, doubly so if it's using DLSS-RR and can get away with lower ray counts. Then, on top of that, DLSS itself will produce much better results than Epic's TSR. Digital Foundry noted that XBSS had "very chunky artefacts" on the demo (at approx 4x scaling), and you can push DLSS pretty hard without it getting "very chunky".

Put Switch 2 next to the PS5 or XBSX in a pure native res benchmark with no RT and it will obviously look a lot worse. Crank up the RT (which PS5 and XBSX are relatively bad at, and Switch 2 is relatively good at) and temporal upscaling (again, benefitting Switch 2), and you can close the gap a lot. I have no doubt the PS5 version of the demo looks better side-by-side than Switch 2's, but if they can get results which are anywhere near the ballpark of the far more power-hungry home consoles, then that's a win.

A lot of people think that because Switch 2 is a hybrid it's necessarily in Nintendo's interests to hold back on ray tracing (or even disable it in handheld mode, as I've heard suggested a few times), but the reality is the opposite. Nintendo now has the best RT hardware in the console space, and will do for the rest of the generation, and it's in their interest to push that as hard as possible, particularly when selling the hardware to devs. Relative to its performance in purely rasterised graphics, RT is much cheaper on Switch 2, so the harder you're pushing RT, the better the Switch 2 looks compared to the competition.

As a point of reference, let's compare the Nvidia RTX 3070 (about 20 TFLOPS, same Ampere arch as Switch 2) and the AMD RX 6800 XT (about 20 TFLOPS, full RDNA2, which is better in some ways than the architectures used in PS5 and XBSS/X). The RTX 3070 launched at $499, and the RX 6800 XT launched at $649 about a month later. In benchmarks of Cyberpunk 2077 without RT, the 6800 XT beats the 3070 by about 25%, which is about what you'd expect from the price difference.

However, in the game's recently added RT Overdrive mode (which makes it the most RT-intensive game around by some margin), the results are very different. At native 1080p, the RX 6800 XT hits an average of 7.9 fps, where the RTX 3070 hits 21.3 fps. Now, obviously neither of these is playable (and Switch 2 definitely won't be running this), but by moving from a pure rasterisation test to an extremely heavy RT test, we move from RDNA2 beating Ampere by about 25% at a similar TFLOPS count to Ampere being 2.7x faster than RDNA2. Furthermore, these tests are without upscaling. Cyberpunk's RT Overdrive mode looks far better with the new DLSS Ray Reconstruction, so if we were comparing Ampere with DLSS-RR vs RDNA2 with traditional denoising and FSR2, the win for Ampere becomes even bigger.
Yep, pretty much my general argument on the comparison.

In the absolute best case (assuming your calculated clocks), a docked Switch 2 in pure raster performance may be able to match up to the Series S, assuming:

  1. The scene is not CPU-bound and is purely GPU raster bound.
  2. The CPU L3 cache access/SysLC helps to a good extent, and Ampere mixed precision takes it from ~3.3 TFLOPS FP32 to around 5-5.5 TFLOPS mixed (see the sketch after this list), leaving some headroom on the tensor cores for DLSS and RR to function.
  3. The Series S is not memory-limited in the scene, because Switch 2 is very much looking like it will have 12 GB of memory in the retail unit, allowing at least 10 GB for devs versus the 8 on Series S.
However, that is an absolute best case, and even then there is some speculation about how much cache, efficiency, and console-specific optimization can help the architecture versus RDNA2.
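For reference, the FP32 figure in point 2 falls straight out of the standard peak-FLOPS formula (2 ops per FMA x shader cores x clock). The core count and clock below are the commonly rumoured T239 values, not confirmed specs:

```python
# Peak FLOPS = 2 (ops per FMA) * shader cores * clock.
# Core count and clock are rumoured T239 figures, NOT confirmed specs.
cores = 1536        # rumoured CUDA core count
clock_ghz = 1.1     # hypothetical docked clock

fp32_tflops = 2 * cores * clock_ghz / 1000
print(f"FP32: {fp32_tflops:.2f} TFLOPS")  # ~3.4, in line with the ~3.3 above

# Ampere can co-issue FP16 work, so a mixed FP32/FP16 workload lands
# somewhere between 1x and 2x the FP32 rate -- the 5-5.5 TFLOPS figure
# above assumes only part of the workload drops to FP16.
print(f"Mixed-precision ceiling: {2 * fp32_tflops:.1f} TFLOPS")
```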

However, when trying to compare ray tracing, it becomes a lot easier for Switch 2 to keep up or pull ahead, like you mentioned.

Heck, looking at the Path Tracing Efficiency mod for Cyberpunk 2077 on a desktop RTX 3050, it can hit 1080p at a stable 30 fps using DLSS Performance mode.

Add in:
  • DLSS Ray Reconstruction to help claw back more detail
  • A proper fine-tuning pass for the final implementation of RT Overdrive, or more specifically a console-tuned version for Switch 2 (object culling, path tracing distance, etc.)
  • Maybe a separate percentage scale for the path tracer's internal resolution, rather than relying on a 1:1 match with the screen resolution (e.g. 1440p Performance Mode output, but the path tracing for GI/reflections done at 480p; see the sketch below)
and it probably could run similarly on Switch 2, with modes like:
  • Quality (4K RT at 30)
  • Performance (1440p-4K at 60)
  • Performance RT (1080p-1440p at 60 with reduced RT)
  • Overdrive (1080p output at 30 with path tracing)
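The third "add in" point amounts to decoupling the ray budget from the output resolution. Here's a rough sketch of the pixel-count savings (the resolutions are the hypothetical examples from above, and ray cost is assumed to scale roughly with pixels traced):

```python
# Ray cost scales roughly with pixels traced per frame, so decoupling the
# path tracer's internal resolution from the screen resolution saves a lot.
# Resolutions below are the hypothetical examples from the post above.
def pixels(w, h):
    return w * h

screen_1440p = pixels(2560, 1440)   # output resolution
pt_internal = pixels(854, 480)      # path tracing done at 480p

print(f"Rays traced vs tracing at full 1440p: "
      f"{pt_internal / screen_1440p:.0%}")   # ~11% of the full-res budget
```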

RT and upscaling are where Switch 2's legs will stretch, and shifting the content pipeline to take advantage of how efficiently it can perform those effects would help a lot.

I would not be surprised if we see a lot of games use scalable RTGI on Switch 2, for example; we already have a very scalable RTGI solution on the OG Switch with SVOGI in Crysis Remastered.
 
I still believe in 16 GB of RAM, because the Nvidia tweet I shared, written this past January, post-tapeout, was from the horse's mouth. There are other reasons, but I prefer to lend an official account more credence.
12 GB being the consumer product and 16 GB being the dev kit makes a lot of sense; 12 would still likely be really impressive!
 

