Giancarlo
Nintendo connoisseur
true
I guess because the form factor was important, now we know it's probably going to be another hybrid and it's not that relevant
No version of the RTX denoising, some improvements to the upscaler.
What's the difference between 3.1 and 3.5? Just a lesser version of the RTX denoising right?
I'm NOT saying numbering doesn't work. I'm saying it doesn't convey what enthusiasts think it does.
Points at PlayStation numbering consoles, and all major phones from Apple, Samsung, and Google... or OS systems like Windows
We have 5 PlayStation consoles with a numbering system. Sure it's boring, but everyone gets the point: each number is the successor to the last.
I don’t think this is true. Consumers are willing to pay what they deem worth the price. Phones wouldn’t have ballooned to over $1000 if price was such a huge factor. There are still limits, of course, depending on what it is, but my point is there’s no reason for Nintendo to make a cheap product for the sake of it.
The price is such a dangerous thing for this new device. Ultimately one of the reasons the Switch is successful is because it is seen as being affordable and a comparatively inexpensive product.
If Nintendo make the new console much more expensive because the last one was so successful then they run the risk of forgetting one of those reasons it was so successful in the first place.
We’ve seen this mistake by Nintendo before with the launch of the 3DS. They thought that because the DS was so successful they could charge a premium price and it ended up biting them on the arse.
Then it is time to do a podcast with MVG
I have a few details about how it ran on the target spec hardware -- but am reluctant to share such details right now.
Good point. If anything that bodes well for the gimmick(s). The day somebody sees the true thing in action and says there is none, then the system will just be more powerful (which is a win as it opens a whole new set of experiences, but was hoping to be yet again surprised by something very fresh).
Showing up with something like this to show off some demos makes a lot of sense if Nintendo is trying to keep some key details a secret. We don't know what Nintendo was telling these developers along with showing them the demos. For example, Nintendo could have showed them the demos, articulated this is what to expect in terms of performance and that it would still be very similar to Switch in form factor. This is enough for most developers to start preparing games and ports even without having dev kits.
Was this already shared?
VGC reports Switch 2 was demoed with "The Matrix Awakens Unreal Engine 5 tech demo" featuring DLSS, advanced raytracing and "visuals comparable to Sony and Microsoft’s current-gen consoles"
[VGC] Sources Claim Switch Successor Runs Matrix UE5 Tech Demo With Comparable Graphics to PS5/XSX Rumor - Nintendo
Another VGC source claimed that Nintendo showcased Epic's impressive The Matrix Awakens Unreal Engine 5 tech demo – originally released to showcase the power of PlayStation 5 and Xbox Series X in 2021 – running on target specs for its next console. The demo is said to have been running...
www.resetera.com
Maybe.
Then it is time to do a podcast with MVG
That's why I was wondering if Nintendo will subsidize the sale of the new Switch to some extent, i.e. selling it at a loss solely to build a giant userbase on the next Switch and then earn money on subscription services and software sales. Seems a good strategy, but not the usual Nintendo strategy of selling hardware for profit that they've always tried to achieve previously.
The price is such a dangerous thing for this new device. Ultimately one of the reasons the Switch is successful is because it is seen as being affordable and a comparatively inexpensive product.
If Nintendo make the new console much more expensive because the last one was so successful then they run the risk of forgetting one of those reasons it was so successful in the first place.
We’ve seen this mistake by Nintendo before with the launch of the 3DS. They thought that because the DS was so successful they could charge a premium price and it ended up biting them on the arse.
DLSS 3.x always encompassed Super Resolution. DLSS 2 uses the 3.1.x version of Super Resolution.
Okay I think this is how it goes:
DLSS 2.x - This is Super Resolution only. Last update was DLSS 2.5.1 which released Dec 2022.
DLSS 3.x - When DLSS 3.0 was announced, it IIRC was only Frame Generation which was exclusive to the RTX 40 series. Later on, NVIDIA released DLSS 3.1.1 in Feb 2023 which now also encompassed Super Resolution which is compatible with all RTX cards. It continued to get updates up until now with DLSS 3.5 which now also includes Ray Reconstruction that is also available for all RTX cards.
Shipping with 12, yes; likely 10-11GB usable by games, with 1-2GB reserved for the OS.
https://x.com/mynintendonews/status/1699839696724193740?s=46&t=5ccP6zc-BR6WRVjrS0XY7g
“12GB for the consumer”
As in it's shipping with 12? 12 is usable?
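Running the split being suggested here as a quick back-of-the-envelope check (the OS reservation range is speculation from the thread, not a confirmed figure):

```python
# Back-of-the-envelope memory split: 12GB shipped, with a speculated
# 1-2GB reserved for the OS (unconfirmed), leaving the rest for games.
total_gb = 12
os_reserve_range_gb = (1, 2)

usable = [total_gb - r for r in os_reserve_range_gb]
print(f"Usable by games: {min(usable)}-{max(usable)} GB")  # -> Usable by games: 10-11 GB
```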
Okay I think this is how it goes:
DLSS 2.x - This is Super Resolution only. Last update was DLSS 2.5.1 which released Dec 2022.
DLSS 3.x - When DLSS 3.0 was announced, it IIRC was only Frame Generation which was exclusive to the RTX 40 series. Later on, NVIDIA released DLSS 3.1.1 in Feb 2023 which now also encompassed Super Resolution which is compatible with all RTX cards. It continued to get updates up until now with DLSS 3.5 which now also includes Ray Reconstruction that is also available for all RTX cards.
A Quick Deconfuser on DLSS 3.5 - it's not your fault you're confused. It's Nvidia's.
DLSS is a tool for using AI to improve the visual quality of video games. DLSS 2, 3, and 3.5 each introduced major new features. Because of that, gamers tend to use the version number to refer to the feature added.
But the version you use doesn't mean you use every feature.
There are four features we care about in DLSS.
DLSS Upscaling - this was the only real feature in DLSS 1, and DLSS 2 radically changed how it worked, vastly improving it. So most of the time when people say "DLSS" they mean "DLSS 2 Upscaling." It lets you render a game at a low resolution, where it runs at a higher frame rate, and then keep that frame rate while upscaling the image to a higher resolution.
DLAA - high quality anti-aliasing. DLSS Upscaling always includes AA; DLAA just lets you use the anti-aliasing by itself.
DLSS-G is the official name of DLSS Frame Generation. This uses AI to make new frames between the frames the game draws directly, increasing smoothness. It was introduced in DLSS 3 so it is sometimes called DLSS 3, which is confusing, and we're all trying to stop. It is very cool, but it has lots of non-obvious limitations.
DLSS-RR short for DLSS Ray Reconstruction. This replaces part of the ray tracing pipeline with the same AI that Upscaling uses. It can vastly increase RT quality, and sometimes increases RT performance too. It was introduced in DLSS 3.5, so it is sometimes called DLSS 3.5, but I think at this point you can see how that is super fucking confusing.
These DLSS features can be combined in lots of different combinations. Just because you have DLSS 3.5 in your game, doesn't mean you are using every feature.
Just to add to the complications: every version of DLSS has brought improvements to all of these features, not just adding new ones. So in general, you want the latest version, even if you don't use any of the new features.
TL;DR: Just because the New Switch has the latest DLSS version, doesn't mean every feature is on in every game, or that they will all work on the new hardware.
DLSS upscaling and DLAA will definitely work on the New Switch.
DLSS-G probably will not work, though there are some smart people who think otherwise, and even those folks would recognize there are serious caveats.
DLSS-RR probably will, but the tech is very early, so there isn't the kind of data out there to be super sure. Yet.
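To make the version-vs-feature distinction above concrete, here's a minimal sketch (hypothetical names, not Nvidia's actual SDK interface) of how a game can ship the latest DLSS library while only enabling some of its features:

```python
from dataclasses import dataclass

# Hypothetical model of the point above: the DLSS *version* a game ships
# is independent of which *features* it actually turns on.
@dataclass
class DlssConfig:
    sdk_version: str                      # e.g. "3.5" -- just the library version
    upscaling: bool = False               # DLSS Super Resolution
    dlaa: bool = False                    # anti-aliasing only, no upscale
    frame_generation: bool = False        # DLSS-G, introduced in SDK 3.0
    ray_reconstruction: bool = False      # DLSS-RR, introduced in SDK 3.5

# A game can ship the 3.5 SDK and still use only upscaling:
game = DlssConfig(sdk_version="3.5", upscaling=True)

enabled = [name for name in ("upscaling", "dlaa", "frame_generation", "ray_reconstruction")
           if getattr(game, name)]
print(f"DLSS {game.sdk_version}, features on: {enabled}")
# -> DLSS 3.5, features on: ['upscaling']
```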
Nintendo Switch was able to make impressive games with just 4GB RAM; imagine what they can do with 12GB RAM. The next 3D Mario will be amazing.
https://x.com/mynintendonews/status/1699839696724193740?s=46&t=5ccP6zc-BR6WRVjrS0XY7g
“12GB for the consumer”
As in it's shipping with 12? 12 is usable?
I'm more excited for Splatoon 3 in 4K60 HDR.
Just picture Luigi's Mansion 4 with heavy use of ray tracing....
the best practice is to tune your assets for your output resolution. DLSS doesn't actually affect the assets, it samples what's given. So if the texture resolution is low, you get a blurrier texture in the output because there is no data there; a high resolution texture will allow DLSS to sample data that does exist.
Question about DLSS:
Can you just create lower res assets and use DLSS to bump to higher resolutions?
Or, do you need to create 4k assets first so the algorithm knows what to reference when doing the up-res?
I am wondering if DLSS helps in reducing development time creating tons of super high resolution assets or if creating them is still necessary technically to train the model.
IIRC DLSS does not by default improve texture resolution, if that's what you're referring to here.
Question about DLSS:
Can you just create lower res assets and use DLSS to bump to higher resolutions?
Or, do you need to create 4k assets first so the algorithm knows what to reference when doing the up-res?
I am wondering if DLSS helps in reducing development time creating tons of super high resolution assets or if creating them is still necessary technically to train the model.
You don't NEED to do either... but the recommendation is, if your target output is 4K, you should use 4K assets. You just don't HAVE to. There are other solutions that could be more effective, like increased compression on textures.
Question about DLSS:
Can you just create lower res assets and use DLSS to bump to higher resolutions?
Or, do you need to create 4k assets first so the algorithm knows what to reference when doing the up-res?
I am wondering if DLSS helps in reducing development time creating tons of super high resolution assets or if creating them is still necessary technically to train the model.
Calm down guy, Nate came here specifically to comment; he knows we want his input.
Stop quote-tagging Nate, for fuck's sake people.
the best practice is to tune your assets for your output resolution. DLSS doesn't actually affect the assets, it samples what's given. so if the texture resolution is low, you get a blurrier texture in the output because there is no data there. a high resolution texture will allow DLSS to sample data that does exist
IIRC DLSS does not by default improve texture resolution, if that's what you're referring to here.
I think there are some other methods for up-resing textures with AI using tensor cores, probably experimental ones. I vaguely remember an Nvidia demo about Morrowind being redone (just as a tech demo) this way. So there may be something bespoke they can do about textures, but as I understand it, it's not a current DLSS feature.
You don't NEED to do either... but the recommendation is, if your target output is 4K, you should use 4K assets. You just don't HAVE to. There are other solutions that could be more effective, like increased compression on textures.
My one big worry is that it would be good to know what the handheld performance will look like, as that's the mode I'll mainly be playing in.
Showing up with something like this to show off some demos makes a lot of sense if Nintendo is trying to keep some key details a secret. We don't know what Nintendo was telling these developers along with showing them the demos. For example, Nintendo could have showed them the demos, articulated this is what to expect in terms of performance and that it would still be very similar to Switch in form factor. This is enough for most developers to start preparing games and ports even without having dev kits.
Where was this being said?
So it seems that no prototype or dev kit was used at this showing, just hardware mimicking S2 specs. What does this mean for design and release schedule? Has Nintendo finalized the outward design and are just not showing it? Or are there still potential changes to be made?
Am I correct in assuming your hesitation is at least in part because you're already expecting to do pre-Direct and post-Direct episodes soon, so it would be better to just add the Switch 2 discussion to one of those?
Maybe.
Am I correct in assuming your hesitation is at least in part because you're already expecting to do pre-Direct and post-Direct episodes soon, so it would be better to just add the Switch 2 discussion to one of those?
high resolution textures, less loading, higher resolution buffers, that kind of thing
What are the improvements Nintendo will be able to do with 12 GB RAM instead of 4 GB RAM? I get better framerates and load times but what more will happen with that big increase in RAM?
This Raymond Tracing fella sounds pretty important to have on board
Where was this being said?
better N64 emulation!
What are the improvements Nintendo will be able to do with 12 GB RAM instead of 4 GB RAM? I get better framerates and load times but what more will happen with that big increase in RAM?
better framerate/loading, more detailed textures
What are the improvements Nintendo will be able to do with 12 GB RAM instead of 4 GB RAM? I get better framerates and load times but what more will happen with that big increase in RAM?
2D game from Shin'en: "Native 4K Challenge accepted!"
There’s no reason to do native anything when it has DLSS baked in (Unless the game doesn’t support it obv).
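For a sense of the savings being implied, here's a quick pixel-count comparison using the standard per-axis scale factors of the common DLSS modes (these factors are public; render cost scales roughly with pixel count):

```python
# Per-axis render-scale factors for the standard DLSS modes.
modes = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 0.333,
}

target_w, target_h = 3840, 2160  # 4K output

for name, s in modes.items():
    w, h = int(target_w * s), int(target_h * s)
    saving = 1 / (s * s)  # pixel-count reduction vs native 4K
    print(f"{name:17s}: renders {w}x{h}, ~{saving:.1f}x fewer pixels than native 4K")

# Performance mode, for example, renders 1920x1080 -- a quarter of the
# pixels of native 4K per frame.
```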
This Raymond Tracing fella sounds pretty important to have on board
The most important feature!
better N64 emulation!
Dylan Esses also sounds like a pretty great guy. Heard nothing but good things for years.
This Raymond Tracing fella sounds pretty important to have on board
Nintendo Switch was able to have very impressive games with just 4GB RAM; imagine what they could do with 12GB RAM. The next 3D Mario will be amazing.
I still believe in the 16GB RAM because the Nvidia tweet I shared, written this past January, post-tapeout, was from the horse’s mouth. There are other reasons, but I prefer to lend an official account more credence.
Yep, pretty much my general argument on the comparison.
A few more thoughts on the UE5 Matrix demo:
The more I think about this, the more it makes sense as a demo for Nintendo to use. Obviously they want to show off UE5 itself to show that third party games can run well on the hardware, but there are a few reasons the Matrix demo itself works well. The first one is obvious; it was originally used to show off the PS5 and XBSX/S, so it's a statement of intent from Nintendo that the new Switch hardware should be considered in a similar league to those. Secondly, it really plays to Switch 2's strengths.
The new hardware won't match the PS5 or XBSX (or even the XBSS) in raw horsepower on either the CPU or GPU, but it has a much better upscaling solution (DLSS), much better relative RT performance than the RDNA architecture, and potentially much better RT denoising integrated into their upscaling solution if they're using DLSS-RR. So, if you want a demo that plays to Switch 2 strengths, you want to go heavy on the RT and the upscaling. The UE5 Matrix demo is pretty much the most RT-heavy thing on consoles, and I think it's the only UE5 software on consoles which actually uses hardware RT Lumen (Immortals of Aveum might, but either way it's not a great showcase). Switch 2's better RT hardware means it's, relatively, taking a much smaller hit to run Lumen with full hardware RT, doubly so if it's using DLSS-RR and can get away with lower ray counts. Then, on top of that, DLSS itself will produce much better results than Epic's TSR. Digital Foundry noted that XBSS had "very chunky artefacts" on the demo (at approx 4x scaling), and you can push DLSS pretty hard without it getting "very chunky".
Put Switch 2 next to the PS5 or XBSX in a pure native res benchmark with no RT and it will obviously look a lot worse. Crank up the RT (which PS5 and XBSX are relatively bad at, and Switch 2 is relatively good at) and temporal upscaling (again, benefitting Switch 2), and you can close the gap a lot. I have no doubt the PS5 version of the demo looks better side-by-side than Switch 2, but if they can get results which are anywhere near the ballpark of the far more power hungry home consoles, then that's a win.
A lot of people think that because Switch 2 is a hybrid it's necessarily in Nintendo's interests to hold back on ray tracing (or even disable it in handheld mode, as I've heard suggested a few times), but the reality is the opposite. Nintendo now has the best RT hardware in the console space, and will do for the rest of the generation, and it's in their interest to push that as hard as possible, particularly when selling the hardware to devs. Relative to its performance in purely rasterised graphics, RT is much cheaper on Switch 2, so the harder you're pushing RT, the better the Switch 2 looks compared to the competition.
As a point of reference, let's compare the Nvidia RTX 3070 (about 20Tflops, same Ampere arch as Switch 2), and the AMD RX 6800XT (about 20Tflops, full RDNA2 that's better in some ways than the architectures used in PS5 and XBSS/X). The RTX 3070 launched at $499, and the RX 6800XT launched at $649 about a month later. In benchmarks of Cyberpunk 2077 without RT, the 6800XT beats the 3070 by about 25%, which is about what you'd expect from the price difference.
However, in the game's recently added RT overdrive mode (which is the most RT-intensive game around by some margin), the results are very different. At native 1080p, the RX 6800XT hits an average of 7.9 fps, where the RTX 3070 hits 21.3 fps. Now, obviously neither of these are playable (and Switch 2 definitely won't be running this), but by moving from a purely rasterisation test to an extremely heavy RT test, we're moving from RDNA2 beating Ampere by about 25% at a similar Tflop count, to Ampere being 2.7x faster than RDNA2. Furthermore, these tests are without upscaling. Cyberpunk's RT overdrive mode looks far better with the new DLSS ray reconstruction, so if we were comparing Ampere with DLSS-RR vs RDNA2 with traditional denoising and FSR2, the win for Ampere becomes even bigger.
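Running the quoted benchmark figures through a quick back-of-the-envelope check confirms the ratios claimed above:

```python
# Figures quoted above: Cyberpunk 2077 at native 1080p.
raster_advantage_6800xt = 1.25   # 6800XT beats 3070 by ~25% without RT

rt_overdrive_3070 = 21.3         # avg fps in RT Overdrive mode
rt_overdrive_6800xt = 7.9        # avg fps in RT Overdrive mode

rt_ratio = rt_overdrive_3070 / rt_overdrive_6800xt
print(f"Ampere vs RDNA2 in heavy RT: {rt_ratio:.1f}x")   # -> 2.7x

# Total relative swing from "RDNA2 ahead in raster" to "Ampere ahead in RT":
swing = rt_ratio * raster_advantage_6800xt
print(f"Relative swing, raster to RT: {swing:.1f}x")     # -> 3.4x
```

So at a similar Tflop count, the architectures trade places by a factor of roughly 3.4x depending on whether the workload is pure rasterisation or extremely heavy RT.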
12 being the consumer product and 16 being the dev kit makes a lot of sense; 12 would still likely be really impressive!
I still believe in the 16GB RAM because the Nvidia tweet I shared, written this past January, post-tapeout, was from the horse’s mouth. There are other reasons, but I prefer to lend an official account more credence.