• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

so apologies in advance if i asked too many questions regarding games lol, got a bit overexcited, but i'd like to ask:
Aside from possibly Mihoyo games, very possibly FF7R & 3D Mario, what other rumored/speculated games do we have so far for Drake?
Baldur's Gate 3 is probably thanks to a Nintendo third party relations guy. RDR2 has been rumored for ages. Mario Rabbids 2 getting an upgrade was outright confirmed by Ubisoft's CEO.
 
I realized I never did a deep dive into understanding a few facets of what’s going on, let me elaborate.

There seem to be two camps here: the "H1 camp" and the "H2 camp".


The arguments for H1 draw on what Nate has said and on the data from Gamescom. March is what it centers on, for reasons unknown. There are reports that aren't 100% clear, plus a comment from Nintendo regarding hardware this year that stopped short of actually saying "no new hardware" outright.



The arguments for H2 draw on historical precedent for Nintendo's predictable (hehe) behavior: their current lineup, the announced Direct lineup, the fact that other consoles (and some Nintendo consoles) have historically launched in the second half of the year, and some other loose bits that make people comfortable with that window.

Nate only provided that "March is significant," and that's all. He's not entirely sure what it is and speculates on it, but knows it's significant for developers.





Here are my thoughts: there's a possibility that it is announced "soon," but not soon as in this year; rather early next year, after the Christmas season.

Trailers and information should be ready for developers by March of the following year, but Nintendo will not release a new system this fiscal year.


They will then release the console about two months after that, so around May (days after the financial briefing).


You may be asking: isn't that a bit too soon? The Switch released within a five-month window after it was announced. Five months seems plenty for Nintendo to distribute, market, and deliver a console, as they've done it before. It went from what, October 20 to March 3rd?
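For what it's worth, that window is easy to check. A quick sketch using the publicly known reveal and launch dates (20 October 2016 and 3 March 2017):

```python
from datetime import date

# Switch reveal trailer and launch dates (public record)
announce = date(2016, 10, 20)
launch = date(2017, 3, 3)

window_days = (launch - announce).days
window_months = window_days / 30.44  # average month length

print(window_days)              # 134
print(round(window_months, 1))  # ~4.4 months
```

So "five months" is a slight overstatement; the actual reveal-to-launch window was about four and a half months.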

You may be wondering, "but what about the Switch?" The Switch still has games going for it: smaller games, but games nonetheless. A game could potentially get a boost and sell well alongside the newer system if it launched around then. But the people interested in casual titles aren't typically the early adopters of a new platform; the early adopters are generally the core audience members who later bring in the casuals. In other words, those who buy a few games aren't representative of the ones likely to buy the system day one. It's the ones who buy several dozen who are most likely to buy a system day one.

With the new system being compatible with Switch 1 games, it doesn't detract from the potential for those games to see good adoption on the new device. Wait, how do I know it's BC? I don't, but I'm not going to assume it's not, because it makes no sense otherwise. BC has its advantages even during a cross-gen period.


I can see the possibility of a May launch only because, in theory, it's not an impossible timeline. We have the Gamescom leak. We have Nate's leak. We have the VGC and Eurogamer July leak.

So, it's possible that March is about having things from partners ready for advertising by then. It should be ready by around March for something releasing later. The reason I mentioned announcing the system in January, basically after the holiday shopping, or early February, is that this is the slow period for consoles in terms of software and hardware. Q1 of the calendar year isn't as strong as some of the other quarters.

They aren't going to gain a lot from not announcing it versus announcing it. And the device is going to be seven years old. It's probably worth it for them.


Well; this sorta really isn’t a deep dive.

But I don't see March or this fiscal year at all for release, though I can flirt with the idea of a May-ish release.

But I can also see a September release being just as realistic and possible as a May one. October isn't out of the question, nor is November, but there will be plenty of traffic around that time.
 
Probably. The Gamescom demo apparently showed Matrix Awakens with everything enabled, so it's effectively the closest thing to path tracing that has been put out as a proper demo (it has RTGI, RT shadows, and RT reflections; heck, at night it even has direct illumination from windows/lights as the only light sources in the city).

So running Matrix Awakens/the City Sample with all its RT features enabled would produce the same result.

Can we expect full RT in 60fps games like the next mario kart, splatoon, and [probably] 3D mario too? Is the frame time budget enough?

It will be amazing if every 1st party game has good RT. I was expecting it with some selected games only.
 
These past few weeks have seen this thread blitz through pages in short intervals, for better or for worse.......
 
but Nintendo will not release a new system this fiscal year.
That was the crux of a subthreaded discussion I was part of (the one you just responded to, before this comment) - it began with this question

If your reasoning for saying Nintendo will not release a new system this fiscal year is because of a filing, it has been made clear Nintendo never explicitly said they are not releasing a new system this fiscal year. That led to my next question: if Nintendo had plans to release a new system this fiscal year, are they obligated to disclose it in their financial report? I was told (by multiple others here) the answer is no, they don't necessarily have to.

TLDR: Nintendo never explicitly ruled out a launch this fiscal year.

That said, I'm still of the opinion the launch happens after Nintendo's current fiscal year ends.
 
Can we expect full RT in 60fps games like the next mario kart, splatoon, and [probably] 3D mario too? Is the frame time budget enough?

It will be amazing if every 1st party game has good RT. I was expecting it with some selected games only.
if they really want 60fps and RT (RT what, though, is an important question that needs to be asked), they can get it. there are tradeoffs of course, just a question of how hard they push

IMO, every game having some kind of RT effect isn't crazy. but you're gonna have people going "I can't see the RT!" even though RT wasn't put in to be "seen"
 
My "whole argument" is that the Switch 2 will likely not be powerful enough to use ray-tracing in a meaningful way outside of a few edge cases. This is shown by the fact that much more powerful hardware is still brought to its knees by ray-tracing. There is just not enough raw power. The 2070 is 3-6x as powerful as the Switch 2 and runs Portal RTX (a case of a game with more modern looking visuals with high quality ray-tracing) at a non-playable framerate.

The PS5 is complete garbage at ray-tracing, and the Switch 2's superior RT architecture isn't making up a 6x power gap between the Switch 2 and the PS5 and then going even further beyond to "actually not garbage at ray-tracing."
I suggest you not use Portal RTX because what was done to achieve that isn’t representative of how it will be in real life for other games.

RTX remix effectively requires you to run the game twice at the same time. The underlying game, and this mod that gives it these features.

Portal is completely untouched underneath if it used RTX Remix.

Like, it is so stupidly expensive that you shouldn't really be using it as a gotcha, because it's unrealistic to how it would even operate on a console, even if they got it to work.

To put it a different way, you're emulating something that's being emulated.
 
if they really want 60fps and RT (RT what, though, is an important question that needs to be asked), they can get it. there are tradeoffs of course, just a question of how hard they push

IMO, every game having some kind of RT effect isn't crazy. but you're gonna have people going "I can't see the RT!" even though RT wasn't put in to be "seen"

Humm :unsure:

When people talk about RT, I'm expecting reflections, shadows, global illumination, ambient occlusion...
So you're saying that's not how it works? Like, you can't have all those things together? Or perhaps you can, but not pushing it hard, so in the end it's more of a subtle effect? Help me here
 
Just wondering if UFS technologies also have particular fab nodes, i.e. whether SEC8N was actually meant to refer to the UFS NAND flash technology.
Flash memory uses completely different process nodes from SoCs, etc. (Micron's UFS 4.0, for example, uses Micron's 232-layer 3D NAND architecture.)

Microsoft wants to get 6nm as soon as possible to make the device cheaper. Not one, but two of them.
I don't know if Microsoft necessarily plans on having Ellewood's (the Xbox Series S refresh's) APU die shrunk to TSMC's N6 process node, since unlike with Brooklin (the Xbox Series X refresh), where Microsoft explicitly said "6nm die shrink for improved efficiency", Microsoft made no such comment about Ellewood.

brooklyn.jpg

ellewood.jpg

(That being said, I think having Ellewood's APU remain fabricated using TSMC's N7P process node is a bad idea on Microsoft's part.)

 
So you're saying I can expect the next AAA Zelda to use RT in its full glory?
Yes, the next Zelda will almost certainly leverage ray tracing as its only lighting solution. I'm expecting a couple of titles to also fully transition to probe-based RT; the days of rasterization on Nintendo consoles are numbered.
 
Humm :unsure:

When people talk about RT, I'm expecting reflections, shadows, global illumination, ambient occlusion...
So you're saying that's not how it works? Like, you can't have all those things together? Or perhaps you can, but not pushing it hard, so in the end it's more of a subtle effect? Help me here
You can have everything, or just one or two things. All of these are traced against objects in your acceleration structure, which is commonly a BVH. The fidelity of these objects does matter, since you're tracing against the triangles. There's also the number of rays per pixel, which adds to the performance cost. Commonly, 1 ray per 4 pixels is done (think tracing at 540p for a 1080p image), though you can go lower, like 1 ray per 16 pixels (270p for a 1080p image). This is what UE5 does for Lumen. And then once you do all that, you shade each pixel.

So there's a lot to account for when looking at performance. But there are a surprising number of 60fps RT games out there, so it's not impossible for Drake if devs really want it.
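The rays-per-pixel figures quoted above are easy to sanity-check. A minimal sketch, assuming rays are distributed uniformly over the frame (the resolutions and ratios are the ones from the post, not measured data):

```python
def effective_trace_resolution(width, height, rays_per_pixel):
    """Return the equivalent resolution you are actually tracing at,
    given a fractional rays-per-pixel budget (e.g. 1/4, 1/16).
    The scale applies per axis, so take the square root."""
    scale = rays_per_pixel ** 0.5
    return int(width * scale), int(height * scale)

# 1 ray per 4 pixels at 1080p -> tracing at roughly 960x540
print(effective_trace_resolution(1920, 1080, 1 / 4))   # (960, 540)
# 1 ray per 16 pixels (the lower bound mentioned for Lumen) -> 480x270
print(effective_trace_resolution(1920, 1080, 1 / 16))  # (480, 270)
```

The horizontal numbers (960, 480) are implied by the 540p/270p shorthand in the post; the ray budget shrinks both axes, not just the vertical one.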
 
I think your read on things is likely accurate. But, just to answer your rhetorical question, Nintendo worked with MegaChips (Edit: and Macronix?) to design the current Switch's card controller, so they're another candidate.

...And now that I'm looking at some of these MegaChips docs on the Lotus ASIC, it mentions that eMMC protocol is used between the SoC and the card controller ASIC. So there's a decent chance the same is being done here.

I was thinking whoever was doing it for the Switch. Could be Samsung for all I know, but I've never heard of Samsung being involved with anything in the Switch.

Edit: guess it's MegaChips, which is what I was expecting. An unknown company producing a small, rather low-production part (no actual knowledge in this field, so they might be the biggest company ever and I just never heard of them)
100% what I get for posting on my phone during break - not saying that Samsung is the only possible provider, just that I think LSI is a perfectly cromulent choice, and that Samsung doesn't need the deal sweetened. Which is not to say that it isn't sweetened, just that I don't find it suspicious off the bat, especially if Nintendo is trying to achieve UFS level speeds.

But if I wanted to stoke the fire, I believe LSI is an extension of their foundry business, so maybe it's part of a deal to get SEC8 really cheap...
 
Here’s a question for anyone who’s bored: how powerful COULD the OG Switch have been? A common phrase uttered is, “TX1 was a binned chip. Drake is custom.” What could a custom chip look like in 2017 terms? 16nm? Dedicated decompressor? LPDDR4X? CPU/GPU? I just wonder how much more powerful a custom chip would have been than binned TX1s. Could it have come within spitting distance of XBone/PS4?
 
Was it? I thought the CEO just lamented that they didn't wait until the next console to release it.
Yeah

On Nintendo, games like this never die. There are 25 Mario games on Switch.... And we think it will last for ten years, because we will update it for the new machine that will come in the future.
 
There's absolutely no way a (PS4-like) Switch 2 portable could hit that resolution internally; it'd have to be at most half that, plus lower settings. Just saying, the math doesn't add up there, and that's without including CPU and memory bandwidth bottlenecks, which are absolutely a thing in that game. T239 will be legitimately stronger for this reason.
I think we're more talking over each other than disagreeing with this. I'm saying a modern device that seems to have PS4/One-like listed specs in things like teraflops would be a significant fraction of a Series S, while you're saying a literal PS4/One would be a much smaller fraction of a Series S.
So we supposedly have Xbox Series S refresh and PS5 Pro both planned for Holiday 2024

Chances that Nintendo joins the bbq party are very slim. They like to operate in their own world

I think it makes H1 even more plausible now.
I wouldn't read too much into that. GameCube, Wii, DS all had major launches within days of competitors.
So the key to LinkedIn sleuthing is to include various misspellings like 'Nitendo'
Interesting crossover with how to search for best eBay deals.
 
You can have everything, or just one or two things. All of these are traced against objects in your acceleration structure, which is commonly a BVH. The fidelity of these objects does matter, since you're tracing against the triangles. There's also the number of rays per pixel, which adds to the performance cost. Commonly, 1 ray per 4 pixels is done (think tracing at 540p for a 1080p image), though you can go lower, like 1 ray per 16 pixels (270p for a 1080p image). This is what UE5 does for Lumen. And then once you do all that, you shade each pixel.

So there's a lot to account for when looking at performance. But there are a surprising number of 60fps RT games out there, so it's not impossible for Drake if devs really want it.
Yeah, not to mention some RT Techniques, like the aforementioned Software Lumen (UE5), are incredibly scalable.

Heck, the Original Switch technically does a type of Ray Traced Global Illumination in Crysis 1 and 2 Remastered (SVOGI), just like the rest of the consoles (Lower fidelity, but still notably present)
 
I think we're more talking over each other than disagreeing with this. I'm saying a modern device that seems to have PS4/One-like listed specs in things like teraflops would be a significant fraction of a Series S, while you're saying a literal PS4/One would be a much smaller fraction of a Series S.

I wouldn't read too much into that. GameCube, Wii, DS all had major launches within days of competitors.

Interesting crossover with how to search for best eBay deals.
T239 does not have "PS4/One-like" listed specs; I was just describing the hypothetical scenario where the chip ended up way weaker than expected. The 4N/5nm T239 we know about should be around half of a Series S in raw GPU power when handheld, that's the thing. GPU-wise alone, it's already a significantly better machine than the fat PS4, and that's important to consider when discussing such a scenario.
 
Here’s a question for anyone who’s bored: how powerful COULD the OG Switch have been? A common phrase uttered is, “TX1 was a binned chip. Drake is custom.” What could a custom chip look like in 2017 terms? 16nm? Dedicated decompressor? LPDDR4X? CPU/GPU? I just wonder how much more powerful a custom chip would have been than binned TX1s. Could it have come within spitting distance of XBone/PS4?
I think TX1, as a design, was about as good as Nvidia could have given Nintendo. The CPUs were as good as you could get from ARM (admittedly at a period of stagnation), and LPDDR4X didn't see production till 2017. Nvidia was offering their current gen GPU tech, unadulterated (if cut down). Nintendo could get UFS2.1 if they wanted it, but as they were limited by gamecard speeds anyway, I don't think that would have been anything but an increase in costs for no real win.

What Nintendo could have gotten was Mariko right off the bat. 16nm without the random controllers and CPUs that went unused. Nintendo could have, with that chip, made different choices about clock speeds and battery life. There are only two, potentially big, additions I could see with a fully custom chip.

Maybe there is enough room on the motherboard with a smaller chip to have 4x1GB chips instead of 2x2GB. The Switch is brutally memory-bandwidth constrained despite having a totally normal amount of bandwidth for a mobile device; no other mobile device was trying to support a desktop-class GPU architecture at the time. You can imagine most of the stutters and frame drops in the two Zelda games just vanishing with that sort of change.

The second possibility is that, instead of pushing clocks, Nintendo could have increased the size of the GPU. I think a 4SM GPU was possible, instead of the 2SM we got, doubling GPU power. With some combination of these tricks (pushing CPU clocks, doubling the GPU size, even possibly pulling the clocks back to regain some battery life, and changing the memory layout), Nintendo could have arrived at a baseline that looks like what overclockers have already done to the system.
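As a rough sanity check on that 4SM idea, here's the back-of-envelope FP32 math. The inputs are the well-known figures (128 CUDA cores per Maxwell SM, the Switch's real 768 MHz docked GPU clock); the PS4/Xbox One numbers in the comments are their published peaks:

```python
CORES_PER_MAXWELL_SM = 128

def fp32_gflops(sms, clock_mhz):
    """Peak FP32 throughput: cores * 2 ops per cycle (FMA) * clock."""
    return sms * CORES_PER_MAXWELL_SM * 2 * clock_mhz / 1000

switch_docked = fp32_gflops(2, 768)  # actual Switch, docked
hypothetical  = fp32_gflops(4, 768)  # the 4SM custom chip imagined above

print(switch_docked)  # 393.216 (matches the known docked Switch figure)
print(hypothetical)   # 786.432
# For comparison: Xbox One ~1310 GFLOPS, PS4 ~1843 GFLOPS, so even a
# doubled GPU lands at roughly 60% of an Xbox One and ~43% of a PS4.
```

So "spitting distance of XBone/PS4" looks optimistic on raw FLOPS alone, though Maxwell's efficiency advantages over GCN would narrow the practical gap somewhat.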

The TX1 wasn't a bad chip; I think the tendency to slag it off now is a way to get folks to Believe In Drake. But TX1 wasn't just designed for a different customer (Google's Pixel C tablet), it also did a different job for Nintendo. Nintendo needed to establish a successor to the Wii U and the 3DS, and the TX1's level of performance was great for that.

Drake's job is to solidify Nintendo's position in this new niche, and guarantee their longevity. It's a different need, so a different chip
 
Here’s a question for anyone who’s bored: how powerful COULD the OG Switch have been? A common phrase uttered is, “TX1 was a binned chip. Drake is custom.” What could a custom chip look like in 2017 terms? 16nm? Dedicated decompressor? LPDDR4X? CPU/GPU? I just wonder how much more powerful a custom chip would have been than binned TX1s. Could it have come within spitting distance of XBone/PS4?
TSMC's 16FF process node is certainly possible, considering that's what Nvidia used to fabricate the Tegra X2.

Considering SK Hynix announced being the first to launch LPDDR4X DRAM on 9 January 2017, I don't know if there's enough time for Nintendo and Nvidia to integrate a LPDDR4X controller inside the SoC, assuming Nintendo was still targeting launching on 3 March 2017. But Nintendo and Nvidia could have increased the bus width for the RAM from 64-bit to 128-bit since the Tegra X2 did have a 128-bit bus width for the RAM, and perhaps increase the amount of RAM as well.

Hypothetically, Nintendo and Nvidia could opt to use the Cortex-A72 instead of the Cortex-A57 for the CPU. (Perhaps Nintendo and Nvidia could have been able to take advantage of big.LITTLE in the scenario Nintendo and Nvidia opted to use the Cortex-A72 since the four Cortex-A53 cores were disabled on the Tegra X1.) But I don't know if Nvidia had access to the Cortex-A72 IP licence in 2016 and/or 2017, considering as with the Tegra X1, Nvidia used four Cortex-A57 cores, alongside two Denver 2 (Nvidia's custom Arm based CPU) cores, for the Tegra X2.
As for the GPU, considering the Tegra X2's GPU doesn't have DP4a support, despite Nvidia advertising the Tegra X2's GPU as being Pascal based, and Nvidia advertising the introduction of DP4a support with Pascal GPUs, I think that implies the Tegra X2's GPU's very similar, if not identical, to the Tegra X1's GPU. So I don't think much customisation can be done on the GPU side.

Anyway, assuming Nintendo and Nvidia did increase the bus width for the RAM, and used the Cortex-A72 instead of the Cortex-A57, with regards to how close Nintendo and Nvidia could be to the PlayStation 4 and the Xbox One hypothetically, depends on how high the CPU and RAM frequencies are, I think. My guess is probably won't be close to being on par with the PlayStation 4 and the Xbox One, but probably considerably closer than with the Tegra X1.
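On the bus-width point, the bandwidth side of that hypothetical is simple arithmetic. A sketch assuming LPDDR4 at 3200 MT/s (the launch Switch's memory speed): widening the bus from 64-bit to 128-bit doubles peak bandwidth.

```python
def peak_bandwidth_gbps(bus_width_bits, transfer_rate_mtps):
    """Peak DRAM bandwidth in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mtps / 1000

print(peak_bandwidth_gbps(64, 3200))   # 25.6 GB/s (actual Switch)
print(peak_bandwidth_gbps(128, 3200))  # 51.2 GB/s (TX2-style 128-bit bus)
```

51.2 GB/s would have put the hypothetical chip within reach of the Xbox One's 68.3 GB/s main-memory bandwidth, which is why the wider bus matters so much in this scenario.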
 
TSMC's 16FF process node is certainly possible, considering that's what Nvidia used to fabricate the Tegra X2.

Considering SK Hynix announced being the first to launch LPDDR4X DRAM on 9 January 2017, I don't know if there's enough time for Nintendo and Nvidia to integrate a LPDDR4X controller inside the SoC, assuming Nintendo was still targeting launching on 3 March 2017. But Nintendo and Nvidia could have increased the bus width for the RAM from 64-bit to 128-bit since the Tegra X2 did have a 128-bit bus width for the RAM, and perhaps increase the amount of RAM as well.

Hypothetically, Nintendo and Nvidia could opt to use the Cortex-A72 instead of the Cortex-A57 for the CPU. (Perhaps Nintendo and Nvidia could have been able to take advantage of big.LITTLE in the scenario Nintendo and Nvidia opted to use the Cortex-A72 since the four Cortex-A53 cores were disabled on the Tegra X1.) But I don't know if Nvidia had access to the Cortex-A72 IP licence in 2016 and/or 2017, considering as with the Tegra X1, Nvidia used four Cortex-A57 cores, alongside two Denver 2 (Nvidia's custom Arm based CPU) cores, for the Tegra X2.
As for the GPU, considering the Tegra X2's GPU doesn't have DP4a support, despite Nvidia advertising the Tegra X2's GPU as being Pascal based, and Nvidia advertising the introduction of DP4a support with Pascal GPUs, I think that implies the Tegra X2's GPU's very similar, if not identical, to the Tegra X1's GPU. So I don't think much customisation can be done on the GPU side.

Anyway, assuming Nintendo and Nvidia did increase the bus width for the RAM, and used the Cortex-A72 instead of the Cortex-A57, with regards to how close Nintendo and Nvidia could be to the PlayStation 4 and the Xbox One hypothetically, depends on how high the CPU and RAM frequencies are, I think. My guess is probably won't be close to being on par with the PlayStation 4 and the Xbox One, but probably considerably closer than with the Tegra X1.
Great minds!
 
We can extrapolate the ray-tracing performance we can get on Switch 2 and PS5/SX/SS by using existing PC parts, and then we can compare how they perform relative to each other. This is by no means fully accurate, but it can give us some idea of what to expect as long as we don't focus on specific FPS numbers, but instead we focus on the relative performance to each other.

The closest GPU to the PS5 is a 6700 XT. It has slightly more cores (2560 vs 2304), and in the video I'll link below it's running at 2.6GHz vs 2.2GHz on the PS5.

The 3050 mobile is (much) faster than the T239 in the Switch 2: it has more cores (2048 vs 1536 shader cores), more ray-tracing cores (16 vs 12), and also runs at much higher clocks. In the second video below, the 3050 is consuming 40-45W and running at 1.8GHz; the Switch 2 will probably run at 1.1GHz while docked.

The closest GPU to Series S is the 6500xt. It has fewer cores (1024 vs 1280 on Series S) but it usually runs at higher clocks on PC, so we can expect similar performance.

Minecraft RTX on a 6700xt (closest match to PS5):


Minecraft RTX on a 3050 mobile (closest match to Switch 2):


Minecraft RTX on a 6500xt (closest match to Series S):


If we compare the performance of each card relative to each other at ray-tracing, we get 6700xt (PS5) > 3050m (Switch 2) > 6500xt (Series S).

Let's keep in mind that the 6500xt is more similar to the Series S performance-wise than the other two cards are to their respective consoles, so the actual ray-tracing performance of the t239 should be closer to Series S than the difference between the 6500xt and the 3050 mobile in the videos. The gap between PS5 and the 6700xt is not that big, but the gap between the 3050 mobile and the t239 is more substantial, so take that into consideration while comparing.

If we take all that into consideration, and extrapolate those PC numbers to consoles, Switch 2 will probably perform as well as - or slightly better than - a Series S at pure ray-tracing, but it won't match the PS5, even with the advantage of using dedicated ray-tracing cores.
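One way to make that "relative performance" framing concrete is to scale each PC card's benchmark result by the console's core count and clock. This is a naive linear scaling (real RT performance doesn't scale purely with cores times clock), and the FPS inputs below are placeholders rather than measured numbers; only the core counts and clocks come from the figures above. Treat the output as a rough ordering, not a prediction:

```python
def scale_to_console(card_fps, card_cores, card_clock_ghz,
                     console_cores, console_clock_ghz):
    """Naively scale a PC card's RT benchmark result to a console,
    assuming performance is linear in cores * clock."""
    card_rate = card_cores * card_clock_ghz
    console_rate = console_cores * console_clock_ghz
    return card_fps * console_rate / card_rate

# Core counts and clocks are the ones quoted above; the card_fps
# values (60 and 40) are hypothetical placeholders.
ps5_est     = scale_to_console(60, 2560, 2.6, 2304, 2.2)  # 6700 XT -> PS5
switch2_est = scale_to_console(40, 2048, 1.8, 1536, 1.1)  # 3050m -> T239 docked

print(round(ps5_est, 1), round(switch2_est, 1))
```

Even this crude model shows why the T239 gap matters: the console keeps only about 76% of the 6700 XT's rate, but only about 46% of the 3050 mobile's.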
 
Articles like The Verge's say Activision stated the performance of the Switch 2 as something definitive, but what Activision actually said was this...

They aren't sure of its power, but feel confident that what they offered on PS4/XB1 would likely be possible on the Switch 2. Seems more like an "at least" scenario.
That makes sense; it's a ballpark/"at least" scenario. Though my previous statement about this ballpark being solely in terms of rasterization performance still stands, since there's no real point of comparison RT-wise, as the PS4 and Xbox One did not feature ray tracing, of course.
 
We can extrapolate the ray-tracing performance we can get on Switch 2 and PS5/SX/SS by using existing PC parts, and then we can compare how they perform relative to each other. This is by no means fully accurate, but it can give us some idea of what to expect as long as we don't focus on specific FPS numbers, but instead we focus on the relative performance to each other.

The closest GPU to the PS5 is a 6700 XT. It has slightly more cores (2560 vs 2304), and in the video I'll link below it's running at 2.6GHz vs 2.2GHz on the PS5.

The 3050 mobile is (much) faster than the T239 in the Switch 2: it has more cores (2048 vs 1536 shader cores), more ray-tracing cores (16 vs 12), and also runs at much higher clocks. In the second video below, the 3050 is consuming 40-45W and running at 1.8GHz; the Switch 2 will probably run at 1.1GHz while docked.

The closest GPU to Series S is the 6500xt. It has fewer cores (1024 vs 1280 on Series S) but it usually runs at higher clocks on PC, so we can expect similar performance.

Minecraft RTX on a 6700xt (closest match to PS5):


Minecraft RTX on a 3050 mobile (closest match to Switch 2):


Minecraft RTX on a 6500xt (closest match to Series S):


If we compare the performance of each card relative to each other at ray-tracing, we get 6700xt (PS5) > 3050m (Switch 2) > 6500xt (Series S).

Let's keep in mind that the 6500xt is more similar to the Series S performance-wise than the other two cards are to their respective consoles, so the actual ray-tracing performance of the t239 should be closer to Series S than the difference between the 6500xt and the 3050 mobile in the videos. The gap between PS5 and the 6700xt is not that big, but the gap between the 3050 mobile and the t239 is more substantial, so take that into consideration while comparing.

If we extrapolate those PC numbers to consoles, Switch 2 will probably perform as well as - or slightly better than - a Series S at pure ray-tracing, but it won't match the PS5, even with the advantage of using dedicated ray-tracing cores.

Yep, this is close to what I believe to be the case in terms of how powerful the Super Switch will be at RT. Although even if it's less powerful than the PS5 in like-for-like scenarios with ray-tracing enabled, the gap in image quality could very well be filled by DLSS to produce an image that is either on par or perhaps even better at least in some instances.
 
We can extrapolate the ray-tracing performance we can get on Switch 2 and PS5/SX/SS by using existing PC parts, and then we can compare how they perform relative to each other. This is by no means fully accurate, but it can give us some idea of what to expect as long as we don't focus on specific FPS numbers, but instead we focus on the relative performance to each other.

The closest GPU to the PS5 is a 6700 XT. It has slightly more cores (2560 vs 2304), and in the video I'll link below it's running at 2.6GHz vs 2.2GHz on the PS5.

The 3050 mobile is (much) faster than the T239 in the Switch 2: it has more cores (2048 vs 1536 shader cores), more ray-tracing cores (16 vs 12), and also runs at much higher clocks. In the second video below, the 3050 is consuming 40-45W and running at 1.8GHz; the Switch 2 will probably run at 1.1GHz while docked.

The closest GPU to Series S is the 6500xt. It has fewer cores (1024 vs 1280 on Series S) but it usually runs at higher clocks on PC, so we can expect similar performance.

Minecraft RTX on a 6700xt (closest match to PS5):


Minecraft RTX on a 3050 mobile (closest match to Switch 2):


Minecraft RTX on a 6500xt (closest match to Series S):


If we compare the performance of each card relative to each other at ray-tracing, we get 6700xt (PS5) > 3050m (Switch 2) > 6500xt (Series S).

Let's keep in mind that the 6500xt is more similar to the Series S performance-wise than the other two cards are to their respective consoles, so the actual ray-tracing performance of the t239 should be closer to Series S than the difference between the 6500xt and the 3050 mobile in the videos. The gap between PS5 and the 6700xt is not that big, but the gap between the 3050 mobile and the t239 is more substantial, so take that into consideration while comparing.

If we take all that into consideration, and extrapolate those PC numbers to consoles, Switch 2 will probably perform as well as - or slightly better than - a Series S at pure ray-tracing, but it won't match the PS5, even with the advantage of using dedicated ray-tracing cores.

I'm not a massive expert on this, but I wonder if the latest DLSS enhancements will actually help the Switch 2 push above what it would otherwise be capable of in ray tracing, when you factor in both the Super Resolution and Ray Reconstruction additions.

DLSS-3.5-Ray-Reconstruction---Cyberpunk-2077-comparison.jpg

Man, this new age of technology is SUPER exciting and interesting!
 
I'm not a massive expert on this, but I wonder if the latest DLSS enhancements will actually help the Switch 2 push above what it would otherwise be capable of in ray tracing, when you factor in both the Super Resolution and Ray Reconstruction additions.

DLSS-3.5-Ray-Reconstruction---Cyberpunk-2077-comparison.jpg

Man, this new age of technology is SUPER exciting and interesting!
Yeah, I believe that's the entire gimmick of the Switch 2. On paper, it doesn't have the power to compete with other modern consoles. But thanks to Nvidia's wizard technology, the hardware Nintendo has will be able to do a lot more than expected.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

