
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

That's how UMA (Unified Memory Architecture) works, yes. Both the CPU and GPU can fetch data at the same time, without needing to copy data between them. The disadvantage of this model is that, because the CPU and GPU share memory, they compete with each other for bandwidth, which can lead to contention issues. But that's something the OS (on PCs) and developers (on fixed hardware, i.e. consoles) need to work around.
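If it helps to see the zero-copy idea in code, here's a minimal CUDA sketch (the kernel and sizes are made up for illustration): one allocation, touched by both the CPU and the GPU, with no copy step in between.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: the GPU doubles values the CPU just wrote.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    // One allocation visible to both CPU and GPU, so no staging copy is needed.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = float(i);  // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(data, n);        // GPU reads/writes the same memory
    cudaDeviceSynchronize();
    // Both processors hit the same DRAM, which is exactly why they end up
    // competing for the same bandwidth.
    printf("data[2] = %f\n", data[2]);  // 4.0
    cudaFree(data);
    return 0;
}
```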

In other words, when talking about the comparison made (a dedicated 96GB/s for the GPU versus the Switch 2's 102GB/s total), we would have less bandwidth for the Switch 2's GPU, is that right?
 
Usually they don't have much of a choice either, since Nintendo hardware has been quite limited ever since the Wii. This time, there needn't be that kind of constraint. Zelda and XB are big games with bigger worlds, versus 3D Mario and Smash.

Or they could just throw in a 60fps mode and get with the program!
I don't see Nintendo going the different modes route. Doesn't seem like their style
 
I know this is specifically a Nintendo forum, but it's super cool that we have two different companies (Valve and Nintendo) offering two different handhelds at good prices with great indie support and performance. Like that's just Cool and Good
 
In other words, when talking about the comparison made (a dedicated 96GB/s for the GPU versus the Switch 2's 102GB/s total), we would have less bandwidth for the Switch 2's GPU, is that right?
In a unified setup, the amount of bandwidth available to each varies at any given point. It could be less, could be more, depending on the task for the frame being drawn.

Neither do options like getting rid of the talking flowers or changing voice acting language, but hey, here we are.
neither of those things requires changes to assets or to what's being drawn on the screen
 
Do you own a Steam Deck? (I don't)

Was wondering what made you say that, considering the other replies point out that the setup shown in the DF video (which we know can't duplicate T239 exactly, but sits at the lower end of the spectrum of what T239 can do) easily outperforms the Steam Deck. Maybe you saw something others didn't; what was it?
For the record, I do own a Steam Deck. Impressive piece of tech with a bunch of funky technology in it, but it's not as powerful as you'd expect. It runs Deathloop at low-medium settings at 30fps, frequently dropping to 20, and that's with FSR enabled making it a blurry mess... at 720p.

Hitman 3? Medium-low, 40fps at a stretch. Cyberpunk 2077 got a specialised preset just for the Steam Deck: 30fps with framerate drops. Control? 30fps, light ray tracing on a good day.

The Switch 2 DF simulation tests managed 1080p-1440p at a strong 30-60fps with occasional drops, while running far tougher, more strenuous ray tracing. And that's before you acknowledge that these are tests, and results will likely be better when optimised for the real hardware.
 
Three other things that can be taken away from that video…
  • DF vetting Nate’s BotW loading report.
  • DF mentioning somewhere between 8 & 12GB of RAM. Couple this with Nate saying nothing below 8GB, and we're likely looking at 12GB.
  • T239 is capable of HDMI 2.1 throughput.
 
In other words, when talking about the comparison made (a dedicated 96GB/s for the GPU versus the Switch 2's 102GB/s total), we would have less bandwidth for the Switch 2's GPU, is that right?
Yes. But it depends on what each game is executing in the background. If the game is fully stressing both the GPU and the CPU at the same time, the GPU will have to fight the CPU for bandwidth. From what I remember of old ARM and PS4 slides, the CPU alone could eat upwards of 30 GB/s. But that number varies per workload.
But as I said, the developer will manage this and find workarounds to limit any resource contention that might happen in their project.
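As a back-of-envelope sketch (the CPU figure and the contention loss below are assumptions, not measured numbers), the split might look something like this:

```cpp
#include <cstdio>

// Back-of-envelope sketch of the shared-pool trade-off. The CPU draw uses the
// rough 20-30 GB/s range mentioned above; the 10% mixing loss is an assumption,
// since alternating CPU and GPU access patterns costs the controller efficiency.
int main() {
    const double total_bw = 102.0;  // GB/s, the Switch 2's reported total
    const double cpu_bw   = 25.0;   // assumed CPU consumption, mid 20-30 GB/s
    const double mix_loss = 0.10;   // assumed efficiency loss from contention
    const double gpu_bw = (total_bw - cpu_bw) * (1.0 - mix_loss);
    printf("GPU effectively sees about %.0f GB/s of the %.0f GB/s pool\n",
           gpu_bw, total_bw);  // about 69 GB/s with these assumptions
    return 0;
}
```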
 
For the record, I do own a Steam Deck. Impressive piece of tech with a bunch of funky technology in it, but it's not as powerful as you'd expect. It runs Deathloop at low-medium settings at 30fps, frequently dropping to 20, and that's with FSR enabled making it a blurry mess... at 720p.

Hitman 3? Medium-low, 40fps at a stretch. Cyberpunk 2077 got a specialised preset just for the Steam Deck: 30fps with framerate drops. Control? 30fps, light ray tracing on a good day.

The Switch 2 DF simulation tests managed 1080p-1440p at a strong 30-60fps with occasional drops, while running far tougher, more strenuous ray tracing. And that's before you acknowledge that these are tests, and results will likely be better when optimised for the real hardware.
Anyone who's claiming this isn't a big jump over the Steam Deck, even in its off-the-shelf testing state, didn't pay attention to the video, tbh. It smacks it in every game, in every single department: settings, framerate, resolution... Like, did we legitimately watch the same video? :LOL:
 
So would it be possible to do something like

0-16.6 ms: CPU works on first frame
16.6-33.3 ms: CPU works on second frame, GPU works on first frame
33.3-50 ms: CPU works on third frame, GPU works on second frame, tensor cores do DLSS step on first frame

etc

But with the latency of a game running at 60 Hz, by separating the game logic from the rendering? (Toy sketch of what I mean at the end of this post.)

Or am I missing something and saying something stupid here?

I would very much like this to be the case, so that Smash is forced to separate game logic from rendering, which would make rollback easy to introduce.

Quoting in case it's possible D:
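Toy sketch of the timeline I have in mind (purely illustrative; no real engine schedules exactly like this):

```cpp
#include <cstdio>

// Toy timeline of the 3-stage pipeline described above:
// CPU simulation, then GPU render, then tensor-core DLSS, one 16.6 ms slot each.
// Stages belonging to different frames overlap in the same slot.
int main() {
    const int frames = 6;
    for (int slot = 0; slot < frames + 2; ++slot) {
        printf("slot %d (%5.1f-%5.1f ms):", slot, slot * 16.6, (slot + 1) * 16.6);
        if (slot < frames)                  printf("  CPU sim frame %d", slot + 1);
        if (slot >= 1 && slot - 1 < frames) printf("  GPU render frame %d", slot);
        if (slot >= 2 && slot - 2 < frames) printf("  DLSS frame %d", slot - 1);
        printf("\n");
    }
    // One frame completes every 16.6 ms slot (60 fps throughput), but each
    // frame's input-to-display latency spans roughly 3 slots (~50 ms); that
    // extra latency is the price of the pipelining.
    return 0;
}
```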
 
I feel like people who don't own a Steam Deck assume it's far more powerful than it actually is
In all fairness, it's tech whose power is kind of hard to "understand". Valve employed a lot of trickery, smoke, mirrors, bells, and whistles to let games run a lot better than they reasonably should. Deathloop running at 720p with FSR is a very impressive feat for a portable device, as is running stuff like RDR2. The key difference is that a lot of the tests DF ran were with RT on. The Steam Deck can rarely do RT at all, only in select titles and with notable downsides.
Anyone who's claiming this isn't a big jump over the Steam Deck, even in its off-the-shelf testing state, didn't pay attention to the video, tbh. It smacks it in every game, in every single department: settings, framerate, resolution... Like, did we legitimately watch the same video? :LOL:
Steam Deck is a potent device, but the Switch 2 comparisons blow it out of the water, and that's without bringing up the "it's a DF test and not the actual specs of the device" elephant in the room.

The Steam Deck isn't even more powerful than a 1050 Ti laptop. I can say that with confidence because I own both and ran tests on both. Sure, the 1050 Ti wasn't underclocked... but neither was the Steam Deck.
 
The only game where I'm allowing myself to expect an upgrade from 30fps to 60fps is whatever the follow-up to 3D Kirby is gonna be. If the Xenoblade and Zelda teams release their big games at 60fps, I'll be very pleasantly surprised.

I don't see Nintendo going the different modes route. Doesn't seem like their style
They only published it, but Fire Emblem Warriors had a 30fps/60fps toggle in the menu well before that was the new hotness.

On one hand, I agree it doesn't seem like their style. But the other consoles have demonstrated that a simple toggle doesn't really confuse people when there's a default setting.
 
Steam Deck is a potent device, but the Switch 2 comparisons blow it out of the water, and that's without bringing up the "it's a DF test and not the actual specs of the device" elephant in the room.

The Steam Deck isn't even more powerful than a 1050 Ti laptop. I can say that with confidence because I own both and ran tests on both. Sure, the 1050 Ti wasn't underclocked... but neither was the Steam Deck.
Yeah, it simply doesn't compete with the Switch 2's GPU; the difference is just too big. The results shown here line up with what the Ally can do when it's not extremely power constrained, and the final image quality is still superior thanks to DLSS... Exactly as everyone expected this comparison to pan out. This is definitely a next-gen device.
 
In a unified setup, the amount of bandwidth available to each varies at any given point. It could be less, could be more, depending on the task for the frame being drawn.

But I believe it's safe to say you'll never have the full 102GB/s available for the GPU alone (which is the point the other user asked about)

Yes. But it depends on what each game is executing in the background. If the game is fully stressing both the GPU and the CPU at the same time, the GPU will have to fight the CPU for bandwidth. From what I remember of old ARM and PS4 slides, the CPU alone could eat upwards of 30 GB/s. But that number varies per workload.
But as I said, the developer will manage this and find workarounds to limit any resource contention that might happen in their project.

I had exactly the PS4 in mind. It was at least 20GB/s for the CPU, if I remember right.

But thinking more about it: even though a PC has separate RAM and VRAM, we need to wait for assets loaded into RAM to be transferred to VRAM (except for the two or three games using DirectStorage? lol), so that's also a relevant performance hit compared to a single pool, I think?

I find this area (memory and storage access) very interesting
 
I know Oldpuck already shared this video from DF Clips, but I can't recommend enough that folks watch it. It's fully focused on Death Stranding testing, but Rich also shows us testing at higher clocks, plus a 1440p DLSS gameplay segment. The image quality is outstanding! Switch 2 really can't come soon enough.


so that's also a relevant hit on performance compared to one pool I think?
Yes, that copy from RAM to VRAM or vice-versa is something that is fully eliminated with unified memory.
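And for contrast, a minimal CUDA sketch of the discrete-GPU path (sizes made up for illustration): the explicit cudaMemcpy is the extra step that a unified pool removes.

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// On a discrete GPU, assets loaded into system RAM must be staged into VRAM
// before the GPU can touch them. The explicit copy below is the step that a
// unified pool eliminates.
int main() {
    const size_t bytes = 64ull * 1024 * 1024;  // pretend this is a 64 MB asset
    float* host = (float*)malloc(bytes);       // system RAM, CPU side
    float* device = nullptr;
    cudaMalloc(&device, bytes);                // VRAM, GPU side
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);  // the extra hop
    // ... render using `device` ...
    cudaFree(device);
    free(host);
    return 0;
}
```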
 
I have always read (elsewhere) that when you have one single pool of memory, the bandwidth is shared between the CPU and GPU. But if the way it works is that only one of them accesses the RAM at a time, then I think it shouldn't matter, and both should get the full bandwidth all the time?

But then everything I have read so far in other places was a lie and there was never a problem to begin with lol, which would be weird if this is the case.

EDIT: thinking about it: if we have a system where the CPU and GPU have their own memory, they can access those memories at the same time. But if you have a system with only one pool, then the CPU needs to wait for the GPU to finish its work before it can use the RAM too, and vice versa (if both aren't accessing it at the same time). So when we talk about bandwidth here, we're really talking about time: if the GPU has to wait for the CPU to use the RAM, then the GPU has less time to use it, which means less bandwidth in practice. And if both access it at the same time (if that's how it could also work), then the bandwidth is still shared anyway. Someone correct me if I'm wrong here.
It's a bit apples and oranges. More RAM and better latency, but the bandwidth pool is only slightly larger and has to feed the CPU as well, which the laptop's doesn't.

Ultimately, the only way to get closer is to wait for the thing itself to come out.

Hmmmm, ok. Honestly this just makes me want LPDDR5X even more, since I thought the GPU would have that full 102GB/s. Hopefully the lower latency helps, at least.

The video shows every game tested exceeding the Steam Deck in performance, image quality, and resolution.

I’m not sure what it would take to make you excited?

I think he might mean that the handheld Switch 2 looks no better than the Steam Deck. (I haven't watched the video yet, but isn't it largely attempting to estimate docked Switch 2, complete with 2160p DLSS testing?)
 
In a unified setup, the amount of bandwidth available to each varies at any given point. It could be less, could be more, depending on the task for the frame being drawn.


neither of those things requires changes to assets or to what's being drawn on the screen

Doesn't matter. Your point seemed to hinge on offering options not really being Nintendo's style - and that has clearly been changing slowly over time. Adding technical details is just shifting the goalposts at this point.
 
But I believe it's safe to say you'll never have the full 102GB/s available for the GPU alone (which is the point the other user asked about)

I have no idea if this is how it works or not, but couldn't you have many games where the CPU work is relatively light and completes halfway through the rendering process, leaving the GPU with all available bandwidth for the rest of that frame?
 
Very unpopular opinion, but post-DF video I'm... significantly less excited now. It's not really shaping up to be any better than the Steam Deck, it seems.
For me, it's not whether or not it's strictly better than the Steam Deck that mattered. It's the fact that I was hoping third party games would mostly be 1080p 60fps in handheld mode, and instead it's looking like the more likely result for third party games is 1080p 30fps in handheld, even with the benefits of DLSS.
 
I've tried to understand the DLSS numbers in DF's video a bit better, but I have a hard time concluding anything.
Those numbers are... puzzling. Nonsensical, I'd almost dare to say - it just doesn't make any sense. But maybe someone here will be able to make something out of this.

Here's the thing:

We know DLSS, especially on low-power devices, scales close to proportionally with resolution. And games in general scale at most proportionally with resolution; when going from 1080p to 4K, some games will see their performance divided by 2, some by 4... but I have yet to see a game that would do significantly more than 4x, except in VRAM-bottlenecked situations.

DF's video indicates a "DLSS cost" (the actual DLSS cost plus enhanced LODs and/or post-processing and/or textures, which I will refer to as "other stuff") of 3.35ms for 1080p and 18.3ms for 4K.
That's a 5.46x factor. For a 4x resolution boost.
DLSS cost isn't EXACTLY proportional to resolution, but this is a WAY larger difference than expected, especially since it's always BELOW the resolution boost, not above. Which means the excess must come from the "other stuff", but it still doesn't make sense.
Considering the actual DLSS cost is still a big part of the total "DLSS cost", and the factor for the actual DLSS cost is around 4x while the factor for the whole "DLSS cost" is 5.46x, the factor for the "other stuff" has to be significantly MORE than 5.46x. For a 4x resolution boost. The fuck is that kind of scaling?

At this point you might have a hypothesis - the memory one. Same one I had. It would make sense for there to be some bottleneck at higher resolutions, be it VRAM or bandwidth or whatever. Just a bottleneck at higher resolutions.

B U T
I tried to confirm this by comparing the 1080p->1440p and 1440p->4K factors, to prove that the 1080p->1440p "DLSS cost" has somewhat reasonable scaling and 1440p->4K is where it shits the bed.
Now for the funny part.
For 1440p, the "DLSS cost" is 7.7ms. So for 1440p->4K, which is a 2.25x resolution boost, the factor is... 2.38x. Seems... reasonable? Which means that from 1440p to 4K, the "DLSS cost" scales roughly in line with resolution.

But if the 1440p->4K scaling is reasonable, why is the 1080p->4K scaling so weird?
You can see where this is going.

For 1080p->1440p, the "DLSS cost" goes from 3.35ms to 7.7ms.

Which means that for this 1.78x resolution boost, the cost increase is 2.30x.


What can we conclude from this?
... Nothing. Whatever Death Stranding is doing with DLSS, it makes the "DLSS cost" scale nonsensically from 1080p to 1440p, but it can't be memory limitations, because 1440p to 4K behaves as expected. Whether it has to do with DLSS itself, the "other stuff", some kind of bug, the laptop doing laptop things, or borked testing, I DO NOT KNOW.

All I know is that something is just wrong. Whether or not the results are actually accurate and DF didn't run into some issue(s), I don't think we can extrapolate these results to anything beyond Death Stranding specifically.
I was already treating them with a handful of salt; now I'm treating them with a mountain of it.



what the fuck
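If anyone wants to check the arithmetic, here's a quick sketch that just recomputes the ratios from the numbers quoted above:

```cpp
#include <cstdio>

// Recomputing the ratios from DF's reported "DLSS cost" numbers:
// 3.35 ms at 1080p, 7.7 ms at 1440p, 18.3 ms at 4K.
int main() {
    struct Point { const char* name; double mpix; double cost_ms; };
    const Point pts[] = {
        {"1080p", 1920.0 * 1080 / 1e6, 3.35},
        {"1440p", 2560.0 * 1440 / 1e6, 7.7},
        {"4K",    3840.0 * 2160 / 1e6, 18.3},
    };
    for (int a = 0; a < 3; ++a)
        for (int b = a + 1; b < 3; ++b)
            printf("%s -> %s: %.2fx pixels, %.2fx cost\n",
                   pts[a].name, pts[b].name,
                   pts[b].mpix / pts[a].mpix,
                   pts[b].cost_ms / pts[a].cost_ms);
    // Output:
    //   1080p -> 1440p: 1.78x pixels, 2.30x cost  (the anomalous leg)
    //   1080p -> 4K:    4.00x pixels, 5.46x cost
    //   1440p -> 4K:    2.25x pixels, 2.38x cost  (roughly proportional)
    return 0;
}
```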
 
I have no idea if this is how it works or not, but couldn't you have many games where the CPU work is relatively light and completes halfway through the rendering process, leaving the GPU with all available bandwidth for the rest of that frame?

As I understand it, neither the GPU nor the CPU can ever have all of the theoretically available bandwidth. They'll have to share it. The question then is how much of the bandwidth the CPU will eat in a more GPU-bound game.

edit: don't know if it's just me, but this forum is dying for me. Taking too long to respond
 
A part of me genuinely wonders whether Nintendo's hardware engineers lurk this thread to see how much the tech heads actually know - whether they're laughing, biting their fingernails in anxiety and frustration over how much has actually gotten out, or, unlikely as it may be, a little bit impressed with what's been gathered, analyzed, and inferred.
 
For me, it's not whether or not it's strictly better than the Steam Deck that mattered. It's the fact that I was hoping third party games would mostly be 1080p 60fps in handheld mode, and instead it's looking like the more likely result for third party games is 1080p 30fps in handheld, even with the benefits of DLSS.
That's what I was expecting as well. Doesn't look like it'll be feasible.
Have fun with whatever this ends up being, I suppose. It probably won't be as bad as the Wii U.
 
Anyone who's claiming this isn't a big jump over the Steam Deck, even in its off-the-shelf testing state, didn't pay attention to the video, tbh. It smacks it in every game, in every single department: settings, framerate, resolution... Like, did we legitimately watch the same video? :LOL:
These tests are mostly equivalent to the Switch 2's docked mode, though. Like, yeah, I would hope that the docked Switch 2 outperforms the Steam Deck, but devs are going to have to make their games run in handheld mode too.
 
If the Switch 2 can run A Plague Tale: Requiem at 30fps, maybe bumped to some low/some medium settings with some hardware-specific tuning... I mean, it's hard to complain too much about that.

That is definitely PS5/XS-tier graphics, and one of the better-looking PS5/XS-generation exclusives at that.
 
That's what I was expecting as well. Doesn't look like it'll be feasible.
Have fun with whatever this ends up being, I suppose. It probably won't be as bad as the Wii U.
tony-stark.gif
 
I've tried to understand the DLSS numbers in DF's video a bit better, but I have a hard time concluding anything.
Those numbers are... puzzling. Nonsensical, I'd almost dare to say - it just doesn't make any sense. But maybe someone here will be able to make something out of this.

Here's the thing:

We know DLSS, especially on low-power devices, scales close to proportionally with resolution. And games in general scale at most proportionally with resolution; when going from 1080p to 4K, some games will see their performance divided by 2, some by 4... but I have yet to see a game that would do significantly more than 4x, except in VRAM-bottlenecked situations.

DF's video indicates a "DLSS cost" (the actual DLSS cost plus enhanced LODs and/or post-processing and/or textures, which I will refer to as "other stuff") of 3.35ms for 1080p and 18.3ms for 4K.
That's a 5.46x factor. For a 4x resolution boost.
DLSS cost isn't EXACTLY proportional to resolution, but this is a WAY larger difference than expected, especially since it's always BELOW the resolution boost, not above. Which means the excess must come from the "other stuff", but it still doesn't make sense.
Considering the actual DLSS cost is still a big part of the total "DLSS cost", and the factor for the actual DLSS cost is around 4x while the factor for the whole "DLSS cost" is 5.46x, the factor for the "other stuff" has to be significantly MORE than 5.46x. For a 4x resolution boost. The fuck is that kind of scaling?

At this point you might have a hypothesis - the memory one. Same one I had. It would make sense for there to be some bottleneck at higher resolutions, be it VRAM or bandwidth or whatever. Just a bottleneck at higher resolutions.

B U T
I tried to confirm this by comparing the 1080p->1440p and 1440p->4K factors, to prove that the 1080p->1440p "DLSS cost" has somewhat reasonable scaling and 1440p->4K is where it shits the bed.
Now for the funny part.
For 1440p, the "DLSS cost" is 7.7ms. So for 1440p->4K, which is a 2.25x resolution boost, the factor is... 2.38x. Seems... reasonable? Which means that from 1440p to 4K, the "DLSS cost" scales roughly in line with resolution.

But if the 1440p->4K scaling is reasonable, why is the 1080p->4K scaling so weird?
You can see where this is going.

For 1080p->1440p, the "DLSS cost" goes from 3.35ms to 7.7ms.

Which means that for this 1.78x resolution boost, the cost increase is 2.30x.


What can we conclude from this?
... Nothing. Whatever Death Stranding is doing with DLSS, it makes the "DLSS cost" scale nonsensically from 1080p to 1440p, but it can't be memory limitations, because 1440p to 4K behaves as expected. Whether it has to do with DLSS itself, the "other stuff", some kind of bug, the laptop doing laptop things, or borked testing, I DO NOT KNOW.

All I know is that something is just wrong. Whether or not the results are actually accurate and DF didn't run into some issue(s), I don't think we can extrapolate these results to anything beyond Death Stranding specifically.
I was already treating them with a handful of salt; now I'm treating them with a mountain of it.



what the fuck

This would be a time where comparing other low-end GPUs would come in handy, but these kinds of tests are hella niche.
 
These tests are mostly equivalent to the Switch 2's docked mode, though. Like, yeah, I would hope that the docked Switch 2 outperforms the Steam Deck, but devs are going to have to make their games run in handheld mode too.
You're still forgetting this is a test made on Windows, with PC code and without the console's actual hardware, just a close approximation of it. Handheld mode will still come out ahead, come on.
 
A part of me genuinely wonders whether Nintendo's hardware engineers lurk this thread to see how much the tech heads actually know - whether they're laughing, biting their fingernails in anxiety and frustration over how much has actually gotten out, or, unlikely as it may be, a little bit impressed with what's been gathered, analyzed, and inferred.
There's probably a non-zero number of Nintendo employees lurking the forum, at least. We're a fairly small community, but enough stuff links back to here that we probably haven't escaped their notice. Whether or not that includes any of their hardware devs is impossible to know.
 
Great video and a great thought experiment, but that's it. I was reminded that before Doom 2016 came out, DF did a similar pseudo-Switch PC experiment. The problem with these is that there are a number of caveats they cannot account for, from games (especially first party) to console optimization, that even enthusiasts might not be able to visualize (myself included). That leads to somewhat odd, sky-is-falling takes. In the end, these are fun experiments; we are less than 4 months (my guess lol) from the reveal, so let's enjoy the ride.
 
You have a point. The constant warnings just seem tantamount to nagging to me, especially when I don't consider going off topic all that detrimental to the thread.
When you have 3+ pages going off topic for every 1 page on topic, it's absolutely detrimental.

When you have a few hundred very informative posts buried in over one hundred thousand posts, it's absolutely detrimental.

I don't have a problem with one-off jokes, shitposts, or going a bit off topic for a few posts sometimes. I believe every single person who has posted dozens of times here has done it at some point (me included), and I doubt they have a problem with it either.

But there are limits. And in the last few months, those limits have been broken over and over. It's off topic and it's not over after a couple of exchanges? Move it somewhere else; it's that simple. Mods even created a thread now to let people go overboard on all kinds of Switch 2 discussion, and they asked us gently to go there for non-hardware/tech talk, but people simply can't resist doing it here despite being asked not to.

These 5+ pages per day of off-topic or barely on-topic discussion are bothering many of us. If there has been mod intervention, and if they followed up by handing out many warnings, it's because there have been many reports, and they keep happening.

And to be clear, I'm not asking anyone to leave or stop posting. Just, please, please, keep any derailing short or move it elsewhere before it gets longer than the actual tech talk.
 
For me, it's not whether or not it's strictly better than the Steam Deck that mattered. It's the fact that I was hoping third party games would mostly be 1080p 60fps in handheld mode, and instead it's looking like the more likely result for third party games is 1080p 30fps in handheld, even with the benefits of DLSS.

That depends entirely on the game, and what the developers optimise around. One of the games Rich showed, Plague Tale Requiem, is a current-gen game which runs at 30fps on PS5 and XBSX. The settings he's using for Control (with RT enabled) are the same as the 30fps mode of the game on PS5/XBSX. On Cyberpunk, he's getting close to 30fps with PS5 settings (performance mode PS5 settings, but still). He's also enabling basically all the UE5 bells and whistles in Fortnite (including hardware RT, which isn't used on PS5/XBSX).

Outside of Death Stranding, all the tests he's running are either current-gen exclusive games (Plague Tale) or cross-gen games at PS5/XBSX level settings. I think it says a lot that these are able to run at these settings at all, but I'm not going to expect PS5/XBSX level graphics at 60fps on Switch 2. Games like Cyberpunk and Control have a wide range of graphics settings, and I'm sure 60fps modes would be feasible if settings were suitably reduced, but that's not really what Rich was going for.

For current gen exclusives like Plague Tale I definitely don't expect 60fps modes, although that game only runs at 30fps on a PS5, so I'm honestly kind of impressed that it can run on Switch 2-level hardware at all. For cross-gen games and last-gen ports I could see 60fps being relatively common depending on the game, if developers decide to prioritise frame rates. Performance and quality options are also pretty common in console games now, and although I don't think Nintendo will start providing those kinds of options themselves, I wouldn't be surprised to see quite a few third party games give Switch 2 players the option of 30fps and 60fps modes.
 
For me, it's not whether or not it's strictly better than the Steam Deck that mattered. It's the fact that I was hoping third party games would mostly be 1080p 60fps in handheld mode, and instead it's looking like the more likely result for third party games is 1080p 30fps in handheld, even with the benefits of DLSS.
1080p60 for the majority of games was never going to happen. Targeting 60 FPS is a developer decision, but hardware compute is finite, and it's hard for developers to pass up the doubled frame budget that 30 FPS allows them (33.3 ms per frame instead of 16.6 ms). Not only that, but the Switch 2 is still a tablet with limited CPU and GPU power compared to the PS5 or Series consoles. 60 FPS versions of PS5/Series games were always going to be a tall order for the machine.

This is something a lot of people have said and it needs repeating: DLSS isn't a silver bullet. It's a way to make the Switch 2 punch above its weight, but other consoles also have access to, and are using, comparable upsampling/upscaling solutions.
 
Great video and a great thought experiment, but that's it. I was reminded that before Doom 2016 came out, DF did a similar pseudo-Switch PC experiment. The problem with these is that there are a number of caveats they cannot account for, from games (especially first party) to console optimization, that even enthusiasts might not be able to visualize (myself included). That leads to somewhat odd, sky-is-falling takes. In the end, these are fun experiments; we are less than 4 months (my guess lol) from the reveal, so let's enjoy the ride.
It's a decent practical experiment that ultimately sets a lower bound for what we should expect from the device, but considering that lower bound is "running Death Stranding at native 1080p at a dodgy 30fps" and the upper bound is "running Breath of the Wild at (DLSS or native, unknown) 4K 60fps with no loading times", I'd say that's a pretty fucking good system.
 
It's a decent practical experiment that ultimately sets a lower bound for what we should expect from the device, but considering that lower bound is "running Death Stranding at native 1080p at a dodgy 30fps" and the upper bound is "running Breath of the Wild at (DLSS or native, unknown) 4K 60fps with no loading times", I'd say that's a pretty fucking good system.
No doubt. I hope they revisit this with other tests, 'cause DS was a choice lol.
 
No doubt. I hope they revisit this with other tests, 'cause DS was a choice lol.
I think DS was a decent idea on paper, but the VRAM limitation kinda irked me. I also think a few additional tests from a few more publishers would be a good idea. Resident Evil 4, Doom Eternal, Alan Wake 2 (assuming the laptop can run it without bursting into flames), or even former PS4 exclusives such as God of War, Horizon, and Spider-Man would all be good choices for an additional round-up.
 
I think DS was a decent idea on paper, but the VRAM limitation kinda irked me. I also think a few additional tests from a few more publishers would be a good idea. Resident Evil 4, Doom Eternal, Alan Wake 2 (assuming the laptop can run it without bursting into flames), or even former PS4 exclusives such as God of War, Horizon, and Spider-Man would all be good choices for an additional round-up.
I really hope we get another video; this one was cool, but I need more current-gen titles. That said, I think DS already gives you an idea of how all those exclusives will run, since it's brutally demanding.
 
If we could get A Plague Tale: Requiem running at 1440p DLSS Balanced 30fps on Switch 2 thanks to optimization, that would really be bonkers, but that's probably getting greedy.
 
I really hope we get another video; this one was cool, but I need more current-gen titles. That said, I think DS already gives you an idea of how all those exclusives will run, since it's brutally demanding.
It's a fun thing to test, and community feedback can make the testing more accurate over time. Regardless, even with the small snippet we got from these tests, I'm really excited for the upcoming system.
 
Oliver is covering Resident Evil 8 on iOS, and he's hinting at a not-very-stable game. Passively cooled, bandwidth-starved hardware might still have some way to go.



In other news, Nvidia released their displaced micro-map samples. Trying to make sense of displaced micro-meshes, my best understanding is that it's like Nanite but from the opposite direction (and hardware accelerated). Instead of starting with a high-poly mesh and using an algorithm to simplify it for LODs, you start with a low-poly mesh, tessellate it, then displace the result with a displacement map. So like displacement maps now, but compatible with mesh shaders.
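To illustrate the displacement step, a toy sketch of my mental model (the real DMM format stores compressed per-micro-vertex offsets rather than doing a plain map fetch):

```cpp
#include <cstdio>

// Toy version of the displacement step in displaced micro-meshes: each
// micro-vertex of the tessellated base triangle is pushed along the surface
// normal by a scalar sampled from the displacement map.
struct Vec3 { float x, y, z; };

Vec3 displace(Vec3 p, Vec3 n, float height, float scale) {
    return { p.x + n.x * height * scale,
             p.y + n.y * height * scale,
             p.z + n.z * height * scale };
}

int main() {
    Vec3 base   = {1.0f, 0.0f, 0.0f};           // micro-vertex on the base mesh
    Vec3 normal = {0.0f, 1.0f, 0.0f};           // unit normal of the base triangle
    float h = 0.25f;                            // value sampled from the map
    Vec3 out = displace(base, normal, h, 1.0f); // vertex raised 0.25 units
    printf("(%.2f, %.2f, %.2f)\n", out.x, out.y, out.z);
    return 0;
}
```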

 
Oliver is covering Resident Evil 8 on iOS, and he's hinting at a not-very-stable game. Passively cooled, bandwidth-starved hardware might still have some way to go.



In other news, Nvidia released their displaced micro-map samples. Trying to make sense of displaced micro-meshes, my best understanding is that it's like Nanite but from the opposite direction (and hardware accelerated). Instead of starting with a high-poly mesh and using an algorithm to simplify it for LODs, you start with a low-poly mesh, tessellate it, then displace the result with a displacement map. So like displacement maps now, but compatible with mesh shaders.



NVIDIA can sell me on a 5090 if it can run Cyberpunk without 30% of the game world popping in and out of existence every frame.
 
Man, those Matrix UE5 demo reports did a number on people's expectations. What we see in that video isn't even optimised with Nvidia and Nintendo's tools, yet it's literally on par with the Series S visually, which was the expectation. Now, with all their internal bells and whistles, I suppose they'll somehow get 4K DLSS working.

Honestly, even now, what we saw looked great, and Nintendo games will look beyond incredible. Third-party games will have to be scaled down, but nowhere near Witcher 3 levels.

EDIT: Also, remember that no one ever said the Matrix demo was running at 4K, so it could very well run exactly like it does in that video, just with more RAM and a bit more optimization, at 1080p, which is still fairly impressive for a console this size.
 
Just watched the Digital Foundry video. Always great videos, but as even Richard acknowledges, there are still a LOT of unknowns that could completely change the power narrative. So while the video was fun, it's probably not very accurate in the end, given the large gaps in what we know about the hardware and software feature sets. The results in these videos are fairly solid, but nothing that would blow me away personally. Still, I have a feeling there will be more tech features and system-focused optimizations that will let games get a lot more out of the Nintendo hardware than out of the Digital Foundry test system. We'll just have to wait and see. I wish Nintendo would just unveil this thing already. It's been so long waiting...
 
Blunty put out a Cyberpunk video about a month ago for the ROG Ally: 18W, 720p/Performance and 900p/Performance, medium-ish settings. The frame rate was higher (and uncapped), but the underclocked 2050 was running at a higher resolution (1080p/Balanced, medium-ish).

 
I was drunk off hopium for Smash before; looking at it now, my thinking about the game-logic thing I said earlier makes no sense.

It seems like the much easier solution would just be to have two frames of input lag on top of the system-to-display input lag.
 
Three other things that can be taken away from that video…
  • DF vetting Nate’s BotW loading report.
  • DF mentioning somewhere between 8 & 12GB of RAM. Couple this with Nate saying nothing below 8GB, and we're likely looking at 12GB.
  • T239 is capable of HDMI 2.1 throughput.

Nate said 12GB or 16GB.
 
Nate said 12GB or 16GB.
It was necrolipe.
Nate said he had not heard about the 8 GB.

I would never doubt Nate, but it seems to contradict Richard's source.
 
Blunty put out a Cyberpunk video about a month ago for the ROG Ally: 18W, 720p/Performance and 900p/Performance, medium-ish settings. The frame rate was higher (and uncapped), but the underclocked 2050 was running at a higher resolution (1080p/Balanced, medium-ish).


Want to throw it out there that a lot of the DF tests probably would've benefited from using Ultra Performance mode every chance they got. Let's be honest, "image quality" is not something Nintendo cares all that much about. Trading it in for a few extra frames could've made the difference in these tests.
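For reference, a quick sketch using Nvidia's published DLSS mode scale factors, showing what each mode renders internally for a 4K output:

```cpp
#include <cstdio>

// Nvidia's published DLSS mode scale factors (per axis):
// Quality 0.667x, Balanced 0.58x, Performance 0.5x, Ultra Performance 0.333x.
int main() {
    struct Mode { const char* name; double scale; };
    const Mode modes[] = {{"Quality", 0.667}, {"Balanced", 0.58},
                          {"Performance", 0.5}, {"Ultra Performance", 0.333}};
    const int out_w = 3840, out_h = 2160;  // 4K output target
    for (const Mode& m : modes)
        printf("%-17s renders about %dx%d internally\n", m.name,
               (int)(out_w * m.scale), (int)(out_h * m.scale));
    // Ultra Performance renders roughly 1280x720 for a 4K output, which is
    // why it frees up so much frame time on low-power GPUs.
    return 0;
}
```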
 