
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Ok, this is new, I wasn't aware that Xbox supports ML. What do they use it for?
Basically nothing. Since Xbox also targets PC and they want to support older PCs, a significant portion of the Xbox feature set hasn't seen use so far.
So here's what I want to know. With traditional polygons, you have to use the CPU to issue draw calls for them. Is that the same here as well? Or is it still polygons, but you can have a low-polygon-count model and the mesh shader just boosts it further? Isn't that similar to tessellation?
Whatever happened to tessellation? I thought it was supposed to be the next hip thing when gen 8 came out, or at least around DirectX 10 or 11? Same with POM.
Also, I guess we have a while before mesh shaders are the standard, right? I mean gaming overall.
I don't know exactly how it works, but I believe the only game so far to fully utilise mesh shaders is Alan Wake 2. I believe UE5 also makes use of them, but I'm not 100% sure about this. There's no reason Nintendo's teams won't use it though, as they typically use everything that's available to them.

The Steam Deck is way more advanced than the PS4 and Xbox One: newer architecture, ray tracing support, a better CPU. Idk what the memory bandwidth is, but it has more RAM.
And Drake is more advanced than the SD.
 
This is why I would rather wait for either really solid leaks from reliable sources or actual details from Nintendo.
If only NatetheHate, Necrofilipe or the goat Midori or even Pyoro mentioned something about that damn battery node.

vegeta.gif
 
Basically nothing. Since Xbox also targets PC and they want to support older PCs, a significant portion of the Xbox feature set hasn't seen use so far.
That's wild! I mean, I guess that makes sense, since papa Microsoft tells Xbox to play with its cooler, older, sexier brother Windows.


I don't know exactly how it works, but I believe the only game so far to fully utilise mesh shaders is Alan Wake 2. I believe UE5 also makes use of them, but I'm not 100% sure about this. There's no reason Nintendo's teams won't use it though, as they typically use everything that's available to them.
Man, I wonder how much performance Nintendo can save using this feature. They already make good use of (relatively) lower polygon counts.

And Drake is more advanced than the SD.
Bingo! So we don't have to worry about PS4-to-Super-Switch comparisons. We will get Switch ports... unless Sony has their grubby hands on certain IPs (looking at mainline FF).
 
If only NatetheHate, Necrofilipe or the goat Midori or even Pyoro mentioned something about that damn battery node.

vegeta.gif
The odds are that Midori is not aware of the details, and only Nate and Pyoro are credible enough of the ones you mentioned.
 
Dammit Kevin, we need to find it ourselves. We go to Canada and find a major Switch developer. Offer everyone pancakes with maple syrup and they will surely let us into their office.

Time for a road trip? We need to buy an old Volkswagen, spray paint it red with Mario on the side, and go forth on an adventure offering free pancakes to anyone who can offer up Switch 2 information. Let's GO!!!

Let's become the heroes that break us all free from the shackles of no Switch 2 news every year and let it rain juicy, gory Switch 2 internal details! 👍
 
If only NatetheHate, Necrofilipe or the goat Midori or even Pyoro mentioned something about that damn battery node.

vegeta.gif

Nate is far too busy with more important things right now. He and Doug Bowser are busy playing open world 3D Mario in 8K on Switch 2 in their luxury hotel room at the moment.
 
Time for a road trip? We need to buy an old Volkswagen, spray paint it red with Mario on the side, and go forth on an adventure offering free pancakes to anyone who can offer up Switch 2 information. Let's GO!!!

Let's become the heroes that break us all free from the shackles of no Switch 2 news every year and let it rain juicy, gory Switch 2 internal details! 👍
I will bring the snacks and soda for the long trip! 🚗
Less about performance savings, more about extra detail for similar performance.
Hmm, sexy, I like it.
 
Whatever happened to tessellation? I thought it was supposed to be the next hip thing when gen 8 came out, or at least around DirectX 10 or 11? Same with POM.
Also, I guess we have a while before mesh shaders are the standard, right? I mean gaming overall.
afaik tessellation has been a standard feature of GPU rendering since DX11. it's everywhere.
 
I also think that with VRR, Nintendo could probably do it at 60Hz if they wanted. Mobile screens with VRR tend to also be high-end screens with 120Hz, but there's nothing technologically preventing you from having 60Hz VRR. There'd be a one-time R&D fee, but it'd probably be cheaper in the long run.
I did mention that Nintendo could theoretically work with display manufacturer(s) (e.g. Sharp, etc.) to design and manufacture a custom, mobile 60 Hz display that also has VRR support.

But yes, there's nothing technologically speaking preventing a 60 Hz display from supporting VRR.

I just don't expect having VRR support in a 60 Hz display to be inexpensive, especially if display manufacturer(s) upcharge adding VRR support since VRR support is seen as a premium feature by display manufacturer(s).

The question is will Nintendo be the only customer for a custom 60 Hz display with VRR support?

If Nintendo's the only customer, then I don't really see a custom 60 Hz display with VRR support rapidly dropping down in price. Nintendo definitely doesn't have as much purchasing power as smartphone companies to rapidly drive down costs.

One benefit of using a 120 Hz display with VRR support is that Nintendo's one of many customers purchasing these displays. So Nintendo definitely benefits from the purchasing power of smartphone companies through rapidly decreasing costs.

Of course, that's assuming Nintendo wants VRR support across the board. And whether that's the case is currently unknown.
 
Whatever happened to tessellation? I thought it was supposed to be the next hip thing when gen 8 came out, or at least around DirectX 10 or 11? Same with POM.
Also, I guess we have a while before mesh shaders are the standard, right? I mean gaming overall.
Tessellation is still around and kicking. It's just been so widely adopted that it isn't really advertised anymore, especially since it's more of an "under the hood" thing vs. something like ray tracing, where suits can go "Hey wowie, look at the graphics!"

RE: mesh shaders, yeah it's gonna be a little bit until it becomes standard. Switch 2 is projected to have it, the Xbox Series systems have it, but the big problem is that the most popular current gen system - the PS5 - doesn't support it. Still uses primitive shaders. Give devs some more time to get acquainted with DX12 and a (hopefully) popular system like the next Switch as a reason to use 'em, and adoption rate should go up.
 
I did mention that Nintendo could theoretically work with display manufacturer(s) (e.g. Sharp, etc.) to design and manufacture a custom, mobile 60 Hz display that also has VRR support.

But yes, there's nothing technologically speaking preventing a 60 Hz display from supporting VRR.

I just don't expect having VRR support in a 60 Hz display to be inexpensive, especially if display manufacturer(s) upcharge adding VRR support since VRR support is seen as a premium feature by display manufacturer(s).

The question is will Nintendo be the only customer for a custom 60 Hz display with VRR support?

If Nintendo's the only customer, then I don't really see a custom 60 Hz display with VRR support rapidly dropping down in price. Nintendo definitely doesn't have as much purchasing power as smartphone companies to rapidly drive down costs.

One benefit of using a 120 Hz display with VRR support is that Nintendo's one of many customers purchasing these displays. So Nintendo definitely benefits from the purchasing power of smartphone companies through rapidly decreasing costs.

Of course, that's assuming Nintendo wants VRR support across the board. And whether that's the case is currently unknown.
That would actually be huge. Plus, Sharp and Nintendo have been partners; I wouldn't be surprised if they want in on the action for the Switch 2 and to be Nintendo's main display partner.
 
Speaking of tessellation, it's actually usable for all Nanite assets now, which is nice because it can help save space on assets as well as allow for more detail on budget constrained projects. One big criticism of Nanite was that most indie devs wouldn't use it because they wouldn't have the budget to make assets detailed enough to need Nanite in the first place, and tessellation helps with that.

While use of tessellation is widespread, it is not cheap, and Nanite can keep it in check by making sure that the added geometric detail isn't rendered to a point that the human eye can't resolve. Nanite and tessellation go very well together...in theory. In practice, it's a bit of a mess in Unreal right now.
 
Lol, I remember that. I think some people even thought it was supposed to use the power of Watson. Watson was that cheating bastard "AI" on Jeopardy that pretty much googled the answers on the internet while those decent but extremely smart folks were just jobbers, right? I hope Jeopardy paid those folks.



Looking back at it, at least for the handhelds, Nintendo always reused their previous gen. I guess it makes sense that Nintendo had that thought in their heads to reuse it again at the time.

Historically, Nintendo would use some form of the existing hardware in their new system, which enabled native backwards compatibility. Interestingly enough, the 3DS did have the GBA's guts as well, which was made clear during the Ambassador program, but I think the GBA hardware was used for a different purpose?

Wii U also had native GCN support, but it was locked out because the GCN ports weren't there (they should've designed a compatibility layer using the Pro Controller, but whatevs). Nintendo has this weird habit of being supportive of older platforms, but then also not, for whatever reason. The GBA supported GB and GBC, but the DS only supported GBA. In theory, it could've supported everything, but Nintendo chose not to.

Nintendo can be quite daft, and genius at the same time. 🤷‍♂️
 
RE: mesh shaders, yeah it's gonna be a little bit until it becomes standard. Switch 2 is projected to have it, the Xbox Series systems have it, but the big problem is that the most popular current gen system - the PS5 - doesn't support it. Still uses primitive shaders. Give devs some more time to get acquainted with DX12 and a (hopefully) popular system like the next Switch as a reason to use 'em, and adoption rate should go up.
The PS5 technically supports them, but it's a custom implementation made by Sony based on primitive shaders, the "geometry engine". Developers need to put in extra effort to support both of them, or simply go Nanite.
 
RE: mesh shaders, yeah it's gonna be a little bit until it becomes standard. Switch 2 is projected to have it, the Xbox Series systems have it, but the big problem is that the most popular current gen system - the PS5 - doesn't support it. Still uses primitive shaders. Give devs some more time to get acquainted with DX12 and a (hopefully) popular system like the next Switch as a reason to use 'em, and adoption rate should go up.
I heard the PS5 has something similar. I don't know.
but I think the GBA hardware was used for a different purpose?
Yeah, I forgot. I think it was either security or audio or something.

Wii U also had native GCN support, but it was locked out because the GCN ports weren't there (they should've designed a compatibility layer using the Pro Controller, but whatevs). Nintendo has this weird habit of being supportive of older platforms, but then also not, for whatever reason. The GBA supported GB and GBC, but the DS only supported GBA. In theory, it could've supported everything, but Nintendo chose not to.

Nintendo can be quite daft, and genius at the same time.
Very daft. I can never, for the life of me, truly understand their stance on their legacy library. I think as far as perception goes they are in third place, don't get me wrong. As far as the Switch goes, I get it: a different CPU architecture altogether, and the previous emulation on the Wii U wasn't that great. If they had pushed harder, I think they could've had something great.

While use of tessellation is widespread, it is not cheap, and Nanite can keep it in check by making sure that the added geometric detail isn't rendered to a point that the human eye can't resolve. Nanite and tessellation go very well together...in theory. In practice, it's a bit of a mess in Unreal right now.
Speaking of Unreal: who in here thinks that Unreal got a devkit first, like maybe before Nintendo's US first-party teams?
 
Speaking of tessellation, it's actually usable for all Nanite assets now, which is nice because it can help save space on assets as well as allow for more detail on budget constrained projects. One big criticism of Nanite was that most indie devs wouldn't use it because they wouldn't have the budget to make assets detailed enough to need Nanite in the first place, and tessellation helps with that.

While use of tessellation is widespread, it is not cheap, and Nanite can keep it in check by making sure that the added geometric detail isn't rendered to a point that the human eye can't resolve. Nanite and tessellation go very well together...in theory. In practice, it's a bit of a mess in Unreal right now.
That's what the 1943 Marvel game is in fact betting on, which tells me it isn't that much of a mess. They even cited it at the GDC conference and showed it off: "Nanite Tessellation", which renders the old method of doing it obsolete.
 
That post is from March 22. In September last year he doubled down on 8nm. We should go off the most recent thing he said.

That's not what he's actually saying. That's what you are projecting.
That's exactly what he's saying. And you're simply projecting in the opposite direction that the later post is "doubling down" and more confident.

I really can't wrap my head around how many pages of discussion there have been over this, to no benefit to anyone. When there's nothing to talk about, you can just not post, instead of having the same discussion for the 50th time.
 
That's what the 1943 Marvel game is in fact betting on, which tells me it isn't that much of a mess. They even cited it at the GDC conference and showed it off: "Nanite Tessellation", which renders the old method of doing it obsolete.

I'm speaking based on my experience as a developer using it. It's just super buggy and needs constant fiddling. Not a good experience, even if I'm still using it.
 
I'm speaking based on my experience as a developer using it. It's just super buggy and needs constant fiddling. Not a good experience, even if I'm still using it.
Hopefully it improves for the regular dev once one big project leverages it, and this one is shaping up to be it.
 
I love how the discourse on the Switch 2's power went from "it'll be between the PS4 and Xbox Series S" to "barely reaching Steam Deck performance".
The Steam Deck is between the PS4 and the Series S.

I love talking about tech with folks, and finding ways to explain complex ideas. But part of why this discussion goes round in circles is no one agrees on a definition of "power" and trying to sort out the complexity results in three page long posts that get skipped by half the people.
 
The Steam Deck is between the PS4 and the Series S.

I love talking about tech with folks, and finding ways to explain complex ideas. But part of why this discussion goes round in circles is no one agrees on a definition of "power" and trying to sort out the complexity results in three page long posts that get skipped by half the people.
I mean, Oldpuck... isn't the most sane thing to do with this entire discourse to outright assume the Switch 2 will outperform the Deck in all the ways that matter, and have bespoke software for it as a bonus? The GPU is bigger, and Ampere has consistently shown better feats even with its somewhat disappointing process node at the time... I personally wouldn't bother comparing these devices. RDNA2's raster advantage didn't really mean anything at the time against similarly sized 3000 series cards (VRAM is what held them back), so why would it matter now?
 
Honestly, I think all 8nm means is that our performance-per-watt calculations are way off and Nvidia engineers did the impossible. There is simply a limit to how low Ampere can go (I believe 420MHz is the absolute minimum). And even if those are the final clocks, I wouldn't say that's terrible.

Correct. If it's 8nm, and I am accepting it as a real possibility, then the paper-napkin math here was off the mark. From what I have read here, it seems to be accepted that even at 420MHz, it would draw too much power for portable mode. However, when you look at the Nvidia MX570 with a TDP of 15W, a base clock of 830MHz and a boost clock of 1150MHz, the waters get murky. The MX570 has 25% more GPU cores than the T239, so the T239's GPU cores should draw around 11 watts at the same clock speeds. Let's just split the difference between base and boost clock and settle in at a 1GHz GPU for T239: 11 watts for the GPU and 4 watts for the CPU. CPU power draw will be constant, so the question is: can Nvidia get 1536 GPU cores operating at 420MHz on 3-4 watts? We know Nvidia is employing some power-saving tech from Ada, so.........maybe?
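If anyone wants to sanity-check that napkin math, here's a rough sketch of it in code. The MX570 core count and the linear scaling with cores and clock are my assumptions (real power scales better than linearly when voltage can drop too, and worse once static leakage dominates), so treat it as a ballpark only:

```python
# Back-of-envelope sketch of the scaling described above.
# The MX570 core count and linear-scaling assumptions are mine, not measured data.
mx570_tdp_w = 15.0        # quoted MX570 TDP
mx570_cores = 2048        # assumed CUDA core count for the MX570 (GA107-based)
mx570_clock_mhz = 1000    # "split the difference" between 830 MHz base and 1150 MHz boost

t239_cores = 1536         # T239 CUDA core count from the Nvidia leak

# Scale TDP linearly with core count (ignores memory, fixed overheads, binning).
t239_w_1ghz = mx570_tdp_w * t239_cores / mx570_cores            # ~11.3 W at ~1 GHz

# Scale linearly with clock down to 420 MHz; only a ballpark for the reasons above.
t239_w_420mhz = t239_w_1ghz * 420 / mx570_clock_mhz             # ~4.7 W

print(f"~{t239_w_1ghz:.1f} W at ~1 GHz, ~{t239_w_420mhz:.1f} W at 420 MHz")
```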
 
So here's what I want to know. With traditional polygons, you have to use the CPU to issue draw calls for them. Is that the same here as well? Or is it still polygons, but you can have a low-polygon-count model and the mesh shader just boosts it further?
GPUs take geometry in a standardized format. You have to send it over in that format. If you want to change the geometry on the GPU, your primary option is a vertex shader. Vertex shaders can alter geometry, but they can't change the number of triangles - the number of polygons - in the mesh, just change their shape. Vertex shaders let you distort 3D objects, and there are clever developers who use vertex shaders for animation that is accelerated on the GPU.

Mesh shaders can read any data in any form you like and they can create and destroy triangles in the mesh. That allows developers to use any custom format for geometry they want. They can generate geometry in real-time if they like, without doing CPU work. They can destroy triangles to reduce GPU load.
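Purely as a conceptual illustration (toy Python, nothing to do with the real HLSL/GLSL mesh shading APIs, and all the field names are made up): a vertex-shader-style pass can only move the vertices it was handed, while a mesh-shader-style pass reads whatever data it wants and decides how many triangles come out.

```python
def vertex_shader_pass(vertices):
    """Can move vertices around (e.g. a wave distortion), but the triangle count
    is fixed by the input mesh - nothing gets created or destroyed."""
    return [(x, y + 0.1 * abs(x), z) for (x, y, z) in vertices]

def mesh_shader_pass(patches, camera_distance):
    """Reads arbitrary per-patch data and decides what geometry to emit:
    cull sub-pixel patches entirely, emit coarse or fine triangle lists otherwise."""
    triangles = []
    for patch in patches:
        if patch["screen_size"] < 0.001:       # destroy: emit nothing for this patch
            continue
        if camera_distance < 50.0:             # create more detail near the camera
            triangles.extend(patch["fine_triangles"])
        else:
            triangles.extend(patch["coarse_triangles"])
    return triangles
```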

So what changes at the instruction level? Code-wise? From what I understand, a change in architecture is a change in the micro-architecture language. Like, some instruction could take X cycles to finish, but the newer instruction set takes X - Y cycles due to some optimization.
A GPU is a pipeline of multiple stages. Each of those stages has specialized hardware to accelerate that stage in the pipeline.

Shaders are programs that run on the programmable cores inside the GPU, and have some kind of ability to control that specialized hardware.

Most of these features here are about updates to that specialized hardware. Which may or may not add custom instructions to the Instruction Set Architecture. ISAs are private to the GPU manufacturers, so devs never touch it. Instead, devs write shader code which gets converted to ISA by the driver.

Very few folks have done any reverse engineering work to understand what's going on at the ISA level for this hardware.
 
I mean, Oldpuck... isn't the most sane thing to do with this entire discourse to outright assume the Switch 2 will outperform the Deck in all the ways that matter
Sure, but...

You then get smart people who look at the benchmarks and ask how does it do that. Or people who ask by how much? Or will it run X.

Those are all interesting questions, but since it's not simple, you start to separate all the different definitions of power. Which leads to complexity, when most people want "tell me if the games will look as good or not."

Then you answer the simple question ("yes, they will, or better"), and then someone new pops up and says but how? And the circle begins again.

With AMD waking up to what Nvidia is doing, and Intel adding their own disruptions to the mix, I think very rapidly people are going to understand what Nvidia's hardware is offering, as it will become standard. But right now, people are so used to the "all consoles are just PC hardware built from standard PC technologies," it's understandably hard to "get" a device that isn't that. Especially when, on the surface, two gaming handhelds feel like things you really ought to be able to compare easily.
 
The Steam Deck is between the PS4 and the Series S.

I love talking about tech with folks, and finding ways to explain complex ideas. But part of why this discussion goes round in circles is no one agrees on a definition of "power" and trying to sort out the complexity results in three page long posts that get skipped by half the people.
Yup! It is hard to explain because, for the longest time, everyone's preconceived notion has been that more power is just better overall, even though you explain yourself quite well. It doesn't help that they see the Switch as a "PS3 level" machine despite all the advanced features. It's so much easier to say "PS4, 1.8 teraflops" than to go over the feature set and architectural advantages the Steam Deck has.

Or maybe they do understand it a bit, but they don't know how to effectively communicate what they're trying to say? Like me.
Very few folks have done any reverse engineering work to understand what's going on at the ISA level for this hardware.
Gotcha! It is abstract. We don't need to know.
 
You then get smart people who look at the benchmarks and ask how does it do that. Or people who ask by how much? Or will it run X.

Those are all interesting questions, but since it's not simple, you start to separate all the different definitions of power. Which leads to complexity, when most people want "tell me if the games will look as good or not."
That's the thing, and what the rest of my comment is trying to say. We like to claim RDNA2 has a raster advantage over Ampere at the same flops, which is true, but then you look at the benchmarks for both of these archs and you end up asking yourself... where is the advantage (in gaming workloads, at least)? Let's look at the RTX 3060 and the RX 6700 XT for example (12.74 TFLOPS vs 13.21 TFLOPS): the AMD one gains the upper hand, but not by enough to matter, like 10 FPS at best, and that's with a slight advantage in the numbers either way. Splitting hairs gets even more pointless at the level of performance the Switch 2 may achieve, say... 1.8 vs the SD's 1.6; the advantage might not really be there anymore (especially since they cut the RT/intersection acceleration hardware out of the Van Gogh chip, further crippling its forward-looking potential).
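For anyone who wants to check those paper numbers themselves: FP32 TFLOPS is just shader count × 2 ops per clock (FMA) × clock speed. The T239 clock below is a placeholder I picked to land on 1.8 TFLOPS, not a leaked figure:

```python
def fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    # Each core retires one fused multiply-add (2 FLOPs) per clock on paper.
    return shader_cores * 2 * clock_ghz / 1000.0

print(fp32_tflops(3584, 1.777))  # RTX 3060        -> ~12.7
print(fp32_tflops(2560, 2.581))  # RX 6700 XT      -> ~13.2
print(fp32_tflops(512, 1.6))     # Steam Deck GPU  -> ~1.6
print(fp32_tflops(1536, 0.59))   # T239 at a hypothetical ~590 MHz -> ~1.8
```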
 
I did mention that Nintendo could theoretically work with display manufacturer(s) (e.g. Sharp, etc.) to design and manufacture a custom, mobile 60 Hz display that also has VRR support.

But yes, there's nothing technologically speaking preventing a 60 Hz display from supporting VRR.

I just don't expect having VRR support in a 60 Hz display to be inexpensive, especially if display manufacturer(s) upcharge adding VRR support since VRR support is seen as a premium feature by display manufacturer(s).

The question is will Nintendo be the only customer for a custom 60 Hz display with VRR support?

If Nintendo's the only customer, then I don't really see a custom 60 Hz display with VRR support rapidly dropping down in price. Nintendo definitely doesn't have as much purchasing power as smartphone companies to rapidly drive down costs.

One benefit of using a 120 Hz display with VRR support is that Nintendo's one of many customers purchasing these displays. So Nintendo definitely benefits from the purchasing power of smartphone companies through rapidly decreasing costs.

Of course, that's assuming Nintendo wants VRR support across the board. And whether that's the case is currently unknown.
here's hoping they support 120Hz, or at least 90Hz.

It's common enough with mobile phones these days, as well as the latest PC gaming handhelds. The SD OLED supports 90Hz.

120Hz would be more ideal and future-proof, and allows smooth 40fps support for some games. With 90Hz, maybe a smooth 45fps could be a thing as a stopgap between 30 and 60fps (closer than 40), though it's more uncommon than 40fps. Of course, both would be ideal for VR gaming. Even if VRR gaming ends up being a gimmick (it likely will), having a stable sub-60fps framerate is good in the long run for demanding first-party games and ports.

120Hz for gaming will be rare on Switch 2, but having it as an option for games like Super Smash Bros. and Mario Kart (as well as indie games) would be nice. It might not be ideal in handheld mode, but for docked mode it would be great.

To think, it would be nuts if we got a 1080p HDR screen with 90Hz or 120Hz support. But I will say I'd be disappointed if we get a 60Hz screen, and I am at least expecting a 1080p screen (or 800p in the worst-case scenario).
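Quick arithmetic aside on the refresh rate talk above (not a claim about what the hardware will do): a frame rate only paces evenly when the panel's refresh rate is an integer multiple of it, which is why 40fps wants an 80Hz or 120Hz panel rather than 60Hz or 90Hz.

```python
# Which target frame rates divide evenly into which fixed refresh rates?
refresh_rates = [60, 80, 90, 120]
targets = [30, 40, 45, 60]

for hz in refresh_rates:
    even = [fps for fps in targets if hz % fps == 0]
    print(f"{hz} Hz panel: even pacing at {even} fps")

# 60 Hz  -> [30, 60]
# 80 Hz  -> [40]        (what the Steam Deck OLED drops to for 40 fps)
# 90 Hz  -> [30, 45]
# 120 Hz -> [30, 40, 60]
```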
 
here's hoping they support 120Hz, or at least 90Hz.

It's common enough with mobile phones these days, as well as the latest PC gaming handhelds. The SD OLED supports 90Hz.

120Hz would be more ideal and future-proof, and allows smooth 40fps support for some games. With 90Hz, maybe a smooth 45fps could be a thing as a stopgap between 30 and 60fps (closer than 40), though it's more uncommon than 40fps. Of course, both would be ideal for VR gaming. Even if VRR gaming ends up being a gimmick (it likely will), having a stable sub-60fps framerate is good in the long run for demanding first-party games and ports.

120Hz for gaming will be rare on Switch 2, but having it as an option for games like Super Smash Bros. and Mario Kart (as well as indie games) would be nice. It might not be ideal in handheld mode, but for docked mode it would be great.

To think, it would be nuts if we got a 1080p HDR screen with 90Hz or 120Hz support. But I will say I'd be disappointed if we get a 60Hz screen, and I am at least expecting a 1080p screen (or 800p in the worst-case scenario).

I hope that's the case, but I'm expecting 60hz sadly.
 
ExtraSS Part 3: Frame Extrapolation

So this is all about Intel's frame generation, right? That's the headline from the paper (which I missed when it was first released last year; the demo videos are giving it a second round, thanks for flagging @Dakhil). So why all the talk about the upscaler improvements?

Because one of Intel's goals is to unify the upscaler and frame generator. FSR and DLSS do frame generation as a second pass, potentially with its own inputs. In the case of FSR and DLSS, they both need optical flow data. FSR uses the GPU and async compute to do it (which all modern GPUs have). Nvidia uses their specialized optical flow hardware (which leaves the GPU free to do other stuff). It's unclear if DLSS's AI model is used for frame generation (it probably is), but the overall algorithm is separate.

Intel wants to feed the same data into both frame gen and upscaling, ideally as the same algorithm. Essentially, Intel wants frame gen to be upscaling from nothing... except it's not nothing. It's from the G-buffer. That's the brilliance and the cheat. Let's start with the brilliance.

The G-buffer is an early step in the rendering pipeline. Not only does it provide the upscaler with basic 3D information, it's the core of normal rendering anyway. AMD and Nvidia are taking complete frames, which are fully rendered but just 2D images, and then trying to generate a third 2D image between them. Intel instead wants to take the G-buffer, an early step in the rendering pipeline, and use past rendering to draw the colors over the shapes.

Intel's upscaler already understands how to use the G-buffer to correctly move the past frame's pixels over the actual objects moving in the scene. With a new G-buffer, they can just do that without any pixel data at all, exclusively using last frame's colors. And where FSR and DLSS need to do optical flow for the whole frame to do generation, the G-buffer means that optical flow only needs to be done for the parts of the screen that have things like shadows in them. And the shading refinement model already handles that, at low resolution even.
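A toy sketch of that idea in numpy (my own illustration, not Intel's actual algorithm): given per-pixel motion vectors, which a G-buffer pass produces without shading anything, you can pull last frame's colors over to where those surfaces sit now.

```python
import numpy as np

def reproject(prev_color: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Toy reprojection: for each pixel of the new frame, fetch the color from where
    that surface was in the previous frame.
    prev_color: (H, W, 3) last frame's colors; motion: (H, W, 2) pixel offsets (dx, dy)."""
    h, w = motion.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    return prev_color[src_y, src_x]
```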

Which brings us to the cheat. The cheat is... you still have to make the G-buffer.

Hold on, let's go back a second. DLSS frame generation was, in some ways, a response to game developers having trouble with CPUs. CPUs are getting more and more cores, but game development is still locked into single core technologies. Meanwhile, GPUs are really good at using lots of cores and have continued to get more and more powerful. 4k is as far as any sane person wants to go for resolution, so gamers would love to push frame rates up to get more smoothness, more perceived detail.

But the games are being CPU limited, leaving the GPU with horsepower sitting there untouched. Along comes frame generation, which uses up the extra power in the GPU to give you smoothness even when the CPU is busy.

Intel frame extrapolation is solving a different problem. ExtraSS isn't giving you extra frames using excess GPU power. Instead, ExtraSS depends on the CPU being ahead of the GPU. It still needs the CPU to generate all the stuff it normally does for a frame, but lets the GPU skip all the shaders and jump right to a completed frame.

It really is upscaling from 0, or at least close to 0 - in that the CPU still has to actually run at the high frame rate and initiate the frame, but then the upscaler is like "you know what, don't even bother with color or lighting or shading or textures or any of that shit, just draw me a freakin' pencil sketch and I'll do the rest." And pencil sketch isn't too far off. Intel is explicit that for "extrapolated frames" you generate the G-buffer at really low resolutions.

The advantage of this technique isn't just "new frames without added latency." It's "new frames, plus all the reduced latency you'd expect from high frame rates." Since the CPU runs at the full frame rate, the CPU can sample the controller at the full frame rate as well!

And it opens up the possibility of only doing some of the CPU work every frame. Think about animations - you already see animations running at lower frame rates in the background, in order to reduce CPU load. On these "extrapolated" frames, games could update just the animations really near the camera, and only run physics on "real" frames.
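A sketch of what that scheduling could look like in a game loop (my own illustration of the idea, with hypothetical stand-in functions, not anything from Intel's paper): input is sampled every frame, heavy CPU work and full shading only on the "real" frames.

```python
# Hypothetical stand-ins for engine work, just so the loop below runs.
def poll_controller(i):            return {"frame": i}
def update_physics_and_ai(inp):    pass
def build_gbuffer(low_res=False):  return {"low_res": low_res}
def shade_fully(gbuf):             return "fully shaded frame"
def extrapolate_shading(gbuf):     return "extrapolated frame (reused shading)"

frames = []
for i in range(6):
    inp = poll_controller(i)                  # sampled every frame -> full-rate input latency
    if i % 2 == 0:                            # "real" frame
        update_physics_and_ai(inp)            # heavy CPU work only here
        frames.append(shade_fully(build_gbuffer()))
    else:                                     # extrapolated frame
        frames.append(extrapolate_shading(build_gbuffer(low_res=True)))
print(frames)
```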

Frame generation - even frame extrapolation - is a bad name for this technique. Intel isn't generating frames that were never rendered. And they're not extrapolating frames that haven't been rendered either! The frames are rendered all the way up to geometry. It's really neural shading. Intel is using AI to take the first step of the rendering pipeline and then estimate the shading that would have happened by reusing shading from past frames, the same way that temporal upscaling reuses past pixels.

It's a clever technique that will require substantially more engine integration, and will benefit a totally different class of titles than the ones that benefit from current frame generation techniques. If I had to bet, the long-term advantages of this technique are much higher than Nvidia's and AMD's approach. Currently, games are CPU limited not because they're out of CPU power, but because multi-threading is hard, and retrofitting it into existing engines is even harder. But intuitively, it seems like games ought to be able to take advantage of more cores - like, every enemy should be able to run their AI and animations, and even basic physics for non-colliding bodies, on separate threads.

But GPU growth is slowing down because the node shrinks are slowing. Assuming that games will be CPU limited might prove to be short sighted as engines catch up to multi-core designs, while GPUs put more and more power into things other than shader performance (like AI). Intel making every other frame free on the GPU side, as long as the CPU does the work, might be the better bet.

The fact that it improves their upscaler as well is just a nice addition.

I wonder how much lower the framerate could be allowed to go with this technique than with traditional framegen. I assume the answer is "some, at least" since you don't take the hit to latency, but you would still need to deal with the visual issues. If they're so minor that you could use the technique at 40fps then it could be absolutely massive.
 
Everything is going in the worst possible direction and I fear the thread is going to replicate the tragic end of Wii U speculation back in the day.
I’ve already accepted 8nm. If it’s better? Great! If not? Oh well, Nintendo is like an iron block, immovable as ever.
The "AMD raster advantage" is something like 30%. I would bet that the lack of Infinity Cache isn't really hurting the Steam Deck here.
I was agreeing until this point, unless it's expanded upon. With RDNA2, AMD made adjustments to the architecture where 2 CUs can tangibly share resources and together equal the 128 vector lanes of an Nvidia SM, which likewise has 128 vector lanes.

Or, the WGP is compared to the SM on that front; in that scenario you get the possibility of higher performance, but only if you account for the system working in WGPs, which can lead to a bigger performance difference in favor of AMD. But keep in mind, that's 2 GPU cores vs a single GPU core. It's not really apples to apples; it's apples to oranges.

Here's an article from Chips and Cheese where they go into further detail regarding the RDNA2 microarchitecture, and note that they aren't really equivalent units (WGP to SM) but only do a loose comparison because they have the same number of vector lanes in this scenario: https://chipsandcheese.com/2023/02/19/amds-rdna-2-shooting-for-the-top/

The reason why a WGP can and would be able to do that is that it has access to more waves it can run, above the number of warps an SM can handle. Not only that, but within a WGP you have more register file than a single SM even contains.

The issue with the Van Gogh GPU, so strictly the GPU, is that it lacks the Infinity Cache. Its bigger siblings in the RDNA2 family (not the consoles or other APUs) have Infinity Cache, with which AMD greatly mitigated the architecture's sore spots - the very small L0, the anemically small L1 per shader engine, and the rather small L2 for a whole GPU - by adding an incredibly large L3 cache.

AMD has, historically anyway at least based on what people say, been bandwidth starved relatively speaking. Infinity Cache mitigates that massively because it wasn’t a 10MB L3, it was several dozen MB of L3.


This presents a much more robust and sophisticated caching system than any Nvidia GPU. Nvidia "simply" (grossly oversimplified) increased the capacity of the L2, but still has a less hierarchical structure than an RDNA2 or RDNA3 GPU.

With 4 levels of cache, and a system that lets it cover for where it’s troubled, RDNA2 looks to be doing just fine, just dandy even.

And then you remove it.

And what you're left with is a GPU that doesn't have that large "safety net" to help it where the system is structurally anemic (L0, L1 and L2). The Deck is also pretty small compared to other RDNA2 GPUs, and structured smaller in general, but these stayed the same: it has 32KB of L0 per WGP (4 x 32KB), 128KB of L1 per shader array (1 x 128KB) and a 1024KB private L2 cache for the GPU. On top of that, the rasterizer is halved from 32 pixels per clock to 16 pixels per clock.



Van Gogh gets a unique implementation of a Shader Engine.
The Rasterizer throughput is halved, from 32 pixels/clock to just 16.
And instead of 2 Shader Arrays per Shader Engines, the APU has only one Shader Array.
2 Render Backends are included, leading to 16 Color ROPs and that’s a historic upgrade.
Since Llano in 2011, 8 Color ROPs was the maximum APUs got.
So after a decade, the pixel fillrate per clock will be doubled.
The L2$ size is still 1 Mebibyte, the same size as on the Vega 8 iGPUs in Renoir and Cezanne, which in marketing terms also use 512 shader cores.
What is different to previous APU designs though is the memory controller situation.
Van Gogh has 4x 32-Bit unified memory controllers, while previous APU designs got 2x 64 Bit Controllers, which also could control 4 memory channels, if LPDDR4 memory is used.


But finally, Drake has 12 SMs. The Deck has 8 CUs, or 4 WGPs. If we are comparing them in terms of clock speed, then the Deck wins out. But we have to remind ourselves that the Deck has significantly smaller resources available to it than what T239 has. If we look at it from the WGP angle, objectively T239 is 3x the size of Van Gogh. If we don't, it's 1.5x the size.

Van Gogh lacks the extra sophistication in its cache hierarchy and it is bumped down to be closer to how Ampere or the consoles operate (GPU wise), aka they have their L0, L1 and then L2. They have their register files, and they have a shared memory pool.


The IC does matter because of how RDNA2 is configured on PC. It's not that they don't account for it; it's simply that they made sacrifices and adjustments that work better for the profile they wanted the APU to have.

Anyway, I’m off!

Edit: Forgot to mention that Van Gogh should have 128KB of LDS per WGP, so 512KB of LDS across the whole GPU.

And per CU is 256KB of vRF, so 2048KB of vRF across the whole GPU.

For the sRF, it should be 16KB of scalar Register File per CU, so 128KB across the whole GPU.

I may have missed a couple.
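Totalling those per-unit figures (taking the post's numbers at face value rather than verifying them against AMD's docs) gives a rough picture of how much on-chip storage Van Gogh has to work with:

```python
# Per-unit on-chip storage figures from the post, totalled for Van Gogh (4 WGPs / 8 CUs).
wgps, cus = 4, 8
van_gogh_kb = {
    "L0 cache":  32 * wgps,    # 32 KB per WGP   -> 128 KB
    "L1 cache":  128,          # one shader array -> 128 KB
    "L2 cache":  1024,         # 1 MB private GPU L2
    "LDS":       128 * wgps,   # 128 KB per WGP  -> 512 KB
    "vector RF": 256 * cus,    # 256 KB per CU   -> 2048 KB
    "scalar RF": 16 * cus,     # 16 KB per CU    -> 128 KB
}
for name, kb in van_gogh_kb.items():
    print(f"{name:10s} {kb:5d} KB")
print("total on-chip:", sum(van_gogh_kb.values()), "KB, with no Infinity Cache behind it")
```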


Note I’m not saying one is a worse GPU than the other, simply that this isn’t quite like a PC you’d be able to buy. It’s very… smol.
 
here's hoping they support 120Hz, or at least 90Hz.

It's common enough with mobile phones these days, as well as the latest PC gaming handhelds. The SD OLED supports 90Hz.

120Hz would be more ideal and future-proof, and allows smooth 40fps support for some games. With 90Hz, maybe a smooth 45fps could be a thing as a stopgap between 30 and 60fps (closer than 40), though it's more uncommon than 40fps. Of course, both would be ideal for VR gaming. Even if VRR gaming ends up being a gimmick (it likely will), having a stable sub-60fps framerate is good in the long run for demanding first-party games and ports.

120Hz for gaming will be rare on Switch 2, but having it as an option for games like Super Smash Bros. and Mario Kart (as well as indie games) would be nice. It might not be ideal in handheld mode, but for docked mode it would be great.

To think, it would be nuts if we got a 1080p HDR screen with 90Hz or 120Hz support. But I will say I'd be disappointed if we get a 60Hz screen, and I am at least expecting a 1080p screen (or 800p in the worst-case scenario).
1080p is pretty much confirmed, but for docked mode I'm expecting 120Hz support, even though we'll rarely see it used, except for indie games, Mario Party, Mario Kart and Smash Bros.

Ideally I think on handheld it'll be 60Hz, but in docked mode it'll be 120Hz for certain games. I think a good example would be games like Hollow Knight.
Like it's possible, since certain games on the Switch are 60Hz in docked and 30Hz in handheld.
So 60Hz handheld
120Hz in docked.
 
120Hz for gaming will be rare on Switch 2, but having it as an option for games like Super Smash Bros. and Mario Kart (as well as indie games) would be nice. It might not be ideal in handheld mode, but for docked mode it would be great.
I agree. I can definitely see that for certain games.
A portable CRT on the Switch 2!
Lol! Buddy hurt his back.
I hope that's the case, but I'm expecting 60hz sadly.
Why sadly? Sure, above 60 is nice. I am sure there are games 20 to 25 years ago that you love playing. (Relatively) new technology doesn't mean you can't enjoy old tech. 60 is fine. 30 is fine, give me a fun and interactive world.
 
I was agreeing until this point, unless it's expanded upon. With RDNA2, AMD made adjustments to the architecture where 2 CUs can tangibly share resources and together equal the 128 vector lanes of an Nvidia SM, which likewise has 128 vector lanes.
Not entirely sure what you're responding to - my Infinity Cache comment or the statement about the "raster advantage"?

Raster Advantage is just benchmarking by TFLOPS, it seems pretty consistent across lots of cards and tests. It's possible that there are arrangements of the CUs/SMs that tilt one way or the other, but I don't have the data to test those. Perhaps those do, in fact, lean against the Steam Deck in this case.

Infinity Cache - yeah, absolutely, the architecture is screaming out for it. But we can't test with IC on and off. In the case of the Steam Deck specifically, we can test what happens when we throw significantly more bandwidth at it. It doesn't make the Deck seem bandwidth starved.

It's highly possible that IC would offer performance benefits that are more about cache friendly workloads, than bandwidth usage per se. On balance, the benchmarks seem to suggest that the lack of the IC isn't dragging the Steam Deck by 30%. But there might be better benchmarks that tease it out. I'm not 100% confident on this at all. Not even 70%.
 
I absolutely expect the situation to be 4KHDR60 output in TV mode and a 1080pHDR60-capable screen in handheld mode. Why? Parity (and price). Most people have a 4K TV now, at least most of Nintendo's target demographic, but most of those aren't VRR or 120Hz compatible. We can be in a bit of a bubble here; I'm sure some here absolutely have the all-singing, all-dancing 120Hz 4K sets, but most don't. What Nintendo won't want is for a user to be playing their 120fps game in handheld mode, docking it, and it looking somehow choppier without them knowing why. For most people, a 120Hz handheld display would be a parity issue, on top of being more expensive up front. And if they only go with a 60Hz panel but support 120Hz output, people will be bothered in the other direction.

Nintendo wants their modes to have parity where possible; the current system is designed to give you parity except for resolution, which makes sense when there's a screen size difference. If you want to replicate that idea in the here and now, that means 4KHDR60 and 1080pHDR60. For this same reason I doubt we see VRR, HDR12, Dolby Atmos or Dolby Vision.

Then there's price, HDMI 2.1 support would add complexity and require them to change their HDMI cable again (after updating it just three years ago after a 9 year period of using the previous one), all new dock internals, etc., and if they want parity or just to have it in handheld mode, there's costs (and battery consumption concerns) there.

I just don't see it. I think 1080pHDR60 is a perfectly reasonable, realistic goal for a decent, modern LCD display at a mass market price point.
 
Last edited:
I'm curious how reliable this guy is.

Also, if it's 8nm, wouldn't it mean that Nvidia engineers are working their hardest to make it the best option for the Switch 2?

Also, the Switch first launched on 20nm, but I'm guessing that with the successful launch Nintendo got a deal for the 16nm node that most expected it to launch with back in the NX days.

I'm curious if Nvidia somehow found a way to make 8nm the most efficient option for the Switch 2. Like, I'm not an expert, but isn't there a chance that Nvidia and Samsung tweaked 8nm?
If it's 8nm, it's only because Nintendo didn't want to pay for a better node.
 
Why sadly? Sure, above 60 is nice. I am sure there are games 20 to 25 years ago that you love playing. (Relatively) new technology doesn't mean you can't enjoy old tech. 60 is fine. 30 is fine, give me a fun and interactive world.
I just meant that I would like it to have 90 or 120Hz is all; it being 60 won't stop me from buying it or playing it and having an absolute blast with it. I play retro games all the time and have zero issues with old tech. The only reason I said sadly is because I would personally LIKE it to have better tech, but it won't make or break me.
 
The Steam Deck is between the PS4 and the Series S.

I love talking about tech with folks, and finding ways to explain complex ideas. But part of why this discussion goes round in circles is no one agrees on a definition of "power" and trying to sort out the complexity results in three page long posts that get skipped by half the people.
Alright, hear me out... here's an interesting analogy:

PS4 is PS2, Steam Deck is 3DS, and OG Xbox is Series S.
1080p is pretty much confirmed, but for docked mode I'm expecting 120Hz support, even though we'll rarely see it used, except for indie games, Mario Party, Mario Kart and Smash Bros.

Ideally I think on handheld it'll be 60Hz, but in docked mode it'll be 120Hz for certain games. I think a good example would be games like Hollow Knight.
Like it's possible, since certain games on the Switch are 60Hz in docked and 30Hz in handheld.
So 60Hz handheld
120Hz in docked.
I'm not expecting 120Hz for the screen. That's definitely the best-case scenario. I think what went through my head when I posted that was Dakhil saying a few pages ago that if Nintendo were to support 120Hz, it would be for both modes.

Truth be told, there isn't much, if anything, to justify a 120Hz screen on handheld, outside of VRR (a gimmick), a stable 40fps (which would get the most practical usage), and web browsing. But considering 120fps for games will be rare enough as it is in docked mode, it's gonna be even rarer in handheld, even with DLSS... certainly not for PS4-quality games.

I'm hoping for a 90Hz screen for handheld though. I think it's almost inevitable, if not at launch, then for a refresh a few years later. But I think they would be missing out by not supporting a 90Hz screen from the get-go. Hmm, what if we get 90Hz support for both the screen and docked? Would that make it easier? Would that be weird? There are no 90Hz TVs that I know of (monitors, I'm sure), but I imagine 120Hz TVs should support 90Hz gaming...

Oh shit... ok, I feel dumb. I didn't know 90Hz could do a stable 40fps instead of only 45. Maybe I just forgot when there were discussions of 40fps gaming here. According to Reddit, the 90Hz screen would adjust down to 80Hz... so that gives more support to a 90Hz screen on handheld being a good idea.
 
If it's 8nm, it's only because Nintendo didn't want to pay for a better node.

Keep in mind that T239 started development in 2019; it's very possible Nintendo expected to release the successor in early 2023, a time frame where 8nm made complete economical/practical sense. 4N might have been considered too high-end for a $400 console when T239 started development back in 2019. In 2020/2021, if Switch sales had started to decline hard, no question Nintendo would have accelerated new hardware to market quicker than they have. Thanks to excellent Switch sales, they didn't need to.
 
The Steam Deck screen has an adjustable refresh rate that can go down to 30 or 40 Hz.

This would be very nice for the Switch 2, but it could cause some weirdness, as 30 FPS games on Switch 2 would then be able to disable vsync, giving them lower input lag in handheld mode.
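Rough numbers for why that matters (simple arithmetic, assuming a fixed 60 Hz panel versus one that refreshes whenever the frame is ready): a 30fps frame that finishes just after a refresh boundary has to wait for the next one under vsync.

```python
import math

refresh_hz = 60
vsync_interval_ms = 1000 / refresh_hz                # ~16.7 ms between refresh slots
frame_time_ms = 34.0                                 # a "30 fps" frame that ran slightly long

# With vsync, the frame waits for the next refresh slot; with adaptive refresh it shows immediately.
vsync_present_ms = math.ceil(frame_time_ms / vsync_interval_ms) * vsync_interval_ms
print(f"fixed 60 Hz + vsync: shown at {vsync_present_ms:.1f} ms")   # 50.0 ms
print(f"adaptive refresh:    shown at {frame_time_ms:.1f} ms")      # 34.0 ms
```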
 
I absolutely expect the situation to be 4KHDR60 output in TV mode and a 1080pHDR60-capable screen in handheld mode. Why? Parity (and price). Most people have a 4K TV now, at least most of Nintendo's target demographic, but most of those aren't VRR or 120Hz compatible. We can be in a bit of a bubble here; I'm sure some here absolutely have the all-singing, all-dancing 120Hz 4K sets, but most don't. What Nintendo won't want is for a user to be playing their 120fps game in handheld mode, docking it, and it looking somehow choppier without them knowing why. For most people, a 120Hz handheld display would be a parity issue, on top of being more expensive up front. And if they only go with a 60Hz panel but support 120Hz output, people will be bothered in the other direction.

Nintendo wants their modes to have parity where possible; the current system is designed to give you parity except for resolution, which makes sense when there's a screen size difference. If you want to replicate that idea in the here and now, that means 4KHDR60 and 1080pHDR60. For this same reason I doubt we see VRR, HDR12, Dolby Atmos or Dolby Vision.

Then there's price, HDMI 2.1 support would add complexity and require them to change their HDMI cable again (after updating it just three years ago after a 9 year period of using the previous one), all new dock internals, etc., and if they want parity or just to have it in handheld mode, there's costs (and battery consumption concerns) there.

I just don't see it. I think 1080pHDR60 is a perfectly reasonable, realistic goal for a decent, modern LCD display at a mass market price point.
I agree that 120Hz doesn't make sense for a gaming handheld right now, particularly on Switch 2, mainly due to a lack of power (but also power consumption) to support it. But I'm more on board with 90Hz for the screen than ever before.

While I'm not expecting 90fps on handheld outside of indies, Switch ports, and a select few games that aren't too demanding, there's justification to use it over 120Hz and 60Hz screens:

1. It's cheaper than 120Hz
2. Less battery consumption than 120Hz, due to not requiring doubled frames and somewhat from the screen itself
3. Can still give a decent VRR experience (if not as good as 120Hz), which is significantly better than 60Hz (45fps per eye at 90Hz vs 30fps per eye at 60Hz)
4. Can still give a stable 40fps experience like a 120Hz screen (see my last post regarding the SD OLED), as well as 45fps
5. Overall it helps the Switch 2 stay relevant and more future-proof against other handhelds and current-gen consoles vs a 60Hz screen
6. DLSS can help handheld mode achieve 90fps, for the smaller games that support it

Now the million-dollar question is... can Nintendo support 90Hz on handheld and up to 120Hz docked? What's stopping Nintendo from supporting both?

What Nintendo probably won't do is gatekeep by saving higher-Hz support for a revision, but who knows. At least if I were Nintendo, I wouldn't. But I can also see them doing it. Worst-case scenario, they offer a dock that supports 120Hz TVs later.

If I were Nintendo, I would support 120Hz docked from the beginning. I get that you think reusing the OLED dock would make sense for Nintendo, but Nintendo could also just make their own dock that supports 120Hz for Switch 2 at launch as a way to get more Switch owners to buy a Switch 2 sooner (120fps gaming). Could the OLED dock be compatible with Switch 2? Sure. But they could make more money by not making it compatible with Switch/Switch OLED.
 
You don't need a 90 Hz screen for 40 FPS, you need a screen that can adjust its refresh rate to 40 (as well as 30 as that will be much more common).

Essentially all TVs do this already, but only for 24 FPS (as that is what film is almost always shot at).
 
I agree that 120hz doesn't make sense for a gaming handheld right now
they could always use frame prediction. there's already a use case in the field for it

 
Oh shit... ok, I feel dumb. I didn't know 90Hz could do a stable 40fps instead of only 45. Maybe I just forgot when there were discussions of 40fps gaming here. According to Reddit, the 90Hz screen would adjust down to 80Hz... so that gives more support to a 90Hz screen on handheld being a good idea.
Yeah, it's not that you can do 40fps in a 90hz container, just that the Steam Deck's screen can do 80hz, and open up 40fps without having to support a full 120.

I am assuming that Nintendo is going with a custom HDR LCD. My optimism allows for some form of custom support for VRR.
 