
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Technically, the Switch v2, Lite, and OLED use LPDDR4X RAM, which can clock higher than the LPDDR4 in the Switch v1, but it's internally capped at LPDDR4 limits: 1600 MHz, which equates to 25.6 GB/s. There is no hardware that can just make bandwidth jump higher than what the RAM is capable of. That's just not how it works. Data being decompressed (in the Switch's case, by the CPU, as there is no dedicated decompression hardware) gets stored into RAM, which uses that RAM bandwidth. Going from compressed to decompressed is not a jump in bandwidth.

I mean, I would say that more effective decompression would basically increase effective RAM bandwidth: you could transfer N MB of data in 1 ms, then decompress it, so you end up with (1+x)·N MB of data in 1 + (decompression time) ms.
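To put rough numbers on that framing (all of them assumptions, just to show the arithmetic both posts are making):

```python
# Hypothetical numbers just to illustrate the "effective bandwidth" framing above;
# real compression ratios and CPU decompression costs vary per asset.
raw_bandwidth_gbps = 25.6     # the LPDDR4 limit quoted above
compression_ratio = 1.5       # assume assets expand 1.5x when decompressed
decompress_overhead = 0.2     # assume decompression adds 20% to the total time

# The bus never moves more than 25.6 GB/s of actual bytes...
effective_gbps = raw_bandwidth_gbps * compression_ratio / (1 + decompress_overhead)
# ...but the amount of *useful* data delivered per second can be higher,
# which is the only sense in which decompression "increases" bandwidth here.
print(f"Effective useful-data throughput: {effective_gbps:.1f} GB/s")
```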
 
(Apologies for clogging up the thread everybody.)

The Switch uses tile-based rendering vs. the PS4's full-screen renderer. Tile renderers split the screen into... well... tiles and render each tile separately. Breaking the image down into those tiles reduces the amount of memory needed during the intermediate steps of the graphics pipeline, and the amount of data being moved around at any given time, which reduces GPU and memory bandwidth requirements. It can also make parallelization easier, so you can move some shit around and do stuff at the same time if need be, as long as you've got the headroom to do so. On a bandwidth-starved device like the Switch, this can be incredibly useful for clawing some performance back. That might be what you're thinking of. It's come up a few times in this thread, particularly during the memory bandwidth discussions.
To add, the Tegra X1 uses cache to hold the tiles being rendered, so it's not doing all those per-pixel reads/writes in main RAM. Instead it reads chunks into cache, where the processing takes place, and then flushes/writes back to main RAM once the tile has been rendered.
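For anyone who prefers code to prose, here's a toy sketch of that flow. The sizes are made up, and the real Maxwell tiling is fixed-function hardware, not software:

```python
import numpy as np

FRAME_W, FRAME_H = 1280, 720
TILE = 16                                      # assumed tile size, just for illustration

framebuffer = np.zeros((FRAME_H, FRAME_W, 4), dtype=np.uint8)   # lives in main RAM

def shade_tile(tile):
    # All the per-pixel reads/writes during shading hit this small on-chip copy,
    # not main RAM. That's where the bandwidth savings come from.
    tile[...] = 255                            # placeholder "shading"

for ty in range(0, FRAME_H, TILE):
    for tx in range(0, FRAME_W, TILE):
        on_chip = framebuffer[ty:ty+TILE, tx:tx+TILE].copy()     # read the tile once
        shade_tile(on_chip)                                      # work entirely in cache
        framebuffer[ty:ty+TILE, tx:tx+TILE] = on_chip            # write it back once
```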

So in this case, it's not increasing the RAM's bandwidth, but reducing bandwidth usage.
 
Y'all, the Intel paper is wild. I don't understand it enough to do a full explainer, but this is some really cool stuff.
Yeah it's insanely cool. I'm going through it right now. Intel's engineers are a talented bunch and I'm glad that they got back into the GPU game because now they're allowed to research and give us stuff like this.
 
Thanks for the replies. Thankfully I'm not a CDPR fan, since they're always undermining other developers and treating themselves as the underdog, which is baffling, since all of their releases have been bug-filled messes, even the beloved Witcher 3.

But this was news to me, thanks for keeping me informed.
Most of us just regard it as a possible tech demo anyway.
 
I know this is not exactly on topic here, but since someone asked about Nintendo stock earlier, I'll just say this. If it doesn't "bounce" at 12.13 (using NTDOY, since this is an American forum), then according to the strategies I make a living on, we could see "oh shit" stock moments for Nintendo. No threat of going to zero or doom, mind you, just the type of thing that will cause them to take action. With Switch 2 waiting in the wings, I wonder, IF this happens, whether they go ahead and announce it sooner rather than later (June, for the big investor meeting?).

Just speculation and thoughts on my part.


Numbers for reference and context: Nintendo stock was like 3ish during the Wii U era. It made an ATH at 16.52 in Jan 2021. It just ran up to 15.04, then got smacked down to 11.95, all during this Switch 2 2025 rumor mess. The worst I see it getting is 7ish. I think Nintendo acts if it does that. Again, no doom. I highly doubt it goes below that.
Sold back when it hit $14 a share shortly after the delay news broke, wise to buy back in at these levels or wait a bit?

I feel like this ER could be a disaster if they don’t mention anything about the next console, especially since there aren’t many games coming up and software/hardware sales might be down pretty badly.
 
Where? What's going on?
A while back, somebody talked about Intel researching frame extrapolation. I guess Intel have made enough progress that they have something to share. @Dakhil shared a Github page...
Although I'm not convinced on a personal level that current frame generation technology is essential for the Nintendo Switch's successor to support, Intel released a video demo on ExtraSS, Intel's frame generation technology.
...with some videos showcasing the tech in action, some benchmarks, and a research paper that goes deeper into it. It's a really cool read if you have the time. The paper isn't terribly long, only 11 pages.
 
So I gave the Intel paper a second read, just to confirm I wasn't wildly off, and I still have some questions, but, for those of you who don't read academic papers for "fun"

Short(ish?) version: ExtraSS is Intel's answer to frame generation, and it works very differently from DLSS and FSR. DLSS and FSR keep a couple of frames "in the pocket", delaying them before they go out. Then they interpolate frames that go in between the buffered "real" frames. This feels as smooth as high frame rates, but at the cost of an extra delay between when your thumb hits the controller button and when Joe Zombiekiller fires his gun - a delay caused by those buffered frames not going to screen immediately.

Intel's solution is to extrapolate future frames. So instead of holding a couple buffered frames in the pocket, Intel predicts the frame you would draw next, if you were running at a higher framerate. Extra frames, no latency.
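Rough, assumed numbers to show why the two approaches feel different (real pipelines add driver and display latency on top of this, so treat it as back-of-the-envelope only):

```python
render_ms = 33.3   # assume the game produces "real" frames at 30 fps

# Interpolation (DLSS 3 / FSR 3 style): a finished frame is held back until the
# in-between frame exists, so roughly one extra real-frame interval of delay.
interpolation_added_delay_ms = render_ms
# Extrapolation (ExtraSS style): the predicted frame goes out right after the
# last real frame, so no real frame is held back.
extrapolation_added_delay_ms = 0.0

print(f"Interpolation adds roughly +{interpolation_added_delay_ms:.0f} ms of input delay")
print(f"Extrapolation adds roughly +{extrapolation_added_delay_ms:.0f} ms")
```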

This is very cool, but there are two huge caveats. It is heavier on the CPU than Nvidia's and AMD's solutions. Frame generation was created partially because GPUs were way outstripping CPUs, and games were being limited by the CPU and leaving the GPU idle. Intel's solution pushes load back to the CPU, and may not be the same kind of "free" performance that Nvidia's and AMD's solutions are.

The second caveat is that it might require much more engine integration than DLSS/FSR frame generation, which were basically "free" once you've added the upscaling functionality.

Longer explanation forthcoming
 
2 TFLOPS is not possible. The numbers in the DLSS leak document mean nothing, and equating them to the performance numbers that T239 will enable is complete wishful thinking.

The Nintendo Switch has 308, 384, and 460 MHz GPU speeds for its handheld mode, the higher ones known as "boost mode," which are used for a few games. 460 MHz was used for MK11, and it's 60% of the 768 MHz docked speed.

I'm not expecting docked mode to be 4 TFLOPs, but if we get something like 3.3 TFLOPs for docked mode, a boost (not base) of 2 TFLOPs is plausible, if Nintendo follows what they did on the Switch, and if 2 TFLOPs doesn't mess with the battery too much. I think 1.1 GHz would land Switch 2 at 3.3 TFLOPs, theoretically. I'm expecting handheld mode to be around 1.5 TFLOPs, enough to allow a doubling of resolution when docked. I forgot what the lowest clock was rated at before it lost efficiency.
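For the math-inclined, a quick sanity check of those clock-to-TFLOPS numbers, assuming the widely reported 1536 CUDA cores (12 SMs) for T239 (treat the core count as an assumption):

```python
CUDA_CORES = 1536          # assumed core count for T239 (12 SMs)

def tflops(clock_ghz, cores=CUDA_CORES):
    # FP32 throughput = 2 ops per core per clock (fused multiply-add) * cores * clock
    return 2 * cores * clock_ghz * 1e9 / 1e12

for clock_ghz in (0.46, 0.66, 1.1):
    print(f"{clock_ghz*1000:4.0f} MHz -> {tflops(clock_ghz):.2f} TFLOPS")
# 1.1 GHz works out to ~3.38 TFLOPS, in line with the ~3.3 figure above,
# and ~0.66 GHz would give roughly 2 TFLOPS.
```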
 
Damn, I knew the Switch 2 was gonna be behind the current consoles, but not THIS behind. Also, it's FLOPS, not FLOPs. FLOPS stands for FLoating point Operations Per Second. It should probably be FPOPS, but FLOPS is just inherently funnier to say.
Fixed.

The Deck is within spitting distance of a PS4. In terms of horsepower, the Switch isn't that far off the 360. It's not a sad leap at all.

Steam Deck is a handheld. It would be weird to be comparing the Switch as anything other than a handheld. The Series S is a TV console. It would be weird to be comparing it otherwise. It's pretty clear both from context and from previous statements by those folks that they're comparing these three devices in a roughly apples-to-apples way.
The SD OLED, with 102 GB/s of bandwidth, would be an interesting comparison vs. the base PS4. It can, sort of, give us a preview of what we can look forward to when it comes to PS4 ports on Switch 2.
 
Huh. I know somebody on here shared something a while back about Intel researching frame extrapolation. Not sure if it was a research paper or a presentation, but damn. Can't believe they got somewhere with it. For sure interested to see if this goes anywhere.
It was actually Dakhil linking this exact paper back in December! The video’s not new or meant to be promotional, although still very cool; it’s pulled from the paper’s supplementary materials.
 
So I gave the Intel paper a second read, just to confirm I wasn't wildly off, and I still have some questions, but, for those of you who don't read academic papers for "fun"

Short(ish?) version: ExtraSS is Intel's answer to frame generation, and it works very differently from DLSS and FSR. DLSS and FSR keep a couple of frames "in the pocket", delaying them before they go out. Then they interpolate frames that go in between the buffered "real" frames. This feels as smooth as high frame rates, but at the cost of an extra delay between when your thumb hits the controller button and when Joe Zombiekiller fires his gun - a delay caused by those buffered frames not going to screen immediately.

Intel's solution is to extrapolate future frames. So instead of holding a couple buffered frames in the pocket, Intel predicts the frame you would draw next, if you were running at a higher framerate. Extra frames, no latency.

This is very cool, but there are two huge caveats. It is heavier on the CPU than Nvidia's and AMD's solutions. Frame generation was created partially because GPUs were way outstripping CPUs, and games were being limited by the CPU and leaving the GPU idle. Intel's solution pushes load back to the CPU, and may not be the same kind of "free" performance that Nvidia's and AMD's solutions are.

The second caveat is that it might require much more engine integration than DLSS/FSR frame generation, which were basically "free" once you've added the upscaling functionality.

Longer explanation forthcoming
They aren't the only ones running this type of solution. Arena Breakout is using frame prediction for its above-60 fps frame rate modes on mobile.
 
It was actually Dakhil linking this exact paper back in December! The video’s not new or meant to be promotional, although still very cool; it’s pulled from the paper’s supplementary materials.
Ah okay, good to know! Whether it’s something new or not, I’m just glad it was shared. Gave me an excuse to go and actually read about it instead of forgetting lol
 
ExtraSS = Extrapolation and Super Sampling.

Intel already has XeSS, their answer to DLSS/FSR style upscaling. ExtraSS isn't a new feature in that technology suite, it's a new technology suite that replaces XeSS and adds their version of frame generation. I say it "replaces" because it changes the way Super Sampling (the upscaling technique) works. These changes are designed to 1) make the upscaling work better and 2) lay the groundwork for their frame generation technology.

FSR frame gen is a second process that runs after upscaling, though it may reuse some internals. DLSS is a black box, but Intel assumes, like we do, that Nvidia's frame gen works the same way. Intel's solution is to completely unify the two techniques, where frame generation is more like "upscaling from no pixels at all."

That sounds pretty insane, and it is. It's also lying, a little. There is a HUGE ASTERISK in the way it works, but I've gotta give them credit for not just playing catch up with Nvidia. First, let's talk about how they've changed the upscaler.

ExtraSS Part 1: The G-Buffer

Temporal upscalers take information from previous frames and use it to "fill in the blanks" on the current frame to make it higher resolution. The challenge, of course, is that in games, objects move, so you can't just lay frames over each other like an onion skin. That's why DLSS/FSR/XeSS use motion vectors describing how all the pixels in the frame are moving. The process of moving pixels from old frames to where those objects oughta be in the current frame is called warping.
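As a rough illustration of what "warping" means here (a toy version; real upscalers do this on the GPU with sub-pixel filtering and history blending):

```python
import numpy as np

def warp_previous_frame(prev_frame, motion_vectors):
    """Move last frame's pixels along per-pixel motion vectors.
    motion_vectors[y, x] = (dx, dy): how far that pixel moved since last frame."""
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]          # each output pixel pulls from where it came from

# Tiny usage example: a 4x4 "frame" where everything moved 1 pixel to the right.
frame = np.arange(16, dtype=np.float32).reshape(4, 4)
mv = np.zeros((4, 4, 2), dtype=np.float32)
mv[..., 0] = 1.0
warped = warp_previous_frame(frame, mv)
```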

The bane of temporal upscalers is disocclusion. Big word, simple idea. Occlusion is graphics speak for "something is in the way, so you can't see this thing." Disocclusion, then, means "the thing in the way? It moved, so now I can see new stuff."

Disocclusion is hard because upscalers don't have information for the newly revealed pixels. That object was occluded in the previous frames, so we don't have anything for them there. In a perfect world, when your main character (Link, say) moves to the right, the space to his left would be low resolution, but "correct", just revealing the raw frame data underneath. But in our tainted and imperfect universe, the temporal upscaler can't make perfect decisions about how to blend its high resolution info and the low resolution info underneath.

This causes disocclusion artifacts. Basically, ghosting and tearing. If Link is jumping to the right, you get ghosting when some of Link's pixel data gets left behind in the disoccluded space. You get tearing when whatever was to Link's left before he jumped, like a tree, spills into the gap.

That's where Intel comes in. Intel adds a new input, an extra piece of data from the game engine that can be used in the upscaler: the G-buffer.

There are a lot of passes in modern rendering, with lighting and shading and coloring and post-processing all chewing through huge chunks of memory. And they all work in pixels! 3D geometry is way more complex than that, and doesn't work in pixels, it works in polygons. A common - almost, but not quite, universal - technique in rendering is to flatten the 3D geometry of a scene as quickly as possible, to transform it into a flat image that encodes the shapes of the objects that are facing the camera.

[Image: deferred_overview.png, showing the flat images that make up a G-buffer in a deferred renderer]

In the G-buffer, colors don't actually represent the colors of the pixels in the final image, but properties of the geometry. You can see in this picture that, at the top, one of the flat images in the G-buffer uses blue and red to encode how far away each pixel is from the camera. With this process, your shaders can use some clever math to light and shade (and texture) the rendered frame, without having to refer back to the complex 3D geometry every time.
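To make "properties of the geometry" concrete, here's roughly what a G-buffer holds per pixel. The exact channels and formats are engine-specific, so this layout is just an example:

```python
import numpy as np

W, H = 1280, 720
g_buffer = {
    "depth":  np.zeros((H, W),    dtype=np.float32),   # how far each pixel is from the camera
    "normal": np.zeros((H, W, 3), dtype=np.float16),   # which way the surface is facing
    "albedo": np.zeros((H, W, 3), dtype=np.uint8),     # raw material color, before lighting
    "motion": np.zeros((H, W, 2), dtype=np.float16),   # per-pixel motion vectors
}
# Shading passes read these flat images instead of the full 3D geometry,
# and ExtraSS reads the same data to find object edges while warping.
```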

Intel's solution is "give us the G-Buffer, and we don't just know how each pixel is moving, but we can figure out what the edges of 3D objects are, and make sure that we don't let the pixels of one object bleed into another." This is a clever addition, and in their demo footage it works well. But it's not life changing? That's okay, though, because this is really the layup for what Intel is actually doing, which we'll get to in Part 3.
 
ExtraSS Part 2: More upscaling improvements

The second thing Intel is doing is what they call "shading refinement." First, we gotta understand the problem, which is mostly "shadows move, and it sucks."

Shadows - and reflections, though in this case, we mostly mean glossy reflections - move. But they're not game-objects, they don't have geometry, they're not even really an effect like a particle or lensflare. They're a consequence of light and an object moving relative to each other, and then this third thing happens. As we have established, DLSS/FSR/XeSS use motion vectors from the game engine to warp pixels into their correct place before upscaling.

But because shadows don't have those motion vectors, upscalers make shitty decisions here, especially around the edges of shadows. Since shadows darken other objects, it can be hard for the upscaler to know where the edge of the shadow is, versus where the object underneath might just be dark. The edges of shadows get fuzzy.

One solution to this problem is optical flow. Optical flow is an analysis process that takes two images and attempts to discern the 3D movement of the objects underneath. This would work! Optical flow could detect the movement of the shadow, because OF doesn't care why something's moving, just that it is. But OF is expensive.

Intel's solution is, again, the G-buffer. Since Intel is using the G-buffer to do the warping of pixels, it actually knows which parts of the frame are "changing but not moving" - i.e., the areas affected by exactly this problem. Then, instead of doing high resolution optical flow for the whole frame, it does low resolution optical flow for just these portions of the frame. Then it uses a specialized AI model, designed specifically to handle reflections and shadows, to "refine" the warping that has already happened.
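Conceptually, that looks something like this. The helpers here are stand-ins, not the paper's actual API; the point is just "mask first, then cheap optical flow only where it's needed":

```python
import numpy as np

def refine_shading(warped, prev, shading_change_mask, flow_fn, refine_model):
    """Run optical flow only where the G-buffer says pixels changed without moving
    (shadow/reflection regions), then let a small model fix the warp there."""
    if not shading_change_mask.any():
        return warped                                   # nothing shadow-like changed
    flow = flow_fn(prev, warped, shading_change_mask)   # cheap, low-res, masked optical flow
    return refine_model(warped, flow, shading_change_mask)

# Trivial stand-ins so the sketch actually runs end to end:
dummy_flow   = lambda prev, cur, mask: np.zeros(prev.shape[:2] + (2,), np.float32)
dummy_refine = lambda frame, flow, mask: frame
frame = np.zeros((720, 1280, 3), np.float32)
mask = np.zeros((720, 1280), bool)
mask[100:200, 100:200] = True                           # pretend a shadow moved here
out = refine_shading(frame, frame, mask, dummy_flow, dummy_refine)
```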

You can see their short demo here. On the left is the low resolution input. On the far right is the natively rendered high resolution version. In the middle is the upscaled version of the image on the left.

Again, this is a nice refinement, but actually, Nvidia is already doing pretty well in this area with their existing AI. But as before, this improvement in the upscaler is really a setup for their frame generation. Next post.
 
A while back, somebody talked about Intel researching frame extrapolation. I guess Intel have made enough progress that they have something to share. @Dakhil shared a Github page...

...with some videos showcasing the tech in action, some benchmarks, and a research paper that goes deeper into it. It's a really cool read if you have the time. The paper isn't terribly long, only 11 pages.
That is very nice.
 
ExtraSS Part 3: Frame Extrapolation

So this is all about Intel's frame generation, right? That's the headline from the paper (which I missed when it was first released last year; the demo videos are giving it a second round, thanks for flagging it @Dakhil). So why all the talk about the upscaler improvements?

Because one of Intel's goals is to unify the upscaler and the frame generator. FSR and DLSS do frame generation as a second pass, potentially with its own inputs. In the case of FSR and DLSS, they both need optical flow data. FSR uses the GPU and async compute to do it (which all modern GPUs have). Nvidia uses their specialized optical flow hardware (which leaves the GPU free to do other stuff). It's unclear if DLSS's AI model is used for frame generation (it probably is), but the overall algorithm is separate.

Intel wants to feed the same data into both frame gen and upscaling, ideally as the same algorithm. Essentially, Intel wants frame gen to be upscaling from nothing... except it's not nothing. It's from the G-buffer. That's the brilliance and the cheat. Let's start with the brilliance.

The G-buffer is an early step in the rendering pipeline. Not only does it provide the upscaler with basic 3D information, it's the core of normal rendering anyway. AMD and Nvidia are taking complete frames, which are fully rendered but just 2D images, and then trying to generate a third 2D image between them. Intel instead wants to take the G-buffer, an early step in the rendering pipeline, and use past rendering to draw the colors over the shapes.

Intel's upscaler already understands how to use the G-buffer to correctly move the past frame's pixels over the actual objects moving in the scene. With a new G-buffer, they can just do that without any new pixel data at all, exclusively using the last frame's colors. And where FSR and DLSS need to do optical flow for the whole frame to do generation, the G-buffer means optical flow only needs to be done for the parts of the screen that have things like shadows in them. And the shading refinement model already handles that, at low resolution even.

Which brings us to the cheat. The cheat is... you still have to make the G-buffer.

Hold on, let's go back a second. DLSS frame generation was, in some ways, a response to game developers having trouble with CPUs. CPUs are getting more and more cores, but game development is still locked into single core technologies. Meanwhile, GPUs are really good at using lots of cores and have continued to get more and more powerful. 4k is as far as any sane person wants to go for resolution, so gamers would love to push frame rates up to get more smoothness, more perceived detail.

But the games are being CPU limited, leaving the GPU with horsepower sitting there untouched. Along comes frame generation, which uses up the extra power in the GPU to give you smoothness even when the CPU is busy.

Intel frame extrapolation is solving a different problem. ExtraSS isn't giving you extra frames using excess GPU power. Instead, ExtraSS depends on the CPU being ahead of the GPU. It still needs the CPU to generate all the stuff it normally does for a frame, but lets the GPU skip all the shaders and jump right to a completed frame.

It really is upscaling from 0, or at least close to 0 - in that the CPU still has to actually run at the high frame rate, and initiate the frame, but then the upscaler is like "you know what, don't even bother with color or lighting or shading or textures or any of that shit, just draw me a freakin' pencil sketch and I'll do the rest." And pencil sketch isn't too far off. Intel is explicit that for "extrapolated frames" you generate the G-buffer at really low resolutions.

The advantage of this technique isn't just "new frames without added latency." It's "new frames, plus all the reduced latency you'd expect from high frame rates." Since the CPU runs at the full frame rate, the CPU can sample the controller at the full frame rate as well!
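Put as a loop, the flow described above looks roughly like this. Everything here is a toy stand-in (not the paper's API), but it shows where the CPU and GPU work land:

```python
def simulate(frame_index):
    # CPU work: read the controller, run game logic. This happens every frame,
    # which is why input latency matches the full frame rate.
    return {"frame": frame_index}

def build_g_buffer(state):
    # Cheap geometry-only pass; the paper generates this at low resolution
    # for extrapolated frames.
    return {"geometry_for": state["frame"]}

def shade(g_buffer):
    return f"fully shaded frame {g_buffer['geometry_for']}"

def extrapolate(g_buffer, last_shaded):
    # Reuse the previous frame's shading, warped onto the new geometry.
    return f"extrapolated frame {g_buffer['geometry_for']} from ({last_shaded})"

last_shaded = None
for i in range(4):
    state = simulate(i)                               # every frame
    g_buffer = build_g_buffer(state)                  # every frame
    if i % 2 == 0:
        last_shaded = frame = shade(g_buffer)         # "real" frame: full shading pass
    else:
        frame = extrapolate(g_buffer, last_shaded)    # skips the shaders entirely
    print(frame)
```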

And it opens up the possibility of only doing some of the CPU work every frame. Think about animations - you already see animations running at lower frame rates in the background, in order to reduce CPU load. On these "extrapolated" frames, games could update only the animations really near the camera, and only run physics on the "real" frames.

Frame generation - even frame extrapolation - is a bad name for this technique. Intel isn't generating frames that were never rendered. And they're not extrapolating frames that haven't been rendered either! The frames are rendered all the way up to geometry. It's really neural shading. Intel is using AI to take the first step of the rendering pipeline and then estimate the shading that would have happened, by reusing shading from past frames, the same way that temporal upscaling reuses past pixels.

It's a clever technique that will require substantially more engine integration, and will benefit a totally different class of titles than the ones that benefit from current frame generation techniques. If I had to bet, the long-term advantages of this technique are much higher than Nvidia's and AMD's approach. Currently, games are CPU limited not because they're out of CPU power, but because multi-threading is hard, and retrofitting it into existing engines is even harder. But intuitively, it seems like games ought to be able to take advantage of more cores - like, every enemy should be able to run their AI and animations, and even basic physics for non-colliding bodies, on separate threads.

But GPU growth is slowing down because the node shrinks are slowing. Assuming that games will be CPU limited might prove to be short sighted as engines catch up to multi-core designs, while GPUs put more and more power into things other than shader performance (like AI). Intel making every other frame free on the GPU side, as long as the CPU does the work, might be the better bet.

The fact that it improves their upscaler as well is just a nice addition.
 
I think Nintendo wants feature parity between TV mode and handheld mode. In other words, if VRR is supported in TV mode, Nintendo wants VRR to be supported in handheld mode as well.

And I think all the mobile displays that have VRR support also support 120 Hz. Theoretically, Nintendo could work with display manufacturer(s) (e.g. Sharp, etc.) to design and manufacture a custom, mobile 60 Hz display that also has VRR support. But I imagine that won't be cheap, especially since I imagine display manufacturers see VRR support as a premium feature. And I don't believe there are any mobile 60 Hz displays that support VRR.
If the goal is 40 fps support, there are far cheaper ways to achieve that on a handheld. A 40 Hz mode doesn't require VRR. 40 Hz isn't a standard refresh rate on TVs, which is why a 40 fps mode there requires 120 Hz.

I also think that with VRR, Nintendo could probably do it at 60 Hz if they wanted. Mobile screens with VRR tend to also be high-end screens with 120 Hz, but there's nothing technologically preventing you from having 60 Hz VRR. There'd be a one-time R&D fee, but it would probably be cheaper in the long run.
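Quick arithmetic on why 40 fps modes usually ask for a 120 Hz panel rather than a 60 Hz one:

```python
TARGET_FPS = 40
for refresh_hz in (60, 120):
    cycles_per_frame = refresh_hz / TARGET_FPS     # refresh cycles each 40 fps frame spans
    even = cycles_per_frame.is_integer()
    print(f"{refresh_hz:>3} Hz: each frame spans {cycles_per_frame:g} refresh cycles "
          f"-> {'even pacing' if even else 'uneven pacing (judder)'}")
# A native 40 Hz panel mode, or VRR, sidesteps this by matching refresh to frame rate.
```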
 
Has there been any leaks/speculation on the dimensions of the next Switch? I’m guessing it’s going to be bigger than the OLED model?
Personal speculation based on various leaked details +



Half an inch taller, and proportionally wider and thicker
 
I think Nintendo wants feature parity between TV mode and handheld mode. In other words, if VRR is supported in TV mode, Nintendo wants VRR to be supported in handheld mode as well.

And I think all the mobile displays that have VRR support also support 120 Hz. Theoretically, Nintendo could work with display manufacturer(s) (e.g. Sharp, etc.) to design and manufacture a custom, mobile 60 Hz display that also has VRR support. But I imagine that won't be cheap, especially since I imagine display manufacturers see VRR support as a premium feature. And I don't believe there are any mobile 60 Hz displays that support VRR.


While the first part is a fair point(-ish), I'm not sure Nintendo quite needs a 120 Hz display, or that it's prohibitively expensive for them to have a customization, considering the 3DS has a display with an adjustable framerate:


Based on the replies and unless I’m misunderstanding this, it wasn’t really used but the hardware has support for it to be adjustable if the developer wanted to:

afaik this wasn't used by official games for this purpose
it's less of a freesync and more of a "you can change the refresh rate"



It’s certainly possible, if Nintendo dared to and didn’t really care too much for parity of that specific feature that can be turned off and on when clocked higher in TV mode (meaning it doesn’t need to compensate with lower framerate VRR), to have it on the hardware and for developers to support it.

Though, with VRR, I think people shouldn't look at it as a smoothing feature but as compensation for the inability to hit a performance target.

Aka, if it's able to hit 60 in TV mode, but in portable mode it ranges from 35-50 and never hits 60, cap it to, say… 40 Hz and have very few occasions where it dips below 40, rather than many occasions where it dips below 60 because it can't hit the target.

And they can work on other elements to make it work.

This is how, in my opinion just to be clear, people should view and use VRR in the case of the Switch or a Switch-like console. As a feature for compensating, not a feature for removing stutter. So basically, the other way around with VRR.

Maybe have some presets… 24, 30, 40, 48, 50 aaand 60 of course.
 
2 TFLOPS is not possible. The numbers in the DLSS leak document mean nothing, and equating them to the performance numbers that T239 will enable is complete wishful thinking.

We have no idea which clock Nintendo will use. I'm just telling you that the data leaked from the DLSS docs doesn't prove anything, and again, to reach the raster rendering level of the SD, the T239 would have to be pushed to 700 MHz+ clocks to make it possible.
You don’t know the rendering capabilities of either system so making this definitive claim is utterly useless unless you develop a game for each system that takes into account what each unique hardware does, even with clocks revealed, so I suggest getting down from that hill.
 
5nm+ to 3nm to 2nm are all massive changes that wouldn't lead to incredible gains. You could do an architectural leap, I guess? But that would also be extremely complicated, and complicated for development as well (causing devs to basically have to do four separate SKUs with old/new and handheld/docked versions).
It will depend on third-party support. I mean, third parties today are willing to make two separate Xbox versions of their games even though the two Xbox SKUs have a pretty small user base. Would they be willing to make two separate Nintendo versions if these different Switch 2 SKUs' user base is much bigger? They should, but it could be that Xbox gets those different versions only because of Microsoft money, and Nintendo would get much worse support even though its user base would be much bigger.
 
I think we obviously need to consider the Steam Deck as a 2022 product. If Switch 2 is weaker than the SD in portable mode, that makes sense, but if docked mode is also inferior to the Deck and it's going to sell for $399, then I do think it's a sad Switch.
The comparison cannot be made. Valve makes the Steam Deck for dedicated hardcore gamers who are willing to spend a lot of money not only on a gaming PC but also on an expensive handheld PC like the Steam Deck, meaning that they do not have to think about lowering the cost to get a mass market audience to buy into it.

Sure, Nintendo could make a much stronger handheld than the Steam Deck, but the price point would mean that it wouldn't get near the Switch in popularity in that case. Nintendo is aiming for a larger audience than the Steam Deck, and that will always lead to price being a more important factor to more people than increased power.
 
The comparison cannot be made. Valve makes the Steam Deck for dedicated hardcore gamers who are willing to spend a lot of money not only on a gaming PC but also on an expensive handheld PC like the Steam Deck, meaning that they do not have to think about lowering the cost to get a mass market audience to buy into it.

Sure, Nintendo could make a much stronger handheld than the Steam Deck, but the price point would mean that it wouldn't get near the Switch in popularity in that case. Nintendo is aiming for a larger audience than the Steam Deck, and that will always lead to price being a more important factor to more people than increased power.
It's worth stressing that the Steam Deck is absolutely not a fair comparison when it comes to price point. Valve has made it clear in the past that they're selling at a heavy loss, but any purchase on Steam basically makes that back without issue. My guess, and I believe this is the commonly accepted opinion, is that Valve is selling at a loss to get a foothold into handheld PCs early, and they're succeeding at that.

If Nintendo wants to sell at a profit or only a slight loss, selling for 400 is completely fine, especially since we know it'll be more powerful docked and have games tailor-made for the device in a way that developers can't do as easily for the Steam Deck's Arch-Linux-based OS.
 
What's the expected teraflops for the Switch 2?

Since the Series S is 4 TF
And the Steam Deck is 1.6 TF

Wouldn't 3+ TF be a nice sweet spot for the Switch 2?

I don't know if they've heard anything but Rich at DF seemed to think Switch 2's raw numbers would be less than a Steam Deck but would make up for it with different technologies such as DLSS.
 
I don't know if they've heard anything but Rich at DF seemed to think Switch 2's raw numbers would be less than a Steam Deck but would make up for it with different technologies such as DLSS.
He also expects 8nm (either underclocked or cut down).
 
Sold back when it hit $14 a share shortly after the delay news broke, wise to buy back in at these levels or wait a bit?

I feel like this ER could be a disaster if they don’t mention anything about the next console, especially since there aren’t many games coming up and software/hardware sales might be down pretty badly.

My answer to "Should I buy or sell" is always "Please seek a licensed financial expert and not the advice of a non-licensed person on the internet" for these things. I don't mind telling you what my thought process or basic strategy is if you want to DM me, since this is really off topic. All I'll say is that above 11.93, I maintain a bullish bias, and if it climbs to 13, there will be certain levels where I will take profit if it drops too much (the technical term is "trailing stops").

So I gave the Intel paper a second read, just to confirm I wasn't wildly off, and I still have some questions, but, for those of you who don't read academic papers for "fun"

Short(ish?) version: ExtraSS is Intel's answer to frame generation, and it works very differently from DLSS and FSR. DLSS and FSR keep a couple of frames "in the pocket", delaying them before they go out. Then they interpolate frames that go in between the buffered "real" frames. This feels as smooth as high frame rates, but at the cost of an extra delay between when your thumb hits the controller button and when Joe Zombiekiller fires his gun - a delay caused by those buffered frames not going to screen immediately.

Intel's solution is to extrapolate future frames. So instead of holding a couple buffered frames in the pocket, Intel predicts the frame you would draw next, if you were running at a higher framerate. Extra frames, no latency.

This is very cool, but there are two huge caveats. It is heavier on the CPU than Nvidia's and AMD's solutions. Frame generation was created partially because GPUs were way outstripping CPUs, and games were being limited by the CPU and leaving the GPU idle. Intel's solution pushes load back to the CPU, and may not be the same kind of "free" performance that Nvidia's and AMD's solutions are.

The second caveat is that it might require much more engine integration than DLSS/FSR frame generation, which were basically "free" once you've added the upscaling functionality.

Longer explanation forthcoming

This is fascinating. I bought a $400 Mini-LED 240 Hz TCL TV for Mario RPG. The motion smoothing effect was INSANE, but as we all know, the input lag is 2 seconds. Seeing Mario RPG "cheat out" 240 FPS was amazing but unplayable. If I'm reading correctly, frame generation could be a way to "cheat" for more FPS at the cost of CPU power, rather than GPU power or post-processing on a display?
 
He also expects 8nm (either underclocked or cut down).
I'm curious whether it's because of MLID's video about the Switch 2 that people are more doubtful about whether the Switch 2 will use 4nm or 8nm.

If Digital Foundry is wrong, then they can just say "well, that was a pleasant surprise" and not have people mad at them for getting something about the Switch 2 wrong.
 
I'm curious whether it's because of MLID's video about the Switch 2 that people are more doubtful about whether the Switch 2 will use 4nm or 8nm.

If Digital Foundry is wrong, then they can just say "well, that was a pleasant surprise" and not have people mad at them for getting something about the Switch 2 wrong.
He already expected 8nm in the fall, when their video about T239 was recorded.
 
Hi everyone, regarding the capability of having DLSS running in parallel, i.e. frame N being upscaled while N+1 is already rendering: it was mentioned that the cost of upscaling was in the range of 15 ms, right?
Am I right in thinking that while that may enable 60 fps, and therefore smoother animations on screen, it will still "feel" like it's closer to 30 fps in terms of lag and responsiveness? From user input to the action on screen, we still need to account for those 15 extra milliseconds on every frame.
 
Hi everyone, regarding the capability of having DLSS running in parallel, i.e. frame N being upscaled while N+1 is already rendering: it was mentioned that the cost of upscaling was in the range of 15 ms, right?
Am I right in thinking that while that may enable 60 fps, and therefore smoother animations on screen, it will still "feel" like it's closer to 30 fps in terms of lag and responsiveness? From user input to the action on screen, we still need to account for those 15 extra milliseconds on every frame.
Afaik, you're right. The frame generation is essentially faking a frame between point A and point B, but the input is only accepted at points A and B. A fighting game playing at 60fps natively will naturally accept more inputs than a fighting game running at 60fps using frame generation.

Frame generation's best applications are at high framerates (namely from 120 fps onwards, because you're naturally running at around 60 fps internally), or in action-adventure games. High-input environments like fighting games and shooters arguably don't benefit from frame gen as a result, especially since GPU- or CPU-bound titles will likely see input responsiveness drop after enabling it, due to the added strain of DLSS/FSR or ExtraSS respectively.
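A rough input-latency comparison, with assumed numbers, just to put the "feels closer to 30 fps" point in figures (real pipelines have more stages than this):

```python
def input_to_photon_ms(sim_fps, extra_delay_ms=0.0):
    # One simulation interval of latency as a baseline, plus whatever buffering
    # the frame generator adds. Assumed model, not a measurement.
    return 1000 / sim_fps + extra_delay_ms

print(f"Native 60 fps:           ~{input_to_photon_ms(60):.0f} ms")
print(f"30 fps + interpolation:  ~{input_to_photon_ms(30, extra_delay_ms=33):.0f} ms")  # holds a frame back
print(f"30 fps + extrapolation:  ~{input_to_photon_ms(30):.0f} ms")                     # no frame held back
```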
 
I don't know if they've heard anything but Rich at DF seemed to think Switch 2's raw numbers would be less than a Steam Deck but would make up for it with different technologies such as DLSS.

DF knows nothing.

I'm going with what the youth calls "based" and making some unsubstantiated claims, then dipping out again for a while (wish GDC had some more info...) 😎

Compute-wise, Switch 2 is:
  • Handheld: above the Steam Deck
  • Docked: below the Series S.
Functional capabilities: better than the current-gen consoles.

Whenever it's released in the future, it will never be outdated, and pretty much the only direct competitor in a portable form factor will be Apple's M3 in the iPad Pro. The M2 is already ahead of the Steam Deck, but the selection of games is too limited (and also locked down) for a proper 1-to-1 comparison. The only other SoC for an interesting comparison soon is the Snapdragon X Elite, but other than that ❌.
Switch 2 will come earlier than Steam Deck 2 and I think by the time SD2 becomes relevant we'll just be discussing w/e new Nintendo game is going to come out.
 
He already expected 8nm in the fall, when their video about T239 was recorded.
Do we know when the 8nm talk started?

Because there was some Nvidia leaker saying that it'll use 8nm.

Meanwhile, there are others saying it'll be 5nm.

I'm quite hopeful for 4nm, since Nintendo will be in a much better position to get good deals and price cuts on components for the Switch 2, compared to the Switch, which came right after the failure of the Wii U.
 
Do we know when the 8nm talk started?

Because there was some Nvidia leaker saying that it'll use 8nm.

Meanwhile, there are others saying it'll be 5nm.

I'm quite hopeful for 4nm, since Nintendo will be in a much better position to get good deals and price cuts on components for the Switch 2, compared to the Switch, which came right after the failure of the Wii U.
He said it because T239 is based on Orin, which is 8nm. But it is highly custom and would be a bit too big and power hungry for a handheld like Switch 2 if it is 8nm.
 
Do we know when the 8nm talk started?

Because there was some Nvidia leaker saying that it'll use 8nm.

Meanwhile, there are others saying it'll be 5nm.

I'm quite hopeful for 4nm, since Nintendo will be in a much better position to get good deals and price cuts on components for the Switch 2, compared to the Switch, which came right after the failure of the Wii U.
8nm started with Tegra Orin, which is 8N

Kopite7Kimi admits to not knowing and assumed 8N based on Orin

For all intents and purposes, 4N is a 5nm design
 

