StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

That's fair then. My argument would then be: is Nintendo going to release bleeding-edge hardware that struggles to come in under $500, when they've just hit a massive home run by releasing a moderately (at the time) powerful hybrid console which they could sell for $349, and even then it barely lasted 2.5 hours in its flagship launch game? Also, the tensor cores aren't free to manufacture, and we know it has them, so that has to be added to the BoM.

How long is this 1.5 TFLOP mobile GPU going to sustain steady performance while gaming? Does this chip even sustain its performance, or is it in a phone that throttles when it gets too hot, so that after 20 minutes you lose 60% of your gaming performance? And if it's not in a phone, how big is the device? We're back to the brick-like Steam Deck again. I just don't see it personally, but I will be glad to be wrong!

A device with the exact same dimensions as the current Switch, at 900 GFLOPS handheld / 1.8 TFLOPS docked with tensor cores for DLSS/RT, that lasts 3 hours on battery, costs $399 and plays every current Switch game will be a phenomenal value proposition for 99% of their target audience. There is no need to push further than that, and if there's one thing we know about corporations, it's that they do the bare minimum, especially in a post-Covid, wartime economy.
It helps that Drake isn't bleeding edge. The A78 is from 2020, and Ampere is also from 2020. The only thing that's cutting edge is the node, which Nvidia has been using for two years now. It'll reach the expected $400 price tag through amortization and volume.

Unlike phones, Drake will have a lot more cooling, including active cooling. The SoC will be larger than any phone SoC, but smaller than the APU in the Steam Deck (which is made on 7nm). It's not a good comparison because the builds are very different.

Well, I'd say you're greatly undervaluing what can be done with a modern teraflop, especially when said phones are doing some damn good ray tracing and Drake will be more powerful than that.
 
Unless you're on the Chips and Cheese Discord server, where they think RT on mobile is a joke that will never happen, and that Ampere TFLOPS are so weak that GCN2 beats them, to the point that 3 TFLOPS of Ampere is weaker than the 1.8 TFLOPS of GCN2 in the PS4.
 
Just gonna lay this out there: no one gives one flying f*** about "raw performance"; they care about actual output. Raw performance merely gives us a window into what the actual output might be.

When comparing hardware without things like dedicated tensor math and ray tracing accelerators against hardware that has them, the only viable measurement is output, native resolution or otherwise. It's like comparing HEVC playback performance on a device without HEVC decode acceleration to one with it: there's no bloody contest. No one should realistically care if the 4K image they get is done by "cheating", because half or more of what a GPU does is technically "cheating" by virtue of it being a processor built for an explicit purpose; the prior techniques used to do that have merely been normalized after the fact.

So a device having less raw power than a PS4 doesn't mean anything to most people. A handheld with the capacity to output a 4K image when docked that is so close to native 4K as to be near-indistinguishable to the naked eye, at a visual fidelity above the PS4, is impressive, and consumers will be impressed, even if spec-sheet nerds call it "cheating" or "witchcraft" or whatever.

So these comparisons really need to focus on output. And in that regard, this new Nintendo hardware is going to be punching above its weight class by no small amount.
Let's just look at the PS5 and Switch 2 from the specs we have; it's simple to figure out whether this is a next-gen product or a "pro" model.

PS4 uses GCN from 2011 as its GPU architecture, with 1.84 TFLOPS available to it. (We will use GCN as the baseline.) The performance of GCN is a factor of 1.

Switch uses Maxwell v3 from 2015, with 0.393 TFLOPS available to it. The performance of Maxwell v3 is a factor of 1.4, plus mixed precision for games that use it. This means that when docked, Switch is capable of 550 to 825 GCN-equivalent GFLOPS, still a little less than half the GPU performance of PS4. That doesn't factor in far lower bandwidth, RAM amount or CPU performance, all of which sit around 30-33% of PS4, with the GPU somewhere around 45% when completely optimized.

PS5 uses RDNA 1.X, customized in part by Sony and introduced with the PS5 in 2020, with up to 10.2 TFLOPS available to it. The performance of RDNA 1.X is a factor of 1.2, plus mixed precision (though this is limited to AI use cases; developers just don't use mixed precision for console or PC gaming at the moment, while it's used heavily in mobile and in Switch development). This ultimately means the PS5's GPU is about 6.6 times as powerful as the PS4's, and around 3 times the PS4 Pro's.

Switch 2 uses Ampere, specifically GA10F, a custom GPU that will be introduced with the Switch 2 in 2024 (hopefully), with 3.456 TFLOPS available to it. The performance of Ampere is a factor of 1.2, plus mixed precision (which uses the tensor cores and is independent of the shader cores); mixed precision offers 5.2 to 6 TFLOPS. It also reserves a quarter of the tensor cores for DLSS, according to our estimates. Much like the PS5 using FSR2, this lets the device render the scene at a quarter of the output resolution with minimal loss to image quality, greatly boosting available GPU performance and allowing it to output a 4K image.

Converting these numbers to PS4 GCN terms, Switch 2 has 4.14 to 7.2 GCN-equivalent TFLOPS and PS5 has 12.24, meaning Switch 2 will land somewhere between 34% and 40% of PS5. It should also manage RT, and while PS5 will spend some of its 10.2 TFLOPS on FSR2, Switch 2 can freely use the remaining quarter of its tensor cores for DLSS. Ultimately there are other bottlenecks: the CPU is only going to be about two-thirds as fast as the PS5's, and bandwidth, with respect to their architectures, will only be about half as much, though it could offer 10GB+ for games, which is pretty standard for current-gen games at the moment.
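If you want to sanity-check that arithmetic yourself, here's a quick back-of-the-envelope script; the per-architecture factors are this post's estimates, not measured values:

# GCN-equivalent compute using the per-architecture "performance
# factors" estimated above (speculative, not measured).
def gcn_equivalent_tflops(tflops, factor):
    return tflops * factor

ps4     = gcn_equivalent_tflops(1.84, 1.0)    # baseline: 1.84
switch  = gcn_equivalent_tflops(0.393, 1.4)   # ~0.55 (up to ~0.83 with FP16)
ps5     = gcn_equivalent_tflops(10.2, 1.2)    # ~12.24
switch2 = gcn_equivalent_tflops(3.456, 1.2)   # ~4.15 (up to ~7.2 with FP16)

print(f"Switch vs PS4:   {switch / ps4:.0%}")    # ~30%
print(f"Switch 2 vs PS5: {switch2 / ps5:.0%}")   # ~34%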

Switch 2 is going to manage current gen much better than Switch did last gen. The jump is bigger, the technology is a lot newer, and the addition of DLSS has leveled the playing field a lot, not to mention Nvidia's edge in RT playing a factor. I'd suggest that Switch 2 when docked, if using mixed precision, will be noticeably better than the Series S, but noticeably behind the other current-gen consoles.
You've really given this more effort than it was worth, but hats off regardless.
On the contrary, this is very relevant to Switch 2. In fact, it was pretty much on top of my list of "things that would make RT more viable on Switch 2".

To explain, if you're using both ray tracing and DLSS in a game, there are effectively three relevant steps involved:
  • Ray traced graphics pass, which produces a noisy image
  • Denoising pass, which uses spatial data (ie info from neighbouring pixels) and temporal data (ie info from the same pixel in previous frames) to produce a smoother image
  • DLSS pass, which uses spatial and temporal data to reconstruct a higher resolution image
The issue is that the denoising and DLSS steps interfere with each other to an extent. The denoising pass doesn't leave DLSS with enough spatial or temporal information to reconstruct a good image, and the DLSS pass is a bit of a mess without denoising (it's not expecting noisy input data). Given that both denoising and DLSS are effectively two different approaches to the same problem, creating a good output image from incomplete data using spatial and temporal sampling, the obvious solution is to combine them into a single pass, which is what DLSS ray reconstruction is.
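To sketch the difference (the function names here are stand-ins for the GPU passes described above, not a real API):

# Conceptual sketch only: stubs stand in for the real GPU passes.
def trace_rays(frame):    return frame + "->noisy"
def denoise(img):         return img + "->denoised"
def dlss_upscale(img):    return img + "->upscaled"
def dlss_rr(img):         return img + "->rr(denoise+upscale)"

def old_pipeline(frame):
    # The denoiser runs first and smooths away per-sample detail
    # that the DLSS pass would have wanted for reconstruction.
    return dlss_upscale(denoise(trace_rays(frame)))

def ray_reconstruction_pipeline(frame):
    # One ML pass does denoising and upscaling together, so the
    # network sees the raw spatial/temporal samples directly.
    return dlss_rr(trace_rays(frame))

print(old_pipeline("frame"))                 # frame->noisy->denoised->upscaled
print(ray_reconstruction_pipeline("frame"))  # frame->noisy->rr(denoise+upscale)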

DLSS ray reconstruction should absolutely produce better image quality than the old denoiser-plus-DLSS approach, which means better quality output for a given level of RT performance, or the ability to hit the same quality output with reduced RT performance (ie lower ray count). It's also unrelated to DLSS 3's frame generation feature, so it should be usable on the next Switch (so long as performance isn't significantly worse than regular DLSS).
It reads like an optimization of an existing technology we already know is in this new hardware, so yeah, even without the technical speak, we should treat it like one.
Outside of using tensor cores for DLSS, how much of the CPU gap between Switch 2 and PS5/Series X could be made up by using the tensor cores for CPU tasks like AI and animation?
AI relies really heavily on tensor math, so the odds are pretty good, so long as the documentation and tools are readily available and the tensor cores aren't fully allocated to other tasks in each cycle.
I know I've been looking forward to seeing how they could be used for really tight rollback netcode.
When people are talking about performance, it would help if they specified whether they mean the docked or undocked profile 😁
When in doubt? Docked.
 
Well, on pure GPU numbers it would be a PS4...in portable mode.

but that is ignoring all the features that it has over PS4

  • Infinitely Better CPU
  • More RAM Overall
  • Lower Latency RAM
  • GPU Features
    • Tile-Based Rendering (Big one)
    • Hardware VRS Support
    • Hardware-level DX12/Vulkan support, which can help make these and similar architectures run far faster than GCN2 ever could.
    • Shader-Level RT Pipeline Optimization (The CUDA Cores since Turing have been tuned to run RT faster than pre-RT era cards even at the Shader-level)
    • The RT Cores themselves to hardware accelerate RT far faster than RDNA2 can.
    • Mesh Shading Support at the hardware level (Could help a lot, especially if they introduce Lovelace's hardware-level compression/decompression of Mesh Shader formats. Probably bringing speed/size to a better balance than Nanite)
    • DP4a Support (XeSS if it wanted to be used)
    • Tensor cores, which some people meme on (looking at you, Chips and Cheese), but which have a lot more capability than just DLSS/Ray Reconstruction (seemingly Neural Radiance Caching given a... different name).
      • Tensor cores are a major part of the FP16 pipeline on Ampere GPUs; the key thing is that Ampere/Lovelace GPUs are actually 1+X (X being the kind of FP16 the tensor cores run) FP32+FP16 in mixed precision mode.
        • Z0m3le did more research on this than I did, but in mixed precision mode, Drake can push past 5 TFLOPS of effective compute in docked mode. Applying that, portable-mode T239 would likely push past 2 TFLOPS (550MHz as calculated by Thraktor, so ~1.6 TFLOPS FP32); see the rough math after this list.
      • And then you have DLSS, RR, and an off chance of DLSS-FG, if the OFA/pipeline/algorithm could get tuned properly (it is Orin's OFA, so it's an unknown factor).
All of that is stuff Switch 2 would have over PS4. And that is all in portable mode.
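The rough math behind those mixed-precision figures, for anyone who wants to poke at it (a sketch only: the ~1.1GHz docked clock and the 1.5x effective-FP16 multiplier are speculative assumptions from this thread, not confirmed specs):

# T239/Drake: 12 SMs x 128 CUDA cores, 2 FP32 ops per core per clock.
CORES = 12 * 128  # 1536, per the Nvidia leak

def tflops(clock_mhz, mixed_precision=1.0):
    # mixed_precision > 1 models extra tensor-core FP16 throughput
    # on top of FP32 (a speculative multiplier, not a spec).
    return CORES * 2 * clock_mhz * 1e6 * mixed_precision / 1e12

print(tflops(550))        # ~1.7 portable FP32 (ballpark of the ~1.6 figure above)
print(tflops(1100))       # ~3.4 docked FP32, assuming ~1.1GHz
print(tflops(1100, 1.5))  # ~5.1 docked "effective" with FP16
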
I'm not an expert, but I assume that this, plus the fact that the further we go, the easier games become to adapt across hardware, potentially makes this the console that will lead Nintendo to even more third-party support, right?
 
It's not going to be the N64-to-GameCube leap some think. It's going to be the PS4-to-PS5 kind of leap, which causes arguments, with the more core audience asking "is this it? is this next gen?"
So I have to say I totally disagree. You are right that this gen (PS5, etc.) has the lowest perceived leap; games are still being released for PS4 because a lot of people are still playing that gen for that very reason. And Nintendo is going to put out a new platform squarely between PS4 and PS5? Who cares if it's closer to the Series S or not; it's for sure above the PS4 gen, and that one has lasting power, people think it still looks good enough! How perfect is that for a handheld? The average person is going to see ports to Switch 2 and think "hey, that looks pretty much the same as a PS5, WOW", and they'll be wrong, but... that won't matter!

Just gonna lay this out there: no one gives one flying f*** about "raw performance", they care about actual output.
AKA does the gameplay trailer look dope? Bet.
 
Great post. One thing I would add, knowing quite a few developers and hearing them talk about the internet's absolute fetish for resolution and pixel counts, is that not all pixels are created or rendered equally when it comes to games.
Absolutely true. That’s the reason I tried to frame the whole thing around the PS4 Pro. The Pro never received any exclusives, just enhanced PS4 games. That makes it really easy to talk about how GPU performance can be used to scale resolution specifically, and how DLSS 2.0 can make that process cheaper.
 
How about we look at chips using ARM CPUs that are closer in power budget:

Top Snapdragon by March 2016: SD 820, 319 GFlops
Top Snapdragon by March 2017: SD 835, 567 GFlops
Top Snapdragon by March 2023: SD 8 Gen 2, 3.48 TF

It's worth mentioning that TFLOPS comparisons are quite flawed*. But it serves to show that a 6x increase in flops after 6+ years isn't some unreal expectation.

*They can be used for cards on the same architecture, but across different architectures they only give you a ballpark comparison. Even from the same vendor, performance per flop can change a lot (the Ampere line, the likely architecture for the NG, got a huge boost in flops, but the performance boost was proportionally much lower).
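For the headline claim, the scaling check is simple (using the GFLOPS figures quoted above):

# Top-end Snapdragon GPU GFLOPS from the list above; a rough
# scaling check only, since flops across architectures are a
# ballpark at best.
sd_835 = 567        # March 2017
sd_8_gen_2 = 3480   # March 2023

print(sd_8_gen_2 / sd_835)  # ~6.1x in six years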
 
Potentially. It likely depends on the CPU for the most part at this point; the CPU is really the most unpredictable part, as ARM CPU performance can vary wildly depending on implementation, and we don't have any real tests of the A78C in isolation.
 
so no more shit about LCD screens right?
I’m sorry but I AM an OLED lover and I WILL complain if the Switch 2 ships with only an LCD, I don’t care how good of an LCD it is.

Nintendo has no one to blame but themselves for shipping the Switch OLED Model! They showed me just how good a mobile screen CAN look; they should have expected complaints if they're gonna downgrade us like that!
 
Given some people's comments here, I just think the next Switch hardware will have the same clock speeds as the OG Switch, so its GPU performance in FLOPS will be...

Portable: 921/1152/1380 GFLOPS (307/384/460 MHz)
Docked: 2304/2763 GFLOPS (768/920 MHz)

In raw numbers the portable side is inferior to a PS4, but considering advancements in architecture and such, it's a PS4+ in real-life performance.

It's hard to compare it to the Xbox Series S, as the CPU will be inferior, though as others said, well-optimized ports of Series S games should be very doable, if maybe a little late (so not day one like on Sony/Microsoft hardware).

Nintendo IPs that run at 900-1080p in Switch OG docked mode should get a big visual boost in their Switch NG entries.

I think it was @Thraktor who mentioned this, but there does come a point where you start losing efficiency when clock speeds drop below a certain level. There is a floor to it, and given Nintendo would want to preserve battery life while also maintaining efficiency, going with the Switch 1 clock speeds unchanged isn't really feasible.

It comes down to the chip needing a minimum voltage to operate, and no amount of reducing clocks below a certain level will change that. You end up losing efficiency while gaining absolutely nothing. So it's very unlikely Nintendo would just copy and paste the clock speeds like you suggest.
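A toy model makes the floor visible; every constant below is invented purely for illustration:

# Toy DVFS model: dynamic power ~ C*V^2*f, voltage can't drop
# below Vmin, and static (leakage) power is paid regardless.
VMIN, STATIC = 0.60, 0.10   # illustrative volts / watts

def voltage(f_ghz):
    return max(VMIN, 0.50 + 0.40 * f_ghz)  # made-up V/f curve

def perf_per_watt(f_ghz):
    power = STATIC + voltage(f_ghz) ** 2 * f_ghz
    return f_ghz / power                    # "perf" ~ clock

for f in (0.15, 0.30, 0.45, 0.60, 0.90):
    print(f"{f:.2f} GHz -> {perf_per_watt(f):.2f} perf/W")

# Efficiency peaks mid-range: below the voltage floor you give up
# performance without a proportional power saving.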

Real Short Answer
DLSS BABY

Long, but hopefully ELI5, answer
There are actually dozens of feature differences between the PS4 Pro's 2013-era technology and Drake's 2022-era technology that allow games to produce results better than the raw numbers might suggest. But DLSS is the biggest, and the easiest to understand.

Let's talk about pixel counts for a second. A 1080p image is made up of about 2 million pixels. A 4K image is about 8 million pixels, 4 times as many.


All else being equal, if you want to natively draw 4 times as many pixels, you need 4 times as much horsepower. It's pretty straightforward. One measure of GPU horsepower is FLOPS: floating-point operations per second. The PS4 runs at 1.8 TFLOPS (a teraflop is 1,000,000,000,000 FLOPS).
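The pixel arithmetic, spelled out:

# 1080p vs 4K pixel counts from the paragraph above.
pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

print(pixels_1080p)              # 2073600 (~2 million)
print(pixels_4k)                 # 8294400 (~8 million)
print(pixels_4k / pixels_1080p)  # 4.0x the pixels to draw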

But that leads to a curious question: how does the PS4 Pro work? The PS4 Pro runs at only 4.2 TFLOPS. How does the PS4 Pro make images that are 4x as big as the PS4's, with only about 2x as much power?

The answer is checkerboarding. Imagine a giant checkerboard with millions of squares, one square for every pixel in a 4K image. Instead of rendering every pixel every frame, the PS4 Pro only renders half of them, alternating between the "black" pixels and the "red" ones.



It doesn't blindly merge these pixels either; it uses a clever algorithm to combine the last frame's pixels with the new pixels, trying to preserve the detail of the combined frames without making a blurry mess. This class of technique is called temporal reconstruction: temporal because it uses data over time (ie previous frames), and reconstruction because it's not just upscaling the current frame but trying to reconstruct the high-res image underneath, like a paleontologist reconstructing a whole skeleton from a few bones.
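Here's a minimal numpy sketch of the idea, simplified to per-pixel copying; a real implementation reprojects the previous frame with motion vectors and blends, rather than copying directly:

# Checkerboard reconstruction toy: each frame shades only half the
# pixels and fills the gaps from the previous frame.
import numpy as np

def checkerboard_mask(h, w, parity):
    yy, xx = np.indices((h, w))
    return (yy + xx) % 2 == parity

def reconstruct(prev_full, new_half, parity):
    out = prev_full.copy()               # temporal data: last frame
    mask = checkerboard_mask(*new_half.shape, parity)
    out[mask] = new_half[mask]           # spatial data: fresh half
    return out

frame0 = np.zeros((4, 4))
shaded = np.ones((4, 4))                 # this frame's new shading
print(reconstruct(frame0, shaded, parity=0))  # half old, half new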

This is how the PS4 Pro was able to make a high-quality image at 4x the PS4's resolution with only 2x the power. And the basic concept is, I think, pretty easy to understand. But what if we had an even cleverer way of combining the images? Could we get higher-quality results? Or perhaps use more frames to generate those results? And maybe, instead of half resolution, could we go as far as quarter resolution, or even 1/8th resolution, and still get 4K?

That's exactly what DLSS 2.0 does. It replaces the clever algorithm with an AI. That AI doesn't just take 2 frames, but every frame it has seen over the course of the game, and combines them with extra information from the game engine, information like which objects are moving, or which parts of the screen have UI on them, to determine the final image.

DLSS 2.0 can make images that look as good as or better than checkerboarding, but with half the native pixels. Half the pixels means half the horsepower. However, it does need special hardware to work: tensor cores, a kind of AI accelerator designed by Nvidia and included in newer GPUs.

Which brings us to Drake. Drake's raw horsepower might be lower than the PS4 Pro's, I suspect by a significant amount, but because it includes these tensor cores it can replace the older checkerboarding technique with DLSS 2. This is why I said low-effort ports might not look as good: DLSS 2.0 is pretty simple to use, but it does require some custom engine work.

Hope that answers your question!

But what about Zelda?

It's kinda hard to imagine what a Zelda game would look like with this tech, especially since Nintendo reboots Zelda's look so often. But Windbound is a cel-shaded game heavily inspired by Wind Waker, and it has both a Switch and a PS4 Pro version.

Here is a section of the opening cutscene on Switch. Watch for about 10 seconds

Here is the same scene on PS4 Pro. The differences are night and day.

It's not just that the Pro is running at 4K60 while the Switch runs at 1080p30. This isn't a perfect example, because Zelda was designed to look good on Switch and this clearly wasn't: Windbound uses multiple dynamic lights in each shot, and the Switch version either removes things that cast shadows (like the ocean in that first shot), making the scene look too bright and flat, or removes lights entirely (like the lamp in the next shot), making things look too dark.

These are the kinds of posts I love seeing on this thread. Just purely educating folks on the particulars, and showing how exciting it all is.

Nintendo Switch successor reveal?



Ummmm….

Not to rain on your parade, but I think this one was running at 8K on the Series X and PS5.

So no true 4k. As hilarious as it sounds.

You’re actually close, but the Series X runs at 6K internally, not 8K, as shown by Digital Foundry:

https://www.eurogamer.net/digitalfoundry-2021-the-touryst-8k-60fps-on-ps5

You might wonder why, let alone how, the PS5 version can do that; here’s the relevant bit:

“Shin'en tells us that in the case of its engine, the increase to clock frequencies and the difference in memory set-up makes the difference. Beyond this, rather than just porting the PS4 version to PS5, Shin'en rewrote the engine to take advantage of PS5's low-level graphics APIs.”

So yeah, pretty cool stuff!


It truly is amazing the kinds of technologies we’re seeing on fricking mobile hardware these days. If Switch 2 does in fact use T239, as others have long speculated, it’ll be quite the machine.

“You’re not just playing with power. You’re playing with portable power!”
 
Damn, the PS Portal’s LCD looks good! I’ll gladly fall for an LCD Switch 2 and buy a Switch 2 OLED Model in four years :p
 
Nintendo's games look just as good as, if not better than, a lot of the best-looking PS3/360 games.

I see no reason why it wouldn't be the same with PS4/Xbox One games with the next hardware.

Not only that, but we'll see a good few native PS4 ports, with the likes of Resident Evil 2, 3, 4, VII and Village all likely making the jump, and with graphical quality much better than what the Switch could ever handle.
There are a lot of Switch games that look just as good as PS4/Xbone games in fidelity. The resolution might be lower, and some features might be missing or reduced. The Switch is closer in specs to the Xbone and PS4 (and modern tools help) than to the 360/PS3.
I totally agree with you. I've always expected PS4-level graphics, with DLSS to give games closer to 4K image quality when docked. Expecting a 3 TFLOP GPU docked as a base with DLSS on top is fantasy imo, and just sets people up to be massively disappointed. I'm expecting handheld mode to be around 900 GFLOPS and then 1.8 TFLOPS docked, with DLSS taking games from 720p/900p/1080p to 1080p/1440p/4K (dependent on genre and how hard the particular game engine pushes the GPU before DLSS).

Nintendo have absolutely NO NEED to push towards a 3 TFLOP GPU, because all major engines can and will be scaled around different power profiles (look at how developers are leveraging the massive PS4/Xbox One install base even three years into this gen). This rings doubly true when achieving a 3 TFLOP GPU means almost half the battery life of a 900 GFLOP mobile / 1.8 TFLOP docked device that could run the exact same games. Total RAM is far, far more important to developers, and the cherry on top would be an SSD with at least 1GB/s speed. They can easily scale around GPU power as long as it's modern in feature set (we've seen this with the current Switch running games a lot of people thought were "impossible ports!").

A 1.5 TFLOP GPU in mobile is never going to happen. All you have to do is look at how large, bulky, heavy, noisy and hot the Steam Deck is, and how much electricity it uses, then factor in that it lasts about 2 hours in anything approaching a demanding modern game (unless you run at 600p/30fps). And even then, that doesn't solve the physics of the actual device being a monstrous chunk of tech, plastic and cooling compared directly to holding a Switch OLED form factor in your hands.
The Steam Deck isn't especially efficient in regards to power draw.

Its RAM, SSD and CPU (running at 3.5GHz) take up the majority of the power draw. And of course, running the GPU at 1-1.6GHz also eats a lot of power. It's also using an older node.

Compare that to the Switch 2, which is projected to be on a 4nm TSMC node. Moving from 7nm to 4nm alone gives nearly a 50% power reduction. The ARM A78 cores are way more efficient than the Zen 2 cores in the Steam Deck at lower clocks; despite having double the cores, they'll run at significantly lower clock speeds and won't take up nearly as much power as the Deck's. The GPU power draw remains to be seen, but it's supposed to be on a newer, more efficient node anyway. The Switch 2 using UFS will also draw a lot less power than the NVMe drive in the Deck. The SoC itself on 4nm should use much less than the 15 watts of the Deck's SoC alone.

The Switch launched over 10 years after the PS3 and 3 years after the PS4. In raw power it was about the same as the PS3 while undocked, but it had more RAM and an architecture on par with the PS4.
God no, the Switch undocked is significantly better than the PS3 GPU-wise.
Ask yourself this: if you think a 1.5 TFLOP handheld (which then goes to 3 TFLOPS when docked) were feasible within the dimensions of the current Switch, kept cool with a tiny fan, and sold at a profit for $399, do you not think a Steam Deck 2 would already have been announced, or that another company would have built one? Remember, I'm not talking DLSS (the tensor cores add even more cost, by the way). I'm talking pure GPU compute power.
Well actually, we don't know if it's going to be a 1:2 split in clock speed. Did you know that Switch docked mode and the lowest Switch handheld mode have a 2.5x difference? And going from 720p to 1080p takes 2.25x more pixels. So they were able to go from 720p to 1080p and still have some headroom left over for games that weren't bottlenecked by bandwidth.

So if Switch 2 docked is 3-3.3 TFLOPS, then handheld mode's lowest clock speed could easily be 1.2-1.3 TFLOPS.
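The arithmetic, for anyone checking (the 3 TFLOPS docked figure is this thread's speculation):

# OG Switch GPU clocks: docked vs lowest handheld mode (MHz).
print(768 / 307.2)                   # 2.5x clock range

# Pixel cost of going from 720p to 1080p.
print((1920 * 1080) / (1280 * 720))  # 2.25x

# Apply the same 2.5x split to a speculative 3 TFLOPS docked figure.
print(3.0 / 2.5)                     # 1.2 TFLOPS handheld floor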
 
There's still a nonzero chance of LCD display variations (an LCD display lottery), especially if Innolux does supply the LCD displays.

Compare that to the Switch 2, which is projected to be on a 4nm TSMC node. Moving from 7nm to 4nm alone gives nearly a 50% power reduction. The ARM A78 cores are way more efficient than the Zen 2 cores in the Steam Deck at lower clocks.
Samsung's 8N process node, to be correct.
 
OLED is awesome but a good LCD does bring other benefits (much higher max brightness for outdoor/daytime use, better HDR, zero burn-in risk).
 
Still baffling to me that the server treats the idea of Nintendo spending the extra cash for a <=100mm^2 4N chip as somehow more inane than dealing with a far worse node that takes up potentially more space on the PCB than it can realistically handle. Or coping over the idea of some of the 12 SMs simply being disabled in handheld mode, despite no such indication existing in the source code.
 
They (seemingly up and down the stack, including their biggest people) are very anti-Nvidia, taking any reasonable take about Nvidia, like "they make things easy, are usually first to implement tech, and try to make it run faster on their own hardware", as Nvidia shilling.

So much so that they say DLSS "barely runs on the tensor cores", indicating that they don't quite understand how DLSS and other upsamplers fit into the rendering pipeline, or that they ignore it in the face of an anti-Nvidia narrative.

Like, yeah, I think DLSS could probably run in a DP4a branch like XeSS. And yeah, I think Ray Reconstruction is a dumb name for what is likely Neural Radiance Cache.

But that doesn't make their utilization of tensor cores non-existent. They do in fact use them; they seem to just look at Nsight, see that the % utilized is low, and say "it barely uses them", without understanding that the cores would only be active when needed in the pipeline, since tensor operations have to run sequentially with shader operations.
 
DLSS ray reconstruction is a very different thing from Neural Radiance Cache. DLSS ray reconstruction is a reconstruction technique (like standard DLSS) which is specifically built for ray tracing use-cases, so the name is quite appropriate. Radiance caching is a global illumination technique which attempts to store (cache) the amount of radiance emitted by surfaces in a scene, and calculates indirect illumination by aggregating nearby radiance (with or without ray tracing for occlusion checks). It's the basis of Lumen in Unreal Engine 5, as well as being used in other game engines. Neural Radiance Cache is an ML-driven variant where the representation of the radiance cache is itself a neural network. The neural network is continually trained with new samples, and queried to provide the expected radiance from a given position in the scene.
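To make the distinction concrete, here's a heavily simplified, purely illustrative radiance cache; NRC's twist is replacing this lookup table with a small neural network trained on the fly:

# Toy radiance cache: quantize query positions into grid cells and
# memoize traced radiance per cell. Everything here is illustrative.
import random

CELL = 0.5   # cache cell size in world units
cache = {}

def trace_radiance(pos):
    # Stand-in for an expensive ray/path trace.
    return random.random()

def cached_radiance(pos):
    key = tuple(int(c // CELL) for c in pos)
    if key not in cache:
        cache[key] = trace_radiance(pos)  # miss: trace and store
    return cache[key]                     # hit: cheap lookup

print(cached_radiance((1.0, 2.3, 0.7)))
print(cached_radiance((1.1, 2.2, 0.6)))  # same cell -> cached value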
 
Well, the Chips and Cheese server members are convinced it's NRC and nothing really notable.
 
Although I agree with your assessment that some are overestimating the new hardware (it won't touch the Series S), I also think you're actually underestimating it too.

A PS4+ level piece of hardware is going to provide some fantastic visuals and a dramatic leap forward. This is going to be a machine that can run Red Dead Redemption 2, as opposed to Red Dead Redemption 1 like the current Switch. That's a huge leap.

Just look at how good games like Spider-Man, TLoU Part II, God of War or Horizon look on the base PS4. Switch 2 should have the potential to at least match that fidelity, and at higher resolutions.

No one will say ‘is that it?’.
Honestly, it's all largely a moot argument anyway. In terms of raw performance it could be PS4+, but with DLSS (be it 2 or 3.5) the performance and resolution will be far beyond what the PS4 or even a PS4 Pro could achieve. Factor in RT and such, and the system will be competing with the PS5/Series line. It's more than capable hardware and will be exceptionally modern. If it rivals Series S fidelity and performance with DLSS enabled, that bodes well for the hardware and the support it'll receive.
 
I am so ready for this new console to be revealed. I know it likely won't be until next year, but it's always exciting when a new console is around the corner. It will be very interesting to see what comes out of TGS, and when leaks start to become more prevalent.
 
Some Brief Expectation Setting

Hi, I'm Old Puck. You may know me from such films as DLSS 2: Electric Boogaloo and A Midsummer Night's Stream. If you've been here a while, you've probably formed your own expectations for the REDACTED, A New Console By Nintendo(tm).

But if you're new, or maybe a little techno-intimidated, maybe you haven't. Maybe you want to be excited, but you've been burned by Nintendo before. Maybe you've seen a lot of arguing about numbers but you don't know what those numbers mean. Maybe you just want expectations that are reasonable, but that also don't set you up for disappointment.

Don't worry, I'm here to help.

TL;DR: Last Gen ports are easy, Next Gen ports are hard, Nintendo games look great

Every discussion here comes down to this; we're just arguing over the definitions of "easy" and "hard", not the fundamental truth of the statement. You stay there, you'll be alright.

It'll be the best damn handheld on the market. But "best" doesn't mean "most powerful" in every way.

We know a lot about Drake, the chip inside the new console, but we don't know everything. There are a lot of estimates to fill in those blanks. But even the most pessimistic estimates make a pretty excellent handheld.

The Steam Deck is an amazing piece of kit, but its magic trick is making a PC that fits in your hand. The Switch's magic trick is making a powerful handheld that plugs into your TV. That's not the same trick, and when you understand that, you can start to see the advantages and disadvantages Nintendo has.

Nintendo will let developers code down to the hardware, taking advantage of every specific feature that Redacted offers. When developers do that, there will be enough power to offer what Steam Deck does, but in a smaller form factor, with better battery life.

It'll be "last gen +" when you dock it, but not "last gen Pro +."

How the hell did The Witcher III get on Switch? The answer is "a lot of hard development work." But the other answer is "the Switch may not be powerful, but it is modern." All those modern features let the development studio perform clever optimizations that wouldn't be possible on the 360, even though the Switch isn't way ahead of the 360 in raw horsepower.

The optimists who expect 3.5 TFLOPS of Raw GPU Compute!!! and the pessimists who expect 2 TFLOPS of Widdle Baby Console, For Babies? Both of them are in this range.

There is only so much electricity, silicon, and money to go around. Even if Nintendo chooses to make any single metric competitive with the last-gen Pro consoles, or the Series S, or whatever, that doesn't leave enough power to make every metric competitive. We're discussing how big the "plus" in "last gen plus" might be, and where it might be (RAM, CPU, GPU?), but we're not going to get all of them at once.

The Pro consoles are shit comparison points, anyway

The Pro consoles didn't have any exclusives; they always ran "enhanced" base-model games. There is not a single game ever published that actually takes full advantage of a Pro system's hardware. The One X has an absolute monster of a GPU because Microsoft was trying to baseball-bat the Xbox One's library up to 4K.

The Pro consoles were also pretty crap at things that the base consoles were also crap at. The One X has the CPU of a potato, because the One had a potato for a CPU, and improving that potato wasn't worth it for enhanced Xbox One games.

Winning some sort of point-for-point spec battle with a Monster Potato isn't a victory. There are better paths to games that look as good as Monster Potato Games, paths that open up more modern ports.

What about the Series S?

It's like the Pro consoles, but backwards. Again, there are no exclusives; it's designed to receive cut-down next-gen games. Complaints about the Series S from developers have more to do with being required to make Next Gen Magic happen on Series X while achieving feature parity on Series S.

There are places where Drake simply can never compete with the Series S no matter how much you tweak the numbers, like CPU performance. There are places where Drake can beat it quite easily, like the memory architecture. And then there are places where maybe Drake could be competitive, but at a cost that in practice you probably aren't willing to pay, like GPU performance.

What about Frame Generation?

The thing you probably think Frame Gen does? It doesn't do that.

Maybe it's possible on Drake, maybe it isn't. But it's not magic, and the best case scenarios I've seen describe a technology less useful than the Switch's IR camera.

IS NINTENDOOOOOOOMED?

Best damn handheld on the market. Huge upgrade for Nintendo games on your TV. Great last gen ports you can take on the go. Pokemon. The exact recipe that made you buy a Switch in the first place.

I ain't worried.
 
Then they either don't understand DLSS or Neural Radiance Cache (or both).
There was a lot of chatter about it being NRC when it was announced, and I think some people didn't explore it more deeply than that.

On the one hand, I really hate Nvidia putting all these technologies under the DLSS banner, but on the gripping hand, they are integrated tech. I just wish that they hadn't associated the feature set with the version number the way they have.
 
Like, barring the scenario of them sticking with 8N and not applying any of Lovelace's power optimizations (things that would be obscured in the NVN2 API), the main things likely holding Switch 2 back from absolutely destroying the Series S are CPU and bandwidth, and those come with asterisks, unlike on the OG Switch.

The OG Switch had 1/7th the bandwidth of the PS4. Switch 2, at a safe guess of 102GB/s, would have only slightly less than half the bandwidth of the Series S. But it also has more than double the response speed (half the latency; different terms for the same thing), so smart optimizations in how data is delivered could result in an effective bandwidth over time (minutes, etc.) that is closer.

And CPU-wise, the low latency and generous cache help a lot there, as T239, unless Nvidia breaks best practices, should have a system-level cache. Since it's all but confirmed to be A78C, that means the SoC overall would have more cache on tap, which is very important for games.
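The rough numbers behind that comparison (102GB/s is this thread's guess for Switch 2; 224GB/s is the Series S's fast memory pool; 25.6GB/s and 176GB/s are the OG Switch's and PS4's bandwidths):

# Bandwidth ratios referenced above (GB/s).
print(25.6 / 176)  # OG Switch vs PS4: ~0.15, about 1/7th
print(102 / 224)   # Switch 2 (guess) vs Series S: ~0.46, just under half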
 
Man I'm liking these confident statements from NateDrake 🥵

What I'm gathering is that the low-end estimate for the new hardware is something like 4x the current Switch before any up-rez magic features, all while addressing the memory bottlenecks that plagued the current system.
You give the likes of MonolithSoft, Next Level Games and a well-budgeted Platinum that spec? Fuhgeddaboutit.

... Where is ArmouredBear to keep the hype in check 🌧️

===

Teehee, the lack of Bluetooth in favor of Sony's in-house audio protocol on the Portal is so Sony.

I'm OK with a nice LCD panel, and the one on the PS Portal looks nice thanks to that 8-inch real estate. Being able to wake the PS5 from anywhere and have it operate as your game server is a cool idea. In a different way than the Switch, it's like the Wii U evolved.

Tbh, while the new impression videos give me a much better vibe than the reveal, I still think it will be a flop.
And isn't it subject to input lag over WiFi, or did I miss some additional tricks it does to mitigate that?
 
There was a lot of chatter about it being NRC when it was announced, and I think some people didn't explore it more deeply than that.

On the one hand, I really hate Nvidia putting all these technologies under the DLSS banner, but on the gripping hand, they are integrated tech. I just wish that they hadn't associated the feature set with the version number the way they have.
Promoting frame generation as just "DLSS 3", rather than as a new feature added to the DLSS toolkit in 3.0, was a huge unforced error. The marketing really should have focused more on the feature names than the version number.
 