
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Whoever came up with the marketing idea to lump the different features together under the "DLSS 3.x" brand was clearly out of the loop with regards to the upcoming features roadmap.

To play devil's advocate, Microsoft could have increased the amount of RAM the Xbox Series S has (e.g. 12 GB GDDR6, with 10 GB reserved for video games, and 2 GB reserved for the OS) and/or increased the RAM bus width (e.g. 256-bit), from the beginning. Alex Battaglia from Digital Foundry has heard from third party developers that the RAM is the Xbox Series S's biggest bottleneck. And besides, Microsoft's already selling the Xbox Series S at a loss.
Increasing RAM and bus width are, for the most part, tied together.

The Series S as is uses five chips (160-bit bus width). And they're 2 GB each. Unfortunately, 2 GB is as dense as it gets for GDDR6.
Technically, there are two options to increase RAM here. One realistic, the other "Hahaha...no".
The realistic one is to widen the memory bus so you can add more chips.
The 'technically exists' option is something that I think is referred to as clamshell mode? You can actually double the number of chips, but you don't increase bandwidth. I think it's called clamshell because the usual approach is to put that second set of chips on the other side of the PCB, which also jacks up the complexity of the PCB design to accommodate them. In the end, it's very rare for this option to actually be used (see the 4060 Ti 16 GB version*).

*So, the 4060 Ti: 128-bit bus width, so normally four chips. Using 2 GB chips, we get 8 GB total. People weren't too fond of that. Nvidia probably foresaw that, so not too long later, we got a 16 GB version.
For $100 more. I'd say that's a combination of the price of 4 more chips, the price of labor in the PCB redesign, plus a little extra 'of course we're extracting a premium from you if you're going to turn your nose up at the lower SKUs'.
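(For anyone who wants to play with the chip-count math themselves, here's a rough sketch; the one-chip-per-32-bit-channel layout and the 2 GB-per-chip ceiling are the assumptions described above, and clamshell just doubles the chips on the same channels.)

```python
# Rough GDDR6 capacity math. Assumptions: one chip per 32-bit channel, 2 GB per chip max.
def gddr6_config(bus_width_bits, gb_per_chip=2, clamshell=False):
    chips = bus_width_bits // 32           # one chip per 32-bit channel
    if clamshell:
        chips *= 2                         # second set of chips shares the same channels (no extra bandwidth)
    return chips, chips * gb_per_chip      # (chip count, capacity in GB)

print(gddr6_config(160))                   # Series S:       (5, 10) -> 10 GB on a 160-bit bus
print(gddr6_config(128))                   # 4060 Ti:        (4, 8)  ->  8 GB on a 128-bit bus
print(gddr6_config(128, clamshell=True))   # 4060 Ti 16 GB:  (8, 16) -> same 128-bit bandwidth
```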

---

Regarding latency for GDDR6: it's worth noting that the ~140 ns figure comes from AIDA64. For those reading, if you remember, I earlier linked to AnandTech and Chips and Cheese for the 12900K; they're not using AIDA, so I wouldn't recommend mixing the assorted figures.

(The 4700S is a PS5 chip with a defective iGPU.)
The latency chart on this page does include a tool made by someone from Chips and Cheese which tests to a far greater depth (up to 1 GB).
I guess it does hang around in the ~140 ns range up to a little past 64 MB. Then your L3 hit rate keeps tanking until plateauing past 256 MB (at which point you're basically never hitting the 4 MB L3)? The other neat thing to notice is the difference in the rate of change between the GDDR6 and DDR4.
So anyway, I think that ~246 ns is the more accurate figure for when you specifically need to go all the way out to GDDR6 on the PS5 chip.
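(A toy model of why the measured number slides from ~140 ns toward ~246 ns as the test footprint outgrows the cache; only the 246 ns figure comes from the discussion above, the L3 latency and the hit rates below are made-up illustrative values.)

```python
# Average observed latency = hit_rate * cache_latency + (1 - hit_rate) * dram_latency.
# 246 ns is the GDDR6 figure discussed above; the 12 ns L3 and the hit rates are illustrative only.
def observed_latency_ns(l3_hit_rate, l3_latency_ns=12.0, dram_latency_ns=246.0):
    return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

for hit_rate in (0.9, 0.5, 0.1, 0.0):
    print(f"L3 hit rate {hit_rate:.0%}: ~{observed_latency_ns(hit_rate):.0f} ns")
# As the test size pushes past the L3, the hit rate collapses and the average
# converges on the raw trip out to GDDR6, which is why ~140 ns understates it.
```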

Ugh, I wish I really understood those benchmark tests more, to be able to truly grok how much of the difference between the 4750G and 4700S is due to one factor versus the other. (On paper, the 4750G should clock up to 400 MHz higher, plus it's using DDR4-3200 RAM for better latency.)
 
If we want to factor in DLSS and such, sure; the visual gap between the two will be minimal, but that still goes back to the Switch successor being PS4 Pro+ tier hardware with modern features to help it outperform its raw numbers. Going in expecting Series S raw performance without DLSS factored in would be setting oneself up for disappointment. It'll be capable hardware, especially for its form factor.
Well, DLSS would be on 99% of the time, or at least something utilizing the Tensor Cores (best practices and all), so I would lump it in with the performance of the system.

But "PS4 Pro+" is a bit of a misnomer term as, well, PS4 Pro and Series S are very close GPU wise, so why not compare to the architecture that has the more similar features? (Tile Based Rendering, Mesh Shading, Freeform FP16, RT Acceleration.etc) versus the architecture that technically can't use any newer features outside of FP16 solely for Checkerboarding because it has to be built around GCN2 games on PS4 OG.

CPU-wise, T239 with 8 A78Cs would be infinitely closer to Series S than to PS4, and the same applies for GPU power and features, even without DLSS.
 
No issue here. My statement was not necessarily addressed at you, even though I quoted your post. It was more of a global statement made at people who say x game can’t run on switch. The biggest issue for switch would be storage. Optimization goes a long way. I remember people saying no way Arkham Knight could run on switch, and here we are.

I would still say that the increased AI/NPC load of Act 3 would be a concern one should not dismiss out of hand. But yes, optimization does go a long way, and it remains to be seen how Larian will tackle the CPU utilization on the PC side for Act 3 onward. The PS5 version will give a better picture of how consoles will handle it.
 
Are there any "true" native 4K games on consoles at all? I'm not aware of any, but it's not something I pay a lot of attention to.

[Image: The Touryst HD artwork]
 
The 3.4 TFLOPs would be for docked mode, so battery life is not a concern. I will admit that I am cautious with my expectations, because Nintendo tends to be conservative with thermals on their tech. I expect power draw to be very similar to Switch, but without knowing what process Drake is being manufactured on, it's hard to lock down the clock speeds. An extremely pessimistic outcome would be to match the clock speeds of the Switch, and even then it is 2.4 TFLOPs. My guess is that it will be right around 3 TFLOPs.
Yeah, but you get the mobile TFLOPs by halving the docked TFLOPs, in general terms. Do people really expect Switch 2 to be 3x the current Switch's docked mode in handheld form? That line of thinking is wishful thinking imo when you consider they have shown time and time again that they value battery life so much they'll at times run games at 360p-480p in handheld mode. An issue I rarely see people talk about is active cooling and the costs involved in cooling a chip running at a third to two-thirds more wattage than a chip like the Tegra X1, which only needs a tiny, tiny fan.

@mjayer

I absolutely adore @Z0m3le, but let's be real: if you followed him during the pre-Switch and Switch Pro speculation years, he has consistently and massively overshot how high Nintendo will clock their chipsets, thus leading to massively reduced performance relative to his expectations.

I 100% think Switch 2 will be massively more powerful than the current Switch, but starting to compare it to current-gen machines with good desktop-class CPUs, their massive fast RAM pools, their ridiculously fast SSDs and their GPUs (which are still, in a best-case scenario, going to be 3-5x the performance of a best-case-scenario Switch 2 GPU) is just setting yourself up for disappointment imo.

Switch 2 will be around top-end Switch-level visuals with more complex geometry, better textures, better quality lighting and models, while running at much higher resolutions due mainly to DLSS, with more stable framerates, while again getting about 20-25% of the latest and greatest big AAA third-party games. It's not going to be the N64-to-GameCube leap some think; it's going to be the PS4-to-PS5 leap, which causes arguments and the more core audience asking "is this it, is this next gen?" when it comes to visuals in games like Starfield or Spider-Man 2 (I think both look phenomenal personally and go for aims not necessarily to do with visuals, but either speed of traversal or scale unseen before in the AAA space).

Will this please the hardcore among us? Probably not, but is it enough to please the 100+ million consumers that Nintendo will want to capture with the device? Of course.
 

Although I agree with your assessment about some overestimating the new hardware (it won’t touch Series S), I also think you’re actually underestimating it too.

A PS4+ level piece of hardware is going to provide some fantastic visuals and a dramatic leap forward. This is going to be a machine which will be able to run Red Dead Redemption 2 as opposed to Red Dead Redemption 1 like the current Switch - That’s a huge leap.

Just look at how good other games like Spider-Man, TLoU P2, God of War or Horizon look on base PS4. Switch 2 should have the potential to at least match that fidelity and at higher resolutions.

No one will say ‘is that it?’.
 
To play devil's advocate, Microsoft could have increased the amount of RAM the Xbox Series S has (e.g. 12 GB GDDR6, with 10 GB reserved for video games, and 2 GB reserved for the OS) and/or increased the RAM bus width (e.g. 256-bit), from the beginning. Alex Battaglia from Digital Foundry has heard from third party developers that the RAM is the Xbox Series S's biggest bottleneck. And besides, Microsoft's already selling the Xbox Series S at a loss.
It's 10 GB: 2.5 GB for the OS, 7.5 GB for games.
 
Having a Series S version to down-port from is a favor for Nintendo, as it means there will be a variety of titles from third-party partners with the potential for release, depending on the quality of the Series S version.

It doesn't mean the two are directly comparable in terms of raw power, however.
Well, the main thing is, look at the RTX 3050M, which can be argued to be the closest PC counterpart to T239/Drake:
  • 3050M
    • 16 SMs (2048 CUDA, 16 RT, 64 Tensor)
    • 4 GB GDDR6 on a 128-bit bus, 192 GB/s
      • With latency (based on GDDR6 latency tests on other systems) likely breaking the 140 ns mark and easily inflating past 160 ns when pushing higher data amounts.
    • 1.34 GHz boost clock, resulting in 5.5 TFLOPs FP32 on Samsung 8N
      • Has multiple degrees of overhead holding performance back and limiting those 5.5 TFLOPs from being fully used:
        • Windows overhead
        • DirectX overhead
        • Latency overhead, as it's a dedicated GPU
        • Shader compilation overhead
        • Unable to receive specific optimizations due to being a PC part
  • T239
    • 12 SMs (1536 CUDA, 12 RT, 48 Tensor)
    • 12+ GB LPDDR5 on a 128-bit bus, likely 102 GB/s at least when docked
      • However, latency would be far tighter, at worst half the latency of the GDDR6 in the 3050M/other consoles. So it can make repeat calls far faster, which can help dramatically for CPU/latency-dependent tasks like ray tracing.
    • ??? clocks; however, Thraktor calculated in this post that T239 would probably best fit on TSMC 4N due to power concerns, if not die-size concerns. So, running on his clocks, T239/Drake docked would be around 3.4 TFLOPs at 1.1 GHz.
      • And unlike the 3050M, it would have little of that overhead:
        • A custom OS that would be puny versus likely even the PS5 OS, and especially Xbox's OS
        • A custom API that has been shown to be very close to the metal
        • Less latency, as the GPU is right next to the CPU and RAM
        • No shader compilation overhead
        • Able to be specifically tuned for
    • Has more cache overall than the 3050M, as the GPU would have CPU-cache access and also, assuming NVIDIA still follows best ARM practices, a system-level cache.
So, in my honest opinion, the 3050M and T239 would likely perform similarly, the 3050M's inefficiencies allowing T239's optimizations/efficiencies to match or overtake it.

And looking at the 3050M... well, it punches the PS4 into a 30 ft hole. And it actually trades blows with the arguably closest GPU (within the same architecture) to the Series S GPU, the 6500M.

So I say the Series S and T239/Drake can be compared, unless you believe with 100% certainty that Nintendo and NVIDIA would really ship a mobile SoC on Samsung 8N in 2024/2025.

And then you add DLSS, Ray Reconstruction, and Ampere mixed precision theoretically resulting in a mixed-precision TFLOP value over 5 TFLOPs with FP16 accumulate... yeah, I say T239 vs Series S would probably be closer than a lot of people are expecting, unless they really do stick with 8N.
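(If anyone wants to sanity-check the arithmetic behind those figures, it's just cores x 2 ops x clock for FP32 and bus width x data rate for bandwidth; the T239 clocks and the 6400 MT/s LPDDR5 figure are the assumptions discussed above, not confirmed specs.)

```python
# FP32 throughput: CUDA cores * 2 ops per clock (FMA) * clock. Bandwidth: bus width / 8 * data rate.
def fp32_tflops(sms, clock_ghz, cuda_per_sm=128):
    return sms * cuda_per_sm * 2 * clock_ghz / 1000

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(f"3050M:         {fp32_tflops(16, 1.34):.1f} TFLOPs")   # ~5.5 at the 1.34 GHz boost clock
print(f"T239 docked:   {fp32_tflops(12, 1.10):.1f} TFLOPs")   # ~3.4 at the assumed 1.1 GHz
print(f"T239 portable: {fp32_tflops(12, 0.55):.1f} TFLOPs")   # ~1.7 at the assumed 550 MHz
print(f"3050M GDDR6:   {bandwidth_gbs(128, 12.0):.0f} GB/s")  # 192 GB/s at 12 Gbps
print(f"T239 LPDDR5:   {bandwidth_gbs(128, 6.4):.1f} GB/s")   # ~102 GB/s at an assumed 6400 MT/s
```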
It's too bad Nintendo doesn't use AGX Orin core counts and clock speeds (12 A78 cores at 2.2 GHz each); then it would give the Series S a run for its money 😉 That, and LPDDR5X max clock speeds.
 
You do realize that that is all TFLOPs are, right? It's just a hardware resource, specifically how many ops are theoretically available per second... You even literally describe FLOPs in your example lol. I understand not wanting to give numbers, but if you do the math above, you come to 393 GFLOPs for Switch and 2359 GFLOPs for Drake at the same clock.

Having said that, there are architectural differences and bottlenecks to consider, so you are ultimately right, but the problem is there just isn't a better method of performance metrics for GPUs, outside of actual benchmarks, which we won't get for a while.
The reason why I don't think it's as useful a metric in this case is because you can have a device that is 300 GFLOPs and compare it to a device that is 3000 GFLOPs, and that does not mean the 3000 GFLOPs device will be doing 10 times the performance; it could be doing (in practice) 5 to 6 times the real-world performance.

Just like how the 4090 is ~80 TFLOPs and the 3090 Ti is ~40 TFLOPs, and it is not twice the performance, the same rules apply for Drake and Switch. To use the 4090 and 3090 example again, the 4090 has about 52% more hardware resources (SMs), it clocks faster, it has that large cache to improve memory bandwidth, etc., and it doesn't offer 2x performance in practice. In practice it is 40-50% more performance.

These don't always scale linearly. Hence why using actual hardware resources for this purpose, the SMs and other aspects, gives you a closer-to-accurate idea.

TFLOPs, GFLOPs, MFLOPs, etc are a measure of how many floating point calculations they can do in a second. They are not always representative of actual performance gained.

To give the other readers an idea, the Switch has 2 SMs and at max (docked) it performs 393 FP32 GFLOPS.

Let's assume a hypothetical chip called… Damian (keeping the D start here). It has 4 SMs and is clocked to 1.67 GHz in TV mode, so it is doing 1.7 TFLOPs in TV mode. That isn't going to literally give you over 4x the performance; it'll give you something smaller, between 2 and 3x, which isn't enough to look like a generational leap in the eyes of some of you.

And the way these FLOPs translate into the end result differs per architecture. One can be more efficient, the other could be less efficient. They vary per vendor, the GPUs vary, etc.

Smartphones have been doing way higher FLOPs than the Switch for a while now, and the scene has improved tremendously for game development on smartphones, yet there are games that don't look that much better than the ones on the Switch. I am only speaking from a GPU POV; the CPU and memory bandwidth also matter, etc.



With all that said, this is why I am placing it now that Drake will be at minimum six times the performance capability of the Nintendo Switch when docked. Because it has, at the very least in GPU resources (namely the number of SMs), 6x the amount.
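(A quick worked version of the Switch vs hypothetical "Damian" example above; the 0.6 efficiency factor is purely illustrative, the only point being that delivered performance scales by less than the paper FLOPs ratio.)

```python
# Paper FLOPs scale with cores * clock; delivered performance usually scales by some factor < 1 of that.
def gflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz

switch_docked = gflops(256, 0.768)     # ~393 GFLOPs (2 SMs at 768 MHz)
damian_docked = gflops(512, 1.67)      # ~1710 GFLOPs (the hypothetical 4-SM chip above)

paper_ratio = damian_docked / switch_docked    # ~4.3x on paper
efficiency = 0.6                               # illustrative only: architecture, bandwidth, bottlenecks
print(f"Paper ratio ~{paper_ratio:.1f}x, plausible delivered gain ~{paper_ratio * efficiency:.1f}x")
```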
 
1.2 TFLOPs in handheld mode is totally feasible at respectable handheld clock speeds. Coming from 7nm in the Steam Deck to 4nm is pretty significant in regards to power efficiency. But different hardware is also being compared, as well as the Steam Deck having much higher clock speeds (the CPU, the storage) and components that draw more power than Drake.
 
Although I agree with your assessment about some overestimating the new hardware (it won’t touch Series S), I also think you’re actually underestimating it too.

A PS4+ level piece of hardware is going to provide some fantastic visuals and a dramatic leap forward. This is going to be a machine which will be able to run Red Dead Redemption 2 as opposed to Red Dead Redemption 1 like the current Switch - That’s a huge leap.

Just look at how good other games like Spider-Man, TLoU P2, God of War or Horizon look on base PS4. Switch 2 should have the potential to at least match that fidelity and at higher resolutions.

No one will say ‘is that it?’.
That's not it, as arguably having the possibility of taking this new gen of games on the go will in and of itself be a fantastic feature. I think there is a common mistake in thinking about the successor: it will not only do what the previous one did, but better; it will also do what the competition did/does, but better.
Just bought MK8D, which I owned on the Wii U, and I had not imagined the thrill I'd have with it this summer while away. Frankly, the new games, breathtaking as they will be, open the door to a whole new range of experiences. Not saying Nintendo will not find yet another brilliant idea to shake things up (hope they will, like anyone), but there is a slight possibility that the innovation this time is software-specific, like we've seen with Labo, Ring Fit and probably others I forget. The Joy-Cons and tablet format opened the door to a range of experiences we did not imagine when they were announced; surely they can add to that.
 
(With a big imo and lots of guesswork) From what is leaked/speculated, in terms of graphical capabilities I'm expecting Switch 2 to sit against PS5 and XSX more or less as the Switch currently sits against PS4/XBO, maybe with just a marginal advantage.
And that would still place Switch 2 in a more favourable position, factoring in:
  • Game engines seem to be more scalable than ever.
  • There are more and better techniques available for upscaling/supersampling (starting with DLSS, of course).
  • I think the advantage of higher resolutions marginally decreases after a certain point, meaning a 1080p image compares more favourably to a 4K image than a 540p image does to a 1080p one.
 
Well, I was thinking 1.5 TFLOPs in handheld mode, which is on par with a Steam Deck. Look at the size, how much cooling and how much power the Steam Deck uses...

Also, I totally agree that big-budget PS4 exclusives are astounding looking, but will Nintendo spend $70 million on average outside of a large-scale Zelda every five or six years? I personally don't think so. Switch games already look amazing when run on PC at 4K. All they need to do is increase the geometry a bit and improve the textures, then, if they want to get fancy, use RT for lighting and shadows. Boom, you have games that will look on par with Ratchet on PS5, and you don't need a 3 TFLOPs GPU to do it, especially when you can leverage DLSS to take 900p/1080p games up to a near-4K image when docked.
 

Nintendo's games look just as good if not better than a lot of the best looking PS3/360 games.

I see no reason why it wouldn't be the same with PS4/Xbox One games with the next hardware.

Not only that but we'll see a good few native PS4 ports with the likes of Resident Evil 2, 3, 4, VII and Village all likely making the jump and with a graphical quality much better than what the Switch could ever handle.
 
Real Short Answer
DLSS BABY

Long, but hopefully ELI5, answer
There are actually dozens of feature differences between the PS4 Pro's 2013 era technology and Drake's 2022 era technology that allow games to produce results better than the raw numbers might suggest. But DLSS is the biggest, and easiest to understand.

Let's talk about pixel counts for a second. A 1080p image is made up of about 2 million pixels. A 4K image is about 8 million pixels, 4 times as big.

[Image: 1080p vs 4K pixel count comparison]

All else being equal, if you want to natively draw 4 times as many pixels, you need 4 times as much horsepower. It's pretty straightforward. One measure of GPU horsepower is FLOPS - floating-point operations per second. The PS4 runs at 1.8 TFLOPS (a teraflop is a trillion FLOPS).

But that leads to a curious question - how does the PS4 Pro work? The PS4 Pro runs at only 4.1 TFLOPS. How does the PS4 Pro make images that are 4x as big as the PS4, with only 2x as much power?

The answer is checkerboarding. Imagine a giant checkerboard with millions of squares - one square for every pixel in a 4K image. Instead of rendering every pixel every frame, the PS4 Pro only renders half of them, alternating between the "black" pixels and the "red" ones.

[Image: checkerboard rendering pattern]


It doesn't blindly merge these pixels either - it uses a clever algorithm to combine the last frame's pixels with the new pixels, trying to preserve the detail of the combined frames without making a blurry mess. This class of technique is called temporal reconstruction. Temporal because it uses data over time (ie, previous frames) and reconstruction because it's not just upscaling the current frame but trying to reconstruct the high res image underneath, like a paleontologist trying to reconstruct a whole skeleton from a few bones.
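(A toy sketch of the idea, not how a real checkerboard resolve works; real implementations only shade half the samples in the first place and blend far more cleverly, while this just picks alternating squares out of a full frame to show the pattern.)

```python
import numpy as np

def checkerboard_frame(current_render, previous_output, frame_index):
    """Toy checkerboard reconstruction: keep half the pixels from this frame's render
    and fill the other half from the previous output. A real renderer would only
    shade the selected half to begin with, which is where the cost saving comes from."""
    h, w = current_render.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    shade_mask = (ys + xs + frame_index) % 2 == 0    # alternate "black"/"red" squares each frame
    output = previous_output.copy()
    output[shade_mask] = current_render[shade_mask]  # only these pixels cost GPU time this frame
    return output

# Usage sketch: a 4K frame where each frame only "pays" for half the pixels.
prev = np.zeros((2160, 3840, 3), dtype=np.float32)
new = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in for this frame's image
out = checkerboard_frame(new, prev, frame_index=1)
```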

This is how the PS4 Pro was able to make a high quality image at 4x the PS4's resolution, with only 2x the power. And the basic concept is, I think, pretty easy to understand. But what if we had an even more clever way of combining the images - could we get higher quality results? Or perhaps, could we use more frames to generate those results? And maybe, instead of half resolution, could we go as far as quarter resolution, or even 1/8th resolution, and still get 4k?

That's exactly what DLSS 2.0 does. It replaces the clever algorithm with an AI. That AI doesn't just take 2 frames, but every frame it has ever seen over the course of the game, and combines them with extra information from the game engine - information like, what objects are moving, or what parts of the screen have UI on them - to determine the final image.

DLSS 2.0 can make images that look as good as or better than checkerboarding, but with half the native pixels. Half the pixels means half the horsepower. However, it does need special hardware to work - it needs tensor cores, a kind of AI accelerator designed by Nvidia and included in newer GPUs.
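(Rough pixel math for that "half the pixels means half the horsepower" point; the internal resolutions below are the standard DLSS Quality/Performance scaling factors plus a half-pixel checkerboard figure, used purely for illustration.)

```python
# Pixels actually shaded per frame for a 4K output at various internal resolutions.
def megapixels(w, h):
    return w * h / 1e6

native_4k = megapixels(3840, 2160)                    # ~8.3 MP out the other end
internal = {
    "native 4K":              (3840, 2160),
    "checkerboard (half)":    (1920, 2160),           # half the samples per frame
    "DLSS Quality (1/2.25)":  (2560, 1440),
    "DLSS Performance (1/4)": (1920, 1080),
}
for label, (w, h) in internal.items():
    mp = megapixels(w, h)
    print(f"{label:>24}: {mp:4.1f} MP shaded (~{mp / native_4k:.0%} of native 4K cost)")
```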

Which brings us to Drake. Drake's raw horsepower might be lower than the PS4 Pro's - I suspect it will be lower by a significant amount - but because it includes these tensor cores it can replace the older checkerboarding technique with DLSS 2. This is why I said low-effort ports might not look as good. DLSS 2.0 is pretty simple to use, but it does require some custom engine work.

Hope that answers your question!

But what about Zelda?

It's kinda hard to imagine what a Zelda game would look like with this tech, especially since Nintendo reboots Zelda's look so often. But Windbound is a cel-shaded game highly inspired by Windwaker, and it has both a Switch and a PS4 Pro version.

Here is a section of the opening cutscene on Switch. Watch for about 10 seconds

Here is the same scene on PS4 Pro. The differences are night and day.

It's not just that the Pro is running at 4K60 while the Switch runs at 1080p30. This isn't a great example, because Zelda was designed to look good on Switch, and this clearly wasn't: Windbound uses multiple dynamic lights in each shot, and either removes things that cast shadows (like the ocean in that first shot), making the scene look too bright and flat, or removes lights entirely (like the lamp in the next shot), making things look too dark.
The perfect answer, thank you so much!
 
Nintendo's games look just as good if not better than a lot of the best looking PS3/360 games.

I see no reason why it wouldn't be the same with PS4/Xbox One games with the next hardware.

Not only that but we'll see a good few native PS4 ports with the likes of Resident Evil 2, 3, 4, VII and Village all likely making the jump and with a graphical quality much better than what the Switch could ever handle.
I totally agree with you. I've always expected PS4-level graphics, with DLSS to give the games closer to 4K image quality when docked. Expecting a 3 TFLOPs GPU when docked as a base, with DLSS on top, is fantasy imo and just setting people up to be massively disappointed. I'm expecting handheld mode to be like 900 GFLOPs and then 1.8 TFLOPs docked, with them using DLSS to take games from 720p/900p/1080p to 1080p/1440p/4K (dependent on genre and how much the particular game engine pushes the GPU before DLSS).

Nintendo have absolutely NO NEED to push towards a 3 TFLOPs GPU, because all major engines can and will be scaled around different power profiles (look at how developers are leveraging the massive PS4/Xbox One install base even three years into this gen). This rings doubly true when, to achieve a 3 TFLOPs GPU, you're looking at almost half the battery life of a 900 GFLOPs mobile / 1.8 TFLOPs docked device which could run the exact same games. Total RAM is far, far more important to developers, and the cherry on top would be an SSD with at least 1 GB/s speed. They can easily scale around GPU power as long as it's modern in feature set (we have seen this with the current Switch running games a lot of people thought were "impossible ports!").

A 1.5 TFLOPs GPU when mobile is never going to happen, because all you have to do is look at how large, bulky, heavy, noisy and hot the Steam Deck is and how much electricity it uses, then factor in that it lasts about 2 hours for anything approaching a demanding modern game (unless you run games at 600p/30fps), and even then it doesn't solve the physics of the actual device being a monstrous chunk of tech, plastic and cooling when compared directly to holding a Switch OLED form factor in your hands.
 
Great post. One thing I would add to this, knowing quite a few developers and hearing them talk about the internet's absolute fetish for resolution and pixel counts, is that not all pixels are created or rendered equally when it comes to games. Some are shaded, some are not. Some have far more complex shading than others. Then there are shading tricks like VRS which, for those that don't know, only fully shade certain parts of the image (usually the middle 60-80%). All of this matters massively, and it's not just a case of 2 million pixels versus 8 million pixels for 1080p vs 2160p and which is better, for instance. The complexity of what you're rendering in those 2 million pixels could be far more impressive to someone than rendering something far more basic at 4K. Yes, the 4K image would be crisper, but would you rather have a sharper image or a generational leap in visual fidelity? That's the question Sony asked themselves, but from what I know, because of their push for "4K" with the PS4 Pro and then putting "8K" on the PS5 box, they felt they couldn't go backwards. They also sell HDTVs...

Imagine Insomniac had started Spider-Man 2's development at a base render of 1600x900, for instance. They would produce almost Hollywood-level CGI to the average person, yet they would be rendering at a base resolution with roughly 60% fewer pixels than the 1440p temporal resolution they typically start from, and they would be dropping jaws all over the internet versus the current discourse on Twitter about "this is just Spider-Man PS4 with faster swinging!" lol.
 
Won't the T239 be more efficient than whatever's inside the Steam Deck, therefore potentially making battery life, thermals, etc. better relative to its performance? Also, in docked mode battery life just isn't something to worry about, so it can be pushed to higher clocks.

People have done a lot of research into clock speeds and TFLOPs in this thread and they've backed it up with a lot of evidence; obviously it's all just speculation, but it adds up. So yes, whilst you're right that it currently is fantasy, the only reason is that we don't actually KNOW yet, not that it's an impossibility.
 
A 1.5tflop GPU when mobile is never going to happen because all you have to do is look at how large, bulky, heavy, noisy, hot and how much electricity Steam Deck uses then factor in it lasts about 2 hours for anything approaching a demanding modern game (unless you run the games at 600p/30fps) and even then it doesn't solve the physics of the actual device being a monstrous chunk of tech, plastic and cooling when compared directly to holding a Switch OLED form factor in your hands.
You're still ignoring a lot of the problems with the Steam Deck in your comparison. A lot of problems that a dedicated console doesn't have
 
Yes, in 2017, 11 years after the PS3, Nintendo released hardware that had the raw power of a PS3 in handheld mode and nearly twice as much when docked.
But in 2024, 11 years after the PS4, it will settle for producing hardware with the raw power of half a PS4 in handheld mode and a whole one docked.
This doesn't even make sense for Nintendo.
Even the Steam Deck, which is no longer "new", manages to push that PS4 performance in portable mode.
 
I would wager NG Switch will be more like 4X the performance of Steam Deck after DLSS in handheld mode. Is this cheating with AI? Yeah. That's the point, I imagine.
 
And SteamOS mitigates a lot of the issues a PC-like handheld would have, which is why it punches above its specs versus other PC handhelds that don't use SteamOS. Obviously a Switch is even more "to the metal" in its gaming optimisation, but it's nowhere near the difference it would have been only a few years ago. Even Windows on PC has made massive strides in getting rid of the bloat running in the background during gaming tasks. It's still not perfect, of course, which is why consoles still outperform similarly equipped gaming PCs.
 
Do people really expect Switch 2 to be 3x the current Switch's docked mode in handheld form?
The Switch launched over 10 years after the PS3 and 3 years after the PS4. In raw power it was about the same as the PS3 while undocked, but it had more RAM and an architecture on par with the PS4.

NG Switch is expected to launch almost 11 years after the PS4 and almost 4 years after the PS5. Roughly matching the PS4 in handheld is exactly the ballpark most people are going to expect before getting into technical details.

Well I was thinking 1.5tflop in handheld mode which is on par with a Steam Deck. Look at the size, how much cooling and how much power Steam Deck uses...
The Steam Deck needs backwards compatibility with hardware not made for portability, and it needs to overshoot on CPU, because devs aren't adapting their games if it's not enough. It will also be close to 3 years old when the Switch successor launches.

On top of that, we know from the data stolen from Nvidia that it has 6x as many cores as the Switch. More cores means more silicon per chip, making it more expensive. The only advantage of going with that many is that either they want something in the XB1~PS4 ballpark while being as power efficient as possible, or they want to raise the ceiling above what a smaller chip could do.
 
Why would they waste money on pure GPU compute when DLSS can take the burden of work off the GPU (as well as the dedicated cores providing RT), thus enabling a potential doubling of battery life? That's something Nintendo massively values in their handhelds, and it's the reason they went with battery life over a much-needed performance upgrade for the die-shrunk red-box Switch. I can tell you for certain they were not happy with a handheld that lasted 2.2 hours playing Zelda at launch, but there were no other options at the time than Tegra.
 
And Steam OS mitigates a lot of the issues a PC like handheld would have which is why it punches above it's specs versus other PC handhelds that don't use Steam OS. Obviously a Switch is even more "to the metal" in it's gaming optimisation but it's nowhere near the difference it would have been only a few years ago. Even Windows on PC has made massive strides in getting rid of the bloat running in the background during gaming tasks. It's still not perfect of course and why consoles still outperform similarly hardware equipped gaming PC's.
Ah, so you do understand why Drake can do Steam Deck performance on a smaller power budget 😉

Why would they waste money on pure GPU compute when DLSS can take the burden of work off the GPU (aswell as the DLSS cores providing RT) thus enabling a potential doubling of battery life? Something Nintendo massively values in their handhelds and the reason they went with battery life over a much needed performance upgrade for the die shrunk redbox Switch. I can tell you for certain they were not happy with a handheld that lasted 2.2hours playing Zelda at launch but there were no other options at the time than Tegra.
Because you can't overcome vast distances with "just lower the resolution". Ask Microsoft how that's working out for them. You still need to be in the ballpark to even play.
 
Ask yourself this: if you think a 1.5 TFLOPs handheld (which then goes to 3 TFLOPs when docked) was feasible while keeping it within the dimensions of the current Switch, keeping it cool with a tiny fan, and selling it at a profit for $399, do you not think Steam Deck 2 would already have been announced, or that another company would have built it? Remember, I'm not talking DLSS (which is even more cost, by the way, for the tensor cores). I'm talking pure GPU compute power.

Show me the device that isn't the size of a house brick which has a 1.5 TFLOPs GPU with a CPU, enough and fast enough RAM, and an SSD that doesn't bottleneck games, and which also lasts more than 2.5 hours, for even $499. It doesn't exist. If it doesn't exist, then Nintendo cannot build it, especially if we're talking about a chip which is, what, 2.5 years out of date at this point and will be 4 years out of date if it launches in the expected late 2024 window?

Believe me, I would love a 1.5 TFLOPs mobile / 3 TFLOPs docked Switch 2. I just don't think it will happen, but I will say people should look at PS5 vs Series X and realise there's far, far more to game performance than teraflops...
 
The Steam Deck is as hot and power-hungry as it is because its chip is a laptop x86 one embedded into a handheld casing. I love it for what it is, but it can't even hope to compete with newer ARM chips efficiency-wise.
 
Ask yourself this: if you think a 1.5 TFLOPs handheld was feasible while keeping it within the dimensions of the current Switch, keeping it cool with a tiny fan, and selling it at a profit for $399, do you not think Steam Deck 2 or another company would have built it? Remember, I'm not talking DLSS (which is even more cost, by the way, for the tensor cores). I'm talking pure GPU compute power.
It's called ARM, and yeah, it exists. See the Snapdragon 8 Gen 2. Now, if the Steam Deck used an ARM chip instead of x86, it wouldn't have the game library to support it. Also remember the fact that PC games have overheads due to varying hardware; that's why, with certain games, you need to wait for shader compilation before you can start playing, or else it will be a stutter fest.
 
We can show you devices, but none of them make component purchases in the millions, lowering the cost per part, which allows the device to hit the price you listed (though some are actually cheaper than $500)
 
The AYN Odin 2 can hit that price point, if it ever materializes.
 
Ah, so you do understand why Drake can do Steam Deck performance on a smaller power budget 😉


Because you can't overcome vast distances with "just lower the resolution". Ask Microsoft how that's working out for them. You still need to be in the ball park to even play
1- My prediction for Switch 2 is 900 GFLOPs in mobile mode. If it's that, then it's not clawing back another 900 GFLOPs of performance through "to the metal" programming. I thought this was disproven when the PS4 launched and games like GTA V ran at a higher resolution with a more stable framerate on a PC using a 750 Ti, which was 1.4 TFLOPs vs the PS4's 1.84 TFLOPs. This "to the metal" rhetoric that's bandied about forums hasn't been nearly as true as some like to believe for the past decade. It definitely existed during console generations prior to the PS4, but mainly due to the exotic hardware found in consoles like the PS2/PS3, with PC hardware having to essentially emulate their specific custom console hardware to get even on-par results, never mind better ones.

2- You can still claw back massive amounts of performance in modern games by lowering the native rendering resolution. Why do you think dynamic resolution scaling and image reconstruction even exist? Or the fact that you can run a modern game on PC at 4K, look at the framerate, then lower it to 1440p and see up to 50% gains in performance simply by lowering the native resolution. It doesn't always work, obviously, when a game is really pushing CPU performance or is bottlenecked by RAM amount or memory bandwidth (Gotham Knights as a recent example, and it's probably the reason Starfield is 30fps on console: they've hit their CPU ceiling). If it were GPU related, there would 100% be a 1080p/60fps mode in Starfield on Series X, like, you know, 99% of PS5 games, because their performance scales directly with rendering resolution, because most modern AAA games are built around GPU scaling. Which is why I don't believe Nintendo will go anywhere near 1.5 TFLOPs in mobile mode; they'll simply use less GPU silicon, use half of what they would have along with tensor cores, then use those to upscale the image while also having the option for some light ray tracing.

I've read your posts over the years and greatly enjoyed them, so I know you should know all of the above. It feels like you're arguing in bad faith. Apologies if I'm wrong.
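(The raw pixel math behind point 2, for what it's worth; the 60% "resolution-bound" share of the frame is a made-up illustrative number, chosen because it happens to reproduce the ~50% gain described above.)

```python
# Dropping 4K -> 1440p cuts pixel work by 2.25x, but only the resolution-bound part of the frame scales.
def pixels(w, h):
    return w * h

pixel_ratio = pixels(3840, 2160) / pixels(2560, 1440)   # 2.25x fewer pixels at 1440p

resolution_bound = 0.6   # illustrative: fraction of frame time that scales with pixel count
frame_time_1440p = (1 - resolution_bound) + resolution_bound / pixel_ratio
print(f"Pixel ratio: {pixel_ratio:.2f}x")
print(f"1440p frame time: ~{frame_time_1440p:.0%} of 4K, i.e. ~{1 / frame_time_1440p - 1:.0%} higher fps")
```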
 
We can show you devices, but none of them make component purchases in the millions, lowering the cost per part, which allows the device to hit the price you listed (though some are actually cheaper than $500)
That's fair then. My argument would then be: are Nintendo going to release bleeding-edge hardware which struggles to come in at under $500, though, when they've just hit a massive home run by releasing a moderately (at the time) powerful hybrid console which they could sell for $349, but which even then barely lasted 2.5 hours in its flagship launch game? Also, the tensor cores aren't free to manufacture, and we know it has them, so that has to be added to the BoM.

How long is this 1.5 TFLOPs mobile GPU going to last in terms of steady performance while gaming? Does this chip even sustain its performance, or is it in a phone which throttles it when it gets too hot after 20 minutes, at which point you lose 60% of your gaming performance? And if it's not in a phone, how big is the device? We get back to the brick-like Steam Deck again. I just don't see it personally, but I will be glad to be wrong!

A device with the exact same dimensions as the current Switch which does 900 GFLOPs handheld / 1.8 TFLOPs docked with tensor cores for DLSS/RT, lasts 3 hours when mobile, costs $399 and plays every current Switch game will be a phenomenal value proposition for 99% of their target audience. There is no need to push further than that, and if there's one thing we know about corporations, it's that they do the bare minimum, especially in a post-Covid / current-war economy.
 

The Steam Deck is based on 8-7nm, while Switch 2 is speculated to be based on a 5-4nm process. The Steam Deck is based on older technology; it's as simple as that.
 
PS4 power isn’t bad, but for a late 2024 release I would love to get something a little better
Well, on pure GPU numbers it would be a PS4...in portable mode.

but that is ignoring all the features it has over the PS4:

  • Infinitely better CPU
  • More RAM overall
  • Lower-latency RAM
  • GPU features
    • Tile-based rendering (a big one)
    • Hardware VRS support
    • Hardware-level DX12/Vulkan support, which can help make it and similar architectures run far faster than GCN2 ever could
    • Shader-level RT pipeline optimization (the CUDA cores since Turing have been tuned to run RT faster than pre-RT-era cards, even at the shader level)
    • The RT cores themselves, to hardware-accelerate RT far faster than RDNA2 can
    • Mesh shading support at the hardware level (could help a lot, especially if they introduce Lovelace's hardware-level compression/decompression of mesh shader formats, probably bringing speed/size to a better balance than Nanite)
    • DP4a support (XeSS, if it wanted to be used)
    • Tensor cores, which, while some people (looking at you, Chips and Cheese) meme on them, have a lot more capability than just DLSS/Ray Reconstruction (seemingly Neural Radiance Caching under a different name)
      • Tensor cores are a major part of the FP16 pipeline on Ampere GPUs; the key thing is that Ampere/Lovelace GPUs are effectively 1+X (X being the type of FP16 the tensor cores do), i.e. FP32+FP16 in mixed-precision mode
        • Z0m3le did more research on this than I did, but in mixed-precision mode, Drake can push past 5 TFLOPs of effective compute in docked mode. Applying that, portable-mode T239 would likely push past 2 TFLOPs (550 MHz calculated by Thraktor, so 1.6 TFLOPs FP32).
      • And then you have DLSS, RR, and the off-chance of DLSS frame generation if the OFA/pipeline/algorithm could get tuned properly (it is Orin's OFA, so it's an unknown factor)
All that stuff is things that Switch 2 would have over the PS4. And that is all in portable mode.
 
That's fair then. My argument would then be are Nintendo going to release bleeding edge hardware which struggles to come in at under $500 though when they've just hit a massive home run by releasing a moderately (at the time) powerful hybrid console which they could sell for $349 but even then barely lasted 2.5 hours in it's flagship launch game? Also the tensor cores aren't free to manufacture and we know it has them so that has to be added to the BoM.

How long is this 1.5tflop mobile GPU going to last in terms of steady performance while gaming or does this chip even sustain it's performance or is it in a phone which throttles it when it gets too hot after 20 minutes at which time you lose 60% of your gaming performance? and if it's not in a phone how big is the device? we get back to the brick like Steam Deck again. I just don't see it personally but I will be glad to be wrong!

A device with the exact same dimensions as the current Switch which is 900gflops handheld / 1.8tflops docked with tensor cores for DLSS/RT, lasts 3 hours when mobile at $399 and plays every current Switch game will be a phenomenal value proposition for 99% of their target audience. There is no need to push further than that and if there's one thing we know about corporations it's they do the bare minimum especially in a post Covid / current War economy.
Idgi… what are you trying to get across here? That Nintendo won't be able to provide customers with a significant upgrade, as discussed in this thread, at a reasonable price point? Everything seems to point towards them doing exactly that. The Switch is thicker than any phone I've seen; it's thicker than most tablets too. So I don't understand the phone comparisons.

It's not just about playing every current Switch game, though; it's being able to play new games, even at lower fidelity, that the current Switch can't handle. Of course there is a need: the Switch is getting on a bit, and tech has improved since 2015, which is almost 10 years ago.

I'm sorry, I know you're trying to downplay and set your expectations low, but there's evidence to the contrary. I guess we'll just have to wait and see.

Edit: Just as an aside on price point, you can buy a PS5 brand new for £350 in the UK. The PS5 is waaaaaaaay more powerful than the Switch 2 has the potential to be. So again, in terms of reasonableness of price, they'll 100% figure something out.
 