• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

If they could somehow engineer a way for it to work without a grip/accessory but also still support horizontal play and have it be elegant, then that might actually be enough to satisfy my craving for a new gimmick. Maybe…
Vertical gaming is extremely niche and isn't worth the manufacturing effort. I'd just be happy if Nintendo adds it back for DS games on NSO, since some third party will make a grip like they did for Switch.

It could be :)
they already tried (large worlds and open hub). that's how we got Bayonetta 3
 
In an effort to not give myself a stroke waiting for Switch 2 news… I started thinking about Switch 3! 🤪 I wonder when we’ll get more info from Nvidia about Thor. I know it won’t wind up in S3, and even the next Tegra won’t be in 3, but I want to know where the future of mobile gaming is headed. I mean, we’re nearing the limits of what shrinking nodes can do, right? I’m kinda anxious to see the specs for Thor so we can see a bit more of a roadmap. Does Nvidia do a fall conference? Or do we have to wait for the next GTC?
 
I found this interesting video about running ray-traced games on a 3050 mobile. Mind you the GPU alone is using 80w and it also has 2048 cores instead of 1536 from the T239, 16 RT cores vs 12 on Drake, while also running at higher clocks vs Drake (likely, even at 4nm), and also having higher memory bandwidth (192GB/s vs around 100GB/s for Drake, likely). If we expect something like 60% of the performance of that 3050 (no, I don't think any amount of optimization would close that gap), we could end up seeing some games running at 30fps with select ray-tracing effects at 540-720p to 1080p DLSS'd on Switch 2:
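If anyone wants to sanity-check that rough percentage, here's a quick back-of-envelope in Python. The core counts and bandwidth figures are the ones above; the clocks are assumptions (roughly a 35W boost clock for the 3050 mobile and the oft-speculated ~1.1GHz docked clock for Drake), and this only scales peak FP32 throughput, ignoring cache, drivers, and OS overhead:

```python
# Back-of-envelope: RTX 3050 mobile vs a hypothetical T239/Drake configuration.
# Drake's clock and bandwidth are speculative placeholders, not confirmed specs.

rtx3050m = {"cuda_cores": 2048, "clock_ghz": 1.5, "bandwidth_gbs": 192}  # ~35W boost clock assumed
drake    = {"cuda_cores": 1536, "clock_ghz": 1.1, "bandwidth_gbs": 100}  # assumed docked clock, ~100GB/s

def peak_fp32_tflops(gpu):
    # 2 FLOPs per CUDA core per cycle (FMA), the usual peak-FP32 formula
    return gpu["cuda_cores"] * gpu["clock_ghz"] * 2 / 1000

compute_ratio = peak_fp32_tflops(drake) / peak_fp32_tflops(rtx3050m)
bandwidth_ratio = drake["bandwidth_gbs"] / rtx3050m["bandwidth_gbs"]

print(f"3050 mobile: {peak_fp32_tflops(rtx3050m):.2f} TFLOPS | Drake (assumed): {peak_fp32_tflops(drake):.2f} TFLOPS")
print(f"compute ratio: {compute_ratio:.0%} | bandwidth ratio: {bandwidth_ratio:.0%}")
```

That lands in the same ballpark as the ~60% guess, with the caveat that cache, console-level optimization, and the lack of Windows overhead could pull real results in either direction.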



Edit: also some benchmarks of the 3050 at 35w for good measure:

 
In an effort to not give myself a stroke waiting for Switch 2 news… I started thinking about Switch 3! 🤪 I wonder when we’ll get more info from Nvidia about Thor. I know it won’t wind up in S3, and even the next Tegra won’t be in 3, but I want to know where the future of mobile gaming is headed. I mean, we’re nearing the limits of what shrinking nodes can do, right? I’m kinda anxious to see the specs for Thor so we can see a bit more of a roadmap. Does Nvidia do a fall conference? Or do we have to wait for the next GTC?
I don't think Thor will be a good indication of what the future will look like. It'll be 5nm most likely, and built on Lovelace - but if you remove the gaming specific advances out of Lovelace you basically get Ampere. In other words, if you extrapolate a gaming version of Thor you'll get... Drake.

We'll need Thor Next to see what a Tegra chip backed by Blackwell would look like. Rumors are that Blackwell will be the first major rethink of the GPU pipeline since Tesla, so there might be some interesting stuff in there.
 
A thing I've been trying to communicate for a while, this seems like a good moment.

Stop trying to compare the Switch NG's power to other consoles, you will make yourself dizzy.

Gamers like to power rank things, tech nerds like to compare technologies, it's all understandable. And for the most part, they're not wrong! If you weren't busy being a fanwanker, you could pretty easily see at launch that the PS4 was going to be a better console, graphically, than the Xbox One. But here is the dirty secret - that's because, within a generation, GPUs are basically the same design.

The N64, the GameCube, the Wii, the Wii U - all had GPUs designed by the same team. You know what other GPUs they designed? The Xbox 360, the PS4, the Xbox One, the PS5, the Xbox Series - in the 21st century, the same team has designed every GPU in every major console, save 3.

Is it surprising that these devices have been easy to compare? But it's not just that, Nvidia and AMD heavily influence each other on their designs. The GPU in the Switch shares a lot in common with the GPU in the PS4, despite them coming from different companies.

Nvidia has changed the game. DLSS and RT both fundamentally alter how pixels get to screen. And while AMD has answers for both technologies it hasn't built the hardware around them. When we talk about raw power we're no longer talking about what Nvidia is actually bringing to the table.

On the other hand, trying to talk about Nvidia's new tech is difficult, because folks still want to cram it into the same box as the raw power. We get things like "effective FLOPS" or Nvidia's own "RT FLOPS" which are nonsensical.

Because Nvidia has changed the way pixels get on the screen, you can get huge leaps in some places, and tiny leaps - or even back steps - in others. If you try to take Nvidia's new tech and then say it turns the Switch 2 into a PS4 Pro, you are both wildly overselling and underselling what the thing can do at the same time. A motorcycle is not a freight truck is not a sports car, and trying to whittle it all down to an MPH number is missing the plot.

This is why I talk about the experiences you'll get instead of the console's power.
Outstanding explanation as always oldpuck, and the definitive reason why we should all be excited for this console.
 
In an effort to not give myself a stroke waiting for Switch 2 news… I started thinking about Switch 3! 🤪 I wonder when we’ll get more info from Nvidia about Thor. I know it won’t wind up in S3, and even the next Tegra won’t be in 3, but I want to know where the future of mobile gaming is headed. I mean, we’re nearing the limits of what shrinking nodes can do, right? I’m kinda anxious to see the specs for Thor so we can see a bit more of a roadmap. Does Nvidia do a fall conference? Or do we have to wait for the next GTC?
as soon as this thread closes im making the switch 3 speculation thread
 
I found this interesting video about running ray-traced games on a 3050 mobile. Mind you the GPU alone is using 80w and it also has 2048 cores instead of 1536 from the T239, 16 RT cores vs 12 on Drake, while also running at higher clocks vs Drake (likely, even at 4nm), and also having higher memory bandwidth (192GB/s vs around 100GB/s for Drake, likely). If we expect something like 60% of the performance of that 3050 (no, no amount of optimization would close that gap), we could end up seeing some games running at 30fps with select ray-tracing effects at 540-720p to 1080p DLSS'd on Switch 2:


1. You can't just simplify that much.
2. T239 has a lot more cache, and Lovelace does show that the cache increase to Ampere's cores does make a lot of the difference (enough to make the 4050 Laptop GPU Outpace the 3060 Laptop despite having the same core count as the 3050Ti)
3. Bandwidth isn't as big of a factor as people think it is due to reasons mentioned here , not to mention the cache, and how LPDDR has less than half the latency of GDDR so it can make repeat calls faster, or more efficient use of calls
4. The 3050 Laptop GPU actually only boosts up to a peak of 1GHz, Thraktor's clocks at 4N put 1.1GHz as the sweet spot.
5. Windows, DirectX, and NVIDIA's Windows drivers have a fuckton of overhead which Switch 2 would lack.
 
1. You can't just simplify that much.
2. T239 has a lot more cache, and Lovelace does show that the cache increase to Ampere's cores does make a lot of the difference (enough to make the 4050 Laptop GPU Outpace the 3060 Laptop despite having the same core count as the 3050Ti)
3. Bandwidth isn't as big of a factor as people think it is due to reasons mentioned here , not to mention the cache, and how LPDDR has less than half the latency of GDDR so it can make repeat calls faster, or more efficient use of calls
4. The 3050 Laptop GPU actually only boosts up to a peak of 1GHz, Thraktor's clocks at 4N put 1.1GHz as the sweet spot.
5. Windows, DirectX, and NVIDIA's Windows drivers have a fuckton of overhead which Switch 2 would lack.
Interesting.

2. Where did that info about Drake having more cache come from? I couldn't find much about that online. I did find a poster in this thread who said that there were references to 1MB of L2 cache in the NVN2 leaked files, so 4MB of cache doesn't seem to be definitive information: https://famiboards.com/threads/futu...-staff-posts-before-commenting.55/post-184917

3. I was under the impression that latency is more important to the CPU than to the GPU.

4. That doesn't seem to be true. The videos I posted show the GPU boosting to between 1.1GHz and 1.5GHz at 35w and 1.8GHz to 1.9GHz at 80w.
 
Yes I've said this opinion on an earlier post too.

For 1st party games, a lot of the Mario games are 2-5GB. This includes the sports games, Odyssey, and the platformers. Even Super Mario Wonder is only 4.5GB. Some of their larger titles are MP Remastered at 6.8GB, Animal Crossing at 10.5GB, and Pikmin 4 at 10.5GB. Hell, Pokemon Scarlet is only 6.8GB digital! I almost forgot Ultimate is on a 16GB cart, because it was actually 13.5GB at launch.


I'm not expecting 1st party Mario games to suddenly triple in size on Switch 2. I mean it's possible, but Nintendo are the masters of compression; they were never really big on textures (though that could change) or FMV cut scenes. We will have higher SSD speeds and that compression hardware on Switch 2 as well to help mitigate space.

Besides that, I think the majority of Nintendo games will be using 8 or 16GB carts on the successor, simply because they won't feel like they need more, and they do like saving money, which is ironic since they are the ones who need to push down prices of the larger cart sizes. Lol

Hopefully the next Xenoblade and open-world Zelda pave the way for 64GB carts and reduce the overall cost, so they end up being as common as 16GB carts over Switch 2's lifespan at least, cause I don't want to pay $80 lol.

We'll be lucky to get 128GB carts though.
@ILikeFeet and @Skittzo are correct in pointing out that this discussion is incorrectly assuming the memory in the game cards will remain the same, which it won't; we can be certain of that. And while we can't be certain about what will replace it, the likely tech for the successor was found by @Thraktor, and this memory was estimated by Macronix as -- in short, and notwithstanding the "ballpark" estimate and any changes since this 2017 publication -- being significantly cheaper at a capacity of 1 Tbit (125 GB) than the figures that have been thrown around for the cost of 32 GB game cards today.
 
Interesting.

2. Where did that info about Drake having more cache come from? I couldn't find much about that online. I did find a poster in this thread who said that there were references to 1MB of L2 cache in the NVN2 leaked files, so 4MB of cache doesn't seem to be definitive information: https://famiboards.com/threads/futu...-staff-posts-before-commenting.55/post-184917
Tegras can access the CPU L3 cache, and it's best practice for ARM SoCs nowadays to have a System-Level Cache (even Orin has a 4MB SysLC).

I detail this here in this post, the A78C which is pretty much confirmed to be the CPU for T239 is a cache monster relative to all other CPUs in mobile. And Tegra SoCs can share the L3 between CPU/GPU, then you get the SysLC.
yeah, the main reason bandwidth becomes important again in the modern day is when tile based rendering isn't used, or when a GPU doesn't have enough cache to fit the tiles for the image without stalling/having to go to memory.

Thankfully, T239 (should) have more cache per mm^2 than any of the next gen consoles, and definitely more cache per hardware component (rough totals are tallied in the sketch after the list).
  • T239
    • CPU:
      • Up to 1MB L1, 4MBs L2, and 8MB L3 (Shared with GPU)
    • GPU:
      • ??? L0, 1.5MB L1, 1MB L2, and 8MB L3 (Shared with CPU)
    • And assuming NVIDIA is following best practices for an ARM SoC, a System-Level Cache. The size of which can be assumed at a minimum of 4MB like Orin.
    • So, the GPU has a total of 14.5MB of Cache accessible in that scenario, the Allocation of the 8MB L3 likely changing depending on if a scene is heavily CPU or GPU bound, or if it can work in a balanced state.
  • Series S
    • CPU (Guessing based on the 4800S and Renoir SoCs of similar spec):
      • 512kb L1 split in 2 for Instruction/Data, 4MB L2, 8MB L3.
    • GPU (Extrapolating from the RDNA2 Architecture Guide):
      • 320KB L0, 512KB L1 (Assuming 4 Shader Arrays), 2MB L2
    • So, the GPU only has its own cache to work with and lacks an L3 or SysLC to draw from. So, it's stuck with its ~2.8MB of cache, and the CPU is missing half a meg of L1 before you get to the lack of SysLC.
  • PS5
    • CPU (Guessing mainly from the 4700S, as the PS5's CPU looks more like desktop Zen 2 without chiplets than the 4800S, which is just Renoir; it likely isn't too dissimilar):
      • 256KB L0, 512KB L1, 4MB L2, 8MB L3
    • GPU:
      • 648KB L0, 1MB L1, 2MB L2
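To make that comparison easier to eyeball, here's a quick tally of the GPU-accessible figures from the breakdown above (the T239 line leans on the speculative shared L3 plus an assumed 4MB SLC, so treat it as a best case):

```python
# Quick tally of GPU-accessible cache from the (partly speculative) figures above, in MB.
gpu_cache_mb = {
    "T239 (speculative)": 1.5 + 1 + 8 + 4,    # L1 + L2 + shared L3 + assumed 4MB SLC = 14.5
    "Series S":           0.3125 + 0.5 + 2,   # 320KB L0 + 512KB L1 + 2MB L2
    "PS5":                0.633 + 1 + 2,      # ~648KB L0 + 1MB L1 + 2MB L2
}
for name, total_mb in gpu_cache_mb.items():
    print(f"{name:>20}: {total_mb:.2f} MB")
```

Whether the GPU would actually be allowed to lean on the shared L3 and SLC like that is disputed a bit further down, so the 14.5MB total is an upper bound rather than a given.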


3. I was under the impression that latency is more important to the CPU than to the GPU.
It is for the most part, but in ray tracing applications it actually seemingly helps a lot. It seems to be the reason why the Steam Deck can actually ray trace at roughly the same rate (adjusted for CU count) as the Series S for the most part, despite its reduction in bandwidth.

But consoles just do memory differently than PCs; you can optimize far more around the memory structure and whatnot. It's part of why the Xbox One, despite on paper being immensely weaker than the PS4, could keep up to some degree. The ESRAM+DDR3 system it had, despite being extremely funky and on paper much slower, did work to close a lot of that gap with the GDDR5 in the PS4.

4. That doesn't seem to be true. The videos I posted show the GPU boosting to between 1.1GHz and 1.5GHz at 35w and 1.8GHz to 1.9GHz at 80w.
May be some custom tuning on the part of the laptop manufacturer, either that or something with MaxQ... or the laptop misreading its wattage.

The 3050M at 35W is officially rated for 713MHz base and 1.05GHz boost (MaxQ). You need to hit 45W to hit 1.3GHz.

And either way, the 1.1GHz number from Thraktor was calculated based on a ~15W target for T239.

And regardless, GA10F in T239 would have different performance scaling for multiple reasons even if it were on 8N (somehow), due to the massive increase in cache accessible to it, the large reduction in overhead, and the reduction in latency overall from being part of an SoC rather than a dedicated component.

A final note on the 3050M (Which extends to the 6500M), a lot of testers end up getting it into Memory Bound Scenarios due to how tight 4GB of VRAM is in general, especially when testing Ray Tracing.

Rumors indicate Switch 2 may actually have more memory than the Series S, so assuming 12GB as the reasonable middle ground (it would allow 16GB devkits pretty easily by just having two 8GB modules rather than two 6GB modules), that would be extra memory afforded to it to help break through memory limits that PC titles on 4GB GPUs usually run into without careful tinkering.
 
I was curious about the feasibility of fully ray-traced titles on Drake given the advantages still technically present for DLSS over FSR on consoles, DLSS 3.5, and Nvidia's generally better ray tracing performance. DLSS 3.5 seems to help optimize parts of the pipeline that aren't improved by ray tracing hardware, such as denoising, but BVH building each frame still seems like it's a pretty massive CPU bottleneck. I am aware of multiple parallel GPU implementations of BVH construction, such as this one by Nvidia https://research.nvidia.com/sites/d...Parallel-Construction/karras2013hpg_paper.pdf, but I can't seem to find any indication of its usage outside of research. Are games at this point still relying on CPU-driven BVH construction, and is there any movement toward doing it on the GPU?

The second question is a lot more hypothetical, especially without knowing memory and clock speeds, but in theory I could see ultra-performance mode being an acceptable compromise for getting very ray tracing heavy titles on Drake. But with Cyberpunk Overdrive on 2080 Tis (admittedly, only in balanced mode) often failing to go above 19 fps on average with the mod (which I feel, at that point, is stripping away almost all of the point of fully ray-tracing the game to begin with), even with a target of 1080p and DLSS 3.5, it seems outright impossible for Drake to run the game at 30fps no matter what optimizations Nintendo does on the API end, right? I do wonder if we could see them try to give that treatment to more reasonable games though, say current Switch titles...
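For anyone curious what the GPU-friendly approach in that Karras paper actually hinges on: it reduces most of the build to a parallel sort of per-primitive Morton codes, after which the hierarchy can be emitted largely independently per node. Here's a minimal CPU-side Python sketch of just the Morton-code step (illustrative only, with made-up centroid data; a real LBVH build runs this on the GPU):

```python
# Minimal sketch of the Morton-code step behind Karras-style parallel (LBVH) builds.
# Real implementations run the sort and the hierarchy emission on the GPU; this just
# shows why it parallelizes so well: most of the work is a sort over per-primitive keys.

def morton3d(x, y, z, bits=10):
    """Interleave the top `bits` bits of x, y, z (each in [0, 1)) into one Morton code."""
    xi, yi, zi = (int(v * (1 << bits)) for v in (x, y, z))
    code = 0
    for i in range(bits):
        code |= ((xi >> i) & 1) << (3 * i + 2)
        code |= ((yi >> i) & 1) << (3 * i + 1)
        code |= ((zi >> i) & 1) << (3 * i)
    return code

# Hypothetical primitive centroids, already normalized to the scene's bounding box.
centroids = [(0.10, 0.20, 0.90), (0.80, 0.70, 0.10), (0.11, 0.21, 0.88), (0.50, 0.50, 0.50)]

# Sorting by Morton code clusters spatially nearby primitives next to each other,
# which is what lets the tree be generated largely in parallel afterwards.
order = sorted(range(len(centroids)), key=lambda i: morton3d(*centroids[i]))
print("primitive order after Morton sort:", order)
```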
 
I was curious about the feasibility of fully ray-traced titles on Drake given the advantages still technically present for DLSS over FSR on consoles, DLSS 3.5, and Nvidia's generally better ray tracing performance. DLSS 3.5 seems to help optimize parts of the pipeline that aren't improved by ray tracing hardware, such as denoising, but BVH building each frame still seems like it's a pretty massive CPU bottleneck. I am aware of multiple parallel GPU implementations of BVH construction, such as this one by Nvidia https://research.nvidia.com/sites/d...Parallel-Construction/karras2013hpg_paper.pdf, but I can't seem to find any indication of its usage outside of research. Are games at this point still relying on CPU-driven BVH construction, and is there any movement toward doing it on the GPU?

The second question is a lot more hypothetical, especially without knowing memory and clock speeds, but in theory I could see ultra-performance mode being an acceptable compromise for getting very ray tracing heavy titles on Drake. But with Cyberpunk Overdrive on 2080 Tis (admittedly, only in balanced mode) often failing to go above 19 fps on average with the mod (which I feel, at that point, is stripping away almost all of the point of fully ray-tracing the game to begin with), even with a target of 1080p and DLSS 3.5, it seems outright impossible for Drake to run the game at 30fps no matter what optimizations Nintendo does on the API end, right? I do wonder if we could see them try to give that treatment to more reasonable games though, say current Switch titles...
Cyberpunk Path Traced is probably off the table, yes.

but if you really wanted a fully ray-traced game on Drake, it's definitely possible. At a good resolution and frame rate, even. Ray tracing, at the highest-level definition, is very scalable, to the point it doesn't even fit people's expectations of what ray tracing is, rendering-wise.
 
Tensor Core, O Tensor Core, where art thou Tensor Core?

I HOPE that offloading tasks to the GPU, or indeed the Tensor Cores, is at least made more accessible on NG Switch. It could be the difference between a game being possible or not. I believe this was the case for Witcher 3 on Switch.
It is the East, and Concernt is in the Sun

Shall I hear more, or shall I speak at this?
A thing I've been trying to communicate for a while, this seems like a good moment.

Stop trying to compare the Switch NG's power to other consoles, you will make yourself dizzy.

Gamers like to power rank things, tech nerds like to compare technologies, it's all understandable. And for the most part, they're not wrong! If you weren't busy being a fanwanker, you could pretty easily see at launch that the PS4 was going to be a better console, graphically, than the Xbox One. But here is the dirty secret - that's because, within a generation, GPUs are basically the same design.

The N64, the GameCube, the Wii, the Wii U - all had GPUs designed by the same team. You know what other GPUs they designed? The Xbox 360, the PS4, the Xbox One, the PS5, the Xbox Series - in the 21st century, the same team has designed every GPU in every major console, save 3.

Is it surprising that these devices have been easy to compare? But it's not just that, Nvidia and AMD heavily influence each other on their designs. The GPU in the Switch shares a lot in common with the GPU in the PS4, despite them coming from different companies.

Nvidia has changed the game. DLSS and RT both fundamentally alter how pixels get to screen. And while AMD has answers for both technologies it hasn't built the hardware around them. When we talk about raw power we're no longer talking about what Nvidia is actually bringing to the table.

On the other hand, trying to talk about Nvidia's new tech is difficult, because folks still want to cram it into the same box as the raw power. We get things like "effective FLOPS" or Nvidia's own "RT FLOPS" which are nonsensical.

Because Nvidia has changed the way pixels get on the screen, you can get huge leaps in some places, and tiny leaps - or even back steps - in others. If you try to take Nvidia's new tech and then say it turns the Switch 2 into a PS4 Pro, you are both wildly overselling and underselling what the thing can do at the same time. A motorcycle is not a freight truck is not a sports car, and trying to whittle it all down to an MPH number is missing the plot.

This is why I talk about the experiences you'll get instead of the console's power.
Nooo make me! Time to powerscale with DBZ levels now.

I, Prince Majora's Vegeta am the T239, and I'm confident I'll beat Kakarot who is X Box Series S. That dirty Cell is PS5, and Kakarot's bumbly battle-damaged one-arm son is X series X!

Cyberpunk Path Traced is probably off the table, yes.

but if you really wanted a fully ray-traced game on Drake, it's definitely possible. At a good resolution and frame rate, even. Ray tracing, at the highest-level definition, is very scalable, to the point it doesn't even fit people's expectations of what ray tracing is, rendering-wise.
I'm actually expecting the first RT games from Nintendo to be Wii U/Switch ports. TOTK, Luigi's Mansion, and even Mario Kart are decent candidates. It won't strain Switch 2 too much using less demanding games from older generations.

TOTK would be interesting for sure. Maybe we'll see something like these but at lower resolutions (1080p or 1440p)



And with RT and DLSS, it's crazy that we will get multiple modes as well, like the rest of the current gen consoles. It's gonna be interesting. :)
 
In an effort to not give myself a stroke waiting for Switch 2 news… I started thinking about Switch 3! 🤪 I wonder when we’ll get more info from Nvidia about Thor. I know it won’t wind up in S3, and even the next Tegra won’t be in 3, but I want to know where the future of mobile gaming is headed. I mean, we’re nearing the limits of what shrinking nodes can do, right? I’m kinda anxious to see the specs for Thor so we can see a bit more of a roadmap. Does Nvidia do a fall conference? Or do we have to wait for the next GTC?
I would absolutely love to crack my knuckles and get started hammering out a wall of text on what I'd expect out of a next-next Switch, but the crystal ball's murky regarding things that far out :unsure:

On the foundry side of things, what's coming up next is the transition to Gate-All-Around (GAAFET). If successful, that should be a nice shot in the arm, but it's actually only expected to last a couple of nodes. So by 2030 or close to that, we might already be looking at the transition to forksheets. And that is also supposed to last maybe only a couple of nodes, as it's a bridge to the next-next major thing, Complementary FET (CFET). Wild times as researchers try to keep transistor scaling alive :p
Image from here: [diagram: four blocks with arrows indicating the progression of transistor architectures]


Aside from regular ol' logic scaling... the other thing I'm keeping an eye on is SRAM scaling (we keep mentioning 'cache'; cache is made of SRAM). Plain ol' SRAM scaling has kind of crashed into a wall right now. It would be nice if the transition to GAAFET also allowed squeezing a bit more improvement out of SRAM density, but really, the hope for the next great leap in SRAM would be 3D/stacked SRAM. That's a next-decade thing.

Also, maaaaay want to keep an eye on how DRAM evolves over the next decade. Isn't everybody researching 3D/stacking for that too?
 
If we assume, hypothetically of course.. that the maximum possible power transfer speed of USB-C at the time of tape-out is capable of both charging the battery and running the Docked Specs, what is the Maximum Possible wattage of the NGSwitch?
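For reference on the ceiling being asked about: USB-C Power Delivery tops out at 100W in the standard power range (20V x 5A) and 240W with the newer Extended Power Range (48V x 5A), while the original Switch's adapter is only 39W (15V x 2.6A). How much of any of that could actually feed the SoC after battery charging and the rest of the system is anyone's guess, so the split below is a purely hypothetical illustration:

```python
# USB-C Power Delivery ceilings (per the USB PD spec), plus a purely
# hypothetical split of that budget to illustrate the question above.
pd_limits_w = {"USB PD SPR (20V x 5A)": 100, "USB PD 3.1 EPR (48V x 5A)": 240}

CHARGE_RESERVE_W = 18   # assumption: watts reserved for charging the battery
OTHER_SYSTEM_W = 7      # assumption: everything that isn't the SoC (fan, RAM, storage, ...)

for name, limit in pd_limits_w.items():
    soc_budget_w = limit - CHARGE_RESERVE_W - OTHER_SYSTEM_W
    print(f"{name}: {limit}W in, ~{soc_budget_w}W left for the SoC (hypothetical split)")
```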
 
Cyberpunk Path Traced is probably off the table, yes.

but if you really wanted a fully ray-traced game on Drake, it's definitely possible. At a good resolution and frame rate, even. Ray tracing, at the highest-level definition, is very scalable, to the point it doesn't even fit people's expectations of what ray tracing is, rendering-wise.
Portal RTX is definitely beyond feasible, but it's also a very safe game for ray tracing, given its mostly consistent, cubic, enclosed environments. But something like Mario Odyssey or BotW, even at 1080p, I think could definitely also be feasible even with their more open environments, and I'd be interested in seeing if Nintendo is willing to offer upgraded versions of those games with such a massive change in rendering.

Again, stuff like reflections and GI might be far safer bets, but the real question is whether Nintendo will dive into truly ray-traced rendering for these remasters. I'd love to see it.
 
It is the East, and Concernt is in the Sun

Shall I hear more, or shall I speak at this?

Nooo make me! Time to powerscale with DBZ levels now.

I, Prince Majora's Vegeta am the T239, and I'm confident I'll beat Kakarot who is X Box Series S. That dirty Cell is PS5, and Kakarot's bumbly battle-damaged one-arm son is X series X!


I'm actually expecting the first RT games from Nintendo to be Wii U/Switch ports. TOTK, Luigi's Mansion, and even Mario Kart are decent candidates. It won't strain Switch 2 too much using less demanding games from older generations.

TOTK would be interesting for sure. Maybe we'll see something like these but at lower resolutions (1080p or 1440p)



And with RT and DLSS, it's crazy that we will get multiple modes as well, like the rest of the current gen consoles. It's gonna be interesting. :)

How major could those reworks be? I mean, ray tracing is very clearly something, but I don't see Nintendo implementing ray tracing in legacy titles and depriving their next gen offerings of the wow factor; that's my only doubt about this.
 
1. You can't just simplify that much.
2. T239 has a lot more cache, and Lovelace does show that the cache increase to Ampere's cores does make a lot of the difference (enough to make the 4050 Laptop GPU Outpace the 3060 Laptop despite having the same core count as the 3050Ti)
3. Bandwidth isn't as big of a factor as people think it is due to reasons mentioned here , not to mention the cache, and how LPDDR has less than half the latency of GDDR so it can make repeat calls faster, or more efficient use of calls
4. The 3050 Laptop GPU actually only boosts up to a peak of 1GHz, Thraktor's clocks at 4N put 1.1GHz as the sweet spot.
5. Windows, DirectX, and NVIDIA's Windows drivers have a fuckton of overhead which Switch 2 would lack.

Thank you for detailing all this. I have a couple of questions I was hoping you could answer.

How much is the fact that T239 is ARM expected to improve it vs x86 systems like this laptop? My understanding is that ARM has better performance-per-watt, but I have no idea how much.

Also, is it even remotely possible that the Switch 2 could use a PS5-style fixed power draw dynamic clock system for its docked mode? If I understand the concept correctly, it would allow them to have higher clocks in the vast majority of actual gaming scenarios, because they wouldn't need to be working around occasional power-spiking outlier scenarios.
 
Thank you for detailing all this. I have a couple of questions I was hoping you could answer.

How much is the fact that T239 is ARM expected to improve it vs x86 systems like this laptop? My understanding is that ARM has better performance-per-watt, but I have no idea how much.
Well ARM IPC is indeed far better than x86, the drawback being that it strips some redundancies/functions that are in x86 to make it smaller/more efficient.

Not necessarily saying it's worse, Switch and Apple's silicon show that much; it's just different and lacks some legacy stuff that x86 has, so code should be compiled for it specifically so it doesn't have that baggage.

Now, while IPC is nice to know, it doesn't directly help us figure out performance, as we don't know what clock Nintendo would settle for, especially as they have a pretty beefy (for what is going to fit in a glorified tablet) GPU to power, which is likely where the majority of the power savings from being on 4N would go.

Also, is it even remotely possible that the Switch 2 could use a PS5-style fixed power draw dynamic clock system for its docked mode? If I understand the concept correctly, it would allow them to have higher clocks in the vast majority of actual gaming scenarios, because they wouldn't need to be working around occasional power-spiking outlier scenarios.
Oh 100%, ARM has had power shifting between cores for years (DynamicIQ) to allow specific cores to be set to different clock speeds depending on the scenario. And NVIDIA has Dynamic Boost (Part of their MaxQ Suite which I do feel will be integrated to some extent in Switch 2) which acts similarly but between power delivery on the CPU and the GPU, and NVIDIA GPUs have been able to opportunistically boost for years too.

For example, you have 1 Core handling the OS in the background running at like, 500MHz, then the other 7 cores can use the power provided to it to reach 1.5GHz (a hypothetical, not sure what clocks the CPU would be able to hit at peak for thermal/power reasons).

With that, and Dynamic Boost, in a game Scenario where the game hits a region that is CPU bound, it may divert power to the CPU to allow it to clock higher and mitigate or even prevent an FPS drop perceivable to the player (The GPU drop would be masked by DRS/The dev having the internal Framerate be high enough to handle a drop to GPU frequency in that scenario)

Or another scenario where a dev just runs Switch optimized CPU code on Switch 2, A78Cs can probably run it 1:1 on those 4 Cores at the same clocks as OG Switch if not lower. So you'd have 3-4 cores worth of power freed up to go back to the GPU to allow it to push a higher clock.
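As a toy illustration of that shared-budget idea (every wattage, clock, and scaling factor below is invented purely for the example, none of it is a real T239 figure), the scheduler is effectively re-splitting one power budget between CPU and GPU every interval:

```python
# Toy model of one fixed power budget shared between the CPU and GPU.
# Every number here is invented purely to illustrate the shifting behaviour.

TOTAL_BUDGET_W = 15.0

def split_budget(cpu_load, gpu_load):
    """Split the budget proportionally to instantaneous load (both in 0..1)."""
    total = (cpu_load + gpu_load) or 1.0
    cpu_w = TOTAL_BUDGET_W * cpu_load / total
    return cpu_w, TOTAL_BUDGET_W - cpu_w

def clock_from_power(watts, base_ghz=0.6, ghz_per_watt=0.12):
    # Pretend clocks scale linearly with allotted power (they don't, really).
    return base_ghz + watts * ghz_per_watt

scenarios = {
    "GPU-bound vista": (0.3, 1.0),
    "CPU-bound crowd sim": (1.0, 0.4),
}
for scene, (cpu_load, gpu_load) in scenarios.items():
    cpu_w, gpu_w = split_budget(cpu_load, gpu_load)
    print(f"{scene}: CPU {clock_from_power(cpu_w):.2f} GHz ({cpu_w:.1f}W), "
          f"GPU {clock_from_power(gpu_w):.2f} GHz ({gpu_w:.1f}W)")
```

The real mechanisms described above are far more sophisticated, but the upshot is the same: power follows whichever side is the bottleneck instead of both sides being provisioned for their worst case.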
 
2. T239 has a lot more cache
I’m sorry, I have to log in to actually mention this and correct this erroneous statement, but this is incorrect. T239 has the same level of proportional cache as the other desktop ampere-based products minus tegra ORIN, which is made for a completely different sector.

I regret ever mentioning that the GPU in Tegras can access the CPU L3 $, because you don't always want that. Just because it can theoretically access the cache due to another SoC from the same company doing it does not mean you want it to access the cache all the time. The L3$ served that function on the Tegra Xavier because that thing had no system-level $.

If you have the GPU consistently hitting the L3$ for access to the large memory pool, you are also conflicting with the CPU. AMD avoided doing this for exactly that design reason, because it would create too much traffic. Intel moved away from doing this because it can congest CPU traffic.

Nvidia did it for one SOC and one SOC alone.

Please, I beg of you to please stop telling these people that the GPU can access the L3$ and that it has this proto-Lovelace thing or whatever because it’s not true, because 1) you don’t know how the design of the chip is, and 2) you’re going to mislead and tell them something that is completely off with information that you cannot verify at all.

For all we know, NVidia removed the feature to reduce the CPU having so much traffic congested by the GPU constantly requesting for something. This would improve the CPU performance rather than have what it requests bumped to the RAM and having to wait for that data to come to it, and the RAM isn’t low in latency either.

It’s setting unrealistic expectations and like I said I regret ever mentioning ever that the GPU can access the CPU cache on Tegras. It’s not always a good thing to even have that anyways, it can actually be detrimental to performance, and for a console that does a lot of heavy, graphical work loads for its design, it will be hitting that cache frequently and this does intervene with the CPU.

That is all and I don’t want to talk about this at all anymore.



And please don’t oversell the SLC either. These people are already getting different attributions about the device and they haven’t even seen it yet.

Anyway I’m off and away, goodbye.
 
Well ARM IPC is indeed far better than x86, the drawback being that it strips some redundancies/functions that are in x86 to make it smaller/more efficient.

Not necessarily saying it's worse, Switch and Apple's silicon show that much; it's just different and lacks some legacy stuff that x86 has, so code should be compiled for it specifically so it doesn't have that baggage.

Now, while IPC is nice to know, it doesn't directly help us figure out performance, as we don't know what clock Nintendo would settle for, especially as they have a pretty beefy (for what is going to fit in a glorified tablet) GPU to power, which is likely where the majority of the power savings from being on 4N would go.

Yeah, most of the discussion I've had around T239 has involved pointing out that even if it stuck with the base Switch clocks it would still be really strong, so we're looking at a great minimum level of power with the potential for even more.

Oh 100%, ARM has had power shifting between cores for years (DynamicIQ) to allow specific cores to be set to different clock speeds depending on the scenario. And NVIDIA has Dynamic Boost (Part of their MaxQ Suite which I do feel will be integrated to some extent in Switch 2) which acts similarly but between power delivery on the CPU and the GPU, and NVIDIA GPUs have been able to opportunistically boost for years too.

For example, you have 1 Core handling the OS in the background running at like, 500MHz, then the other 7 cores can use the power provided to it to reach 1.5GHz (a hypothetical, not sure what clocks the CPU would be able to hit at peak for thermal/power reasons).

With that, and Dynamic Boost, in a game Scenario where the game hits a region that is CPU bound, it may divert power to the CPU to allow it to clock higher and mitigate or even prevent an FPS drop perceivable to the player (The GPU drop would be masked by DRS/The dev having the internal Framerate be high enough to handle a drop to GPU frequency in that scenario)

Or another scenario where a dev just runs Switch optimized CPU code on Switch 2, A78Cs can probably run it 1:1 on those 4 Cores at the same clocks as OG Switch if not lower. So you'd have 3-4 cores worth of power freed up to go back to the GPU to allow it to push a higher clock.

I'm just hoping they use this approach so the system could combine that power with DLSS to minimise its cost and be a really legit "4K console", at least in the sense of "4K DLSS performance mode". 25W with dynamic clocks would be tremendously better than 15W with fixed clocks like what the current system uses.

Of course I'd also like it to have a Frore AirJet Pro, so I have plenty of very silly dreams.
 
Nintendo SwitchNFlip confirmed

That would be a great concept, and it would allow playing DS games.

I think you have something there

It’s a real photo and current feature with Switch games.

It’s a Google’d image but I’ve played Ikaruga like this myself.
 
Kind of excited to think about the new Game Card format. 125GB at a nominal cost with acceptable speed could mean a generation where physical releases even of the largest games can be expected. While I'm mostly a digital person, the idea of having, I don't know, let's say Cyberpunk 2077, all on one Game Card, with no installs, is very exciting to me.
 
Stop trying to compare the Switch NG's power to other consoles, you will make yourself dizzy.


This is why I talk about the experiences you'll get instead of the console's power.
so from my dumb understanding, it's best to "stop" comparing the hardware (based on what we know so far at least) to any other hardware on the market and instead think on how well a multiplat game would work there? (be it previous gen game [PS4 & XONE] or a current gen game [PS5 & XSX])
 
so from my dumb understanding, it's best to "stop" comparing the hardware (based on what we know so far at least) to any other hardware on the market and instead think on how well a multiplat game would work there? (be it previous gen game [PS4 & XONE] or a current gen game [PS5 & XSX])

We already know that Nintendo are wizards at optimizing their games; they don't even compare with the rest of the industry. And we haven't heard/seen anything from their end regarding the next gen games, so they will surprise us once again.
 
If this new tech for cartridges does get confirmed, I wonder how this will affect retail prices.

These cartridges can't be cheaper than DVDs and games printed on those have already gone up in price with the start of the new generation.

I wonder if we can see games costing 80 USD at retail and 60-70 on the eShop.
 
Portal RTX is definitely beyond feasible, but its also a very safe game from ray tracing, given its mostly consistent, cubic, enclosed environments. But something like Mario Odyssey, BoTW, even at 1080p, I think that could definitely also be feasible even with their more open environments and I'd be interested in seeing if Nintendo is willing to offer upgraded versions of those games with such a massive change in rendering.

Again, stuff like reflections, GI, might be far safer bets, but the real question is will Nintendo dive into truly ray-traced rendered games for these remasters. Id love to see it.
Upgrading these games, nah, the time it takes doesn't make it feasible. Maybe stuff like Splatoon 3 getting ray-traced shadows, AO, or reflections.

If this new tech for cartridges does get confirmed, I wonder how this will affect retail prices.

These cartridges can't be cheaper than DVDs and games printed on those have already gone up in price with the start of the new generation.

I wonder if we can see games costing 80 USD at retail and 60-70 on the eShop.
Until the rest of the market pushes for $80 games, Nintendo won't. Publishers will just use smaller cards and tell people to download the rest to save money.
 
Kind of excited to think about the new Game Card format. 125GB at a nominal cost with acceptable speed could mean a generation where physical releases even of the largest games can be expected. While I'm mostly a digital person, the idea of having, I don't know, let's say Cyberpunk 2077, all on one Game Card, with no installs, is very exciting to me.

Sounds like something extremely unlikely to happen apart from potentially very unique instances.
 
If this new tech for cartridges does get confirmed, I wonder how this will affect retail prices.

These cartridges can't be cheaper than DVDs and games printed on those have already gone up in price with the start of the new generation.

I wonder if we can see games costing 80 USD at retail and 60-70 on the eShop.
Sounds like something extremely unlikely to happen apart from potentially very unique instances.
The tech is there; it was announced by Macronix themselves (who have manufactured Nintendo's game cards for the past couple of decades or so). The real question is exactly how cheap it is compared to the existing game card format.
 
Kind of excited to think about the new Game Card format. 125GB at a nominal cost with acceptable speed could mean a generation where physical releases even of the largest games can be expected. While I'm mostly a digital person, the idea of having, I don't know, let's say Cyberpunk 2077, all on one Game Card, with no installs, is very exciting to me.
Sounds a bit too good to be true. Waiting for corroboration from dev sources before I celebrate.
 
Sounds like something extremely unlikely to happen apart from potentially very unique instances.

Why? Nobody in the 3DS era would’ve said that we’re going to be stuck with 2GB or 4GB game cards. Macronix invests money and resources into higher capacities, viable for both consumers and publishers.
 
I think Oldpuck was trying to say that it won't be "easily" or "instantly" capable of any Series S level game, rather it will take a lot of special optimizations and work to get a game from this gen running well.

After which yeah it might wind up being DLSS'd to a better or similar resolution. I'm guessing overall CPU capability will be the main issue with ports from current gen but it should be a much lower gap than we saw with Switch 1 and last gen.
This is what I’m expecting. The same “miracle port” type experiences from Series S down to Switch 2 except this time we won’t have to play in blur-o-vision for it to be possible due to DLSS taking the probable 600p resolution up to 1080p or higher when docked.

If people find that resolution ridiculous remember that there are Series S games that drop as low as 540p before using FSR to improve image quality.

A 4 TFLOP GPU for state-of-the-art current AAA games really isn't enough for people who want a high-end visual experience unless you're targeting 30fps like Starfield on Series S (most games target 60 nowadays).
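For context on what those internal resolutions mean in DLSS terms, the standard DLSS presets render at fixed fractions of the output resolution; the snippet below just works out the implied internal resolutions for a 1080p docked target (the preset scale factors are NVIDIA's published ones, the 1080p target mirrors the example above):

```python
# DLSS preset render scales (per axis) and the internal resolutions they imply
# for a 1080p docked output, as in the example above.
dlss_presets = {
    "Quality": 0.667,
    "Balanced": 0.580,
    "Performance": 0.500,
    "Ultra Performance": 0.333,
}
out_w, out_h = 1920, 1080
for name, scale in dlss_presets.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{name:>17}: renders ~{w}x{h} -> outputs {out_w}x{out_h}")
```

A 600p-to-1080p jump sits between the Performance and Balanced presets, which is comfortably within what DLSS is designed to handle.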
 
There is some nuance there, but yeah. To vastly oversimplify you can think of the Switch as like 1/5th of a PS4 in terms of GPU performance, and the NG as, like, 1/3rd of a PS5. That's a very silly way of thinking of things, but it does help you sort of see that the gap is closing, but the gap isn't tiny.

On the other hand, if we look at the CPU, the story inverts. The gap between NG and current gen is larger than it was between Switch and last gen.


I think it's multi-layered. Ultimately, the bottleneck for a port will always be cash and time. Randy Linden put Quake on the GBA, for goodness' sake. The number of miracle ports will not just be about hardware, it will be about the number of Switch NG units sold. Better hardware makes port costs go down. But higher sales make expensive ports profitable. So don't consider any of these problems "insurmountable".

I tend to think that CPU is going to be an issue. Not the only issue, and not an issue for every game, but Starfield and A Plague Tale: Requiem are both games that tax next gen CPUs to their limit to enable the core gameplay.

Then there are games like Gotham Knights - a game that doesn't so much stress the CPU for gameplay reasons, as use it as a crutch. I'm not defending that game, but there will be more of those. And there will be games like the Life is Strange series - games that aren't impossible on smaller hardware, but are only economically viable because a small team can just use Unreal Engine defaults for everything, and throw lots of CPU power at mocap'ed animation, and not pay a dozen programmers to optimize the engine.

I also think the GPU will start becoming more and more of an issue. We're coming out of the long cross-gen period, we're going to see more and more 30fps games, more and more low-res/high reconstruction games. We're going to see more games skip Xbox because of Series S + Weak Sales, and we're going to see nigh-unplayable Series S versions.

At some point devs will be competing with each other to deliver better and better looking experiences on the same hardware, and they will start deploying the sorts of cuts for "low end hardware" on the current gen consoles to do it. AAA game development that starts today will be targeting a 2028 release date. That will only be the 3-4th year of the Switch NG, but it will be the cross-gen period for the Playstation Six.

This all sounds like Nintendoom, and I don't mean that. I actually think I'm pretty optimistic! Last gen consoles were a little weak, relative to where technology was at the time. They had very bad CPUs, and very modest GPUs. Switch was able to capitalize on that, offering a more modern GPU despite the lack of power, and a CPU which actually started to get close to what the then-current consoles were doing. Those were the aces-in-the-hole for making games like The Witcher III possible.

The PS5 and the Series X aren't just more powerful consoles than last gen, they're more powerful relative to their era. AMD was now top-of-the-heap on CPU tech, and if you pull up any list of "best graphics cards in 2020" and compare specs, the consoles are competitive. Not to mention the forward thinking bells and whistles - SSDs, custom decompression hardware, 3D audio engines.

These consoles don't have the sorts of weaknesses that give NG as much "catchup" room as the Switch had, and yet - Nintendo seems to be delivering. The CPU gap was going to get larger, but Nintendo is keeping it from becoming massive. The GPU gap is getting smaller, despite the 10x leap that the Series X made over the Xbox One. DLSS 2 and Nvidia's RT solution are more forward thinking than the AMD counterparts. Nintendo seems to be keeping pace with storage speed and decompression hardware.
To be silly and simple, Switch's GPU is 1/3rd of the PS4's. 393 GFLOPs of Maxwell + mixed precision is more than 1/3rd of a GTX 750 Ti, which was the PC counterpart to the PS4 early on. Drake is still closer to PS5; it really depends on whether developers pursue mixed precision with Drake and don't do the same with PS5, but a lot of Drake's accelerators run in parallel to its CUDA cores this time, which means there are more untapped resources for Drake to use.

The A78C also has higher performance per clock relative to Zen 2 than the A57 did relative to Jaguar, with a similar clock difference expected.

I'd also suggest that RAM and Storage, not GPU performance, held back Switch's visuals the most, that is where muddy textures on Switch really come from, which is easily the big negative with Switch comparisons. Switch 2 is seemingly side stepping this problem with 12GB RAM.

So yes, if Switch 2 is optimized for mixed precision, we are looking at closer to half the PS5's performance. By that I simply mean that if a game targets 4K60 via FSR on PS5, Switch 2 should be able to run similar settings at 4K30 via DLSS, even if Switch 2 is rendering at a lower internal resolution, because DLSS > FSR. If both are pushing 30FPS or 60FPS, there is much less room to keep up with the PS5, which is where Switch 2 could see render and output resolutions drop further than PS5 would go, and it would still need to dial back some settings.

Switch 2 isn't as powerful as the PS5, that isn't the point of this post and if optimizations in mixed precision are not made like they are on the Switch, or if developers use mixed precision on PS5 for more than just upscaling, the gap will be ~3 times Switch 2, and that's OK as well. Switch 2 is exceeding our expectations, it's vastly more powerful than our beliefs when these threads were started, and somehow Nintendo and Nvidia are making a next generation console that you can take on the go. Wild stuff.
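A quick worked version of those ratios, using public peak-FP32 figures for Switch, PS4, and PS5 and a placeholder assumption for Drake (nothing here is a confirmed Switch 2 spec):

```python
# Back-of-envelope FLOPS ratios behind the comparisons above.
# Drake's clock is a placeholder assumption; the other peaks are public figures.

switch_gflops = 393                        # Maxwell, docked FP32
ps4_gflops    = 1843
ps5_tflops    = 10.3
drake_tflops  = 1536 * 1.1e9 * 2 / 1e12    # assumed ~1.1GHz docked -> ~3.4 TFLOPS

print(f"Switch : PS4 = 1 : {ps4_gflops / switch_gflops:.1f}")
print(f"Drake (assumed) : PS5 = 1 : {ps5_tflops / drake_tflops:.1f}")
# If Drake's FP16 (mixed precision) rate is counted at double the FP32 rate,
# the paper gap roughly halves, which is the "closer to half of PS5" framing above.
print(f"Drake FP16 (assumed) : PS5 FP32 = 1 : {ps5_tflops / (2 * drake_tflops):.1f}")
```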
 
Sounds a bit too good to be true. Waiting for corroboration from dev sources before I celebrate.
Yeah I don't think we have any rumor currently saying this will be used by Nintendo, just that it's the latest offering from Macronix (officially so).

But there are reasons to expect better game cards, the main one being that they always use better proprietary cartridge/card storage in every new gen.
 
Yeah I don't think we have any rumor currently saying this will be used by Nintendo, just that it's the latest offering from Macronix (officially so).

But there are reasons to expect better game cards, the main one being that they always use better proprietary cartridge/card storage in every new gen.
Much like previous gens, I expect there to be some amount of crossover between the generations, even when it comes to Game Card technology. Small games might make more sense on "last gen" (Nintendo Switch Gen1) technology Game Cards, even if the code therein is for Nintendo Switch (Gen2).
 
Much like previous gens, I expect there to be some amount of crossover between the generations, even when it comes to Game Card technology. Small games might make more sense on "last gen" (Nintendo Switch Gen1) technology Game Cards, even if the code therein is for Nintendo Switch (Gen2).
Not if the new cards are that much cheaper.
 
Honest question:

I’m of the opinion that Nintendo will release the Switch 2 during 2H of 2024, but, the fact they’re releasing a Mario Red OLED, and now once again the MK8D Switch V2 bundle, makes me think they want to ship as many units as they can before something big comes close next year. As close as March, April 2024. I know it could just be a case of maximization of sales and wanting to empty out supply chain and store shelves, but is it possible for the Switch Next Gen to come out early next year?
 
Honest question:

I’m of the opinion that Nintendo will release the Switch 2 during 2H of 2024, but, the fact they’re releasing a Mario Red OLED, and now once again the MK8D Switch V2 bundle, makes me think they want to ship as many units as they can before something big comes close next year. As close as March, April 2024. I know it could just be a case of maximization of sales and wanting to empty out supply chain and store shelves, but is it possible for the Switch Next Gen to come out early next year?

Sure

(also Lite AC bundles too)
 