• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Part 2 of Thraktor's attempts to underclock his GPU!

So, after investigating a bit more, it turns out that the command line tool nvidia-smi (which you'll probably be familiar with if you've ever used CUDA) allows you to set fixed GPU clocks, and supports clock speeds as low as 405MHz. This is exactly what I was looking for; however, after looking at the data, it's not behaving quite as I'd like when it comes to voltages, which limits its usefulness for our purposes a bit. Anyway, I figured I'd report my findings here.
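In case anyone wants to replicate this, the workflow is roughly the following. This is just a sketch using nvidia-smi's standard clock-locking flags (it needs admin rights, and the Python wrapper is purely for convenience):

```python
# Rough sketch of the clock-locking workflow via nvidia-smi (needs admin;
# --lock-gpu-clocks / --reset-gpu-clocks are nvidia-smi's documented flags).
import subprocess

def lock_gpu_clock(mhz: int) -> None:
    # Pin both the minimum and maximum graphics clock to the same value
    subprocess.run(["nvidia-smi", "--lock-gpu-clocks", f"{mhz},{mhz}"], check=True)

def current_gpu_clock() -> str:
    # Read back the actual graphics clock to confirm the lock took effect
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.gr", "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

lock_gpu_clock(405)         # 405MHz is the lowest value it accepts
print(current_gpu_clock())  # run the benchmark while the lock is active
subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)  # restore defaults
```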

For these runs, I set the fixed clock speed on my RTX 3070 using nvidia-smi, performed a benchmark run of Metro Exodus Enhanced Edition (4K ultra settings with high raytracing and balanced DLSS), and used GPU-Z to log GPU data during the run. Then I took the average GPU chip power consumption (which excludes RAM and other GPU components) and the GPU voltage. The voltage would typically vary a bit, so I took the peak sustained voltage during the run. I also tracked the FPS reported by the Metro Exodus benchmark tool, just to have it. Here's the data I got:

Clock (MHz) | Voltage (V) | Power Draw (W) | FPS | W/SM | FPS/W
405 | 0.781 | 37.1 | 15.09 | 0.807 | 0.4067
495 | 0.781 | 41.6 | 18.40 | 0.905 | 0.4420
600 | 0.781 | 47.3 | 22.12 | 1.027 | 0.4681
705 | 0.781 | 52.3 | 25.73 | 1.138 | 0.4916
795 | 0.781 | 56.5 | 28.54 | 1.227 | 0.5056
900 | 0.781 | 61.5 | 32.05 | 1.336 | 0.5214
1005 | 0.781 | 66.0 | 35.14 | 1.435 | 0.5323
1095 | 0.781 | 70.5 | 38.09 | 1.532 | 0.5404
1200 | 0.781 | 76.1 | 41.35 | 1.654 | 0.5434
1305 | 0.781 | 79.7 | 44.44 | 1.733 | 0.5575
1395 | 0.793 | 87.6 | 46.68 | 1.905 | 0.5326
1500 | 0.837 | 102.5 | 49.71 | 2.227 | 0.4851
1605 | 0.868 | 120.4 | 52.30 | 2.617 | 0.4344
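For anyone checking the maths, the last two columns are just ratios of the logged numbers (the RTX 3070 has 46 SMs, which is where W/SM comes from), and the efficiency peak falls out directly:

```python
# How the derived columns fall out of the raw log (the RTX 3070 has 46 SMs).
SM_COUNT = 46
runs = [  # (clock MHz, avg GPU power W, benchmark FPS) from the table above
    (405, 37.1, 15.09), (495, 41.6, 18.40), (600, 47.3, 22.12),
    (705, 52.3, 25.73), (795, 56.5, 28.54), (900, 61.5, 32.05),
    (1005, 66.0, 35.14), (1095, 70.5, 38.09), (1200, 76.1, 41.35),
    (1305, 79.7, 44.44), (1395, 87.6, 46.68), (1500, 102.5, 49.71),
    (1605, 120.4, 52.30),
]
for mhz, watts, fps in runs:
    print(f"{mhz:>5} MHz: {watts / SM_COUNT:.3f} W/SM, {fps / watts:.4f} FPS/W")

best = max(runs, key=lambda r: r[2] / r[1])
print("Peak FPS/W in this data:", best[0], "MHz")  # 1305 MHz, i.e. the knee
```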

Now, if you look at the voltage column, you'll probably notice the issue. Below 1.3GHz, when I set a clock speed using nvidia-smi, the voltage doesn't drop lower than 781mV. This contrasts with the behaviour when I limit clock speeds using MSI Afterburner, where I managed to run at 1155MHz at 721mV. I added the higher-clocked runs to test this, and it looks like everything at and above 1.3GHz runs at the same voltage I see on the default voltage curve in Afterburner, but below this point nvidia-smi seems to enforce 781mV as a floor for some reason. The nvidia-smi tool itself doesn't allow you to directly control voltages, so I'm not sure if there's a way to work around this.

As a result of this voltage limitation, every clock speed below 1.3GHz consumes more power than it would at the optimal voltage. If we compare to my previous 1155MHz/721mV run, which drew about 62W, the same clock here would consume over 10W more due to the higher voltage. So effectively this data is reasonably accurate for the range of frequencies Nintendo would never use, but inaccurate in the range of actually plausible frequencies!
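To see where the "over 10W" comes from, here's a back-of-envelope check, assuming dynamic power scales with V² at a fixed clock (leakage ignored, so treat it as rough):

```python
# Back-of-envelope: dynamic power ~ f * V^2, so at the same 1155MHz clock,
# going from 0.721V to 0.781V scales the ~62W Afterburner result by (V2/V1)^2.
watts_at_721mV = 62.0
watts_at_781mV = watts_at_721mV * (0.781 / 0.721) ** 2
print(round(watts_at_781mV, 1))  # ~72.7W, i.e. over 10W extra from voltage alone
```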

In any case, this is actually a useful illustration of how hitting a voltage floor impacts power efficiency. The last column is FPS/W, which is a measure of efficiency. If you read it from the bottom up, you can see that as clocks drop from 1.6GHz down to 1.3GHz, efficiency improves quite a lot, as the reduced voltage means power consumption drops faster than performance does. However, below 1.3GHz the voltage is static, so efficiency gradually gets worse as you go below that point. You still save power by dropping clock speeds, but you're giving up proportionally more performance than you're saving in power consumption.

This is why I suggest that Nintendo might disable some SMs in portable mode. The actual peak of performance per Watt will be at a much lower frequency than we've got here, probably in the 400-600MHz range, but there is a point below which clocking down actually loses efficiency, and if they're not capable of running all 12 SMs at that peak-efficiency clock in portable mode, then they'll get better returns by disabling SMs than by clocking lower.
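To put rough, purely illustrative numbers on that trade-off, here's a toy model using a linear fit to the floor-voltage rows above (roughly 0.39W static plus 0.00103W per MHz per SM on this card); the power budget is hypothetical and Drake's curve will certainly differ:

```python
# Per-SM power at the 0.781V floor, linear fit to the 405-1305MHz rows above:
# roughly 0.39W static + 0.00103W per MHz. All Drake numbers are hypothetical.
STATIC_W, W_PER_MHZ = 0.39, 0.00103

def clock_for_budget(sms: int, budget_w: float) -> float:
    # Highest clock at which `sms` SMs fit inside the power budget
    return (budget_w / sms - STATIC_W) / W_PER_MHZ

BUDGET_W = 8.0  # hypothetical portable-mode GPU budget
for sms in (12, 8):
    mhz = clock_for_budget(sms, BUDGET_W)
    print(f"{sms} SMs -> {mhz:.0f} MHz, relative perf {sms * mhz:.0f}")
# 12 SMs -> ~269 MHz (perf ~3224); 8 SMs -> ~592 MHz (perf ~4738).
# The fixed per-SM cost is why fewer SMs at a higher clock can win.
```

The absolute numbers are meaningless for Drake; the shape of the trade-off is the point. Once voltage can't drop any further, every active SM carries a fixed overhead that clocking down can't remove.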
All of this data...
Rendered moot, because of the A2000 showing far more efficiency than this, and Orin/Drake running on their own 8nm node (assuming 8nm Drake) with an adjusted uArch due to the cache and all that.
 
All of this data...
Rendered moot, because of the A2000 showing far more efficiency than this, and Orin/Drake running on their own 8nm node (assuming 8nm Drake) with an adjusted uArch due to the cache and all that.
I don't think there's any difference between the process node used for the RTX 3070 and the process node used for Orin/Drake, assuming Samsung's 8N process node is used.
 
All of this data...
Rendered moot, because of the A2000 showing far more efficiency than this, and Orin/Drake running on their own 8nm node (assuming 8nm Drake) with an adjusted uArch due to the cache and all that.

Except the A2000 isn't showing far more efficiency than my RTX 3070; its voltage curve is basically identical to my card's, just with a lower minimum voltage (possibly due to binning, it being a more expensive card). The only sense in which it's "more efficient" is that it's clocked lower out of the box with a much more restrictive power limit. There's also no reason to believe that Orin is running on its own 8nm node, and with a transistor density basically identical to desktop Ampere cards, the evidence suggests there's no meaningful difference in the process.

Edit: And any efficiencies due to additional cache would be in reducing the RAM power consumption, which I'm not including in these figures.
 
I don't think there's any difference between the process node used for the RTX 3070 and the process node used for Orin/Drake, assuming Samsung's 8N process node is used.
Well, even then, we can see with the A2000 that cards designed for lower wattages scale far better than the 3070.
 
Except the A2000 isn't showing far more efficiency than my RTX 3070; its voltage curve is basically identical to my card's, just with a lower minimum voltage (possibly due to binning, it being a more expensive card). The only sense in which it's "more efficient" is that it's clocked lower out of the box with a much more restrictive power limit. There's also no reason to believe that Orin is running on its own 8nm node, and with a transistor density basically identical to desktop Ampere cards, the evidence suggests there's no meaningful difference in the process.
But then again, Orin and Drake are more or less on their own uArch, with the cache change being substantial versus Ampere.

That cache can't be ignored, and Orin and Drake especially are designed around a lower-wattage power envelope.
 
That cache can't be ignored, and Orin and Drake especially are designed around a lower-wattage power envelope.
What he answered is true though. The cache minimizes power draw because the GPU won't need to go out to the RAM pool as much, but for his testing it's irrelevant, as he's only testing the GPU power consumption, excluding the RAM.
 
What he answered is true though. The cache minimizes power draw because the GPU won't need to go out to the RAM pool as much, but for his testing it's irrelevant, as he's only testing the GPU power consumption, excluding the RAM.
Well, again, that doesn't change the fact that Orin/Drake are both designed for that sort of power envelope, due to them having CPUs embedded in them.

Not to mention Orin uses SEC 8N, and if Drake is 8nm it would likely use that same variant, which incurs its own power/voltage curve.
 
Part 2 of Thraktor's attempts to underclock his GPU!
...
This doesn’t actually tell us if the performance per watt would be lower in that 400-600MHz range though, right? We still don’t really know, right?
 
Well, again, that doesn't change the fact that Orin/Drake are both designed for that sort of power envelope, due to them having CPUs embedded in them.

Not to mention Orin uses SEC 8N, and if Drake is 8nm it would likely use that same variant, which incurs its own power/voltage curve.
Right. Drake will be manufactured with its own power curve (V/f, Perf/W), and the node will be tweaked to achieve its best characteristics. In that sense, you're right that we can't compare a desktop card with an optimized mobile SoC. I just don't feel right dismissing all of this interesting data because of that.
 
Right. Drake will be manufactured with its own power curve (V/f, Perf/W), and the node will be tweaked to achieve its best characteristics. In that sense, you're right that we can't compare a desktop card with an optimized mobile SoC. I just don't feel right dismissing all of this interesting data because of that.
Yeah, I'm not dismissing all of it, I just don't think it's really relevant to Drake and can't really be used as reasoning for turning off SMs.

NVN2's driver not having an 8SM mention, or any mention of SM counts other than 12, also hurts the case for them turning SMs off in portable mode.
 
This doesn’t actually tell us if the performance per watt would be lower in that 400-600MHz range though, right? We still don’t really know, right?

We don't know yet.
What Thraktor shows us is a clear demonstration of the importance of the range of, and change in, voltage with regards to power efficiency.
We don't know what the voltage floor is for Drake yet.

Edit: Or in other words: perf/watt rises as long as you can lower voltage (and remain stable). The max perf/watt point is at the lowest stable voltage.
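A rough way to formalise that, using the usual dynamic-power approximation (C is the effective switched capacitance, and performance is taken as proportional to clock f; a sketch, not a GPU-specific model):

```latex
P \approx C f V^2 + P_{\mathrm{static}}, \qquad
\frac{\mathrm{perf}}{\mathrm{W}} \propto \frac{f}{C f V^2 + P_{\mathrm{static}}}
```

While V can still drop with f, the V² term falls faster than performance does, so perf/W improves as you clock down; once V is pinned at its floor, the ratio instead rises with f (the static term gets amortised over more work), so the best point is the highest clock that still runs at the lowest stable voltage.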
 
That doesn't mean that said playable library won't get 4K enhancements or exclusive features.
And even with that, it's inevitable that many developers will make certain games "4K Switch" exclusive to ease development costs, since they wouldn't need to spend as much time optimizing for the base Switch.
A fair number of titles, from both 1st and 3rd parties, will still be developed for the base hardware, but will likely come with features to entice people to make the upgrade.
Eventually it will transition to mainly "4K Switch" exclusive titles, like the late-era GBC.

Well, every new game will have “enhancements” geared for the DLSS Switch, no doubt.

Still, it remains to be seen how exactly they will handle portable mode for this thing. It seems to me everyone is still going to have to develop a 540p-720p version of games anyway. I don't think making a normal Switch version with 4 CPU cores is really that much of a deterrent? Enough to ignore the 115 million devices? To pass up the millions more $200-$350 Switches that will still be sold over the next few years?

I'm the wrong guy to engage with about this though; I don't believe it's the current Switch's power that scares ports away. Witcher 3 is proof that almost any game from the last 5 years could have had a Switch port if the publisher thought it was worth the time/effort/resources. It's the risk/reward analysis that keeps them away for the most part. The demand for certain multiplats on the Switch platform is most likely just too low to bother; they can't compete with 1st party gaming on a 1st party gaming machine.

I don't believe publishers, certainly not Nintendo, believe demand for their games on Switch suddenly increases along with resolution/framerates. A 3rd party publisher who was already wary of demand for their game on the current Switch is suddenly going to make a 4K-Switch-only port? Why ignore the larger userbase?

There will be 3rd party publishers making specific games that will only run on the new hardware, I’m not saying there won’t be. There will always be publishers who want to experiment what can be appealing on Switch portable hardware (like the aforementioned CDPR effort).

I’m just saying it won’t be as fragmented as people expect it to be. I don’t expect any different kind of 3rd party support the next 4-5 years than we saw the last 4-5 years.

Again, the GBC launched with Nintendo exclusives. Having color in games is a huge gameplay difference from the previous monochrome. A better resolution / graphics IQ bump? Not so much. Not enough of a difference to start fragmenting your development between two different systems.
 
It seems to me that MVG is basically saying he doesn't expect back-compat which is insane. I def expect that. And no, I do not want to double dip on BOTW for a higher res/frame rate experience... I expect that to come with the price of the new console.
That's what he flat out says around the 9 min 30 sec mark. He thinks it's going to be called a Switch 4K that's a revision and not a new gen, but it won't be BC with everything, but that they'll try to patch as many games as possible.
 
I think I'm mostly in line with what MVG is speculating in the video on Spanwave. The name will focus on its function (Switch 4K), and the consumer hardware will probably be a bit less powerful than you would think after this leak and the rumors. Launch date I'm not so sure about, but if they plan to release it alongside BOTW 2 and that game makes it this year, then I would say September or October.
How they manage to do backwards compatibility will for sure be interesting.
 
That's what he flat out says around the 9 min 30 sec mark. He thinks it's going to be called a Switch 4K that's a revision and not a new gen, but it won't be BC with everything, but that they'll try to patch as many games as possible.
Man... hope he's wrong about that. What a take... I just can't see them launching it with an empty library...
 
I think I'm mostly in line with what MVG is speculating in the video on Spanwave. The name will focus on its function (Switch 4K), and the consumer hardware will probably be a bit less powerful than you would think after this leak and the rumors. Launch date I'm not so sure about, but if they plan to release it alongside BOTW 2 and that game makes it this year, then I would say September or October.
???
This is the consumer hardware.

NVN2's Driver explicitly states a 12SM GPU.

You don't just change that at this point
 
Man... hope he's wrong about that. What a take... I just can't see them launching it with an empty library...
It's a revision and part of the current Switch family, but it isn't compatible with current Switch games; Nintendo would need to patch everything, and a lot of games that aren't in ongoing dev or are too old won't be playable on the Switch 4K. Which, again, is not a new machine but a revision and part of the current Switch family.
Honestly, what a take from MVG :)
 
Those all happened before the pandemic, before consumer electronics were selling out at absurd rates. I have zero doubt that an enhanced Switch would sell everything they ship for the foreseeable future.

I mean, the Switch OLED model literally proves this.
Nintendo themselves have acknowledged that the pandemic bump for recreational consumer electronics has already diminished significantly enough in 2021 to mention it in their quarterly financial statements. So this scenario you’ve created does not align with reality.

All OLED proves is that supply does not currently meet demand. But what is the top end of that demand? Is it well over 750K per year? Because that's the difference between the currently known number of OLEDs sold (out of the total Switches expected to sell this year) and the 4.6 million figure I listed as 20% of all Switches expected to sell this fiscal year. Can you say with absolute determination and certainty that you know demand outstrips that 20% figure? Cuz I can't.
 
That's what he flat out says around the 9 min 30 sec mark. He thinks it's going to be called a Switch 4K that's a revision and not a new gen, but it won't be BC with everything, but that they'll try to patch as many games as possible.
This is a completely incoherent take from MVG. Being positioned as a revision without full BC is completely contradictory.
 
???
This is the consumer hardware.

NVN2's Driver explicitly states a 12SM GPU.

You don't just change that at this point
Just meant that we don't have the full picture. With my limited knowledge, compared to most of you, I understand that this will be a powerful chip for sure. But I'm still taking any speculation about the power of the potential hardware with a huge grain of salt as long as we are, despite the leak, still mostly in rumor territory.
 
This is a completely incoherent take from MVG. Being positioned as a revision without full BC is completely contradictory.
Incoherent was the first word I thought of too. Usually people have to pick between "it will just be a slightly upgraded Switch" and "it won't have any BC."
 
Yikes. BoTW2 and Prime 4 already releasing as gimped games?
What does this mean? BotW 2 and Prime 4 are being developed for the current Switch. If they're released on the next model, it will be as ports with merely the same kinds of enhancements as BotW saw on Switch (resolution bump, higher draw distance, etc.).
 
We're already seeing Nintendo's games hitting the limits of the current Switch. Continuing to support Mariko once the floor has been raised so much is gonna be very difficult. This won't be a Forza Horizon 5 or a Horizon Forbidden West level of difference here; that only works when you have sufficient memory bandwidth and CPU power.
Yeah
But I mean… as long as it exists, it should be fine.
If the obviously superior version is on the next Switch, then I guess it could only help migrate those who aren't satisfied with the lesser versions.

A sort of side thought.
Looking at GT7 comparisons, there's really not a lot of difference between the versions across PS4, PS4 Pro, and PS5.
Load times are the biggest boon the PS5 has over the others, and I guess raytracing would be next. Other than that, the lesser versions run really well and look fairly similar. Granted, the PS5 has to do a lot of grunt work for very little result: it's targeting 4K, using RT, and some other effects… that mostly amount to subtle polish. I imagine that might be what Switch versions might be like…

Perhaps another example would be Monster Hunter Rise on PC vs Switch. It looks much better, but not THAT much better… of course, it was probably built for Switch first and foremost… so…

Here’s that GT comparison


Here’s MHR


I think these are pretty good examples of games that need to be shared across hardware and the differences we could expect between versions.
 
Could it be that MVG knows something so he is just pretending to spout nonsense? Like plausible deniability?
If that was what was going on, wouldn't it be easier to just not go on podcasts and talk about the thing? I don't think anyone would blame a licensed Nintendo developer for not wanting to go on a podcast talking about Nintendo leaks.

It really kind of seems like MVG has two pretty strong ideas (this will only be a revision, and BC is impossible without including the original hardware due to precompiled shaders) that are contradictory and lead to a conclusion that doesn't make sense.
 
I get that some games will get "lost in translation", especially with such a large catalog of games on the eShop, but if the new hardware isn't automatically backwards compatible with 99% of the current Switch library, then that is a big shot in the foot on Nintendo's part.
 
No disagreements from me. But then again, the Cortex-A78 or the Cortex-A78C being used is probably a given since Orin's using the Cortex-A78AE for the CPU. The question is how many CPU cores are going to be used. 4? 6? 8?
Hopefully 8 A78s, or else it will be harder to catch up, CPU-wise, with the current-gen PS5 and Xbox Series S.

Somebody brought up on an earlier page that the PS5's CPU is about 8x more powerful than the PS4's Jaguar cores. And if the Switch is behind the PS4 by about 3.5x, that would put the Switch at roughly 28x less powerful than the PlayStation 5, which is hard to believe. I'd just be happy if we keep the same gap for Switch 2 vs PS5 as Switch vs PS4...

The PS5's Ryzen CPU is pretty similar in performance to the A78 per GHz, right? So something like 1.26GHz per core across 7 A78 cores would narrow the gap to around 3x less powerful than the PS5's CPU (3.6GHz per core, 7 cores for gaming, vs 1.26GHz per core on the A78). The more the merrier, of course. 1.5GHz would be amazing.
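For what it's worth, the arithmetic behind that ~3x figure (the per-GHz parity between Zen 2 and the A78, and the 7 game-usable cores on each side, are both assumptions from the post above):

```python
# Quick check of the ~3x figure, assuming rough per-GHz parity between
# Zen 2 and the A78 (a big assumption) and 7 game-usable cores on each side.
ps5_ghz_cores = 3.6 * 7     # PS5: 3.6GHz across 7 cores for games
a78_ghz_cores = 1.26 * 7    # hypothetical Drake: 1.26GHz across 7 A78 cores
print(ps5_ghz_cores / a78_ghz_cores)  # ~2.86, i.e. roughly a 3x gap
```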
 
Could it be that MVG knows something so he is just pretending to spout nonsense? Like plausible deniability?
I mean that’s possible but having any inside info and speculating seems risky. He’s not an anonymous figure and Nintendo could easily make sure he never develops for switch (legitimately) again. If he does know something, he’s far better off staying quiet.

I’d bet he doesn’t know anything and is just having fun speculating on YouTube. There’s a hungry audience and he’s got a perspective. It’ll give him more attention on his own channel.
 
If that was what was going on, wouldn't it be easier to just not go on podcasts and talk about the thing? I don't think anyone would blame a licensed Nintendo developer for not wanting to go on a podcast talking about Nintendo leaks.

It really kind of seems like MVG has two pretty strong ideas (this will only be a revision, and BC is impossible without including the original hardware due to precompiled shaders) that are contradictory and lead to a conclusion that doesn't make sense.

I would have just assumed he wouldn't take part in these conversations.
Yeah that makes sense, I was kinda thinking of it as a "4D chess" kind of angle. Like, "if they see me abstaining they'll know I know something so in essence I'll have confirmed this exists" but yeah that's a pretty silly thought.
 
Could it be that MVG knows something so he is just pretending to spout nonsense? Like plausible deniability?

He's under NDA as a developer so even if he wanted to say something he couldn't. I am almost certain he's just speculating like us.
 
That's what he flat out says around the 9 min 30 sec mark. He thinks it's going to be called a Switch 4K that's a revision and not a new gen, but it won't be BC with everything, but that they'll try to patch as many games as possible.
Nothing against MVG (well, he's the part I enjoy the least of Nate's podcasts, but that's beside the point), but this makes ZERO sense. It's as if, for some reason, the New 3DS couldn't play some previous 3DS games. A "revision" without retrocompatibility is an oxymoron.
 
Leaving aside the strange situation of a revision not being able to play its own games, mass-patching the Switch catalogue on a game-by-game basis is also an insane task for Nintendo's own first-party line-up, let alone for all the 3rd-party devs.
 
MVG used to deny a new Switch when Nate talked about it last year, prior to E3. IIRC he said he hadn't heard anything from any of his dev friends. He doesn't say that anymore; he's more on the train, which tells me he has probably heard from his friends on this. It doesn't mean there's an NDA on him, just on people he knows.
 
MVG used to deny a new Switch when Nate talked about it last year, prior to E3. IIRC he said he hadn't heard anything from any of his dev friends. He doesn't say that anymore; he's more on the train, which tells me he has probably heard from his friends on this. It doesn't mean there's an NDA on him, just on people he knows.
I wouldn't be surprised if he knows people with access to devkits, but he probably doesn't know what the full BC solution is beyond the fact that devs can patch their games. Hell, at this point only Nintendo and a few higher-ups in some third-party companies probably know the exact plan. Actual devs (as in the people who do the actual development) don't really need to know anything at this point, especially if Nintendo and Nvidia have a solution for full or near-full BC.
 
MVG used to deny a new Switch when Nate talked about it last year, prior to E3. IIRC he said he hadn't heard anything from any of his dev friends. He doesn't say that anymore; he's more on the train, which tells me he has probably heard from his friends on this. It doesn't mean there's an NDA on him, just on people he knows.
It's mostly a case of there being too much smoke for him to deny now. Mochi and I have maintained that our information is accurate, and now the leak backs us on it.
 
The PS5's Ryzen CPU is pretty similar in performance to the A78 per GHz, right?
Context matters: in single-threaded performance, an ARM A78 at 3GHz is around what the Zen 2 CPU does at 4GHz.

If it's 8 A78s at 3GHz vs 8 Zen 2 cores at 3GHz with 8 threads enabled in the latter case, then the A78 would outperform it. At 4GHz for the latter, it should be close.

But Zen 2 has SMT (1C/2T), which helps with performance, up to 50% better depending on the game. Some games scale well with SMT, others don't really see much of a benefit. The relative performance increase is completely dependent on that; it could be as little as 5% better.

It depends on the application.
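Putting the two posts together (the 3GHz-A78 vs 4GHz-Zen-2 parity claim, plus an SMT uplift anywhere from 0% to 50%) shifts that earlier ~3x estimate around, purely illustratively:

```python
# Combining the claims above: the A78 does at 3GHz what Zen 2 does at 4GHz
# (~4/3 per-GHz advantage), while SMT gives Zen 2 anywhere from ~0% to ~50%.
a78_per_ghz_vs_zen2 = 4 / 3
for smt_uplift in (1.0, 1.25, 1.5):
    gap = (3.6 * smt_uplift) / (1.26 * a78_per_ghz_vs_zen2)
    print(f"SMT x{smt_uplift:.2f}: one PS5 core ~{gap:.1f}x a 1.26GHz A78 core")
# Prints roughly 2.1x, 2.7x, 3.2x depending on how well a game uses SMT.
```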
 
Since Nintendo only apparently trimmed 4 SMs from big Orin, cutting, say, 4 CPU cores would also stand to reason. Previously I would have assumed we'd be fortunate if they just cut big Orin in half, specs-wise, for T239/Drake.
I imagine Nintendo probably won't use the Cortex-A78AE, since I don't think the Cortex-A78AE is necessarily designed with game development in mind. (I imagine the Cortex-A78 or the Cortex-A78C is a much better choice for game development.)
 
They could have to patch stuff, as the builds wouldn't work seamlessly with a totally new chip, right?

But I assume the new device could have some kind of compatibility mode that could handle it like PS and Xbox; it looks like they use emulators.
 
I imagine Nintendo probably won't use the Cortex-A78AE, since I don't think the Cortex-A78AE is necessarily designed with game development in mind. (I imagine the Cortex-A78 or the Cortex-A78C is a much better choice for game development.)
I wouldn't be surprised to see the A78AE in Drake. While it might not be made with game development in mind, it doesn't have any drawbacks against the A78.
 
Hopefully a good chunk of games like Pokemon Legends Arceus, the next Pokemon games, Zelda, and Xenoblade's worlds can be loaded into this so there are far fewer pop-in issues, granting us much larger draw distances. I know Monolith Soft will do black magic with this like they have with every Nintendo console they've touched and make Xeno3 look even more amazing. Why? Because they can, and they're a bunch of mad people that want MOAR, unlike us sane folks over here.

Ehem... hopefully it's revealed before E3 time/direct event so that other devs can showcase their games at the time of max visibility, that'd also mean it'd release in less than 6 months after announcement too. It may not happen, but I want to dream.
 
Anything is possible but that would require a fair amount of coincidences.

Last year we were told Dane was T239 and on 8nm. Nate also heard 8nm for the device that has devkits out since 2020.

This device is called Drake but is also T239, and since its closest comparison is Orin, it's still very likely 8nm. So it's extremely likely that this Drake is the same thing we heard about last year and the same thing that has had devkits out since 2020.

It's possible that somehow Drake and Dane were confused and are two separate things, but the T239 name they both have in common makes that a bit unlikely.
We still don't have confirmation of Orin being on Samsung's 8nm process. I do find it weird that Nvidia has never confirmed the manufacturing process (maybe at GTC in March, coming up soon).

I can't imagine Switch 4k, or any conventional naming schemes that other companies use. It will be something unique, or something that calls on Nintendo's own history.

"Switch Advance" incoming

Isn't this guy basically a fraud? I've seen so much shit reposted on Era and elsewhere from him that was completely false.
Jeff Grubb has confirmed some of the stuff Nick has heard in the past as well, so I wouldn't call the man a fraud; I'm sure he has some connections out there in the industry.

Since Nintendo only apparently trimmed 4 SMs from big Orin, cutting, say, 4 CPU cores would also stand to reason. Previously I would have assumed we'd be fortunate if they just cut big Orin in half, specs-wise, for T239/Drake.
The information that we were given in this leak definitely confirmed that Drake and Orin share some things in common but are different in other aspects. Orin has 16 SMs over 2 GPCs (8 SMs per GPC), but Drake only has 1 GPC with 12 SMs, so it's far from just a trimmed-down Orin at this point.

The potential clock-gating feature in Drake just might be something that Nvidia borrows for the Lovelace laptop GPUs that are supposed to launch in 2023. This could allow for great efficiency gains over previous Nvidia architectures; not to keep referencing the automotive industry, but it does in theory remind me of the engine cylinder deactivation used to improve gas mileage.
 
I wouldn't be surprised to see the A78AE in Drake. While it might not be made with game development in mind, it doesn't have any drawbacks against the A78.
However, a couple of drawbacks of the Cortex-A78AE (and the Cortex-A78) vs the Cortex-A78C: they only support up to 4 CPU cores per cluster, in comparison to 8 CPU cores per cluster for the Cortex-A78C, which I think is one reason the Cortex-A78AE cluster looks quite big in Nvidia's press die shot of Orin. And the Cortex-A78AE (and the Cortex-A78) support a max of 4 MB of L3 cache, vs 8 MB of L3 cache on the Cortex-A78C.
 
Hopefully a good chunk of games like Pokemon Legends Arceus, the next Pokemon games, Zelda, and Xenoblade's worlds can be loaded into this so there are far fewer pop-in issues, granting us much larger draw distances. I know Monolith Soft will do black magic with this like they have with every Nintendo console they've touched and make Xeno3 look even more amazing. Why? Because they can, and they're a bunch of mad people that want MOAR, unlike us sane folks over here.

Ehem... hopefully it's revealed before E3 time/direct event so that other devs can showcase their games at the time of max visibility, that'd also mean it'd release in less than 6 months after announcement too. It may not happen, but I want to dream.
Fixing the pop-in would require patches. If Drake is coming relatively soon, the latest games should get patches for them.
 
They could have to patch stuff, as the builds wouldn't work seamlessly with a totally new chip, right?

But I assume the new device could have some kind of compatibility mode that could handle it like PS and Xbox; it looks like they use emulators.
I forget the exact details, but there's something specific on the GPU side of how Switch games are built that means BC won't be automatic on Nintendo's end. However, there's no reason it should have to come to games being patched to run on the new machine; Nvidia could potentially have made some sort of native solution, or Nintendo could do something like translation or partial emulation for that one specific issue.
 
However, a couple of drawbacks of the Cortex-A78AE (and the Cortex-A78) vs the Cortex-A78C: they only support up to 4 CPU cores per cluster, in comparison to 8 CPU cores per cluster for the Cortex-A78C, which I think is one reason the Cortex-A78AE cluster looks quite big in Nvidia's press die shot of Orin. And the Cortex-A78AE (and the Cortex-A78) support a max of 4 MB of L3 cache, vs 8 MB of L3 cache on the Cortex-A78C.
While it's a benefit, it'll probably be designed around. Unlike on PCs, devs can keep latency low by manually allocating tasks to cores.
 