
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I don’t understand why you guys are expecting Series S performance from Drake.

-Same-ish node if we get 7+ or 6nm
-Probably smaller die size (Series S was ~200 mm^2)
-Less power consumption (I’m seeing 80w peak for XSS)
-RDNA 2.0 and Ampere had similar rast. performance
-Drake might be halfway between Ampere and Lovelace

These 3TF docked figures make no sense top-down.

Another sense check: Steam Deck is also on TSMC 7nm + RDNA 2.0, will likely consume more power than Drake, and hits half of Series S?

Steam Deck + DLSS performance is probably the ceiling here.

And that would be great.
 
I don’t understand why you guys are expecting Series S performance from Drake.
Most are not
-Same-ish node if we get 7+ or 6nm
-Probably smaller die size (Series S was ~200 mm^2)
-Less power consumption (I’m seeing 80w peak for XSS)
-RDNA 2.0 and Ampere had similar rast. performance
-Drake might be halfway between Ampere and Lovelace

These 3TF docked figures make no sense top-down.

Another sense check: Steam Deck is also on TSMC 7nm + RDNA 2.0, will likely consume more power than Drake, and hits half of Series S?

Steam Deck + DLSS performance is probably the ceiling here.

And that would be great.
Steam Deck is a bad comparison for power consumption numbers because x86 is far more power hungry than ARM.
 
Most are not

Steam Deck is a bad comparison for power consumption numbers because x86 is far more power hungry than ARM.


It’s not a bad comparison. It’s a starting point. It’s not like you’re gonna get 2x the total performance of the Steam Deck out of a potentially 15w Drake system because they went with ARM instead of x86. ARM isn’t magic pixie dust.

Pick some number. Maybe x86 takes 5w out of the Deck’s 15w total consumption. Say ARM runs at 3w. That’s a 40% advantage, and it just gives Drake a 2w advantage. It doesn’t matter if the real number is 1w or 4w; that’s not gonna bridge the gap.
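Here's that napkin math in Python form (every wattage below is my guess from the paragraph above, not a measured figure):

```python
# Hypothetical numbers from above: Steam Deck's x86 CPU share of its
# 15w budget vs. a guessed ARM equivalent. Purely illustrative.
deck_total_w = 15.0
deck_cpu_w = 5.0      # assumed x86 CPU share of the Deck's budget
arm_cpu_w = 3.0       # assumed ARM equivalent

advantage = 1 - arm_cpu_w / deck_cpu_w   # the "40% advantage"
delta_w = deck_cpu_w - arm_cpu_w         # watts actually freed up

print(f"{advantage:.0%} CPU advantage frees only {delta_w:.1f}w of {deck_total_w:.0f}w")
```

A big percentage advantage on a small slice of the budget is still a small number of watts.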

Some of you think Drake can match a 70w-80w home console “because ARM and DLSS”

None of these highly technical arguments make sense thinking about it top-down.
 
Thanks for the post. So from this, the only downside is bandwidth: 68 GB/s, slightly lower than the Steam Deck. Everything else beats SD, XBO & PS4, with some technology from PS5. Awesome
It should be noted that Drake with 68GB/s can get a lot more mileage out of that than the devices you mentioned (except the PS5)
 
Some of you think Drake can match a 70w-80w home console “because ARM and DLSS”

None of these highly technical arguments make sense thinking about it top-down.
As someone else said, it would be nice if you quoted whoever particularly has said this.

Personally I don't think it will produce results all that close to a series S but it'll be a lot closer than Switch was this gen.
 
It says here the screen is known to be HDR at Full HD resolution. Full HD usually means 1080p, which I'm skeptical of. I would have assumed they'd be reusing Switch OLED panels. Maybe it's just the resolution of a devkit display.

This is also saying DLSS is only used in docked mode. No DLSS in handheld mode + 1080p screen sounds not great to me, tbh.
This is the biggest thing against it for me. If DLSS is only used in the docked mode, it could make seamless switching between handheld -> docked and vice versa quite problematic. I really doubt they're going to screw up the core conceit of the console with so much on the line
 
CPU 1.5GHz
GPU portable 1.3TFLOPs, docked 2.3TFLOPs

So I heard Nate's podcast the other day and he mentioned that the Nvidia leak for T239 did not mean anything for Drake, but potentially speaking, I remember another user on the leak thread mentioning the following:

That the CPU will be more powerful than the PS4's

The GPU will be a little more powerful than the Series S

Are these good assumptions based on the Kopite leak and all the other stuff that we have gathered through the years?

That’s neither here nor there since I used the PS4 Pro to One X as a range.

Like it’ll fall somewhere between that.

@RennanNT

Thanks a lot!

Is the A78C the good CPU people were hoping for?

Base Xbox One GPU (with many improvements in architecture, obviously) in handheld mode and a 2.6TFLOPs GPU with DLSS on top in docked mode would be fantastic. I would be very pleased with those specs. Let's just hope they don't skimp on RAM and it really is 12GB like I heard, because even then it will probably mean 8-10GB for games. If they go with 8GB then we're talking 4-6GB for games...
 
It’s not a bad comparison. It’s a starting point. It’s not like you’re gonna get 2x the total performance of the Steam Deck out of a potentially 15w Drake system because they went with ARM instead of x86. ARM isn’t magic pixie dust.

Pick some number. Maybe x86 takes 5w out of the Deck’s 15w total consumption. Say ARM runs at 3w. That’s a 40% advantage, and it just gives Drake a 2w advantage. It doesn’t matter if the real number is 1w or 4w; that’s not gonna bridge the gap.

Some of you think Drake can match a 70w-80w home console “because ARM and DLSS”

None of these highly technical arguments make sense thinking about it top-down.
There's more to it than just ARM vs x86. Power consumption doesn't grow linearly with clocks like it does with the number of cores. 4x A78 running at 2GHz consumes a lot more than 8x A78 running at 1GHz.

Drake has twice the number of CPU cores and 3 times the number of GPU cores (more than the XSS, even). That, added to the ARM advantage, is how they could have similar performance on a similar node with a wattage similar to the OG Switch. And then you have headroom to raise the clocks in docked mode. I dunno if it could go as high as 3TF on 6nm, but Deck level in handheld mode and higher clocks in docked mode is possible, even on the same node.
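That cores-vs-clocks tradeoff is easy to see with a toy dynamic-power model. The V/f curve below is made up (not a real A78 curve); it only illustrates the shape of the effect:

```python
# Toy model: dynamic power per core ~ f * V^2, with voltage scaling
# roughly linearly with frequency in the DVFS range. Constants are
# arbitrary; this only shows why wide-and-slow beats narrow-and-fast.
def core_power(freq_ghz, k=1.0):
    volts = 0.6 + 0.2 * freq_ghz   # assumed V/f relation, illustrative only
    return k * freq_ghz * volts ** 2

p_4x2 = 4 * core_power(2.0)   # 4x A78 @ 2GHz
p_8x1 = 8 * core_power(1.0)   # 8x A78 @ 1GHz (same total core-GHz)

print(p_4x2, p_8x1)  # the 4-core config draws noticeably more
```

Both configs have the same aggregate 8 core-GHz of throughput, but the higher-clocked one pays the voltage penalty on every cycle.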

Here's some testing with Orin on 8nm, which is the closest we can get to Drake, and a rough idea of how much Drake's CPU/GPU at different clocks would consume on 8nm (the OG Switch would consume roughly 6W in comparison):

 
Could you quote the people in this thread making this claim in the last few pages?

As someone else said, it would be nice if you quoted whoever particularly has said this.

Personally I don't think it will produce results all that close to a series S but it'll be a lot closer than Switch was this gen.


I spent 10-15 mins going through the last 10 pages to show some effort. I'm sure I can find dozens more examples, but I'm not going to waste more time.

Expectations between Steam Deck and Series S are very common here and I argue for the reasons above that they seem unrealistic unless we argue for big die + big draw + TSMC 4nm.

Here is another example:
PS4 Slim, which launched in 2016, consumed 160w.

If we match that performance with Drake, it’s a 10x improvement per watt in 7 years.
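As napkin math, treating my 160w figure and a roughly 16w handheld budget as the inputs (the yearly rate is just the implied compound average, nothing more):

```python
# Sanity check: matching a 160w PS4 Slim (2016) inside a ~16w
# handheld budget implies ~10x perf-per-watt in ~7 years.
ps4_slim_w = 160.0
drake_budget_w = 16.0   # assumed total system draw for Drake
years = 7

improvement = ps4_slim_w / drake_budget_w
annual = improvement ** (1 / years)   # implied compound yearly gain

print(f"{improvement:.0f}x overall, ~{(annual - 1):.0%}/year sustained for {years} years")
```

That's the kind of compounding efficiency gain the optimistic case quietly assumes.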
 
There's more to it than just ARM vs x86. Power consumption doesn't grow linearly with clocks like it does with the number of cores. 4x A78 running at 2GHz consumes a lot more than 8x A78 running at 1GHz.

Drake has twice the number of CPU cores and 3 times the number of GPU cores (more than the XSS, even). That, added to the ARM advantage, is how they could have similar performance on a similar node with a wattage similar to the OG Switch. And then you have headroom to raise the clocks in docked mode. I dunno if it could go as high as 3TF on 6nm, but Deck level in handheld mode and higher clocks in docked mode is possible, even on the same node.

Here's some testing with Orin on 8nm, which is the closest we can get to Drake, and a rough idea of how much Drake's CPU/GPU at different clocks would consume on 8nm (the OG Switch would consume roughly 6W in comparison):


EDIT: nvm
 
More big red flags about the credibility of that Spanish post. The same author of the Switch 2 post is talking about an upcoming Xbox Series portable console using RDNA3. When asked about their sources or any proof, they made a feedback post but didn't actually respond.

I recommend our tech-savvy users here also look at this: https://disruptiveludens.net/anx-nintendo-switch-2/ , which just sounds like tech fanfiction.
 
I spent 10-15 mins going through the last 10 pages to show some effort. I'm sure I can find dozens more examples, but I'm not going to waste more time.

I mean, one out of those four posts you quoted is someone asking if it was an appropriate comparison, and someone else then responding no. I remember because I also responded to that post. The rest don't even mention "Series S" by name and none say 3 TF. Pretty much every time someone brings up a Series S comparison in this thread, someone else responds telling them to lower their expectations. Series S equivalent performance is not what most people here are expecting. Good on you for not wasting your time.
 
I don’t understand why you guys are expecting Series S performance from Drake.

-Same-ish node if we get 7+ or 6nm
-Probably smaller die size (Series S was ~200 mm^2)
-Less power consumption (I’m seeing 80w peak for XSS)
-RDNA 2.0 and Ampere had similar rast. performance
-Drake might be halfway between Ampere and Lovelace

These 3TF docked figures make no sense top-down.

Another sense check: Steam Deck is also on TSMC 7nm + RDNA 2.0, will likely consume more power than Drake, and hits half of Series S?

Steam Deck + DLSS performance is probably the ceiling here.

And that would be great.
3TF is definitely too optimistic, I agree. I believe we will be over 1TF portable and around 2.5TF docked.

Now, with DLSS you could end up with IQ similar to the Series S' 4TF.
 
Some of you think Drake can match a 70w-80w home console “because ARM and DLSS”

None of these highly technical arguments make sense thinking about it top-down.
If we were looking closely at watts as a sign of overall system capability, we might expect base Switch to reach 5-10% of PS4.
 
I spent 10-15 mins going through the last 10 pages to show some effort. I'm sure I can find dozens more examples, but I'm not going to waste more time.

Expectations between Steam Deck and Series S are very common here and I argue for the reasons above that they seem unrealistic unless we argue for big die + big draw + TSMC 4nm.
Those posts aren't claiming that this will be Series S level which is what I was referring to.

Yes most of us believe it will outdo the Steam Deck which for a number of reasons is extremely reasonable.
Here is another example:
PS4 Slim, which launched in 2016 consumed 160w.

If we match that performance with Drake, it’s a 10x improvement per watt in 7 years.
Again you're comparing vastly different architectures. Mobile tech in general has improved much, much faster than desktop tech.
 
3TF is definitely too optimistic, I agree. I believe we will be over 1TF portable and around 2.5TF docked.

Now, with DLSS you could end up with IQ similar to the Series S' 4TF.
2.5TF also feels too high given what I outlined above. You’re saying 1.5x Steam Deck.

But I hope you’re right.
Yes most of us believe it will outdo the Steam Deck which for a number of reasons is extremely reasonable.

Again you're comparing vastly different architectures. Mobile tech in general has improved much, much faster than desktop tech.

That wasn’t my point. Steam Deck is basically a PS4 GPU BECAUSE mobile tech moves faster.

Believing it will be stronger than the Deck despite being on the same (7nm TSMC) or a worse node (Samsung 8nm), almost exclusively because of power consumption differentials between x86 and ARM, feels very optimistic to me if we have 2w-3w of wiggle room.
 
I recommend our tech-savvy users here also look at this: https://disruptiveludens.net/anx-nintendo-switch-2/ , which just sounds like tech fanfiction.
(This is assuming Google Translate does a decent job translating.)
1) What is "ANX"?
2) The claim that T239 doesn't have RT cores is complete nonsense, considering Orin has RT cores, and the illegal Nvidia leaks do confirm and mention that T239 is a custom variation of Orin and T239 has ray tracing support.
3) The claim that T239 uses the same SMs as A100 is complete nonsense, considering the illegal Nvidia leaks do strongly suggest T239 is much closer to consumer Ampere GPUs rather than the A100.
 
Steam Deck makes a poor comparison as its own performance will be highly variable, assuming the power limit isn't touched. The CPU itself has at least 5-6 watts difference between its own floor and ceiling. So what's leftover for the GPU is also going to vary a lot. Basically, given the default power limit, you can either push the CPU hard or push the GPU hard, but you can't do both at the same time.
 
That wasn’t my point. Steam Deck is basically a PS4 GPU BECAUSE mobile tech moves faster.

Believing it will be stronger than the Deck despite being on the same (7nm TSMC) or a worse node (Samsung 8nm), almost exclusively because of power consumption differentials between x86 and ARM, feels very optimistic to me if we have 2w-3w of wiggle room.
It's not just the difference between ARM and x86; it's also the massive overhead SteamOS costs compared to Horizon on the Switch. And probably other factors that I'm personally not aware of.
 
I think if we want to really do a comparison, Tegra Orin at its highest, with everything on, consumes noticeably less than the PS4.

And that's 20% or so less, on a worse node than the Series S, with more cores and a higher GPU load on paper :p
 
Steam Deck makes a poor comparison as its own performance will be highly variable, assuming the power limit isn't touched. The CPU itself has at least 5-6 watts difference between its own floor and ceiling. So what's leftover for the GPU is also going to vary a lot. Basically, given the default power limit, you can either push the CPU hard or push the GPU hard, but you can't do both at the same time.

Okay….okay. I don’t want to be rude, but that’s a dumb distinction.

1. Drake will probably have the same power envelope
2. More power consumption by the GPU will mean less available for the CPU.
3. GPU power efficiency may not be materially better for Drake than RDNA 2.0.
4. The ARM advantage over x86 will be a low to mid single-digit number.
 
I don’t understand why you guys are expecting Series S performance from Drake.
This comes up in thread a lot and it is almost immediately stomped down every time. “You guys” isn’t the “general thread consensus”

-Same-ish node if we get 7+ or 6nm
-Probably smaller die size (Series S was ~200 mm^2)
-Less power consumption (I’m seeing 80w peak for XSS)
-RDNA 2.0 and Ampere had similar rast. performance
-Drake might be halfway between Ampere and Lovelace
ARM efficiency isn’t a magic bullet, but I think you are seriously underrating how much more power efficient ARM is than Zen 2, probably about 4x clock-for-clock

Zen 2 is an excellent arch, however, and Series S will outperform whatever actual clocks Nintendo settles on, you are correct

These 3TF docked figures make no sense top-down.
12 SMs = 1536 CUDA cores = 3072 FLOP/cycle

At original docked mode clocks, that’s 2.4 TF in docked mode. Assuming there is a clock upgrade to 1GHz, that’s 3 TF. We’ve run power numbers on Samsung 8nm (because Nvidia has made the Orin power curves available) and 1GHz seems feasible on any of the likely TSMC nodes, at ~Switch TDP

3TF is entirely possible.
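The FLOPS arithmetic above, spelled out (assuming the OG Switch's 768MHz docked GPU clock as the "original docked mode clocks"):

```python
# 12 SMs x 128 CUDA cores/SM, 2 FLOP/cycle per core (one FMA).
sms = 12
cuda_cores = sms * 128            # 1536
flop_per_cycle = cuda_cores * 2   # 3072

def tflops(clock_ghz):
    """FP32 TFLOPS at a given GPU clock."""
    return flop_per_cycle * clock_ghz / 1000

print(tflops(0.768))   # ~2.36 TF at the OG Switch's 768MHz docked clock
print(tflops(1.0))     # ~3.07 TF at a hypothetical 1GHz
```

So the 2.4 TF and 3 TF figures both fall straight out of the SM count once you pick a clock.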

Not that TF is everything. The best-case comparison between Series S and Drake, and the claim that comes up over and over again, is that Drake will enable a class of impossible ports from current-gen systems the same way Switch got PS4-era impossible ports.

The existence of DLSS allows ports that are cut down to run at 720p to still achieve 2k+ resolutions at a loss of image quality.

Another sense check: Steam deck is also on TSMC 7 + RDNA 2.0, will likely consume more power than Drake, and hits half of Series S?
Again, 4x power draw on those damn x86 CPUs, and Steam Deck is shelling out for crap loads of CPU capacity. And it hits less than half of Series S in terms of raw TFLOPS.

Also, Steam Deck is like 1.3x Switch in docked mode. The numbers get inflated because you can’t take off the controllers - the 20W draw can’t be compared to the 7W power draw with the Joy-Cons off and the battery charged.

Steam Deck + DLSS performance is probably the ceiling here.

And that would be great.

This is an apples-to-oranges comparison. Steam Deck will absolutely clean Drake’s clock in raw CPU perf. Some of that perf will be wasted on the very thick OS layer, but in CPU-bound scenarios, SD will win, and I don’t think it’s going to be close.

In handheld mode, if there isn’t a clock upgrade, Drake will run at 1.4 TFLOPS, comparable to the SD’s 1.6, and SD’s GPU isn’t paying the same OS penalty that the CPU is.

In other words I expect handheld mode to be a little shy of SD at considerably better power draw and thermals thanks to simple ARM efficiency, but falling behind in some really cranking CPU situations.

Then in docked mode, I expect the CPU situation will be the same and the GPU will be able to take advantage of higher power draw. The Series S will exceed it in TFLOPS (4 to Drake’s 2.6-3), and will absolutely slay on the CPU level, running at faster clocks and having a smaller (but not Nintendo-small) OS.
 
ARM efficiency isn’t a magic bullet, but I think you are seriously underrating how much more power efficient ARM is than Zen 2, probably about 4x clock-for-clock

4x means a 75% power reduction. That means if Steam Deck uses 5w-6w on average, then Drake would use 1.25w-1.5w. That feels too high:



But let’s go with it: That’s a 4w-5w delta.
 
4x means a 75% power reduction. That means if Steam Deck uses 5w-6w on average, then Drake would use 1.25w-1.5w. That feels too high:



But let’s go with it: That’s a 4w-5w delta.
Too high for the Zen 2 cores or for the ARM cores?
 
this is an investors' article telling you to buy Intel over Apple...
And the reference it uses for debunking the "Power Myth" is an article from 10 years ago lol
It misses the point, throwing around performance benchmarks and never really getting to the actual point... that ARM uses less electricity than x86
lol
unless I missed it... admittedly I didn't read the whole thing word for word
 
It's not just the difference between ARM and x86; it's also the massive overhead SteamOS costs compared to Horizon on the Switch. And probably other factors that I'm personally not aware of.

+ Proton compatibility layer converting Windows calls to Linux, for most Steam games. Though some games perform better on Proton.

Either way Switch 2 will never need to deal with that outside of emulation, no need to 'brute force' performance.
 
The whole APU in the Steam Deck (just the chip) consumes what the whole Switch does in docked mode, for what it’s worth, so again the comparison isn’t quite right here.


With respect to FLOPS, note that a console like the Switch or Switch 2 has very low and thin layers of abstraction that really let developers use more of it, getting more out of the hardware.

The Steam Deck relies on Vulkan, which is way higher level than NVN, NVN2, GNM, Metal, DX12U, etc.

And it is never coded to it.


If it’s 1.6TFLOPs on the Steam Deck and 1.6TFLOPs on Drake, Drake will get way better mileage out of those 1.6TFLOPs than the Steam Deck ever could.


Unless the SD gets its own very low-level API and has developers that code closer to the metal for it, the performance per FLOP of the Steam Deck can’t really be compared to the performance per FLOP of the Switch 2.

It was really faulty to begin with.


Not to mention these are entirely different architectures.


It’s why I harp on people like a banshee about using “iTs 80W, it can’t perform what an 80W system will do!!! It won’t even get close to half way!” as if that’s a fair argument.
 
Okay….okay. I don’t want to be rude, but that’s a dumb distinction.

1. Drake will probably have the same power envelope
2. More power consumption by the GPU will mean less available for the CPU.
3. GPU power efficiency may not be materially better for Drake than RDNA 2.0.
4. The ARM advantage over x86 will be a low to mid single-digit number.
The point is that something like Drake will be more consistent with itself. If one wants to compare how games A, B, C, and so on fare between multiple platforms, it'd be nice if those platforms are at least internally consistent.

1. I think that's actually far under 50% likelihood. Docked Drake's power envelope should be sub-20 watts in its entirety (and I'm stretching that upward by multiple watts to catch 99.99% probability). Steam Deck's default 15 watt limit refers to just CPU+GPU; the system altogether pushes above 20 watts.
2. True for laptops/laptops-in-alternative-form (...which is what a Steam Deck or any other handheld PC is, really). Not true for the Switch specifically though! As is, the Switch's CPU/GPU run at frequencies specified in presets; you can see them here. For all intents and purposes, you can expect consistency.
3. IMO, in games that implement it, DLSS will be a major boon to perf/watt efficiency. Aside from that, depending on the game, Drake can outright have a higher power budget for its GPU at times, as bizarre as that sounds.
4. I'm not sure what this has to do with what I said?

Edit: Oh yea, since the Steam Deck's GPU is in integrated form instead of discrete... its version of RDNA 2 isn't at the same... efficacy? level of the discrete cards. The integrated versions of RDNA 2 are missing that big chunk of cache branded as 'Infinity Cache'. It's not the only thing that RDNA 2 offers over 1 (there's still the featureset updates and much improved clocks), but it's one of the facets that AMD tooted its own horn about.
 
this is an investors' article telling you to buy Intel over Apple...
And the reference it uses for debunking the "Power Myth" is an article from 10 years ago lol
It misses the point, throwing around performance benchmarks and never really getting to the actual point... that ARM uses less electricity than x86
lol
unless I missed it... admittedly I didn't read the whole thing word for word
I'm glad I'm not the only one who thought the author's arguments were completely fatuous. He threw around enough jargon to imply he had a technical understanding of the subject and then used outdated evidence and said 'I told you so.'

The argument for Apple Silicon was never 'the most powerful chips on the planet,' it was 'high-end speed at battery-sipping wattage.'

The M1 Max (outside of dedicated gaming tasks) benchmarked at nearly 30% faster than the equivalent Intel mobile chipsets, while offering double the battery life.
 
I'm glad I'm not the only one who thought the author's arguments were completely fatuous. He threw around enough jargon to imply he had a technical understanding of the subject and then used outdated evidence and said 'I told you so.'

The argument for Apple Silicon was never 'the most powerful chips on the planet,' it was 'high-end speed at battery-sipping wattage.'

The M1 Max (outside of dedicated gaming tasks) benchmarked at nearly 30% faster than the equivalent Intel mobile chipsets, while offering double the battery life.
exactly
 
12 SMs = 1536 CUDA cores = 3072 FLOP/cycle

At original docked mode clocks, that’s 2.4 TF in docked mode. Assuming there is a clock upgrade to 1GHz, that’s 3 TF. We’ve run power numbers on Samsung 8nm (because Nvidia has made the Orin power curves available) and 1GHz seems feasible on any of the likely TSMC nodes, at ~Switch TDP

3TF is entirely possible.

Within a total power envelope of 15w?

What would be the breakdown?

3w-5w for the CPU
5w-7w for the GPU
3w-5w for memory, storage, and all other

?
 
@Vash_the_Stampede Thanks for your concern, but do you mind if I ask why you care so much about people in this thread being “optimistic”? My request is that you allow people to manage their own expectations and to not generalize what members in this thread might be thinking.
 
@Vash_the_Stampede Thanks for your concern, but do you mind if I ask why you care so much about people in this thread being “optimistic”? My request is that you allow people to manage their own expectations and to not generalize what members in this thread might be thinking.
I personally don’t mind if he just calls it out because I’ve seen his posts in the past and he is actually someone who is genuinely curious and willing to learn about a topic at hand. He’s also someone that does not mind being educated.
 
Within a total power envelope of 15w?

What would be the breakdown?

3w-5w for the CPU
5w-7w for the GPU
3w-5w for memory, storage, and all other

?
I don't think that any individual one of us can answer all the parts, but I think we collectively can piece things together?

For the CPU, our expectations range from 2 to 3 watts (edit II: I mean that some of us expect 2 watts, some of us expect a higher number, capping at 3). The basis for that is the OG Switch's CPU power usage landing in the ballpark of 1.83 watts.
For the RAM, I think that our estimate for LPDDR5 at the full 102.4 GBps was low to mid 3 watts?
Storage when actively used under 'typical operating conditions' (25 Celsius)... for eMMC, it'd be about half a watt. If it ends up being eUFS, maybe add up another 1-3 tenths, so ballpark it at 0.6-0.8 watts. When idling, those two should be using near nothing. (edit: milliwatts when idling; as it's single digit volts multiplied by three digit microamperes under typical conditions)
The fan in OG Switch is rated for 5 volts/0.33 amperes, or 1.65 watts. That should be at max, but it shouldn't be running at max normally.
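Tallying rough midpoints of those estimates (every number below is this thread's guesswork, not a measurement):

```python
# Rough handheld power budget from the component estimates above.
# Midpoints are my own picks within the ranges discussed.
budget_w = {
    "cpu": 2.5,       # midpoint of the 2-3 watt expectation
    "ram": 3.3,       # LPDDR5 at the full 102.4 GBps
    "storage": 0.7,   # eUFS under typical active load
    "fan": 1.0,       # below its 1.65W rated max
}

fixed = sum(budget_w.values())
gpu_headroom = 15.0 - fixed   # what an assumed 15W envelope leaves over

print(f"{fixed:.1f}W accounted for, ~{gpu_headroom:.1f}W left for GPU, display, speakers")
```

Even with these pessimistic-ish midpoints, roughly half of a 15W envelope is still on the table for the GPU and display.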
 
Within a total power envelope of 15w?

What would be the breakdown?

3w-5w for the CPU
5w-7w for the GPU
3w-5w for memory, storage, and all other

?
Based on AGX Orin (which is less efficient than normal A78s due to each cluster having a lockstep and redundancy island, etc., plus the PVA and other AI/auto features):
Fiddling with the power tool (which has inefficiencies), dropping 2 of the 1.1GHz A78AEs (which are less efficient than normal A78/A78C) at high usage only drops 0.6W from the total.

Dropping from 8 A78AEs down to 2 A78AEs, both at high usage, only drops 1.6W (and that is with A78AE being multiple clusters of cores versus a single cluster, which adds more power usage).

And those are less performant and less efficient than normal A78s (which have far better IPC than the x86-based Zen 2 in the Steam Deck and Series S|X/PS5).
So the CPU side likely would draw 3W at absolute most at ~1GHz, and even more likely just around the 2W range, considering the removal of the safety island and making it single-cluster.

Therefore, that leaves a lot more power for the GPU while maintaining competent CPU performance. (Mind you, I do not think the CPU will be super close to the Series S, but close enough that the sacrifice is mostly just not having those unlocked >70FPS high-refresh modes.)
 
I don't think that any individual one of us can answer all the parts, but I think we collectively can piece things together?

For the CPU, our expectations range from 2 to 3 watts. The basis for that is the OG Switch's CPU power usage landing in the ballpark of 1.83 watts.
For the RAM, I think that our estimate for LPDDR5 at the full 102.4 GBps was low to mid 3 watts?
Storage when actively used under 'typical operating conditions' (25 Celsius)... for eMMC, it'd be about half a watt. If it ends up being eUFS, maybe add up another 1-3 tenths, so ballpark it at 0.6-0.8 watts. When idling, those two should be using near nothing.
The fan in OG Switch is rated for 5 volts/0.33 amperes, or 1.65 watts. That should be at max, but it shouldn't be running at max normally.
iirc the speakers on the switch are rated for 1~2W.
Display should be similar I think
 
I believe Thraktor also posted numbers that contradicted this.

To a degree.
I thought these were updated figures after Thraktor’s corrections, but I admittedly did not vet fully.

edit: is this it?

Orin_power_estimations.png

I went through the Nvidia power tools and went ham... This chart is correct; the GPU was turned off for the initial power reading, then that reading was subtracted for every single option. This actually took a few hours, but I'm bored at work. The reason you have to turn off the GPU for the initial reading, and then estimate the 6TPC config, is because moving from 2TPC to 8TPC and subtracting the 2TPC ignores the other components in the GPU that need power. For instance, at 420MHz, 6 TPCs draw 5.7w as Thraktor pointed out, not 4.7w like we initially thought. The (wattage)* with that symbol next to it is what Drake (minus Orin's tweaks) would consume... I'd suggest that Drake's SoC would consume maybe 5% less power at a given clock than Orin.

So, as it turns out, the Jetson PowerEstimator can be conned into using 6 TPCs instead of 7. The API endpoint will accept any configuration; it's just that the front end only lets you submit actual valid Orin configs, and the front end is very readable JS.

Dropping from 7 TPCs @ 420MHz to 6 TPCs @ 420MHz with all other settings the same (8 CPU cores @ 1.5GHz, DLA + PVA off, EMC freq @ 3199, EMC load low, CPU and GPU load high), I save a grand total of 0.8 watts, exactly
 
iirc the speakers on the switch are rated for 1~2W.
Display should be similar I think
So following that logic, and the CPU/GPU estimates on 8nm, we'd have something around:
CPU (at 1GHz) + GPU (at 900MHz) + RAM + speakers + display + storage + fan ≈ 25-27W for docked play,
and with the GPU at ~400MHz somewhere around 15-17W,
more or less.
I find this hard to believe unless, like another user said earlier in the thread, they bump up the battery capacity.
 
I personally don’t mind if he just calls it out because I’ve seen his posts in the past and he is actually someone who is genuinely curious and willing to learn about a topic at hand. He’s also someone that does not mind being educated.
Is being optimistic a virtue that deserves to be “called out”? Couldn’t a person simply explain things from their perspective alone? What about pessimism? Is being a pessimist A-OK, while an optimist is a bridge too far?
 
So following that logic and

We'd have something around:
CPU (at 1GHz) + GPU (at 900MHz) + RAM + speakers + display + storage + fan ≈ 25-27W for docked play,
and with the GPU at ~400MHz somewhere around 15-17W,
more or less.
I find this hard to believe unless, like another user said earlier in the thread, they bump up the battery capacity.
I’m only casually knowledgeable about this stuff, but even based on napkin math it feels like there are a dozen reasons Samsung 8nm is a poor fit.
 
For the CPU, our expectations range from 2 to 3 watts. The basis for that is the OG Switch's CPU power usage landing in the ballpark of 1.83 watts.
On reread, I realize that this particular wording can be ambiguous.
The intent is 'some of us think 2 watts, some of us think higher than that, like 3 or so', not 'we collectively agree that the CPU's power usage will bounce between 2 and 3 watts'.
(I personally expect 2 watts)
 
So following that logic and

We'd have something around:
CPU (at 1GHz) + GPU (at 900MHz) + RAM + speakers + display + storage + fan ≈ 25-27W for docked play,
and with the GPU at ~400MHz somewhere around 15-17W,
more or less.
I find this hard to believe unless like another user said earlier in the thread, they bump up the battery capacity.
Actually, I forgot about the Joy-Cons.
Anandtech has an article where they measured power consumption on a 2017 Switch (the power-hungry, pre-2019 node-shrink model).
They noticed how in docked mode the consumption goes down a bit (which I believe might be not only the screen turning off, but the Joy-Cons being detached).
So, depending on whether the user takes the Joy-Cons out or not, that 25-27W estimate could be quite different (it's already wrong because I factored in the display, which is turned off in docked mode anyway).

But the real issue here is portable mode... If their measurement is indeed accurate for min brightness + Joy-Cons and the console fully charged, then Drake is indeed quite far...
 
But let’s go with it: That’s a 4w-5w delta.
I’m not sure why you’re trying to reverse-engineer Drake numbers by looking at the Steam Deck. Instead of comparing the Steam Deck to Drake, which are different arches, why not look at Orin? Orin has documented power draw and (indirectly) documented power curves.


Within a total power envelope of 15w?

What would be the breakdown?

3w-5w for the CPU
5w-7w for the GPU
3w-5w for memory, storage, and all other

?
On 8nm, that GPU config runs 9.6W and the CPU config runs 4.8W. These are effectively documented by the Orin power curves provided by Nvidia. This is assuming the A78AE cores, not in lockstep mode, run the same as A78C cores.

That gives us a 14.4W power draw, obviously untenable. This is why most folks no longer believe it’s running on 8nm.

ARM suggests a 20% win in TDP at iso-performance from Samsung 8nm to TSMC 7nm, which seems like another hand-wavy baseline. That gets you an SoC running at 11.5W.

That’s high, I will admit, but yes, that gets you nicely inside your 15W envelope.
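As a sanity check on that arithmetic, with the 9.6W/4.8W Orin-derived figures and ARM's 20% node claim as the only inputs:

```python
# Node-scaling estimate: Orin-derived 8nm figures for the assumed
# Drake config, scaled by ARM's quoted ~20% 8nm -> 7nm power win.
gpu_8nm_w = 9.6
cpu_8nm_w = 4.8
node_win = 0.20   # iso-performance power improvement, per ARM

soc_8nm_w = gpu_8nm_w + cpu_8nm_w          # 14.4W, untenable on 8nm
soc_7nm_w = soc_8nm_w * (1 - node_win)     # ~11.5W on TSMC 7nm

print(f"{soc_8nm_w:.1f}W on 8nm -> {soc_7nm_w:.2f}W on 7nm")
```

Which is how you land inside a 15W envelope on 7nm but not on 8nm.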

For the record, I see other possibilities, including Nintendo going with a net clock downgrade for their successor system. But yes, these clocks are possible, based on what we know about Orin, in a reasonable power envelope.

Edit: wrote this, then went dark for work, then posted without seeing various replies. My numbers are somewhat pessimistic, but I figured a type 1 error was called for.
 