
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Where on earth are you getting early from my post? I never said the word early. The claim has been “late 22/H1 23” for a very long while now.

H1 is quite literally the first half of the calendar year, January to June; H2 is what we're in right now and runs from July to December.
"Early 2023" has been claimed specifically (see a couple pages back), as well as implied by the "late 2022" part of the window. Bloomberg's cited release window for third party games was "during or after the second half of [2022]" which gives the same impression.

Personally I will form no expectations beyond "2022 or 2023" until we have more concrete evidence. But just going by the rumors, late 2022 through early 2023 has been claimed as the high-probability zone, so past a certain point earlier in 2022 or later in 2023 are less likely.
 
The claims have consistently been by early 2023 at the latest, and early 2023 would have the common meaning of the first quarter. There's no reason to assume May or June right now.
 
Even though this has been repeated in this thread many times: the only time people should start worrying is if September goes by and nothing happens.
 
I’ve had an absolutely awful mental health day today but I just want to say thank you for the gifs on the last page. Y’all put a big smile on my face <3
 

What? Nothing strange or funny about people providing each other with much needed cooling via water during these hot days. ._.

Cooling is important in general. See Nintendo, telling you not to play the Switch at high temps!

Hopefully they put better cooling into Drake.
 


Safe to say that Nintendo doesn’t really have any major competitor in this space.

Tencent had a patent a while back for a Switch-like portable gaming PC, but it seems like they are going the route of a portable system that gets cloud services instead.
 
How much internal storage do you think the next Switch will have? Will game sizes be bigger and third-party support be better, creating a need for a lot more storage? Or will Nintendo cheap out and stick with 64GB? I think at least 256 GB would be nice.
Probably will expect players to buy a MicroSD card to pick up the slack. 512GB cards ain't looking too bad lately and 1TB cards are in better reach this year.

If I had to give a straight answer, 128GB of internal storage to start is probably a sensible route, coming from the Switch's 32GB at launch and 64GB in the OLED.
 
In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?

A bit better than the Steam Deck at the same power profile, with some improvements thanks to optimizations; maybe even better if they end up using an advanced node. If it's on Samsung 8nm, the Steam Deck will get better performance, since its power profile can be increased.
 
In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
Most Steam Deck games I've tried, whether supported or unsupported, tend to run in the same ballpark, meaning there isn't a ton of optimisation in most games.

Games running on Drake will be totally different: "coded to the metal", as they say (extreme optimisation), which can get as much as twice the performance out of otherwise comparable unoptimised code.

Then there's DLSS. Most modern AAA engines are designed to lean much more on the GPU, because the PS4/XBO generation was extremely lopsided with regard to its Jaguar CPU cores. This will be greatly beneficial to Drake, because not only will its CPU-to-GPU power ratio be similar to the last-gen consoles, it has its secret weapon in DLSS to lean even further into the GPU side.

It's hard to say how Drake will perform without knowing clock speeds, but I expect it to at least run AAA PS4/XBO ports at 720p/30fps in handheld mode, and potentially 60fps when docked and when DLSS is supported.
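To put rough numbers on the DLSS point: most of the win comes from shading fewer pixels internally. A back-of-the-envelope sketch in Python; the 720p-to-1080p pairing is just an illustration, not a confirmed Drake configuration:

```python
# Illustrative pixel math only; real DLSS gains depend on the game, the
# quality mode, and the fixed per-frame cost of the upscaling pass itself.
def pixels(width, height):
    return width * height

native = pixels(1920, 1080)    # hypothetical native 1080p target
internal = pixels(1280, 720)   # hypothetical DLSS internal resolution

saving = 1 - internal / native
print(f"{internal} vs {native} pixels shaded ({saving:.0%} saved per frame)")
# -> roughly 56% fewer pixels shaded before the upscale
```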
 
Significantly behind in raw power. Punch above its weight in actual games.
In order to be so behind in raw power the thing has to be clocked below even the lowest switch clock. :p

It's 512 cores vs 1536; the former runs at 1GHz to 1.6GHz and hovers more in the 1.3GHz range. As in, 1-1.6TFLOPs, most of the time around 1.3TFLOPs.

For the latter to be "significantly behind in raw power", it would have to be clocked so low that it would be better to just turn it off.

Wasting energy at that point.


The Steam Deck is 512 RDNA2 shading cores, 32 TMUs, 8 Ray Accelerators and 8 ROPs, with no Infinity Cache.

Drake is 1536 Ampere shading cores, 48 TMUs, 12 ray tracing cores and 16 ROPs.


I'd say, middle of the road: Drake will be behind the Steam Deck for other reasons, but not significantly behind in any way that matters here. Maybe a bit behind.
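For anyone wanting to sanity-check those numbers, here is the thread's usual rule of thumb in code form: FP32 FLOPS = shader cores x clock x 2 (an FMA counts as two ops). The Drake clocks below are pure speculation; 307MHz is just the current Switch's lowest handheld GPU clock:

```python
# Rule-of-thumb FP32 throughput: cores * clock (GHz) * 2 ops / 1000 = TFLOPs.
def tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000

print(f"Deck  @ 1.0GHz : {tflops(512, 1.0):.2f} TFLOPs")     # ~1.0
print(f"Deck  @ 1.6GHz : {tflops(512, 1.6):.2f} TFLOPs")     # ~1.6
print(f"Drake @ 307MHz : {tflops(1536, 0.307):.2f} TFLOPs")  # ~0.9, speculative
```

Which is the point being made: even at the Switch's lowest handheld clock, 1536 cores land in the same neighbourhood as the Deck's typical output.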
 
In order to be so behind in raw power the thing has to be clocked below even the lowest switch clock. :p ...
it's nintendo tho
 
In order to be so behind in raw power the thing has to be clocked below even the lowest switch clock. :p ...
I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? If it is, wouldn't it be running below the lowest Switch clock?
 
In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
Beyond it before RT and DLSS, for sure, given what we know about the prospective successor's GPU from the Nvidia leaks, although I don't find it appropriate to compare ARM-derivative devices to x86 ones. For all intents and purposes, the Steam Deck is, in my opinion, an over-engineered Game Gear without the charm, and every bit as bad in the battery life department. It's also almost 50% heavier than a Wii U GamePad, but without the bad press, and it doesn't have a docked mode. I would go even further and say that the primary reason anybody mentions the Steam Deck at all is that there are people who never came to terms with the existing Switch and are perpetually thirsty for so-called "MORE POWER pro-devices"; it just happened to appear around the same time the Switch OLED Model was revealed, and quenched said thirst temporarily. I'm actually very confident that Codename Drake will go harder, because being in a position to receive competent PS5/XS ports is imperative to its success, and because much of the Switch's appeal was selling definitive portable experiences. These points are often lost in (online) discourse, but Nintendo and Nvidia understand this.
 
In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
Number crunching, Deck.

Graphical output, Drake.

Deck has impressive stuff but Drake has better RT, access to DLSS, should be more optimised, smaller file sizes and a few other benefits.
 
I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? If it is, wouldn't it be running below the lowest Switch clock?
they wouldn't make it 12SM if they couldn't. that's just a waste of silicon. on 8nm, it would just mean 307MHz would return as the base clock
 
it's nintendo tho
Nintendo can kiss their business goodbye. 🤭

I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? If it is, wouldn't it be running below the lowest Switch clock?
The lowest clock is 307MHz for games. Even running at that clock, it would only really be a bit behind the Steam Deck.

Run it below that and it probably falls off the efficiency curve. There's an optimal point on that curve: clock too far below it and the chip is less efficient per watt, and push too far above it and it's also less efficient. Middle of the road (wherever that is) gets them the most bang for the buck.

If they go below even the lowest of lows, it's getting closer to the Switch's boost mode for loading, where the GPU clocks down to about 78MHz and the CPU clocks up to 1.78GHz. 😅
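A toy model of that efficiency curve, with completely made-up constants, just to show the shape: dynamic power scales roughly with frequency times voltage squared, voltage has a floor, and a fixed leakage/system cost never goes away, so perf-per-watt peaks somewhere in the middle:

```python
# Toy efficiency curve; every constant here is invented for illustration.
def perf_per_watt(f_ghz, v_min=0.60, slope=0.35, knee=0.3, fixed=0.05):
    v = v_min + slope * max(0.0, f_ghz - knee)  # voltage must rise past the knee
    power = f_ghz * v**2 + fixed                # dynamic power + fixed floor
    return f_ghz / power

for f in (0.08, 0.3, 0.6, 1.0, 1.4):  # GHz
    print(f"{f:.2f} GHz -> {perf_per_watt(f):.2f} perf/W (relative)")
# Peaks near the knee; both 78MHz-style idle clocks and max clocks sit below it.
```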
 
Probably will expect players to buy a MicroSD card to pick up the slack. 512GB cards ain't looking too bad lately and 1TB cards are in better reach this year.

If I had to give a straight answer, 128GB of internal storage to start is probably a sensible route, coming from the Switch's 32GB at launch and 64GB in the OLED.
I already bought a 1TB card last month during Prime Day. I am all set.
 
at what wattage? the 15W setting?
The "average" 1.3GHz GPU clock for the Steam Deck. It peaks at 1.6GHz (15W), and at the lowest it's 1GHz (like 4-8W?), but it seems to average around 1.3GHz in use, based on what Thraktor noted.

You'd have to go up to the 15W setting to be "significantly" ahead, and even then it's not an apples-to-apples comparison anyway, as a console and a PC aren't treated the same way.
 
A pretty notable error is that the DSi is shown about a year too early. That, plus its slow worldwide rollout and the later XL revision, puts it much closer to the 3DS than it appears here.

BOTW2's 2022 date was announced in June 2021, well over a year out from a possible late 2022 release. Plenty of wiggle room for things to move from late 2022 to early 2023 (or vice-versa) without totally wrecking things.
This only made me think about the following comparison purely in terms of battery performance, release order and design changes:
2017 switch: launch DS
switch lite: DS lite
switch OLED: DSi XL
[Image: DS-Console-Specs-Comparison.png]

So the more I think about it, the OLED really seems to be the end of the road for the switch timeline.
If we think of the 3DS as an iteration upon the DS family instead of purely a next gen device, I think a "switch pro", "switch 2", "super nintendo switch" (my favourite) would be basically the iterative equivalent of the 3DS.

I mean, other than an SoC change, I don't really see much room for improvement when it comes to the switch OLED. I say that because I don't really see nintendo making another version of the OLED with say,
  • 128GB+ storage -- not gonna happen, nintendo would have to sell two or more versions of the console and price it at ~$400, which simply wouldn't sell that well; they've always had a solid audience at the $125-300ish mark
  • 6-8GB RAM -- also unlikely given that this wouldn't fix memory bandwidth issues on the X1 SoC which is more of an issue than available memory
  • an overclocked X1 SoC at a smaller node (8~12nm) -- aka the "full blood" switch those chinese forums were speaking of; imo, this is also unlikely given the fact that many games on switch have multiple bottlenecks, so even though a cpu overclock fixes framerates on certain games, this still would be an investment from nintendo that wouldn't yield that much of an improvement - many games run poorly due to memory bandwidth constraints, RAM amount and the X1's GPU horsepower (or lack thereof)
  • a better wifi/ethernet adapter (or 5G support) -- the least likely of all the aforementioned. People don't realize why wireless adapters are so limited on the switch: the way nintendo handles their online games like splatoon 2 is by keeping all players in sync, and if someone has a faulty connection, a de-sync occurs and they're removed from the lobby (or, in a worse scenario, they take the whole lobby down, which prompts a connection error for everyone).

And like I said, de-syncs occur because some people are playing with a more stable connection than others, not because they have a faster connection (since every switch is hard-capped to the same rather low up/down limit).
Introducing a better wireless/ethernet adapter would only increase instability in lobbies caused by differences in latency between players (higher speed internet does improve ping a bit, yes), and 5G capabilities would make connection stability even worse, while also requiring nintendo to make deals with multiple cellular companies globally (highly unlikely).
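For the curious, the de-sync mechanism described above is easy to sketch. This is hypothetical code, not Nintendo's actual netcode: in a lockstep model, every client hashes its deterministic simulation state each frame, and a client whose hash diverges from the majority gets dropped:

```python
# Hypothetical lockstep de-sync check (illustrative only).
import hashlib
from collections import Counter

def state_hash(sim_state: bytes) -> str:
    """Hash of one frame's deterministic simulation state."""
    return hashlib.sha1(sim_state).hexdigest()

def find_desynced(frame_hashes):
    """Return the players whose state hash diverges from the majority."""
    majority, _ = Counter(frame_hashes.values()).most_common(1)[0]
    return [player for player, h in frame_hashes.items() if h != majority]

# Player C's simulation diverged this frame and would be removed from the lobby:
states = {"A": b"x=1;y=2", "B": b"x=1;y=2", "C": b"x=1;y=3"}
print(find_desynced({p: state_hash(s) for p, s in states.items()}))  # ['C']
```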
 



A video on the SD with the TDP set to 3W. Bright Memory Infinite and Kingdom Come: Deliverance are tested; coincidentally, both games have Switch versions. Also, 3W is lower than the Switch's TDP, I think.
 
BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS; at 1.6GHz, it would be around 4.9TF. As the Steam Deck doesn't have a docked mode, that's still very high for a portable device, especially relative to the 1.565GHz of the Xbox Series S, the 1.8GHz of the Xbox Series X, and the 2.2GHz of the PS5. 1536 CUDA cores is actually 20% more than the Xbox Series S's 20CU GPU (1280 shader cores) before clock speed, RT & DLSS, and 2/3 of the PS5's 36CU GPU (2304 shader cores). To add another layer of perspective, 2/3 is the same ratio as XB1 to PS4 (XB1 had 768, PS4 had 1152). So the raw graphical grunt is a lot closer than some of you have anticipated.

Codename Drake will have an A78C-class CPU with it, which was quite a disruptive design: A78C can clock up to 3.3GHz, but I suspect it will be around 2-2.3GHz here, and I believe the sub-2GHz consensus estimates on here to be woefully conservative.

Coming back to the GPU, even a 1GHz clock for docked performance (977MHz, to be precise) would put this in the 3TF realm before RT and DLSS. Of course, it isn't a 1-for-1 comparison, and there are other variables to consider, but the overall gist is that it certainly won't be "underpowered". I think the face-offs between it and the XSS could have some pleasant surprises.
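Running the figures quoted above through the same cores x clock x 2 rule of thumb (the Drake clocks are speculative; the console core counts and clocks are public):

```python
def tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000  # FP32, counting an FMA as 2 ops

print(f"Drake @ 977MHz  : {tflops(1536, 0.977):.2f} TFLOPs")  # ~3.0, speculative
print(f"Drake @ 1.3GHz  : {tflops(1536, 1.300):.2f} TFLOPs")  # ~4.0, speculative
print(f"Drake @ 1.6GHz  : {tflops(1536, 1.600):.2f} TFLOPs")  # ~4.9, speculative
print(f"XSS   @ 1.565GHz: {tflops(1280, 1.565):.2f} TFLOPs")  # ~4.0
print(f"Core ratios: vs XSS {1536/1280:.2f}x, vs PS5 {1536/2304:.2f}x")
```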
 
Seems like the lowest the Steam Deck's GPU can go is 200MHz (that I've seen so far).



BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS. ...
the SD doesn't have a docked mode, but the SoC hits its assigned peak at 15W TDP. it won't go higher in a dock
 



A video on the SD with the TDP set to 3W. Bright Memory Infinite and Kingdom Come: Deliverance are tested; coincidentally, both games have Switch versions. Also, 3W is lower than the Switch's TDP, I think.

Ye

BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS. ...
Er, I’d take a few steps back with the CPU part.

And GPU too.

It still has to be cooled and has to be a portable system.

And it will be even more bandwidth constrained at those higher clocks. All that power, but only a funnel to feed it.

A mess.

Unless they have some large on-die cache, I don’t see it.
 
BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS. ...
they wouldn't make it 12SM if they couldn't. that's just a waste of silicon. on 8nm, it would just mean 306MHz would return as the base clock
Number crunching, Deck.

Graphical output, Drake.

Deck has impressive stuff but Drake has better RT, access to DLSS, should be more optimised, smaller file sizes and a few other benefits.
Recapping for the people who are lost in the thread:
NVIDIA's detailed documentation for the Orin SoC states "14 SMs" and "128 CUDA cores per SM" (for the 32GB model, that results in 1792 CUDA cores; I believe the 64GB model has 16 SMs, for 2048 CUDA cores).
The rumored "Drake" SoC supposedly has 12 SMs, resulting in 1536 CUDA cores; however, there is also a compelling cut-down Orin spec on Wikipedia that cites 1024 CUDA cores (8 SMs).
The calculation being used to estimate FP32 performance in TFLOPs is as follows: GPU clock speed (GHz) * CUDA cores * 2 / 1000.
Assuming the best possible scenario for an Orin SoC switch, we'd have: (1.3GHz * 1536 CUDA cores * 2) / 1000 ~= 3.99 TFLOPs of FP32 performance, putting it near PS4 Pro and Xbox Series S performance.
At worst, we'd have: (0.624GHz * 1024 CUDA cores * 2) / 1000 ~= 1.27 TFLOPs.

My opinion:

They'll use the 8 SM (1024 CUDA cores) cut-down chip and clock it a bit under 1GHz.
Assuming 0.624GHz for docked mode, that would be 1.27 TFLOPs (1.9 TFLOPs for the 1536-core "Drake" SoC), and ~350MHz for handheld mode ~= 0.7 TFLOPs (1 TFLOP for the Drake chip).

Now, a quick comparison with the Steam Deck:
At the maximum 15W setting, the calculation for the Steam Deck goes as follows:
(1.6GHz * 512 shading units * 2) / 1000 ~= 1.6 TFLOPs (which is Valve's reported performance for the device).
However, Steam Deck fans fail to mention that this is the performance of the device at the maximum TDP limit, which, as everyone knows, drains the battery very quickly. Realistically, if we were to make a valid perf-per-watt comparison, I'd use either 7W or 10W as the metric for the Deck.

Problem is, that's just one part of the story. I don't know exactly what the average clock speed for the Steam Deck iGPU is, and after watching multiple YouTube videos of the device running different games, I've noticed that, like laptops, it clocks the CPU down to ~2GHz (from 3+GHz) to boost the GPU clock up, and I've seen it use anywhere from 200MHz to 600MHz and even 1GHz, depending entirely on the game.
Assuming an in-between, I think it's safe to say 600MHz is a good baseline for handheld mode without compromising battery life too much.
So that leaves us at:
(0.6GHz * 512 shading units * 2) / 1000 ~= 0.6 TFLOPs.

So in summary, even though the 8GB Orin SoC appears to be a bit inferior to a Steam Deck clocked at 15W, the handheld performance seems more promising (even more so considering we aren't even taking DLSS into account, and how much better NVIDIA's solution is than AMD's FSR in motion).

CPU-wise, yeah, of course a latest-gen x86 CPU trounces an ARM chip that isn't made by Apple in single- and multi-core. But given the technical info that's publicly available for ARM A78 SoCs, I don't think the difference is too relevant.

Also, FP32 TFLOPs/GFLOPs performance doesn't tell the whole story... people also forget about things like memory bandwidth (the lowest-spec Orin reaches 102GB/s while the Steam Deck maxes out at 88GB/s; both, mind you, on 128-bit wide buses).
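The recap's scenarios, collected into one runnable snippet (all of the Drake and cut-down clocks here are the guesses above, not confirmed specs):

```python
def tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000  # the thread's FP32 rule of thumb

scenarios = [
    ("Drake 12SM, best case (1.3GHz)",    1536, 1.300),
    ("Cut-down 8SM, docked (624MHz)",     1024, 0.624),
    ("Cut-down 8SM, handheld (350MHz)",   1024, 0.350),
    ("Steam Deck, 15W peak (1.6GHz)",      512, 1.600),
    ("Steam Deck, handheld-ish (600MHz)",  512, 0.600),
]
for name, cores, clock in scenarios:
    print(f"{name}: {tflops(cores, clock):.2f} TFLOPs")
```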
 
Recapping for the people who are lost in the thread:
...
Assuming the best possible scenario for an Orin SoC switch we'd have: 2GHz * 1536 CUDA cores * 2 / 1000 ~= 6.144 TFLOPs of FP32 performance, putting it above PS4 Pro and Xbox Series S performance.
At worst, we'd have: 2GHz * 1024 CUDA cores * 2 / 1000 ~= 4.096 TFLOPs.
...
They'll use the 8 SM (1024 CUDA cores) cut-down chip and clock it a bit above/a bit under 1GHz. I'm expecting it to be under 1.5GHz.
...
why are you still assuming 2GHz on the gpu? also ARM doesn't get trounced as much as you think. but the cpu won't be clocked anywhere near as high as Steam Deck or other systems
 
why are you still assuming 2GHz on the gpu? also ARM doesn't get trounced as much as you think. but the cpu won't be clocked anywhere near as high as Steam Deck or other systems
I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC.
Re-read my post; I don't think they'll clock the GPU at that (even though they could).
Power draw will most likely be kept at a 15W maximum, due not only to thermal constraints but to charging speeds as well.
 
Recapping for the people who are lost in the thread:
...
Assuming the best possible scenario for an Orin SoC switch we'd have: 2GHz * 1536 CUDA cores * 2 / 1000 ~= 6.144 TFLOPs of FP32 performance, putting it above PS4 Pro and Xbox Series S performance.
...
Where did you get 2GHz for the GPU? The highest Orin AGX module doesn't go beyond 1.3GHz. We're more than likely not getting above that, and likely under it if anything (hopefully around 920MHz).

That's the first time I've heard that the A78s are capable of 3GHz. I searched on Wikipedia, but I have yet to see any phones actually use that; they usually have it at 2.1GHz or around that range. As for the Orin CPU profiles, they don't go above ~2GHz, so we really shouldn't be expecting anything higher.

In the absolute best case scenario for the CPU, we could match or surpass the Steam Deck in some scenarios if we have 8 A78s (7 for gaming) vs 4 Steam Deck cores at 3.5GHz (3 for gaming). That would be roughly half the speed of the Xbox Series S. 7 cores at 1.5GHz should match the SD. I dunno about multithreaded performance, since AMD CPUs specialize in that.
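A very crude way to frame that CPU comparison: aggregate core-GHz for the cores available to games, which deliberately ignores IPC and SMT (both of which favour Zen 2), so treat it as an upper bound on the optimism above:

```python
# Crude aggregate throughput: cores * clock, ignoring IPC and SMT entirely.
a78_pool  = 7 * 2.0   # hypothetical: 8x Cortex-A78 at 2.0GHz, one kept for the OS
zen2_pool = 3 * 3.5   # Steam Deck: 4x Zen 2 at up to 3.5GHz, ~3 usable for games
print(f"A78 pool  : {a78_pool:.1f} core-GHz")   # 14.0
print(f"Zen 2 pool: {zen2_pool:.1f} core-GHz")  # 10.5
# Zen 2's higher IPC and SMT narrow or reverse this gap in practice.
```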
 
I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC.
Re-read my post; I don't think they'll clock the GPU at that (even though they could).
Power draw will most likely be kept at a 15W maximum, due not only to thermal constraints but to charging speeds as well.
You might have been looking at the CPU freq.

I'm not sure how useful a comparison Orin is anyway; going by Orin, a 4SM, 4-CPU system could be as power hungry as the Switch without doing much to the clocks, so I can only guess they are using something a lot more efficient.
 
I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC.
Re-read my post; I don't think they'll clock the GPU at that (even though they could).
Power draw will most likely be kept at a 15W maximum, due not only to thermal constraints but to charging speeds as well.
You sure you're not mixing up the CPU clock speeds with the GPU's?

I'm looking at the Tegra Wikipedia page and see up to 2.2GHz CPU and 1.3GHz GPU for Orin AGX, and 2.0GHz CPU and 918MHz GPU for Orin NX.

And I've known this for a while and visit that Wikipedia page often. It would be news to me if they boosted their GPU clock speeds to 2.0GHz.
 
Where did you get 2GHz for the GPU? The highest Orin AGX module doesn't go beyond 1.3GHz. ...
Yeah, twas a mistake, I'll edit the post to better reflect the actual GPU performance.
going by Orin, a 4SM, 4-CPU system could be as power hungry as the Switch without doing much to the clocks, so I can only guess they are using something a lot more efficient.
Where did you get that info from? Also, the cut-down Orin SoC has 10, 15 and 25W configurations; assuming 15W for docked mode and a bit under 10W for portable mode, that would match the Switch in TDP alone while having all the performance benefits of the new architecture.
 
[Image: facepalm-head.jpg]

Looks like I'll have to edit that post again.
Anyhow, that still doesn't answer my question: why do you think a theoretical 4-CPU, 4SM chip (512 CUDA cores, double the Switch's) would even be relevant to this conversation? And even if it was a thing, at 10W or under it would still outperform the Switch anyway.
 
Looks like I'll have to edit that post again.
Anyhow, that still doesn't answer my question: why do you think a theoretical 4-CPU, 4SM chip (512 CUDA cores, double the Switch's) would even be relevant to this conversation? ...



If being low power, small, and cost effective was the only goal (primarily the first two, as the last one is a byproduct of them), then they would have paid for something fit for exactly that. 4SM + 4 cores would have been more than enough: it offers the new features, DLSS, and better CPU and GPU performance, and keeps it affordable and manageable in terms of what they would push graphically.

AKA, it would have been a modest enough jump that they could maintain their current development scope without increasing costs so much, especially with the rising development costs that they've been keenly aware of since the GameCube and have been trying their hardest to rein in.

But it already has 12SMs; this isn't designed to be small whatsoever. 12SMs also draws a lot more power than 4SMs, and 12SMs allows for a much wider and larger graphical throughput from the GPU.

Nintendo is very good at working within their means, but 12SMs is more than anyone expected. They don't expect a tepid, manageable increase. They expect a large increase that houses their first-party software for quite a while (and their partners' too).

Keep in mind that Nintendo's first-party offerings have brilliant art that scales very well, and this art helps overcome the shortcomings of the hardware because they're less visible/less apparent. They don't need some super big GPU to make a good-looking game, and even then, the issue the Switch has isn't so much the GPU but everything else about the system, such as the CPU and the memory bandwidth.

They could afford to go smaller and be fine, so long as the rest of it is good enough. They don't chase the triple-A western eye candy. However, they are clearly doing more than necessary, in my opinion at least. I'm not sure what they have in mind that made them fine with paying for such a big increase, but I will keep an eye on what they do.

Hell, 6SMs would probably have been more than enough.
 
If being low power, small, and cost effective was the only goal (primarily the first two, as the last one is a byproduct of them), then they would have paid for something fit for exactly that. ...
Nintendo dreams of traced rays
 
If being low power, small, and cost effective was the only goal (primarily the first two, as the last one is a byproduct of them), then they would have paid for something fit for exactly that. ...
Despite you recognizing that nintendo doesn't go for even mid-range hardware but really low-end, there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC.

Also, at 4SMs, we'd be looking at docked performance of about 0.6TFLOPs, which is less than double what the current switch is capable of.
Even if we take DLSS into account, I don't see why they'd go with such a low spec, considering that:
  • the $400 pricing for the Jetson Orin NX 8GB dev kit isn't representative of what nintendo would actually be paying NVIDIA; the kit contains built-in memory, the board itself, a case, I/O, etc. nintendo is only after the chip itself, and they'll likely source the memory, storage and the device's board elsewhere
  • nintendo would make a deal with nvidia, so they wouldn't even pay the same price as a dev kit anyway

But sure, let's go by your logic that they won't be using 12 SMs. The lower-spec SoC that I mentioned, which also has data available on Wikipedia, is an 8SM chip. Also, all of those SoCs have 10, 15, 20 and 25W TDP targets, and we know the current-gen switch uses about 15W in docked mode and a bit under 10W in handheld (~7W or so). So why are you assuming it would draw more than that, when they've already published the target TDPs?
 
Despite you recognizing that nintendo doesn't go for even mid-range hardware but really low-end, there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC. ...
you're still using Orin as a representative of what Nintendo would use when it isn't. Orin isn't made for gaming and Nvidia wouldn't sell it to nintendo for such
 
there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC.
Of course there wouldn't be; Drake isn't like the other ORIN configs either. It's an offshoot design for gaming; the others are for automotive.

The design would be based on what Nintendo is willing to pay, with Nvidia designing it around Nintendo's goals in line with what Nintendo is paying them. It's a semi-custom design, after all; that's how these work. This thing is like the Series X/S, PS5, PS4, XB1X, XB1, Wii U, Wii/GCN, etc.; console manufacturers have been going semi-custom in this manner for quite some time.

There doesn't need to be any documentation; it's completely independent of that. ORIN is ORIN, Drake is Drake. Drake is based on ORIN, but Drake is not ORIN or similar to it, other than being a Tegra SoC with the same Ampere architecture (and, we assume, CPU).

ORIN is for automotive. Drake is for a portable video game console.
 
Nice segment on DF Direct about where Drake will stand in relation to the Deck and Series S.

Not a whole lot new, and speculation of course (although Rich's "all I'm gonna say" was interesting :p):

Small summary:
  • Because games will be written specifically for the platform with a low-level API it will likely outperform Steam Deck due to the latter's OS overhead and unoptimised nature.
  • On par with Series S is not gonna happen due to battery life, but with strategic nips and tucks (and DLSS + Ampere being more efficient compared to RDNA2) some games might be comparable while docked.
Very reasonable, and still exciting. Looks like they're positioning Drake to be to PS5 what the TX1 was to PS4: close enough for some "impossible ports".

edit: so many yeahs, dat sweet famiboards fame :p
 