"Early 2023" has been claimed specifically (see a couple pages back), as well as implied by the "late 2022" part of the window. Bloomberg's cited release window for third party games was "during or after the second half of [2022]" which gives the same impression.Where on earth are you getting early from my post? I never said the word early. The claim has been “late 22/H1 23” for a very long while now.
H1 is quite literally first half of the calendar year from January to June, H2 is what we are in right now and goes from July to December.
Probably will expect players to buy a microSD card to pick up the slack. 512GB cards ain't looking too bad lately, and 1TB cards are in better reach this year.

> How much internal storage do you think the next Switch will have? Will game sizes be bigger and third-party support be better, creating a need for a lot more storage? Or will Nintendo cheap out and stick with 64GB? I think at least 256GB would be nice.
Significantly behind in raw power. Punches above its weight in actual games.

> In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
Most Steam Deck games I've tried, whether supported or unsupported, tend to run in the same ballpark, meaning there isn't a ton of optimisation in most games.

> In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
In order to be so behind in raw power, the thing would have to be clocked below even the lowest Switch clock.

> Significantly behind in raw power. Punches above its weight in actual games.
it's Nintendo tho

> In order to be so behind in raw power, the thing would have to be clocked below even the lowest Switch clock.
It's 512 cores vs 1536, the former running at 1GHz to 1.6GHz and hovering more in the 1.3GHz range; as in, 1–1.6 TFLOPs, most of the time around 1.3 TFLOPs.
For the latter to be "significantly behind in raw power", it would be better for it to just be turned off if they're going to clock it that low.
Wasting energy at that point.
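For anyone who wants to sanity-check the numbers in this back-and-forth, here's a quick sketch using the standard FP32 estimate (clock × cores × 2 FMA ops per cycle). All clocks and core counts are the ones quoted in the thread, not official specs:

```python
# Rough FP32 throughput: clock (GHz) * shader cores * 2 ops (FMA) / 1000 = TFLOPs.
# All figures below are thread-quoted numbers, not official specs.
def tflops(ghz: float, cores: int) -> float:
    return ghz * cores * 2 / 1000

# Steam Deck: 512 shading units at its quoted low / typical / peak clocks.
deck = {ghz: tflops(ghz, 512) for ghz in (1.0, 1.3, 1.6)}
print(deck)  # ~1.02, ~1.33, ~1.64 TFLOPs

# Drake's rumored 1536 CUDA cores, even at the old Switch floor clock of 307 MHz.
drake_floor = tflops(0.307, 1536)
print(drake_floor)  # ~0.94 TFLOPs, i.e. only a bit behind the Deck's typical output
```

By this estimate, 1536 cores clocked anywhere above roughly 450MHz would already clear the Deck's typical ~1.33 TFLOPs.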
I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? If it is, wouldn't it be running below the lowest Switch clock?

> In order to be so behind in raw power, the thing would have to be clocked below even the lowest Switch clock.
Beyond it before RT and DLSS, for sure, given what we know about the prospective successor's GPU from the Nvidia leaks, although I don't find it appropriate to compare ARM-derivative devices to x86 ones. For all intents and purposes, the Steam Deck is, in my opinion, an over-engineered Game Gear without the charm, and every bit as bad in the battery life department. It's also almost 50% heavier than a Wii U GamePad, but without the bad press, and it doesn't have a docked mode. I would go even further and say that the primary reason anybody mentions the Steam Deck at all is that there are people who never came to terms with the existing Switch and are perpetually thirsty for so-called "MORE POWER pro-devices"; it just happened to appear around the same time the Switch's OLED Model was revealed, and quenched said thirst temporarily. I'm actually very confident that Codename Drake will go harder, because being in a position to receive competent PS5/XS ports is imperative to its success, and because much of the Switch's appeal was selling definitive portable experiences. These points are often lost in (online) discourse, but Nintendo and Nvidia understand this.

> In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
Hope you're doing better today.

> I've had an absolutely awful mental health day today but I just want to say thank you for the gifs on the last page. Y'all put a big smile on my face <3
Number crunching, Deck.

> In terms of performance, what do you guys expect for Drake in portable mode, something beyond or behind the Steam Deck?
They wouldn't make it 12 SMs if they couldn't; that's just a waste of silicon. On 8nm, it would just mean 306MHz would return as the base clock.

> I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? […]
Nintendo can kiss their business goodbye.

> it's Nintendo tho
The lowest clock is 307MHz for games. Even running at that clock, it would only really be a bit behind the Steam Deck.

> I know we've been over this multiple times, but assuming 8nm, is running all SMs in portable mode even feasible? […]
At what wattage? The 15W setting?

> The lowest clock is 307MHz for games. Even running at that clock, it would only really be a bit behind the Steam Deck.
I already bought a 1TB card last month during Prime days. I am all set.

> Probably will expect players to buy a microSD card to pick up the slack. […]
If I had to give a straight answer, 128GB of internal storage to start is probably a sensible route, coming from the Switch's 32GB at launch and 64GB in the OLED.
The "average" 1.3GHz GPU clock for the Steam Deck. It peaks at 1.6GHz (15W), and at its lowest it's at 1GHz (like 4–8W?), but it seems to average around 1.3GHz in use, based on what Thraktor noted.

> At what wattage? The 15W setting?
This only made me think about the following comparison, purely in terms of battery performance, release order and design changes:

> A pretty notable error is that the DSi is shown about a year too early. That, plus its slow worldwide rollout and later XL revision, puts it much closer to the 3DS than it appears here.
BOTW2's 2022 date was announced in June 2021, well over a year out from a possible late-2022 release. Plenty of wiggle room for things to move from late 2022 to early 2023 (or vice versa) without totally wrecking things.
The SD doesn't have a docked mode, but the SoC hits its assigned peak at 15W TDP; it won't go higher in a dock.

> BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS; at 1.6GHz, it would be around 4.9TF. As the Steam Deck doesn't have a docked mode, that's still very high for a portable device, especially relative to the 1.565GHz of the Xbox Series S, the 1.8GHz of the Xbox Series X, and the 2.2GHz of the PS5. 1536 CUDA cores is actually 20% more than the Xbox Series S's 20CU GPU (1280 shader cores), before clock speed, RT and DLSS. It is 2/3 of the PS5's 36CU GPU (2304 shader cores); to add another layer of perspective, 2/3 is the same ratio as XB1 to PS4 (XB1 had 768, PS4 had 1152). So the raw graphical grunt is a lot closer than some of you have anticipated. Codename Drake will have an A78C-class CPU with it, which was quite disruptive; A78C can clock up to 3.3GHz, but I suspect it will be around 2–2.3GHz, and I believe the sub-2GHz consensus estimates on here to be woefully conservative. Coming back to the GPU, even a 1GHz clock for docked performance (977MHz, to be precise) would put this in the 3TF realm before RT and DLSS. Of course, it isn't a 1-for-1 comparison, and there are other variables to consider, but the overall gist is that it certainly won't be "underpowered". I think the face-offs between it and the XSS could have some pleasant surprises.
A video on the SD with the TDP set to 3W. Bright Memory Infinite and Kingdom Come: Deliverance are tested; coincidentally, games which have Switch versions. Also, 3W is lower than the Switch's TDP, I think.
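Running the clock scenarios being thrown around for Drake through the same clock × cores × 2 estimate gives the figures under discussion. The 1536-core count is from the leak; every clock here is speculation:

```python
# FP32 estimate: TFLOPs = clock (GHz) * CUDA cores * 2 (FMA) / 1000.
# 1536 cores = rumored 12 SMs x 128 CUDA cores; all clocks are speculative.
CORES = 1536

def tflops(ghz: float, cores: int = CORES) -> float:
    return ghz * cores * 2 / 1000

for label, ghz in [("307 MHz (old portable floor)", 0.307),
                   ("977 MHz (docked, ~1 GHz)", 0.977),
                   ("1.3 GHz (Deck-like average)", 1.3),
                   ("1.6 GHz (Deck peak)", 1.6)]:
    print(f"{label}: ~{tflops(ghz):.1f} TFLOPs")  # ~0.9, ~3.0, ~4.0, ~4.9
```

Note that 1.6GHz actually works out to ~4.9TF rather than the 4.8TF sometimes quoted; the formula is exact, the clocks are not.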
Er, I'd take a few steps back with the CPU part.

> BTW, a 1.3GHz GPU clock speed for Codename Drake would mean we're talking about a prospective 4TF machine before RT & DLSS. […]
Recapping for the people who are lost in the thread:

Number crunching, Deck.
Graphical output, Drake.
Deck has impressive stuff but Drake has better RT, access to DLSS, should be more optimised, smaller file sizes and a few other benefits.
Why are you still assuming 2GHz on the GPU? Also, ARM doesn't get trounced as much as you think, but the CPU won't be clocked anywhere near as high as the Steam Deck or other systems.

> Recapping for the people who are lost in the thread: […]
NVIDIA's detailed documentation for the Orin SoC states "14 SMs" and "128 CUDA cores per SM" for the 32GB model (resulting in 1792 CUDA cores), and I believe 16 SMs for the 64GB model that has 2048 CUDA cores.
The rumored "Drake" SoC supposedly has 12 SMs, resulting in 1536 CUDA cores; however, there is also a compelling cut-down Orin spec on Wikipedia that cites 1024 CUDA cores (8 SMs).
The calculation being used to estimate FP32 performance in TFLOPs is as follows: GPU clock speed (GHz) * CUDA cores * 2 / 1000.
Assuming the best possible scenario for an Orin SoC Switch, we'd have: 2GHz * 1536 CUDA cores * 2 / 1000 ≈ 6.144 TFLOPs of FP32 performance, putting it above PS4 Pro and Xbox Series S performance.
At worst, we'd have: 2GHz * 1024 CUDA cores * 2 / 1000 ≈ 4.096 TFLOPs.
My opinion:
They'll use the 8 SM (1024 CUDA cores) cut-down chip and clock it a bit above or a bit under 1GHz. I'm expecting it to be under 1.5GHz.
Assuming 1.2GHz for docked mode, that would be ~2.4 TFLOPs, and ~500MHz for handheld mode would be ~1 TFLOP.
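Spelled out, that 8 SM scenario looks like this. The 1.2GHz docked and 500MHz handheld clocks are the assumptions above, not known specs:

```python
# 8 SM cut-down scenario; both clocks are assumptions from the post above.
cores = 8 * 128                    # 1024 CUDA cores
docked = 1.2 * cores * 2 / 1000    # ~2.46 TFLOPs at an assumed 1.2 GHz
handheld = 0.5 * cores * 2 / 1000  # ~1.02 TFLOPs at an assumed 500 MHz
print(docked, handheld)
```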
Now, a quick comparison with the Steam Deck:
At the maximum 15W setting, the calculation for the Steam Deck goes as follows:
(1.6GHz * 512 shading units * 2) / 1000 ≈ 1.6 TFLOPs (which is Valve's reported performance for the device).
However, Steam Deck fans fail to mention that that's the performance of the device at the maximum TDP limit, which, as everyone knows, results in the device draining too much power. Realistically, to make a valid perf-per-watt comparison, I'd use either 7W or 10W as the metric for the Deck.
Problem is, that's just one part of the story. I don't know exactly what the average clock speed for the Steam Deck's iGPU is, but after watching multiple YouTube videos of the device running different games, I've noticed that, like laptops, it clocks the CPU down to ~2GHz (from 3+GHz) to boost the GPU clock, and I've seen it use anywhere from 200MHz to 600MHz and even 1GHz, depending entirely on the game.
Now, assuming an in-between, I think it's safe to say 600MHz is a good baseline for handheld mode without compromising battery life too much.
So that leaves us at:
(0.6GHz * 512 shading units * 2) / 1000 ≈ 0.6 TFLOPs.
So, in summary, I don't know why people still think an Orin SoC Switch would be inferior to the Deck GPU-wise (even more so considering we aren't even taking into account DLSS and how much better NVIDIA's solution is than AMD's FSR in motion).
CPU-wise, yeah, of course a latest-gen x86 CPU trounces an ARM chip that isn't made by Apple in single- and multi-core. But given the technical info that's publicly available for ARM A78 SoCs, I don't think the difference is too relevant.
Also, FP32 TFLOPs/GFLOPs performance doesn't tell the whole story... people also forget about things like memory bandwidth (the lowest-spec Orin reaches 102 GB/s while the Steam Deck maxes out at 88 GB/s, both, mind you, on 128-bit buses).
At this point, I feel like I've read every post already... from multiple users.

> I feel like I've read this post from another user before already.
I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC.

> Why are you still assuming 2GHz on the GPU? […]
Yeah, but… this specific thing was done by a different user already, about using Orin.

> At this point, I feel like I've read every post already... from multiple users.
Where did you get 2 GHz for the GPU? The highest Orin AGX module doesn't go beyond 1.3GHz. We're more than likely not getting above that, and likely under it if anything (hopefully around 920 MHz).

> Recapping for the people who are lost in the thread: […]
You might have been looking at the CPU freq.

> I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC.
> Re-read my post, I don't think they'll clock the GPU at that (even though they could).
> Power draw will most likely be kept at a 15W maximum due to not only thermal constraints, but charging speeds as well.
You sure you're not mixing up the CPU clock speeds with the GPU's?

> I assumed 2GHz only in the best-case scenario, using the clock value reported on the Wikipedia entry for the Orin SoC. […]
Yeah, 'twas a mistake. I'll edit the post to better reflect the actual GPU performance.

> Where did you get 2 GHz for the GPU? […]
That is the first time I've heard the A78s are capable of 3GHz. I searched on Wikipedia, but I have yet to see any phones actually use it that high; they usually have it at 2.1GHz or around that range. As for the Orin CPU profiles, they don't go above ~2GHz, so we really shouldn't be expecting anything higher.
In the absolute best-case scenario for CPU, we could match or surpass the Steam Deck in some scenarios if we have 8 A78s (7 for gaming) vs the Steam Deck's 4 CPU cores at 3.5GHz (3 for gaming). That would be roughly half the speed of Xbox Series S. 7 cores at 1.5GHz should match the SD. I dunno about multithread performance, since AMD CPUs specialize in that.
Where did you get that info from? Also, the cut-down Orin SoC has 10, 15 and 25W configurations; assuming 15W for docked mode and a bit under 10W for portable mode, that would match the Switch in TDP alone while having all the performance benefits of the new arch.

> Going by Orin, a 4SM/4-CPU system could be as power-hungry as the Switch without doing much to the clocks, so I can only guess they are using something a lot more efficient.
The Orin spec sheets list clocks for everything.

> Where did you get that info from? […]
> Where did you get that info from? Also, the cut-down Orin SoC has 10, 15 and 25W configurations […]
> The Orin spec sheets list clocks for everything.
Looks like I'll have to edit that post again
Anyhow, that still doesn't answer my question: why do you think a theoretical 4-core, 4SM system (512 CUDA cores, double the Switch's) would even be relevant to this conversation? And even if it were a thing, at 10W or under it would still outperform the Switch anyway.
Nintendo dreams of traced rays.

> If being low-power, small and cost-effective was the only goal (primarily the first two, as the last is a byproduct of them), then they would have paid for something fit for that; 4SM + 4 cores would have been more than enough. It offers new features, DLSS, better CPU and GPU performance, and keeps it affordable and manageable in terms of what they would push graphically.
> AKA, it would have been a modest enough jump that they could maintain their current development scope without increasing costs so much, especially with the rising development costs that they've been keenly aware of since the GameCube and have been trying their hardest to rein in.
> But it already has 12 SMs; this isn't designed to be small whatsoever. 12 SMs also draws a lot more power than 4 SMs, and 12 SMs allows for a much wider and larger graphical throughput from the GPU.
> Nintendo is very good at working within their means, but 12 SMs is more than anyone expected. They don't expect a tepid and manageable increase; they expect a large increase that houses their first-party software for quite a while (and their partners' too).
> Keep in mind that Nintendo's first-party offerings have brilliant art that scales very well, and this art helps overcome the shortcomings of the hardware because they're less visible/less apparent. They don't need some super-big GPU to make a good-looking game, and even then, the Switch's issue isn't so much the GPU as everything else about the system, such as the CPU and the memory bandwidth.
> They could afford to go smaller and be fine, so long as the rest of it is good enough. They don't chase the triple-A Western eye candy. However, clearly we see that they are doing more than necessary, in my opinion at least. I'm not sure what they have in mind that they were fine paying for such a big increase, but I will keep an eye on what they do.
> Hell, 6 SMs was probably even more than enough.
Despite you realizing that Nintendo doesn't go for even mid-range hardware but really low-end, there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC.

> If being low-power, small and cost-effective was the only goal […]
You're still using Orin as a representative of what Nintendo would use when it isn't. Orin isn't made for gaming, and Nvidia wouldn't sell it to Nintendo for such.

> Despite you realizing that Nintendo doesn't go for even mid-range hardware but really low-end, there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC.
Also, at 4 SMs, we'd be looking at docked performance of about 0.6 TFLOPs, which is less than double what the current Switch is capable of.
Even if we take DLSS into account, I don't see why they'd go with such a low spec considering that:
- The $400 pricing for the Jetson Orin NX 8GB dev kit isn't representative of what Nintendo would actually be paying NVIDIA; the kit contains built-in memory, the board itself, a case, I/O, etc. Nintendo is only after the chip itself, and they'll likely source the memory, storage and the device's board separately.
- Nintendo would make a deal with NVIDIA, so they wouldn't even pay the price of a dev kit anyway.
But sure, let's go by your logic that they won't be using 12 SMs. The lower-spec SoC that I mentioned, which also has data available on Wikipedia, is an 8 SM chip. Also, all of those SoCs have 10, 15, 20 and 25W TDP targets, and we know the current-gen Switch uses about 15W in docked mode and a bit under 10W in handheld (~7W or so). So why are you assuming it would draw more than that when the target TDPs are already published?
I am, thank you <3

> Hope you're doing better today.
Of course there wouldn't be; Drake isn't like the other Orin configs either. It's an offshoot design for gaming; the others are for automotive.

> there doesn't seem to be any actual documentation or spec info available online that would reinforce your claims of a 4-core, 4SM Orin SoC.
The Series S is the best thing to happen to Drake.

> Very reasonable, and still exciting. Looks like they're positioning Drake to be to PS5 what the TX1 was to PS4: close enough for some "impossible ports".