• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.
  • Do you have audio editing experience and want to help out with the Famiboards Discussion Club Podcast? If so, we're looking for help and would love to have you on the team! Just let us know in the Podcast Thread if you are interested!

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

So you believe that Qualcomm's outright lying to NotebookCheck, XDA Developers, Hardwareluxx, etc.?

Also, the CPU diagram of the Snapdragon 8cx Gen 3 shown by Qualcomm implies a heterogeneous CPU configuration is being used rather than a homogeneous one (which an all-Cortex-A78C setup would be).

An argument can be made that the 4 "Efficiency" cores used in the Snapdragon 8cx Gen 3 are Cortex-A55 cores rather than Cortex-A78 cores, considering how Qualcomm has so far applied the "Efficiency" label to Cortex-A55 cores.

I wish Qualcomm would actually be more transparent when hardware details are concerned.


Qualcomm has never used the "Prime" nomenclature to describe the Cortex-A78 though. So far, Qualcomm has reserved "Prime" for the Cortex-X1. And I'm pointing this out because Qualcomm described the Snapdragon 8cx Gen 3 as having 4 "Prime" cores and 4 "Efficiency" cores.

Anyway, the Kirin 9000's Geekbench 5 single-core scores on average seem similar to the Exynos 1080's Geekbench 5 single-core scores.

~
Edit: The FTC has sued to block Nvidia's attempted acquisition of Arm. I think the deal is dead.
Qualcomm described the CPU as having 4 Prime/performance cores and 4 efficiency cores and didn't say whether the Prime/performance cores were X1 cores or A78 cores, only that they would be clocked at 3.0GHz. All those media outlets are interpreting the CPU diagram of the Snapdragon 8cx as a heterogeneous CPU configuration without specifically knowing whether the big cores were X1 cores. So yes, I'm questioning all those media outlets. Leakers have repeatedly said that it would be an 8*A78 core configuration, and the 1.4x ST performance jump over the last-gen 8cx (A76) would suggest that it's still using A-series cores instead of an X-series one.
 
I’m not really sure what you mean here, can you clarify?

They list 4 perf cores and 4 efficiency cores in the slides from QC. Even have the perf cores as big cores and the efficiency cores as small in the diagram.

They are heavily implying it is heterogeneous, not homogenous, in configuration.
You cannot compare an A78 with 512 KB of L2$ and its own power rail (shared with the 3 other performance cores on the S888, or a dedicated A78 rail plus an exclusive power rail for the X1 on the E2100) with an A78 with less L2$/L3$/system cache from a smaller and cheaper SoC. As an example, the Kirin 9000 was on par with the S888/E2100 using 4*A77 on TSMC's 5 nm, with one of them being the big core @3.16GHz and the other 3 being middle cores @2.54GHz.

ARM cores must be compared with similarly configured ARM cores. It would be more relevant to compare the A78 core from an S888 with the middle A77 core from the K9000 than with any performance A78 core from the 780G.
 
Last edited:
Qualcomm described the CPU as having 4 Prime/performance cores and 4 efficiency cores and didn't say whether the Prime/performance cores were X1 cores or A78 cores, only that they would be clocked at 3.0GHz. All those media outlets are interpreting the CPU diagram of the Snapdragon 8cx as a heterogeneous CPU configuration without specifically knowing whether the big cores were X1 cores. So yes, I'm questioning all those media outlets. Leakers have repeatedly said that it would be an 8*A78 core configuration, and the 1.4x ST performance jump over the last-gen 8cx (A76) would suggest that it's still using A-series cores instead of an X-series one.
Are the several leakers you've mentioned reliable ones? Roland Quandt, whom I mentioned, never specified which CPU core Qualcomm's using for the Snapdragon 8cx Gen 3 (Cortex-X1? Cortex-A78?). He only mentioned Gold+ cores and Gold cores.
 
All we got today was "Kryo" or "Adreno". They didn't say 4*X1 and 4*A78. They said 4 big cores and 4 efficient cores. Which doesn't exclude 4*A78 with more L2$ and power budget plus 4*A78 with less cache and power budget.
 

The only reliable information concerning the 4 big cores being X1s comes from an unofficial engineer statement passed to a journalist. Not even Ian Cutress knows for sure if it's really running X1 cores.
 

The only reliable information concerning the 4 big cores being X1s comes from an unofficial engineer statement passed to a journalist. Not even Ian Cutress knows for sure if it's really running X1 cores.

Dr Cutress hasn’t commented on that at all really, on whether it is this or that.

But anyway,

It would be very suspicious if multiple sources got the same information at the same event and the only thing to conclude were "it's not confirmed, and it's this X setup here, not this Y setup".
 
Dr Cutress hasn’t commented on that at all really, on whether it is this or that.

But anyway,

It would be very suspicious if multiple sources got the same information at the same event and the only thing to conclude were "it's not confirmed, and it's this X setup here, not this Y setup".

Dr. Cutress' response is literally below the tweet I quoted.
 

Yeah, I realised that now. But anyway, Dr Ian Cutress only said he forgot to ask the engineer in question, not that he was unsure.

So no one knows for sure the real 8cx Gen 3 CPU configuration. That's a bit frustrating considering it would be the most powerful SoC in QC's portfolio (but not as frustrating as knowing nothing about their new gaming Gen 3x chip).
Qualcomm's lack of transparency is so bad that Charlie Demerjian is shitting all over Qualcomm (and I think rightfully so). And I hope that Charlie Demerjian isn't implying here that Qualcomm plans on gutting Nuvia as well.
 
Yeah, I realised that now. But anyway, Dr Ian Cutress only said he forgot to ask the engineer in question, not that he was unsure.


Qualcomm's lack of transparency is so bad that Charlie Demerjian is shitting all over Qualcomm (and I think rightfully so). And I hope that Charlie Demerjian isn't implying here that Qualcomm plans on gutting Nuvia as well.
I find Qualcomm's position arrogant. They have had a monopoly on 4G and now 5G chips for years. They will probably be beaten by MediaTek, who took Huawei's place in TSMC's cutting-edge capacity. They complain about the lack of performance of Samsung's process but couldn't have had the volume to stay ahead by sourcing only from TSMC. I think the 8 Gen 1+ will only be supplied in small quantities (K9000-like volume, maybe close to a paper launch) in order to claim the Android crown.
 
I find Qualcomm's position arrogant. They have had a monopoly on 4G and now 5G chips for years. They will probably be beaten by MediaTek, who took Huawei's place in TSMC's cutting-edge capacity. They complain about the lack of performance of Samsung's process but couldn't have had the volume to stay ahead by sourcing only from TSMC. I think the 8 Gen 1+ will only be supplied in small quantities (K9000-like volume, maybe close to a paper launch) in order to claim the Android crown.
It’s off-topic, but it’s almost enough to make me glad major smartphone manufacturers are looking at ways to cut them out of 4G/5G chips in their devices.
 
I find Qualcomm's position arrogant. They have had a monopoly on 4G and now 5G chips for years. They will probably be beaten by MediaTek, who took Huawei's place in TSMC's cutting-edge capacity. They complain about the lack of performance of Samsung's process but couldn't have had the volume to stay ahead by sourcing only from TSMC. I think the 8 Gen 1+ will only be supplied in small quantities (K9000-like volume, maybe close to a paper launch) in order to claim the Android crown.
Agreed. I think Qualcomm is also becoming complacent, similar to how Intel was becoming complacent. (Hence why I said Qualcomm's the Intel amongst the Arm licensees.) And I don't trust Qualcomm's intent when Qualcomm offered to buy Arm alongside a consortium of Arm licensees if Softbank sells Arm via IPO.

It’s off-topic, but it’s almost enough to make me glad major smartphone manufacturers are looking at ways to cut them out of 4G/5G chips in their devices.
Unfortunately, Qualcomm still has a stronghold in the US due to Qualcomm's patents on 4G, 5G, etc.

~

Edit: Before I forget, I love Jensen Huang's comments about Cristiano Amon. :LOL:



Anyway, SiFive announced the P650 processor today, which apparently outperforms the Cortex-A77. So in scenarios where Nvidia's attempted acquisition of Arm is blocked, Nvidia's not willing to licence Cortex-A cores or design custom Arm based CPUs tailored for Nintendo's needs in the future, and Nintendo doesn't really care about backwards compatibility for the console after the DLSS model*, I suppose RISC-V is an option for Nintendo. (Keep in mind that I don't think the likelihood is high.)
 
You cannot compare an A78 with 512 KB of L2$ and its own power rail (shared with the 3 other performance cores on the S888, or a dedicated A78 rail plus an exclusive power rail for the X1 on the E2100) with an A78 with less L2$/L3$/system cache from a smaller and cheaper SoC. As an example, the Kirin 9000 was on par with the S888/E2100 using 4*A77 on TSMC's 5 nm, with one of them being the big core @3.16GHz and the other 3 being middle cores @2.54GHz.

ARM cores must be compared with similarly configured ARM cores. It would be more relevant to compare the A78 core from an S888 with the middle A77 core from the K9000 than with any performance A78 core from the 780G.
The performance difference, despite that, is not that much higher for the A78s that are prime cores or sustained-perf cores in a single setup. Doing the math, they deviate by a few percent, but not by as much as 20%, unlike the X1, which can deviate by +20% in single-threaded performance. The Exynos 1080 and 2100 have a larger difference in single-threaded performance and have a more similar setup. Even the Dimensity 1200, clocked as high as 3GHz (the cap), doesn't outdo the prime X1 in single-threaded. And it's closer in perf delta to the prime A78 despite having a unique configuration.

The perf that was leaked on Geekbench does not align with the A78; even with a setup similar to the Dimensity 1200, the math doesn't check out. It aligns with the single-threaded performance of the X1.


And even if we take into account the touted perf of +85%, that is over the 8cx Gen 2. This is the 8cx Gen 2:

https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=8cx+gen+2


This is the 85% increase, which aligns closer to the leaked Geekbench (off by a few hundred):


https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=8cx+gen+3

These multi core scores are about 60-83% higher than the Gen 2

And the 8CX G2 was also a 4+4 big+little setup :p


Regardless, the 8CX Gen 3 isn't homogeneous, it's heterogeneous
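The percentage reasoning above can be sketched in a few lines of Python. The numbers here are placeholders, not the actual Geekbench 5 scores (those are behind the search links above), so this only illustrates the arithmetic of checking a leaked score against a touted uplift:

```python
# Placeholder Geekbench 5 scores for illustration only; the real 8cx Gen 2
# and Gen 3 results are on browser.geekbench.com.
gen2_mt = 3100                  # hypothetical 8cx Gen 2 multi-core score

# Qualcomm's touted uplift (~85%) is measured over the 8cx Gen 2:
projected_mt = gen2_mt * 1.85   # what an 85% jump would look like

# Given a leaked Gen 3 score, compute the implied uplift over Gen 2:
leaked_mt = 5400                # hypothetical leaked multi-core score
uplift = (leaked_mt - gen2_mt) / gen2_mt

print(f"projected Gen 3 MT: {projected_mt:.0f}")
print(f"implied uplift of leak: {uplift:.0%}")
```

With these made-up numbers the leak would land inside a 60-83% band; plugging in the real scores is the same calculation.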
 
Anyway, SiFive announced the P650 processor today, which apparently outperforms the Cortex-A77. So in scenarios where Nvidia's attempted acquisition of Arm is blocked, Nvidia's not willing to licence Cortex-A cores or design custom Arm based CPUs tailored for Nintendo's needs in the future, and Nintendo doesn't really care about backwards compatibility for the console after the DLSS model*, I suppose RISC-V is an option for Nintendo. (Keep in mind that I don't think the likelihood is high.)
So are we back to "Nintendo being stuck with a crappy chip" scenario? The only way I see this is Nintendo designing their own "SoC" with an Nvidia GPU and a custom ARM block. I really doubt they would want to jump to RISC as the nightmares of N64 development would no doubt still be in the minds of Nintendo's veterans.
 
So are we back to "Nintendo being stuck with a crappy chip" scenario? The only way I see this is Nintendo designing their own "SoC" with an Nvidia GPU and a custom ARM block.
Probably not. I was simply speculating on what Nintendo could do in the absolute worst case scenario where Nvidia's attempt to acquire Arm is blocked and Nvidia's not willing to licence Cortex-A cores or design custom Arm based CPUs tailored for Nintendo's needs in the future, assuming Nintendo doesn't care about backwards compatibility for consoles releasing after the DLSS model* (not including refreshes). And I think the likelihood for that is very low.

I don't think Nintendo has an architectural licence from Arm, so I don't think Nintendo can design a custom Arm based CPU. And I'm not sure if Nintendo has a core licence from Arm, which would allow Nintendo to use Arm's Cortex-A CPU designs.
 
So are we back to "Nintendo being stuck with a crappy chip" scenario? The only way I see this is Nintendo designing their own "SoC" with an Nvidia GPU and a custom ARM block. I really doubt they would want to jump to RISC as the nightmares of N64 development would no doubt still be in the minds of Nintendo's veterans.
You're confusing RISC-V, which is a specific ISA, with RISC as a general design philosophy, which Nintendo has never stopped using; their CPUs have (at least ostensibly) adhered to it all along.
 
I wonder if Nintendo has a licence(s) for Armv8 and/or Armv9 CPU designs. I don't think Nintendo has a licence for Armv8 CPU designs since Nvidia already has a licence for Armv8 CPU designs.
 
Probably not. I was simply speculating on what Nintendo could do in the absolute worst case scenario where Nvidia's attempt to acquire Arm is blocked and Nvidia's not willing to licence Cortex-A cores or design custom Arm based CPUs tailored for Nintendo's needs in the future, assuming Nintendo doesn't care about backwards compatibility for consoles releasing after the DLSS model* (not including refreshes). And I think the likelihood for that is very low.

I don't think Nintendo has an architectural licence from Arm, so I don't think Nintendo can design a custom Arm based CPU. And I'm not sure if Nintendo has a core licence from Arm, which would allow Nintendo to use Arm's Cortex-A CPU designs.
It's a lot of ifs. Not faulting you for bringing them up as possibilities, but so long as Nintendo is willing to buy derivative designs of their SoCs in the 100mil+ range every 5-7 years, I think Nvidia will do whatever Nintendo asks. It's too lucrative a business arrangement to jeopardize without proper cause. The only way it would happen is if Nintendo could be convinced to abandon ARM CPUs, and I don't see that being the case any time soon.
 
re: Nvidia's future with ARM, I think there's a couple things that are worth keeping in mind:
  1. Nintendo is almost certainly one of the biggest purchasers of Tegra chips at their power budget, if not in general. Nvidia is not going to be forcing them into a CPU ISA they don't want.
  2. Hardware moves slowly. By the time this concern even has a chance to matter, it won't be for Dane, or likely even the successor to Dane, but the system after that.
RISC-V for Nintendo is probably broadly in the same state as it is for the rest of the industry. Possibly useful for microcontrollers or other similar scenarios, but otherwise mostly a plan B if something goes seriously wrong with ARM.
 
I think it depends on if Nintendo wants to continue using Arm's Cortex-A designs for future consoles, or if Nintendo's willing to use custom Arm based CPU designs from companies Nintendo has worked with (e.g. Nvidia, AMD, etc.).

Qualcomm offered to buy Arm alongside a consortium of Arm licensees if Softbank sells Arm via IPO instead of selling Arm to Nvidia. And I think that could be very problematic if Nintendo wants to continue using Arm's Cortex-A designs for future consoles. In the scenario where Qualcomm and a consortium of Arm licensees buy Arm from Softbank, they could vote to stop R&D funding for Arm's future Cortex-A designs. That seems especially plausible with Qualcomm acquiring Nuvia, which has Gerard Williams III at the helm (the person who designed Apple's Arm based CPUs from Cyclone to Firestorm, as well as the performance CPU cores in the Apple M1 Pro and M1 Max), and with Samsung rumoured to be hiring former Apple and AMD engineers to help design custom Arm based CPUs for future Exynos SoCs. Qualcomm's as notoriously infamous as, if not more infamous than, Nvidia for anti-competitive behaviour, at least in the US and the UK, for good reason.
 
I think it depends on if Nintendo wants to continue using Arm's Cortex-A designs for future consoles, or if Nintendo's willing to use custom Arm based CPU designs from companies Nintendo has worked with (e.g. Nvidia, AMD, etc.).

Qualcomm offered to buy Arm alongside a consortium of Arm licensees if Softbank sells Arm via IPO instead of selling Arm to Nvidia. And I think that could be very problematic if Nintendo wants to continue using Arm's Cortex-A designs for future consoles. In the scenario where Qualcomm and a consortium of Arm licensees buy Arm from Softbank, they could vote to stop R&D funding for Arm's future Cortex-A designs. That seems especially plausible with Qualcomm acquiring Nuvia, which has Gerard Williams III at the helm (the person who designed Apple's Arm based CPUs from Cyclone to Firestorm, as well as the performance CPU cores in the Apple M1 Pro and M1 Max), and with Samsung rumoured to be hiring former Apple and AMD engineers to help design custom Arm based CPUs for future Exynos SoCs. Qualcomm's as notoriously infamous as, if not more infamous than, Nvidia for anti-competitive behaviour, at least in the US and the UK, for good reason.
I honestly wonder how companies like MediaTek, Rockchip, etc. feel about the potential of ARM going IPO.

TBH, ARM going IPO may have a slight chance of working out if MediaTek gets a majority or a big enough stake to continue CPU development for their own use.

If not however, I do wonder if NVIDIA and MediaTek's partnership may grow beyond the current "Laptop only" plan they have ATM
 
I honestly wonder how companies like MediaTek, Rockchip, etc. feel about the potential of ARM going IPO.

TBH, ARM going IPO may have a slight chance of working out if MediaTek gets a majority or a big enough stake to continue CPU development for their own use.

If not however, I do wonder if NVIDIA and MediaTek's partnership may grow beyond the current "Laptop only" plan they have ATM
Any type of majority ownership with a vested interest is problematic, because it's a conflict of interest to invest in development for themselves while making it available for everyone else. Anyone that wants to take on development can just take an architectural licence and keep the goods for themselves. I guess anyone left without the money and talent (and time) to build up a team can agree to join in on the shared ARM development pool, but I'm not sure that's a huge difference from the current situation other than costing them more.

As for Nvidia and MediaTek, in conjunction with the ARM deal (whether it goes through or not), I've thought for a while the ultimate goal would be licensed Nvidia GPU IP for third-party SoCs... which could be problematic competition-wise if they also owned ARM. While most licensees (like MediaTek!) would get better GPU options by default, AMD would be the one that'd really be screwed if the default GPU IP for most of the world's new devices was Nvidia's.
 
Back in the old place, @Dakhil posted this PixArt patent of an analog stick/circle pad hybrid that uses optical sensors to contactlessly detect motion. This in theory may stop drift from developing. I'm bringing this up because GuliKit announced (but hasn't yet released) a new KingKong 2 controller with electromagnetic sticks that are contactless to prevent drifting.

The optical and magnetic sensors are both potential solutions to the analog stick drifting issue. I'm sure that Nintendo has them in the R&D lab. Whether they use them in future products is 🤷‍♂️.
 
So are we back to "Nintendo being stuck with a crappy chip" scenario? The only way I see this is Nintendo designing their own "SoC" with an Nvidia GPU and a custom ARM block. I really doubt they would want to jump to RISC as the nighmares of N64 development would no doubt still be in the minds of Nintendo's alumni veterans.
RISC is a principle behind one family of instruction set architectures with which a CPU can be designed. It stands for Reduced Instruction Set Computer, meaning it's a "design rule" that the instructions a CPU implements should be as simple, small, and non-complex as they can be, and you then combine them to build your program. Nintendo, as far as I know, has used that design forever in their handhelds and in consoles since the N64, and the PlayStation 1 and Saturn used RISC CPUs as well. So did the PS3 and Xbox 360. ARM even stands for Advanced RISC Machine, meaning it's a computer that abides by RISC "rules".
RISC-V is a very specific RISC instruction set architecture that is kind of free real estate: anyone can use it for their CPU without paying licence fees. So it's a concrete thing based on that RISC principle. ARM is the same kind of thing, except for the licences.
 
RISC is a principle behind one family of instruction set architectures with which a CPU can be designed. It stands for Reduced Instruction Set Computer, meaning it's a "design rule" that the instructions a CPU implements should be as simple, small, and non-complex as they can be, and you then combine them to build your program. Nintendo, as far as I know, has used that design forever in their handhelds and in consoles since the N64, and the PlayStation 1 and Saturn used RISC CPUs as well. So did the PS3 and Xbox 360. ARM even stands for Advanced RISC Machine, meaning it's a computer that abides by RISC "rules".
RISC-V is a very specific RISC instruction set architecture that is kind of free real estate: anyone can use it for their CPU without paying licence fees. So it's a concrete thing based on that RISC principle. ARM is the same kind of thing, except for the licences.
the RISC vs CISC debate isn't particularly relevant these days from what I read. it's not a limiting factor anymore at least
 
the RISC vs CISC debate isn't particularly relevant these days from what I read. it's not a limiting factor anymore at least
Yeah, it's not super relevant in a modern context. There's a lot of (really pedantic) debate to be had over whether CPUs are even really "pure" RISC or CISC anymore.
yeah, it's not at all. RISC almost disappeared from desktop grade CPUs lol. Just wanted to quip on the few things I know lmao.
Desktop and servers are basically the only place where a CISC ISA is still dominant, and even that is at risk as ARM continues its ascent.
 
yeah, it's not at all. RISC almost disappeared from desktop grade CPUs lol. Just wanted to quip on the few things I know lmao.
Sorta. Basically RISC-architecture CPUs "won" in terms of the clear superiority of the design, but x86 "won" the market, and it is a CISC ISA*. The way to square that circle is that the core of x86 CPUs is basically a RISC design, with a hardware decoder wrapper that translates the outer, complex ISA into an internal, compact ISA.

This is one of the reasons that x86-descended CPUs are stuck in power/heat hell forever relative to something like ARM, even as ARM (and the proliferation of fixed-function blocks) gets more and more CISC-like over time: they've got this (incredibly smart and sophisticated) chunk of silicon before you get to the "real" CPU. This is also how Intel was able to get weird stuff like hyperthreading/SMT so early in the game.

One of the tools that the 5-7th generation consoles had was the lack of backwards compat at the same time that CPU designs proliferated and math-coprocessors were evolving into GPUs rapidly. You could build really weird architectures, with no need to carry bad or aging decisions in silicon to the next generation. And that's part of what made Power/PowerPC based CPUs so compelling in that era - here were these RISC CPUs designed to be customized and special purpose.

*a set of acronyms that make sense together but if expanded sound insane
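To make the "RISC core behind a CISC front end" idea concrete, here's a toy sketch in Python. This is not how a real x86 decoder works (and the instruction strings are made up); it only illustrates the shape of the translation: one complex outer instruction gets cracked into several simple internal micro-ops, while already-simple instructions pass through.

```python
# Toy sketch of CISC-to-micro-op "cracking". Real decoders operate on
# binary encodings and are vastly more sophisticated; the strings here
# are hypothetical and purely illustrative.
def crack(instr: str) -> list[str]:
    # A CISC-style "add [addr], r1" touches memory and the ALU in one
    # instruction, but internally it's three RISC-like steps:
    if instr == "add [addr], r1":
        return ["load tmp, [addr]",   # read the memory operand
                "add tmp, tmp, r1",   # simple register-only ALU op
                "store [addr], tmp"]  # write the result back
    return [instr]                    # simple ops pass through unchanged

print(crack("add [addr], r1"))
print(crack("mov r1, r2"))
```

The point of the sketch: everything past the decoder only ever sees the simple internal ops, which is why the execution core can be organized like a RISC machine.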
 
The problem the RISC-architecture CPUs had was they couldn't build up enough market share to justify the ongoing development costs
Only in a few specific markets, which are starting to slip towards more traditionally RISC designs. RISC is very dominant overall, especially in devices that run off a battery.
 
Any upcoming milestones that inform when we can expect more information on Dane? I’m guessing the rest of the month will be quiet. Any educated guesses when tape out will happen and we can get more rumors?
 
I don't think Nintendo would go for a 4+4 clock setup. Is it possible to have 7 cores at one clock and 1 core at a different clock if they're restricted to a 4-core cluster?
Theoretically yes, going by the Implementation options in the Arm DynamIQ Shared Unit MP135 Technical Reference Manual (p. A1-23).

But thinking further, I think there's a possibility Qualcomm's using 2 clusters of Cortex-A78 cores for the rumoured SC7280, considering that the Cortex-A78 (not the Cortex-A78C) supports a max of 4 CPU cores per cluster. And interestingly, in the Arm DynamIQ Shared Unit Technical Reference Manual r4p1, Arm mentions that the maximum permitted value of NUM_BIG_CORES is 4.

Any upcoming milestones that inform when we can expect more information on Dane? I’m guessing the rest of the month will be quiet. Any educated guesses when tape out will happen and we can get more rumors?
I also don't expect any hardware news/rumours this month, outside of potentially the launch window for the DLSS model* if Nintendo plans on showing a trailer of the Breath of the Wild sequel at The Game Awards 2021. And I think the latest Dane would be taped out is April 2022, assuming Nintendo plans to launch the DLSS model* in October 2022.
 
For the “fun” of it, but here are some low end specs I made up:

4 SMs (512 GPU cores), 500MHz handheld, 1000MHz docked. So 512 GFLOPS portable (not stronger than the current Switch docked in actual perf, about the same) and 1 TFLOP docked (weaker than the XB1 when docked, before DLSS)


4 A78 cores (1 clocked lower for the OS) at 2GHz. No A55s.

6GB of LPDDR5 RAM

51GB/s docked, lower in handheld. 64-bit bus.

LCD screen (yes, ditching the OLED for “???” reasons)

128GB UFS 2.1 internal storage. (Or same 64GB eMMC of the current switch to go even lower)


This is the low end. Like the lowest. The floor.
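As a sanity check, the GFLOPS and bandwidth figures in that floor spec fall out of standard back-of-the-envelope formulas. The specific assumptions below (128 FP32 cores per Ampere SM, an FMA counted as 2 ops per core per cycle, LPDDR5-6400 on the 64-bit bus) are mine, chosen to reproduce the numbers above:

```python
# Back-of-the-envelope math for the floor spec above.
cuda_cores = 4 * 128                 # 4 SMs, 128 FP32 cores per Ampere SM

def gflops(cores: int, clock_ghz: float) -> float:
    """Peak FP32 throughput; an FMA counts as 2 floating-point ops."""
    return cores * 2 * clock_ghz

print(gflops(cuda_cores, 0.5))       # handheld @ 500MHz: 512.0 GFLOPS
print(gflops(cuda_cores, 1.0))       # docked @ 1000MHz: 1024.0 GFLOPS (~1 TFLOP)

# Assumed LPDDR5-6400 on a 64-bit bus: 6400 MT/s x 64 bits / 8 bits per byte
bandwidth_gb_s = 6400e6 * 64 / 8 / 1e9
print(bandwidth_gb_s)                # 51.2 GB/s, matching the "51GB/s docked"
```

Note these are theoretical peaks; sustained performance in games lands below them, which is why the spec hedges with "in actual perf".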

If you are speculating and somehow end up lower than this, you're doing more than necessary, going above and beyond to be negative. I get being cautious, since these things aren't clear cut and are more fluid, but there's a limit to being cautious while staying realistic. Being cautious and somehow going lower than this is unrealistic and unreasonable.

Take into account it’s supposed to be a derivative of an existing chip. A lot of the design is already done. They will customize it to what they need, as in remove what they don’t need.

ORIN and by extension ORIN NX are the chips being derived from here. But these are automotive-focused SoCs, so what can we reasonably assume is being removed? The automotive parts, of course.

However, what should be considered is that this will most likely be on an 8nm process, because ORIN has only been reported as 8nm; nothing official has indicated that ORIN (and by extension ORIN NX) is on anything but 8nm. Not 7nm, not 5nm, not even 4nm. There is a power draw to consider for a portable device like this.

The CPU cores in ORIN+NX have a max frequency of 2GHz. The GPU has a max frequency of 1GHz.

I think that, if we are to discuss this thing, while we can be optimistic or pessimistic, there is a ceiling and a floor before it becomes just unrealistic for the other reasons.

All of this should be considered against the fact that it will need to run on a battery, and have a screen, RAM, BT, WiFi, and other parts that drain battery too.


________________________________________




And, we can also discuss other things that are not how much power it will have. For example, the controller stick that FWD-BWD mentioned earlier. Or RT in games (when applicable). Or RT accelerated sound. Or maybe the sensor integrated into the dock for the return of the Wiimote….. or codecs that are useful in a device like this, etc.

For those that read, the ones that don’t comment but want to, you are encouraged to talk about other things too!

The soc is only part of the piece, there’s more to it.


Personally I’d like the pro controller to have an update 🗿, but I love its long battery life.


And don't hold yourself to a 300-dollar price tag; assume it'll be 400 bucks and give yourself more elbow room with the specs that aren't SoC-related, as that's a separate thing to consider

;)
 
I'm really skeptical about Dane being made on 8 nm. We haven't had any official data concerning Orin S, and a lot of mobile chips will have transitioned to nodes with extensive use of EUV, with only entry-level propositions still being made on 8 nm.

Nintendo having postponed the use of Dane, separately from the OLED components and form factor, could be an indication of a change of SoC and thus a better node. Especially when fall 2022 will see the emergence of the first 3 nm chips, with 4/5/6 nm getting cheaper.
However, what should be considered is that this will most likely be on an 8nm process, because ORIN has only been reported as 8nm; nothing official has indicated that ORIN (and by extension ORIN NX) is on anything but 8nm. Not 7nm, not 5nm, not even 4nm.
There would have already been talk about Dane being fabricated on a 7 nm** process node if that were actually the case. But that hasn't happened. In fact, kopite7kimi has consistently said since the beginning of this year that Orin uses Samsung's 8N process node (here, here, and here). kopite7kimi also said that Dane's a custom variant of Orin. And I don't expect Dane to be heavily different from Orin, outside of using the Cortex-A78C instead of the Cortex-A78AE and removing all of the hardware features Nintendo has no use for (e.g. the safety island, programmable vision accelerators (PVA), etc.).

Also, securing enough capacity for any particular process node realistically requires companies to make plans a couple of years in advance. That's why there's been talk about Hopper being fabricated on TSMC's 5 nm** process node for over a year now.

I think the implied "Nintendo is 100% doomed if Nintendo doesn't use the most cutting-edge process node" sentiment from some people is getting really old. Process nodes alone aren't enough to ensure optimal performance and power efficiency; architecture design is at least equally important.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

