
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I’ve never heard of anyone having final silicon/close to final silicon devkits nearly 4 years out.

Something must have gone horribly wrong for that to happen.
Is it possible that Nintendo is trying a different approach to their product launch? If they gave dev kits out so many years ago that gives enough time for third parties to have completed games ready to sell at the start of the new hardware's lifecycle. Maybe there's a business case to be made for that strategy? They'd be confident in Nintendo this time given the Switch's success.
 
The main issue with this theory from what I can tell is the die size. Orin has a ton of automotive components that would be entirely unnecessary on a gaming console and a waste of silicon which means a waste of money. Not to mention that this thing is like 4x the size of the TX1 and unlikely to fit in any similar looking form factor.
What extra components does the SoC itself have, though? No strong opinion on the binning theory personally, but board components on Orin AGX or NX wouldn't be a barrier to binning the SoC.
 
What extra components does the SoC itself have, though? No strong opinion on the binning theory personally, but board components on Orin AGX or NX wouldn't be a barrier to binning the SoC.
Didn't it have extra camera vision accelerators and deep learning accelerators on the SoC itself? I thought there was more than that too; I'll see if I can find the render showing its architecture when I get a chance.

Edit: it was this, though someone had annotated it further, pointing out the components not necessary for gaming:

[Image: annotated die shot of the Nvidia Orin SoC]



This lists what's on the SoC; the two accelerators shouldn't be needed for gaming. No idea how much space that cuts out, but I assumed it would be fairly substantial, along with 4 fewer SMs and CPU cores.
 
Is it possible that Nintendo is trying a different approach to their product launch? If they gave dev kits out so many years ago that gives enough time for third parties to have completed games ready to sell at the start of the new hardware's lifecycle. Maybe there's a business case to be made for that strategy? They'd be confident in Nintendo this time given the Switch's success.
that sounds like wasted time and money. launch games are probably gonna be ports of games they already have made or are being made for other current gen systems. those don't take years. even impossible ports like The Witcher took a year at most
 
The other issue with the binned Orin theory is that Drake is likely going to sell anywhere from, like, 5 to 10+ times as many units as Orin AGX over its lifetime, right? Why position your biggest seller, the one that'll need by far the most supply, as a binned unit?
 
that sounds like wasted time and money. launch games are probably gonna be ports of games they already have made or are being made for other current gen systems. those don't take years. even impossible ports like The Witcher took a year at most
It will be interesting to see if Square Enix decides to port Final Fantasy XV and Final Fantasy VII Remake for Drake. Maybe just Stranger of Paradise for maximum CHAOS.
 
Q: Why did Nintendo use this chip and not a custom chip?
A: They were able to get a good price from Nvidia who had a large supply. This same chip was used in multiple products including the Nvidia Shield.
Nintendo were porting to the Tegra X1 before the Tegra X1 was officially unveiled; while they may have gotten a good price for it, the TX1 they used isn't exactly stock.

The A53 cores are "disabled" or "fused off", unlike in the stock Tegra X1. The Shield's 2019 variant came out with a stock version of the TX1 found in the Switch revision released a few months later.

But other TX1s like in the Pixel and the older Shield had the A53s on the die.

But I hear there were problems with that anyway… like the other products with the A53s lol


As for the rest of your post, I'm only going to say one thing, and please do not take offense to how I put it: you can't use "because Nintendo" logic while also having them use a large chip. Those are contradictory.

You can use "because Nintendo" perhaps with the storage speed, as that is a complete unknown. But "because Nintendo they'll save money" and "they'll use stock Orin" do not compute.

With a die that large you expend more wafer area, and therefore more money, per Switch unit.

You customize it so you have a specific chip for you and you only; this meets your demands and desires while keeping costs as manageable as possible. And this isn't only Nintendo: any company out there aims to make the process as efficient as possible for longer-term gains, some more than others.

Microsoft, for their own reasons, did not want to go above the die size of the original Xbox One for the Series X SoC. It's not always about being cheap, it's about being smart with how you spend your money. It's not so black and white.


Plus, from the data breach we know it’s not a binned Orin.
Is it possible that Nintendo is trying a different approach to their product launch? If they gave dev kits out so many years ago that gives enough time for third parties to have completed games ready to sell at the start of the new hardware's lifecycle. Maybe there's a business case to be made for that strategy? They'd be confident in Nintendo this time given the Switch's success.
They can do that, yes. But that’s not quite because of business strategies.

It’s for other reasons.

No business would have devkits out for 4 years before a product releases, only to develop for a product that doesn't exist yet and could be canned or delayed.


18 months is the most I've heard of devkits being out, but usually, from what I understand, 8-12 months is the normal cadence.

But of course, I must factor in a pandemic and delays that are out of their hands, so I'd stretch the long end of 18 months to 24 months, and from 8-12 months to 12-16 months before a product launches.
 
I don't know enough to comment on the binned Orin theory, but would like to share this Jetson family photo for size comparison:

[Image: Jetson module family size comparison]

L to R: TX1, TX2, Xavier NX, AGX Xavier, Orin NX, AGX Orin (source)
I forgot that all the Tegra chips since the X1 have been for cars too or have had a use in cars.
 
The GeForce RTX 3080, 3080Ti and 3090 all come from the same GA102 chip with 28.3 billion transistors.

Exactly, they all use different binned GA102 chips. For example, RTX 3080 Ti is GA102-225 while RTX 3090 is GA102-300.

Orin is T234. Drake is T239. If Orin and Drake were just different bins of the same chip they would be distinguished by the suffixed numbers, not the primary ID.

As other users have pointed out, it would not only be a huge die and a waste of silicon, it also doesn't make sense to have the chip most in demand be such a drastic bin of a chip with much lower order volumes, and Orin NX already fills the role of using up binned T234.
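To put some rough numbers on that supply argument, here is a toy Poisson yield model (the die area and defect density are illustrative assumptions, not Nvidia figures); even if a third of dies came out imperfect, a console selling 5-10x the flagship's volume could never be fed from the flagship's rejects alone:

```python
# Toy Poisson yield model -- illustrative assumptions only, not Nvidia data.
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to come out with zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

die_area = 450.0        # mm^2, the rough Orin-sized guess floated in this thread
defect_density = 0.001  # defects per mm^2, purely hypothetical
good = poisson_yield(die_area, defect_density)
print(f"fully working dies: {good:.0%}, binning candidates: {1 - good:.0%}")
```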

All of that said: Welcome to Famiboards! Even if I disagree with what you're saying I love how well presented your argument is. Ambitious first posts put a smile on my face.
 
I don't care if it's a Switch 2 or Super Switch, I'm just ready for more beefy hardware, because while I love and play the Switch more than my PS5, the games on the Switch are really starting to chug, so some new hardware this holiday would be a big welcome.
 
Arm's future doesn't look very good, going by Masayoshi Son's ridiculously outrageous expectations with respect to Arm.


I don't know how reliable the source here is, so take with a healthy grain of salt.


But assuming there are grains of truth in the report, don't expect the price of electronics to decrease any time soon, at least not until 2025 at the earliest.

Did Lapsus$ and the Samsung Electronics employee in the Business Korea article collaborate to leak confidential information about Nvidia, including information about Drake? Does this confirm that Nintendo and Nvidia are collaborating with Samsung Foundry for the fabrication of Drake? :unsure:
I'm 100% joking. But who could have known Samsung Electronics could invite so much drama?
 
Is there anything special devs could do with this that they couldn't do before, or is it just a drift fix and nothing else?

I spent like an entire minute going over the first diagram wondering where the wipers and resistor pads went before looking at the next one and seeing the magnet lol.

Not only will this fix drift, or at least the Joy-Cons' current drift problem that arises from the wiper/resistor pads (nothing's perfect, and I'm not nearly as familiar with EM pots as resistive ones, but I'm pretty sure most of the time any drift with these would be soft and could be nulled out with recalibration, rather than the hardware problem the current ones have), but it will likely improve the responsiveness of the Joy-Con analogs considerably, as the design no longer loses considerable motion range from mechanically converting rotational motion to linear motion. Think about a train's wheel and spoke: one end of the spoke is on the wheel and the other on a linear slide, so the end on the wheel sweeps a far greater distance around the circumference than the end on the linear slide, which only covers the diameter of the wheel. That's the fundamental difference between the rocker stick you move with your thumb and the distance the linear wipers inside the housing move (there's more to it with scaling, but it's not necessary to get into). The Joy-Cons do NOT work off of much range at all, and it is very obvious when compared to other analogs. Fortunately the Joy-Cons have motion sensors, which can provide the needed dexterity for precision aiming, but not all games use them, and the ones that don't STIIIIINK on the Joy-Con analogs.
 
We've seen that RT and 60fps is possible, just on a more limited scale. For handheld mode, it might be more limited, but we just don't know yet. Is 360p or even 270p reflections/shadows/ao/etc a viable tradeoff? Is that even readable in most situations?
You could make the point that RT is a waste of time entirely (esp with regards to battery life) on a 6” 720p screen because RT methods would not be obviously noticeable over traditional methods like baked lighting or cube maps/ssr reflections or baked shadows.

It all really depends on if Nintendo want to transition their entire development teams to RT based solutions on top of traditional baked methods as they’ll need to still factor in their sub 200gflop original Switch GPU.

RT will maybe be on a case-by-case basis, where the bigger-budget titles like 3D Mario, Mario Kart and Zelda get RT as well as traditional rendering methods, while the lower-tier games only get traditional rendering methods.
 
Product binning
TL;DR: I think that the Nvidia Orin (T234) chip that we now have VERY clear specs on IS in fact the chip Nintendo will use in the next Switch, by way of an industry practice known as "binning".

1. I still hear talk about how the chip inside the original Switch was a "custom Nvidia chip for Nintendo". This is a lie. In 2017 TechInsights did their own die shot and proved it was a stock, off-the-shelf Tegra X1 (T210).
Q: Why did Nintendo use this chip and not a custom chip?
A: They were able to get a good price from Nvidia who had a large supply. This same chip was used in multiple products including the Nvidia Shield.

2. We need to consider that Nintendo may do the same thing again this time. That is, start with a stock chip and go from there. This would be less expensive and provide what I believe would be the same outcome.
We know that the full Orin (T234) chip is very large at 17 billion transistors. Based on pixel counts of all marketing images provided by Nvidia it could be around 450 mm2. (Very much a guess)

3. Too expensive and requiring too much power you say?
-Nvidia has documented the power saving features in their Tegra line, that allow them to disable/turn off CPU cores and parts of the GPU. The parts that are off consume zero power.
-A fully enabled T234 with the GPU clocked up to 1.3 GHz with a board module sells at $1599 USD for 1KU unit list price.
-The fully cut-down T234 model with module (Jetson Orin NX 8GB) sells for $399 USD for 1KU unit list price.
Note: As a point of reference, 1.3 years before the Switch released the equivalent Tegra X1 module was announced for $299 USD for 1KU unit list price. ($357 adjusted for inflation)

4. Product binning. From Wikipedia: "Semiconductor manufacturing is an imprecise process, sometimes achieving as low as 30% yield. Defects in manufacturing are not always fatal, however; in many cases it is possible to salvage part of a failed batch of integrated circuits by modifying performance characteristics. For example, by reducing the clock frequency or disabling non-critical parts that are defective, the parts can be sold at a lower price, fulfilling the needs of lower-end market segments."
Companies have gotten smarter and built this into their designs. As an example, the Xbox Series X chip contains 56 CUs but only 52 are ever enabled, which increases the yields for Microsoft, as they are the only customer for these wafers.
Relevant Example #1 - Nvidia Ampere based desktop GPUs:
The GeForce RTX 3080, 3080Ti and 3090 all come from the same GA102 chip with 28.3 billion transistors. Identical chip size and layout, yet their launch prices ranged from $699 to $1499 USD.
After they get made, they are sorted into different "bins":
If all 82 SMs are good, it gets sold as a 3090.
If up to 2 SMs are defective, it gets sold as a 3080 Ti.
If up to 14 SMs are defective, it gets sold as a 3080.
The result is usable yields from each wafer are higher, and fewer chips get thrown into the garbage. (the garbage chips are a 100% loss).
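For anyone who finds the sorting step abstract, here is a minimal sketch of that bin assignment; the SKU thresholds are the ones listed above, and the defect counts are made-up examples:

```python
# Minimal sketch of the GA102 binning described above.
# SKU thresholds come from the post; the sample dies are invented.
def bin_ga102(defective_sms: int) -> str:
    """Assign a tested GA102 die to a SKU based on how many SMs failed."""
    if defective_sms == 0:
        return "RTX 3090"     # all 82 SMs good
    if defective_sms <= 2:
        return "RTX 3080 Ti"  # up to 2 defective
    if defective_sms <= 14:
        return "RTX 3080"     # up to 14 defective
    return "scrap"            # too damaged for any SKU, a 100% loss

for defects in (0, 1, 5, 20):
    print(f"{defects} defective SMs -> {bin_ga102(defects)}")
```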

Relevant Example #2 - Nvidia Jetson Orin complete lineup, with NEW final specs:
  • Orin 64 (full T234): Cortex-A78AE w/ 9 MB cache, 12 cores up to 2.2 GHz | GPU 2048:16:64 (16 SMs, 2 GPCs, 8 TPCs) @ 1300 MHz | 5.32 TFLOPS FP32 / 10.649 TFLOPS FP16 / 275 DL TOPS INT8 | 256-bit, 204.8 GB/s | dev kit Q1 2022, production Oct 2022 | 15-60 W
  • Orin 32 (4 CPU cores & 1 TPC disabled): Cortex-A78AE w/ 6 MB cache, 8 cores up to 2.2 GHz | GPU 1792:14:56 (14 SMs, 2 GPCs, 7 TPCs) @ 939 MHz | 3.365 TFLOPS FP32 / 6.73 TFLOPS FP16 / 200 DL TOPS INT8 | 256-bit, 204.8 GB/s | Oct 2022 | 15-40 W
  • Orin NX 16 (4 CPU cores & 1 GPC disabled): Cortex-A78AE w/ 6 MB cache, 8 cores up to 2 GHz | GPU 1024:8:32 (8 SMs, 1 GPC, 4 TPCs) @ 918 MHz | 1.88 TFLOPS FP32 / 3.76 TFLOPS FP16 / 100 DL TOPS INT8 | 128-bit, 102.4 GB/s | late Q4 2022 | 10-25 W
  • Orin NX 8 (6 CPU cores & 1 GPC disabled): Cortex-A78AE w/ 5.5 MB cache, 6 cores up to 2 GHz | GPU 1024:8:32 (8 SMs, 1 GPC, 4 TPCs) @ 765 MHz | 1.57 TFLOPS FP32 / 3.13 TFLOPS FP16 / 70 DL TOPS INT8 | 128-bit, 102.4 GB/s | late Q4 2022 | 10-20 W
(GPU config is shader processors : ray tracing cores : Tensor cores, with SM/GPC/TPC counts in parentheses.)
You can confirm the above from Nvidia site here, here and here.
Of note, Nvidia shows the SOC in renders of all 4 of these modules as being the identical size. This suggests that they are all cut from wafers with the same 17 billion transistor design, just with more and more disabled at the factory level to meet each products specs.

The CPU and GPU are designed into logical clusters. During the binning process they can permanently disable parts of the chip along these logical lines that have been established. The disabled parts do not use any power and would be invisible to any software.
Specific to Orin, the above list shows that they can disable individual CPU cores as well as individual TPCs (texture processing clusters). This is important.
The full Orin GPU has 8 TPCs. Each TPC has 2 SMs, for a total of 16 SMs. Each SM has one 2nd-generation ray tracing core, for a total of 16. Each SM is divided into 4 processing blocks that each contain 1 3rd-generation Tensor core, 1 texture unit and 32 CUDA cores (resulting in a total of 64 Tensor cores, 64 texture units and 2048 CUDA cores).
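As a sanity check, every total in that paragraph falls straight out of the per-TPC and per-SM figures; a quick sketch using only the numbers already stated:

```python
# Recomputing the full Orin GPU totals from the per-unit figures above.
TPCS, SMS_PER_TPC = 8, 2
BLOCKS_PER_SM, CUDA_PER_BLOCK, TENSOR_PER_BLOCK = 4, 32, 1
RT_PER_SM = 1

sms = TPCS * SMS_PER_TPC                          # 16 SMs
cuda = sms * BLOCKS_PER_SM * CUDA_PER_BLOCK       # 2048 CUDA cores
tensor = sms * BLOCKS_PER_SM * TENSOR_PER_BLOCK   # 64 Tensor cores
rt = sms * RT_PER_SM                              # 16 RT cores
print(f"{sms} SMs, {cuda} CUDA cores, {tensor} Tensor cores, {rt} RT cores")
```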

5. What happens if we take the Orin 32 above, and instead of only disabling 1 TPC, we disable 2 TPCs (You know, for even better yields)? = Answer: Identical values to the leaked Drake/T239 specs!

  • T239 (Drake) (4-8 CPU cores & 2 TPCs disabled): Cortex-A78AE w/ 3-6 MB cache, 4-8 cores under 2.2 GHz | GPU 1536:12:48 (12 SMs, 2 GPCs, 6 TPCs) @ under 1300 MHz | under 4 TFLOPS FP32 / under 8 TFLOPS FP16 / ? DL TOPS INT8 | 256-bit, 204.8 GB/s | under 15-40 W?
(GPU config is shader processors : ray tracing cores : Tensor cores, with SM/GPC/TPC counts in parentheses.)

Now the only thing left is the final clock speeds for Drake, which remain unknown, and then how much Nintendo will underclock, but we can use all the known clocks to give us the most accurate range we have had so far!
Devices with known clocks (all rows use Drake's 1536:12:48 GPU configuration at that device's known GPU clock):
  • Orin 64: Cortex-A78AE w/ 6 MB cache, 8 cores @ 2200 MHz | GPU @ 1300 MHz | 3.994 TFLOPS FP32 / 7.987 TFLOPS FP16 | 256-bit, 204.8 GB/s | under 15-60 W
  • Orin 32: Cortex-A78AE w/ 6 MB cache, 8 cores @ 2200 MHz | GPU @ 939 MHz | 2.885 TFLOPS FP32 / 5.769 TFLOPS FP16 | 256-bit, 204.8 GB/s | under 15-40 W
  • Orin NX 16: Cortex-A78AE w/ 6 MB cache, 8 cores @ 2000 MHz | GPU @ 918 MHz | 2.820 TFLOPS FP32 / 5.640 TFLOPS FP16 | 128-bit, 102.4 GB/s | under 10-25 W
  • Orin NX 8: Cortex-A78AE w/ 5.5 MB cache, 6 cores @ 2000 MHz | GPU @ 765 MHz | 2.350 TFLOPS FP32 / 4.700 TFLOPS FP16 | 128-bit, 102.4 GB/s | under 10-20 W
  • Switch docked: Cortex-A78AE w/ 3 MB cache, 4 cores @ 1020 MHz | GPU @ 768 MHz | 2.359 TFLOPS FP32 / 4.719 TFLOPS FP16 | 128-bit, 102.4 GB/s
  • Switch handheld: Cortex-A78AE w/ 3 MB cache, 4 cores @ 1020 MHz | GPU @ 384 MHz | 1.180 TFLOPS FP32 / 2.359 TFLOPS FP16 | 128-bit, 102.4 GB/s

The above numbers can help you come to your own conclusions, but I can't see Nintendo clocking the GPU higher than Nvidia does in their highest-end Orin product. It's also hard to imagine Nintendo going with a docked clock lower than the current Switch's. This gives a solid range of between 2.4 and 4 TFLOPS of FP32 performance docked for a Switch built on Drake (T239).
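For anyone who wants to check the math, this all boils down to one formula: FP32 TFLOPS = CUDA cores x 2 FLOPs per clock (FMA) x clock, with FP16 shown at double rate. The only Drake-specific input is the leaked 1536-core count; the standard Ampere rates are my assumption:

```python
# FP32/FP16 throughput for Drake's leaked 1536 CUDA cores at the known clocks above.
CUDA_CORES = 1536

def fp32_tflops(cores: int, clock_mhz: float) -> float:
    return cores * 2 * clock_mhz * 1e6 / 1e12  # 2 FLOPs per core per clock (FMA)

for label, clock_mhz in [("Switch docked clock", 768), ("Orin 32 clock", 939), ("Orin 64 clock", 1300)]:
    fp32 = fp32_tflops(CUDA_CORES, clock_mhz)
    print(f"{label}: {fp32:.3f} TFLOPS FP32 / {fp32 * 2:.3f} TFLOPS FP16")
# -> 2.359, 2.885 and 3.994 TFLOPS FP32, i.e. the 2.4-4.0 docked range above.
```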

6. What does this mean for the development and production of the DLSS-enabled Switch?

-The Jetson AGX Orin Developer Kit is out now, so everything Nintendo would need to build their own dev kit that runs on real hardware is available now. (not just a simulator) (The Orin Developer Kit allows you to flash it to emulate the Orin NX, so Nintendo would likely be doing something similar.)

Chip yields are always lowest at the start of manufacturing, and think of all the fully working Orin chips Nvidia needs to put into all their DRIVE systems for cars.
-Now think of how many chips will not make the cut. Either they can't be clocked at full speed and or some of the TPCs are defective.
-Nvidia will begin to stockpile chips that do not make the Orin 32 cutoff (up to 1 bad CPU cluster and up to 1 bad TPC)
-Note that there is about a 3 month gap between the production availability of the Orin 64 and the NX 8. Binning helps to explain this, as they never actually try to manufacture an NX 8 part; it is just a failed Orin 64 that they binned, stockpiled and then sold.
-This would allow Nintendo to come in and buy a very large volume of binned T234 chips, perhaps in 2023, and put them directly into a new Switch.
-Nintendo can structure the deal so that they are essentially buying the industrial waste from Nvidia's off-the-shelf chips on the cheap.

Custom from day 1 = expensive
Compare this to how much Nvidia would charge Nintendo if instead the chip was truly custom from the ground up. Nintendo gets billed for chip design, chip tapeout, testing, and all manufacturing costs. Nintendo would likely be paying the bill at the manufacturing level, meaning the worse the yields are and the longer it takes to ramp up production, the more expensive it gets. The cost per viable custom-built-to-spec T239 chip would be unknown beforehand. Nintendo would be taking on a lot more risk, with the potential for the costs to be much higher than originally projected.
This does not sound like the Nintendo we know. We have seen the crazy things they will do to keep costs low and predictable.

custom revision down the road = cost savings
Now as production is improved and yields go up, the number of binned chips goes down. As each console generation goes on we expect both supply and demand to increase as well.
This is where there is an additional opportunity for cost savings. It makes sense to have a long-term plan to make a smaller, less expensive version of the Orin chip, one with fewer than 17 billion transistors. Once all the kinks are worked out, the console cycle is in full swing, and you have a large predictable order size, you can go back to Nvidia and the foundry and get a revision made without all the parts you don't need. Known chip, known process, known fab, known monthly order size built in = lower cost per chip.
And the great thing is that the games that run on it don't care which chip it is. The core specs are locked in stone. 6 TPCs, 1536 CUDA cores, 12 2nd-generation Ray Tracing cores, 48 3rd-generation Tensor cores.

Nintendo has already done this moving the Switch from the T210 chip to the T214 chip.

So what do you all think? Excited to hear all your feedback! I am only human, so if you find any specific mistakes with this post, please let me know.
There are several problems with this theory, but the biggest one is that it directly contradicts the recent Nvidia hack, which positively identifies T239 as a fully distinct chip from T234 (GA10F vs GA10B).
Arm's future doesn't look very good, going by Masayoshi Son's ridiculously outrageous expectations with respect to Arm.
It's kind of unfortunate that ARM seems to be running into trouble right as it's starting to meaningfully displace x86.

Welp, at least there's always RISC-V as a plan B.
 
You could make the point that RT is a waste of time entirely (esp with regards to battery life) on a 6” 720p screen because RT methods would not be obviously noticeable over traditional methods like baked lighting or cube maps/ssr reflections or baked shadows.

It all really depends on if Nintendo want to transition their entire development teams to RT based solutions on top of traditional baked methods as they’ll need to still factor in their sub 200gflop original Switch GPU.

RT will maybe be on a case-by-case basis, where the bigger-budget titles like 3D Mario, Mario Kart and Zelda get RT as well as traditional rendering methods, while the lower-tier games only get traditional rendering methods.
Well you'll never hear "RT is a waste" out of me :p

Give me rays or give me death!

RT is something that is planned from the start in order to extract the most from it. It's definitely gonna be case by case, but if a game is coming from the base Switch and scaled up to Drake, then RT will be easier to add. If the game is made with Drake and RT in mind, it was never coming to the base Switch.
 
What extra components does the SoC itself have, though? No strong opinion on the binning theory personally, but board components on Orin AGX or NX wouldn't be a barrier to binning the SoC.
[Images: Jetson AGX Orin block diagrams]


Going by the provided block diagrams of Jetson AGX Orin above, the following hardware components from Jetson AGX Orin that Nintendo probably has no need for are:
  • Safety Island
  • Programmable Vision Accelerators (PVA)
  • HDR Image Signal Processor (ISP)
  • Video Ingest (VI)
  • Video and Image Compositor (VIC)
  • Generic Timestamp Engine (GTE)
  • Sensor Processing Engine (SPE / RTOS)
Nvidia Deep Learning Accelerators (NVDLA) and Audio Processing Engine (APE) are question marks for me.

I don't know if this has been mentioned here already, but Nvidia recently updated the Jetson AGX Orin technical brief.

Nvidia mentioned that the max GPU frequency for the Jetson AGX Orin 32 GB module is 939 MHz, and 1.3 GHz for the Jetson AGX Orin 64 GB module.

Nvidia also mentioned that the max CPU frequency for both the Jetson AGX Orin 32 GB module and the Jetson AGX Orin 64 GB module is 2.3 GHz, with the Jetson AGX Orin 32 GB module having 8 Cortex-A78AE cores and the Jetson AGX Orin 64 GB module having 12 Cortex-A78AE cores.

And the max frequency of the NVDLA in the Jetson AGX Orin 32 GB module is 1.4 GHz and 1.6 GHz for the Jetson AGX Orin 64 GB module.


 
I don't care if it's a Switch 2 or Super Switch, I'm just ready for more beefy hardware, because while I love and play the Switch more than my PS5, the games on the Switch are really starting to chug, so some new hardware this holiday would be a big welcome.
You're probably gonna see the same issues that plagued the Switch plague the newer hardware. Sometimes it really isn't about how strong a piece of hardware is, but how much a developer/publisher is willing to invest in the port, and also what they invest those resources in for said port.

Then we'll be right back here in about half a year to a year, going in circles about stronger hardware.
 
I imagine in spite of newer, stronger hardware, politics (not in the strictest sense of the word) are still going to be the main factor for some third party developers. In general, pleasing everybody is literally impossible.
 
I imagine in spite of newer, stronger hardware, politics (not in the strictest sense of the word) are still going to be the main factor for some third party developers. In general, pleasing everybody is literally impossible.
I mean, the Switch was generally proof that having something that's relatively powerful but also accessible can gain traction, but ultimately money is what greases the wheels of snagging lucrative 3rd party deals.

You are right, though. There's simply no pleasing some people and they're likely to complain no matter what we get.
 
This isn't entirely accurate based on what has come out over the past year or so. Nintendo supposedly actually worked with Nvidia when designing the TX1, so it was at least partially designed for their needs. It wasn't just a stock chip they saw on a shelf and picked. Plus, the theory that Nvidia had a large supply and would sell them for a good price was based fully on speculation and nothing concrete.

The main issue with this theory from what I can tell is the die size. Orin has a ton of automotive components that would be entirely unnecessary on a gaming console and a waste of silicon which means a waste of money. Not to mention that this thing is like 4x the size of the TX1 and unlikely to fit in any similar looking form factor.
Pretty sure the notion that Nintendo had any input on the design of the TX1 is 100% theory.
 
Product binning
TL;DR: I think that the Nvidia Orin (T234) chip that we now have VERY clear specs on IS in fact the chip Nintendo will use in the next Switch, by way of an industry practice known as "binning".

We know Drake has key architectural differences from Orin.

One RT core per SM instead of one per 2 SMs, and instead of using Orin's double-rate tensor cores, it's using desktop tensor cores. This rules out it being a binned Orin.
 
Thank you so much for all the feedback on my post "Product binning"!
The biggest hole in my theory is definitely the size of the SOC.
I could not find an example of a small form factor device with such a large SOC.
The closest example, which is somewhat apt, was the GeForce RTX 3070/3070 Ti/3080 Laptop chip (GA104), which is made using Samsung's 8N process and has 17.4 billion transistors. It has a die size of 392 mm2 and can apparently fit into "ultra thin" 14" gaming laptops like the 0.66"/16.8 mm thick Razer Blade 14. It is definitely not common, though.

Remember the debate about what the Mariko T214 would actually be like, and how it would compare to the T210?
There was a lot of speculation that it would allow games to run better.
Yes, it's a new chip with a new number, but when it came to the core specs that the Switch relied upon, it was the same.
4 A57s clocked at 1024 MHz and Maxwell based GPU with core config 256:16:16 clocked up to 768 MHz.

I read Nvidia's Linux kernel check-ins for fun and to this day there are still no references to T214. All the code references the T210.
In the last couple of months Nvidia has checked in a lot of code to add support for the T234. I think it is unlikely we will ever see specific code for the T239.
There was the X1 family and now there is the Orin family.

Can you develop and test a Nintendo Switch game using a devkit that contains a T210 and then run it on a Mariko T214 Switch? YES
Do I still believe you can develop and test a DLSS Nintendo Switch game using a devkit that contains a T234 and then have it run on whatever ends up being in the final retail DLSS Nintendo Switch with something that could be labeled T239? YES

So much seems pinned on this tweet from kopite7kimi:
"This is a preliminary picture of T234 in Wikipedia. Very clear.
So why do we always guess?
Nintendo will use a customized one, T239."
https://twitter.com/kopite7kimi
What other real sources do we have at this point?
If you read kopite7kimi's entire tweet history you will see that he gets some things correct but some things wrong. Big picture he is on the right track, but some small details he gets wrong.
What is everyone here's take on "Nintendo will use a customized one, T239." ??
-Customized how, and in what way?
-"Custom" like the original Nintendo Switch SOC? LOL
-Customized as in magical special features just for Nintendo?
-You all just told me the full T234 die is too large, so I think "custom" means either parts disabled or removed, but nothing added.

Based on this, I think we can still use the full T234 as our highest possible starting point, and then cut down from there.
So logically the T239 would be less than a 4 TFLOP (FP32) part (see my earlier post for the breakdown). We hope they can find a solution to run Switch games on non-Maxwell GPUs, but any kind of back-compat mode would require the GPU to run at least at the original clock of 768 MHz.
As a result I still stand behind the range of 2.359-3.994 TFLOPS (FP32) docked, and a much lower number undocked.
I know not all TFLOPS are equal, but that puts it between the original PS4 and the PS4 Pro (1.84 and 4.2 TFLOPS).
I think there are a lot of fanboys out there who are still holding out hope that the next Switch is going to be more powerful, something at the level of the PS5. I just wish we could clearly communicate to them that is not going to happen.

In the coming months we will get the power profile details for the Orin line. This will be very telling! For example the Jetson Orin NX 16GB has 3 power profiles, 10W | 15W | 25W. At 25W we expect all GPU and CPU cores on, and at full clocks for this model. But what is left on and what are the clocks when it runs in its 10W mode?
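As a purely illustrative aside on that last question, the textbook dynamic-power relation (P roughly proportional to frequency times voltage squared) already shows why a 10 W profile has to mean much lower clocks and voltages, and probably fewer enabled units; none of these numbers are measured Orin data:

```python
# Illustrative dynamic-power scaling only -- not measured Orin figures.
def relative_power(freq_scale: float, voltage_scale: float) -> float:
    """Dynamic power relative to the full-speed operating point (P ~ f * V^2)."""
    return freq_scale * voltage_scale ** 2

FULL_PROFILE_W = 25.0  # the top Jetson Orin NX 16GB profile
for f, v in [(1.0, 1.0), (0.6, 0.85), (0.4, 0.75)]:
    watts = FULL_PROFILE_W * relative_power(f, v)
    print(f"{f:.0%} clock at {v:.0%} voltage -> roughly {watts:.1f} W")
```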
 
Pretty sure the notion that Nintendo had any input on the design of the TX1 is 100% theory.
Well, they didn't have any input on the chip per-se, but the configuration we got for the Switch has 4GB of RAM instead of the usual 3.

That does make me wonder if they'll also be getting input from devs regarding the RAM configuration. I sure hope this time Nintendo realizes they're getting huge amounts of UE games brought to their platform ever since their switch to Nvidia. And while getting a new Monster Hunter is great, and perhaps an exclusive RE, it would be interesting if Epic gave them feedback regarding any possible bandwidth issues and ways to address them.

I was jokingly thinking they could use HBM, but that could be deemed too expensive, though balancing the cost of 6GB of HBM(2?) versus 8GB of GDDR5/6 to alleviate any bandwidth constraints might be a good exercise.
 
Based on this, I think we can still use the full T234 as our highest possible starting point, and then cut down from there.
So logically the T239 would be less than a 4 TFLOP (FP32) part (see my earlier post for the breakdown). We hope they can find a solution to run Switch games on non-Maxwell GPUs, but any kind of back-compat mode would require the GPU to run at least at the original clock of 768 MHz.
As a result I still stand behind the range of 2.359-3.994 TFLOPS (FP32) docked, and a much lower number undocked.
I know not all TFLOPS are equal, but that puts it between the original PS4 and the PS4 Pro (1.84 and 4.2 TFLOPS).
I think there are a lot of fanboys out there who are still holding out hope that the next Switch is going to be more powerful, something at the level of the PS5. I just wish we could clearly communicate to them that is not going to happen.
Some things here
  • We know the actual GPU specs for Drake/T239; they were literally embedded in the driver for NVN2's API that leaked with the DLSS source code and the other stuff when LAPSUS$ hacked NVIDIA.
    • Those specs being
      • 12 SMs
      • 1536 CUDA cores
      • 1.3MB L1 Cache (out of Orin's spec oddly enough)
      • 1 or 4MB L2 Cache (likely 4MB as that is in line with Orin and also 1MB of L2 with 1.3MB of L1 is very weird scaling that is backwards to normal L-Cache scaling so 4MB is more likely)
      • 12 RT cores (More Ampere than Orin/Lovelace?) (Unknown generation)
      • 48 Tensor cores (Unknown generation)
    • The key thing is that I do agree with the TFLOP range; a 1 GHz Drake when docked would likely be a PS4 Pro rival in raw GPU raster power, due to how much more efficient Ampere is over Polaris per FLOP and how Drake, like Orin/Lovelace, will be even more efficient than Ampere due to the extra L2 cache.
      • more or less 3TFLOP Drake (Assuming a decent uplift over Ampere TFLOP efficiency) ~= 4TFLOP Polaris.
    • The PS4 Pro's GPU is at worst 25% behind the Series S's GPU, and Drake has a lot of advantages tech-wise because Ampere is in a better position than the Infinity-Cache-less RDNA2 the Series S uses
      • The RT cores are more than double the speed of the Ray accelerators (BVH Traversal does a lot) even if it's just Ampere-generation RT cores
        • Funnily enough, 12 Ampere RT cores would accelerate RT even better than the PS5 can
      • Tensor cores can handle FP16 calcs and DLSS of course, so that is a benefit. DLSS is a massive thing and can close the gap with the Series S with DLSS Quality mode alone, leaving Performance and Ultra Performance open to push well past the Series S GPU-wise (see the resolution sketch after this list).
  • Now, when discussing how it would compare to PS5, I say that with DLSS+NIS it likely can push quite close to PS5 level in first party titles when docked.
    • PS5 and Series X more or less are wasting their TFLOPs trying to brute-force >1440p Resolutions natively where Drake can hang out at 720p internally and upscale to 4K if they could get DLSS Running fast enough with 3.0 or use NIS to push something like 1440p-1800p outputs in Ultra Performance mode to 4K with 2.3's speed for DLSS.
    • Not to mention RTXGI would likely be used for Drake First party titles which is scalable enough for RTGI to run on OG Xbox One in software so Drake likely would run it very well at whatever targets they want for Res/FPS because of the scalability it has.
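As referenced above, here is what those DLSS modes mean in internal resolution for a 4K output, using the published DLSS 2.x scale factors (Quality 67%, Balanced 58%, Performance 50%, Ultra Performance 33%); how Drake would actually use them is of course unknown:

```python
# Internal render resolutions for a 3840x2160 output at DLSS 2.x's published scale factors.
# Illustrative only -- Drake's actual DLSS behaviour is unknown.
OUTPUT_W, OUTPUT_H = 3840, 2160
SCALES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

for mode, scale in SCALES.items():
    w, h = round(OUTPUT_W * scale), round(OUTPUT_H * scale)
    print(f"{mode:>17}: {w}x{h} internal -> {OUTPUT_W}x{OUTPUT_H} output")
```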
 
Based on this, I think we can still use the full T234 as our highest possible starting point, and then cut down from there.
So logically the T239 would be less than a 4 TFLOP (FP32) part (see my earlier post for the breakdown). We hope they can find a solution to run Switch games on non-Maxwell GPUs, but any kind of back-compat mode would require the GPU to run at least at the original clock of 768 MHz.
As a result I still stand behind the range of 2.359-3.994 TFLOPS (FP32) docked, and a much lower number undocked.
I know not all TFLOPS are equal, but that puts it between the original PS4 and the PS4 Pro (1.84 and 4.2 TFLOPS).
I think there are a lot of fanboys out there who are still holding out hope that the next Switch is going to be more powerful, something at the level of the PS5. I just wish we could clearly communicate to them that is not going to happen.

In the coming months we will get the power profile details for the Orin line. This will be very telling! For example the Jetson Orin NX 16GB has 3 power profiles, 10W | 15W | 25W. At 25W we expect all GPU and CPU cores on, and at full clocks for this model. But what is left on and what are the clocks when it runs in its 10W mode?
If those fanboys thinking that the device with a handheld form factor coming out in 2022 can do PS5 numbers actually exist, then we can give them a big old LMAO and tell them that that's obviously not happening. That doesn't mean games can't look great on a device that could approach the XSS specs, as long as you don't require it to look similar to the PS5.

The big question is whether the manufacturing process is Samsung 8nm or a better process node, and that's difficult to guess at right now I feel. Original process node for Orin was said to be 8nm, but that was also when they said it would do 200 TOPS at 65W, and now the Jetson Orin AGX can do 200 TOPS at a maximum of 40W, so they likely changed process node (I believe I have read about it, but I can't find a source for the process node change actually occurring). Anyways, you don't get such a significant jump without a jump to a better node, be it TSMC 7nm or Samsung 5nm (not sure the jump is big enough to warrant TSMC 5nm speculation, but I don't know with much certainty).

Edit: I would also add that most of the speculation constitutes discussion about clock speeds and potentially gated-off SMs in handheld mode. There isn't much reliance on customising the chip beyond what the leaked specs would suggest in any of the discussion.

That said, your questions are basically fully covered by Thraktor's post, which I think is still the most comprehensive collection of reasoned-out potential specs for the system to date:

I thought I'd do a quick round-up of what we know, and give some general idea of how big our margin of error is on the known and unknown variables on the new chip.

Chip

Codenamed Drake/T239. Related to Orin/T234. We don't have confirmation on manufacturing process. The base assumption is 8nm (same as Orin), however kopite7kimi, who previously leaked info about the chip and said 8nm, is now unsure on the manufacturing process. The fact that the GPU is much larger than expected may also indicate a different manufacturing process, but we don't have any hard evidence. We also don't know the power consumption limits Nintendo have chosen for the chip in either handheld or docked mode, which will impact clock expectations.

GPU
This is what the leaks have been about so far, so we have much more detailed info here. In particular, on the die we have:

12 SMs
Ampere architecture with 128 "cores" per SM, and tensor performance comparable to desktop Ampere per SM. Some lower-level changes compared to desktop Ampere, but difficult to gauge the impact of those.
12 RT cores
No specific info on these, in theory they could have changes compared to desktop Ampere, but personally I'm not going to assume any changes until we have evidence.
4MB L2 cache
This is higher than would be expected for a GPU of this size (most comparable would be RTX 3050 laptop, with 2MB L2). Same as PS5 GPU L2 and only a bit smaller than XBSX GPU L2 of 5MB. This should help reduce memory bandwidth requirements, but it's impossible to say exactly by how much. Note this isn't really an "infinity cache", which range from 16MB to 128MB on AMD's 6000-series GPUs, it's just a larger than normal cache.

Things we don't know: how many SMs are actually enabled in either docked or handheld mode, clocks, ROPs.

Performance range in docked mode: It's possible that we could have a couple of SMs binned for yields, as this is a bigger GPU than expected. This would probably come in the form of disabling one TPC (two SMs), bringing it down to 10. Clocks depend heavily on the manufacturing process and whether Nintendo have significantly increased their docked power consumption over previous models. I'd expect clocks between 800MHz-1GHz are probably most likely, but on the high end of expectations (better manufacturing process and higher docked power consumption) it could push as high as 1.2GHz. I doubt it will be clocked lower than the 768MHz docked clock of the original Switch, but that's not strictly impossible.

Low-end: 10 SMs @ 768MHz - 1.97 Tflops FP32
High-end: 12 SMs @ 1.2GHz - 3.68 Tflops FP32

Obviously there's a very big range here, as we don't know power consumption or manufacturing process. It's also important to note that you can't simply compare Tflops figures between different architectures.

Performance range in handheld mode: This gets even trickier, as Drake is reportedly the only Ampere GPU which supports a particular clock-gating mode, which could potentially be used to disable SMs in handheld mode. This makes sense, though, as peak performance per watt will probably be somewhere in the 400-600MHz range, so it's more efficient to, say, have 6 SMs running at 500MHz than all 12 running at 250MHz. Handheld power consumption limits are also going to be very tight, so performance will be very much limited by manufacturing process. I'd expect handheld clocks to range from 400MHz to 600MHz, but this is very dependent on manufacturing process and the number of enabled SMs.

One other comment to make here is that we shouldn't necessarily expect the <=2x performance difference between docked and handheld that we saw on the original Switch. That was for a system designed around 720p output in portable mode and 1080p output docked, however here we're looking at a 4K docked output, and either 720p or 1080p portable, so there's a much bigger differential in resolution, and therefore a bigger differential in performance required. It's possible that we could get as much as a 4x differential between portable and docked GPU performance.

Low-end: 6 SMs @ 400 MHz - 614 Gflops FP32
High-end: 8 SMs @ 600 MHz - 1.2 Tflops FP32
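To make that arithmetic explicit, the low/high figures above are just SMs x 128 cores x 2 FLOPs per clock x clock; the SM counts and clocks are the speculative bounds from this post, nothing more:

```python
# The speculative docked/handheld bounds above, spelled out.
def ampere_fp32_tflops(sms: int, clock_mhz: float) -> float:
    return sms * 128 * 2 * clock_mhz * 1e6 / 1e12  # 128 cores/SM, 2 FLOPs/core/clock

print(f"docked low:    {ampere_fp32_tflops(10, 768):.2f} TFLOPS")   # ~1.97
print(f"docked high:   {ampere_fp32_tflops(12, 1200):.2f} TFLOPS")  # ~3.69
print(f"handheld low:  {ampere_fp32_tflops(6, 400):.2f} TFLOPS")    # ~0.61
print(f"handheld high: {ampere_fp32_tflops(8, 600):.2f} TFLOPS")    # ~1.23
```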

There is of course DLSS on top of this, but it's not magic, and shouldn't be taken as a simple multiplier of performance. Many other aspects like memory bandwidth can still be a bottleneck.

CPU

The assumption here is that they'll use A78 cores. That isn't strictly confirmed, but given Orin uses A78 cores, it would be a surprise if Drake used anything else. We don't know either core count or clocks, and again they will depend on the manufacturing process. The number of active cores and clocks will almost certainly remain the same between handheld and docked mode, so the power consumption in handheld mode will be the limiting factor.

For core count, 4 is the minimum for compatibility, and 8 is probably the realistic maximum. The clocks could probably range from 1GHz to 2GHz, and this will depend both on the manufacturing process and number of cores (fewer cores means they can run at higher clocks).

The performance should be a significant improvement above Switch in any case. In the lower end of the spectrum, it should be roughly in line with XBO/PS4 CPU performance, and at the high-end it would sit somewhere between PS4 and PS5 CPU performance.

RAM

Again, the assumption is that they'll use LPDDR5, based on Orin using it, and there not being any realistic alternatives (aside from maybe LPDDR5X depending on timing). The main question mark here is the bus width, which will determine the bandwidth. The lowest possible bus width is 64-bit, which would give us 51.2GB/s of bandwidth, and the highest possible would be 256-bit, which would provide 204.8GB/s bandwidth. Bandwidth in handheld mode would likely be a lot lower to reduce power consumption.
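For reference, those endpoints are just bus width times the LPDDR5 data rate; a quick sketch, assuming LPDDR5-6400 (which is what the 51.2 and 204.8 GB/s figures imply):

```python
# Bandwidth = bus width in bytes x transfer rate. Assumes LPDDR5-6400.
def lpddr5_bandwidth_gbs(bus_bits: int, mega_transfers_per_s: int = 6400) -> float:
    return bus_bits / 8 * mega_transfers_per_s / 1000  # GB/s

for bus_bits in (64, 128, 256):
    print(f"{bus_bits}-bit bus -> {lpddr5_bandwidth_gbs(bus_bits):.1f} GB/s")
# -> 51.2, 102.4 and 204.8 GB/s
```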

Quantity of RAM is also unknown. On the low end they could conceivably go with just 6GB, but realistically 8GB is more likely. On the high end, in theory they could fit much more than that, but cost is the limiting factor.

Storage

There are no hard facts here, only speculation. Most people expect 128GB of built-in storage, but in theory it could be more or less than that.

In terms of speeds, the worst case scenario is that Nintendo retain the UHS-I SD card slot, and all games have to support ~100MB/s as a baseline. The best case scenario is that they use embedded UFS for built-in storage, and support either UFS cards or SD Express cards, which means games could be built around a 800-900MB/s baseline. The potential for game card read speeds is unknown, and it's possible that some games may require mandatory installs to benefit from higher storage speeds.
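To put those baselines in perspective, a quick back-of-the-envelope on load times (the 4 GB asset size is a made-up example; the speeds are the ones quoted above):

```python
# Rough load-time comparison for the storage baselines discussed above.
ASSET_SIZE_GB = 4.0  # hypothetical amount of data a game needs to read
for label, mb_per_s in [("UHS-I SD baseline", 100), ("UFS / SD Express baseline", 850)]:
    seconds = ASSET_SIZE_GB * 1024 / mb_per_s
    print(f"{label} ({mb_per_s} MB/s): ~{seconds:.0f} s to read {ASSET_SIZE_GB:.0f} GB")
```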

@Thraktor I had been meaning to ask you, but it slipped my mind: did you take the possibility of a 5nm chip (Samsung or TSMC?) as the basis for your maximum GPU spec here, or did you use a different process node for that?
 
I know not all TFLOPS are equal, but that puts it between the original PS4 and the PS4 Pro (1.84 and 4.2 TFLOPS).
I think there are a lot of fanboys out there who are still holding out hope that the next Switch is going to be more powerful, something at the level of the PS5. I just wish we could clearly communicate to them that this is not going to happen.
This is about what I'd come to expect realistically, and even if this is all we get, I'm more than happy.

Some people want Nintendo to challenge Sony and Microsoft in the brute-force TFLOPs and graphics wars, but they often concentrate on graphical fidelity more than anything.

While I do wish this means more solid framerates on Nintendo's end, I hope new technology such as DLSS will be there to do the heavy lifting on image fidelity while leaving more of the compute to actual game logic. That recent DF review of Kirby and the Forgotten Land does highlight the limitations that even their first-party development teams such as HAL are hitting, and they've managed to pack in more graphical density than Mario Odyssey to boot.

So while they've done a good job really pushing that Tegra chip, newer, more powerful hardware would definitely help in the performance of their games and perhaps also with continued 3rd party support. Of course, I think it's also wise to reel in expectations...

If those fanboys thinking that the device with a handheld form factor coming out in 2022 can do PS5 numbers actually exist, then we can give them a big old LMAO and tell them that that's obviously not happening. That doesn't mean games can't look great on a device that could approach the XSS specs, as long as you don't require it to look similar to the PS5.
Those fanboys come in the form of "B-but Steam Deck! It can CRUSH the Switch and if this successor isn't as powerful as that device then NINTENDO IS DEAD!!!1" They often forget the Deck is targeting a different market (Mobile/Budget PCs!) and that it has its own set of caveats that can't be ignored (Proton isn't compatible with everything; battery life in handheld mode can vary greatly; it needs to be properly managed like every PC).

Some things here
  • We know the actual GPU specs for Drake/T239; they were literally embedded in the driver for the NVN2 API that leaked alongside the DLSS source code and everything else when LAPSUS$ hacked NVIDIA.
    • Those specs being:
      • 12 SMs
      • 1536 CUDA cores
      • 1.3MB L1 cache (out of Orin's spec, oddly enough)
      • 1 or 4MB L2 cache (likely 4MB, as that is in line with Orin; also, 1MB of L2 with 1.3MB of L1 would be very weird scaling, backwards relative to normal cache hierarchies, so 4MB is more likely)
      • 12 RT cores (more Ampere than Orin/Lovelace?) (unknown generation)
      • 48 Tensor cores (unknown generation)
    • The key thing is that I do agree with the TFLOP range: at 1GHz docked, Drake would likely rival the PS4 Pro in raw GPU raster power, given how much more efficient Ampere is than Polaris per FLOP, and given that Drake, like Orin/Lovelace, will be even more efficient than desktop Ampere thanks to the extra L2 cache.
      • More or less, a 3TFLOP Drake (assuming a decent uplift over Ampere's per-TFLOP efficiency) ~= 4TFLOP Polaris.
    • The PS4 Pro's GPU is at worst 25% behind the Series S's GPU, and Drake has a lot of advantages tech-wise, since Ampere is a better technological starting point than the Infinity Cache-less RDNA2 the Series S sits on.
      • The RT cores are more than double the speed of AMD's Ray Accelerators (hardware BVH traversal does a lot), even if they're just Ampere-generation RT cores.
        • Funnily enough, 12 Ampere RT cores would accelerate RT even better than the PS5 can.
      • Tensor cores can handle FP16 calcs and DLSS, of course, which is a benefit; DLSS is a massive thing and could close the gap with the Series S in Quality mode alone, leaving Performance and Ultra Performance open to push well past the Series S GPU-wise.
  • Now, when discussing how it would compare to the PS5, I'd say that with DLSS+NIS it likely can push quite close to PS5 level in first-party titles when docked.
    • PS5 and Series X are more or less wasting their TFLOPs trying to brute-force >1440p resolutions natively, whereas Drake could hang out at 720p internally and upscale to 4K if they can get DLSS running fast enough with 3.0, or use NIS to push something like 1440p-1800p Ultra Performance outputs up to 4K at DLSS 2.3 speeds (see the resolution sketch after this list).
    • Not to mention RTXGI would likely be used for Drake first-party titles; it's scalable enough that RTGI can run on the OG Xbox One in software, so Drake would likely run it very well at whatever resolution/FPS targets they want.
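As a rough illustration of the internal resolutions being discussed here, this is a minimal sketch applying the publicly documented DLSS 2.x per-axis scale factors (Quality ≈ 1/1.5, Performance = 1/2, Ultra Performance = 1/3) to a 4K output; whether a hypothetical Drake implementation uses exactly these presets is of course unknown:

```python
# Per-axis scale factors for the standard DLSS 2.x presets (publicly documented values).
DLSS_MODES = {"Quality": 1 / 1.5, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

def dlss_internal_resolution(output_w, output_h, mode):
    """Approximate internal render resolution a given DLSS 2.x preset starts from."""
    s = DLSS_MODES[mode]
    return round(output_w * s), round(output_h * s)

for mode in DLSS_MODES:
    print(mode, dlss_internal_resolution(3840, 2160, mode))
# Quality ~2560x1440, Balanced ~2227x1253, Performance 1920x1080, Ultra Performance 1280x720
```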
I think you might be trying to oversell what it could achieve with DLSS.

I could see it being able to reach 4K resolution with some graphical settings dialed back a bit, or perhaps even specially configured for more consistent performance. Because of this, I doubt VRR will be a thing on the successor, but hopefully the beefier specs mean fewer cutbacks while maintaining a fluid 60 or 30fps.
 
Those fanboys come in the form of "B-but Steam Deck! It can CRUSH the Switch and if this successor isn't as powerful as that device then NINTENDO IS DEAD!!!1" They often forget the Deck is targeting a different market (Mobile/Budget PCs!) and that it has its own set of caveats that can't be ignored (Proton isn't compatible with everything; battery life in handheld mode can vary greatly; it needs to be properly managed like every PC).
I mean, the Steam Deck is 1.6 TF RDNA (max). That is probably going to be slightly better than the Drake chip in handheld, but quite inferior to the discussed specs in docked mode. And both devices are a long shot from PS5-level performance. Seems more like gut-feeling kinds of comments than actual substance tbh.
 
I mean, the Steam Deck is 1.6 TF RDNA (max). That is probably going to be slightly better than the Drake chip in handheld, but quite inferior to the discussed specs in docked mode. And both devices are a long shot from PS5-level performance. Seems more like gut-feeling kinds of comments than actual substance tbh.
Aren't most of the naysayers basically just going with gut feelings, though? People cling more to marketing numbers and "the absolute best in class", i.e. hype sells and makes Nintendo's more humble approach to numbers and specs feel "tone-deaf" or "too little, too late". Never mind the logic behind the way they spec it...
 
Steam Deck has a 40Wh battery, and during gaming it can consume up to 30 W. In these circumstances, it lasts 1 hour and 15 minutes.
This is almost Sega Nomad levels of laughable, at least to Nintendo.

For reference, the current Switch has a 16 Wh battery. When equipped with Erista, it could run BotW for three hours. Any launch game for the succ will aim for that, I assume.
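For a rough comparison of what those numbers imply, here's a minimal sketch; the implied Switch draw is just back-calculated from the 3-hour BotW figure, and the cited 1 h 15 min for the Deck corresponds to a slightly higher average draw than 30 W:

```python
def battery_hours(capacity_wh, avg_draw_w):
    """Very rough battery life estimate: capacity divided by average system draw."""
    return capacity_wh / avg_draw_w

print(battery_hours(40, 30))   # ~1.3 h: Steam Deck near its worst-case draw
print(battery_hours(16, 5.3))  # ~3.0 h: Erista Switch running BotW implies ~5.3 W average draw
```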
 
Aren't most of the naysayers basically just going with gut-feelings, though? People cling more to marketing numbers and "the absolute best in class", i.e. hype sells and makes Nintendo's more humble approach to numbers and specs feel "tone-deaf". Never mind the logic behind the way they spec it...
True, true. But that's also the crowd who can never be pleased, so focusing on them or trying to manage their expectations is likely futile. Probably best to tune them out.

(From your previous post - wanted to react to this as well):
I think you might be trying to oversell what it could achieve with DLSS.

I could see it being able to reach 4K resolution with some graphical settings dialed back a bit, or perhaps even specially configured for more consistent performance. Because of this, I doubt VRR will be a thing on the successor, but hopefully the beefier specs mean fewer cutbacks while maintaining a fluid 60 or 30fps.

"Being able to reach 4K resolution with some graphical settings dialed back" is quite an achievement, though. The high end consoles struggle to hit a consistent 4K, and often need some degree of dynamic resolution scaling (even for games based on last gen hardware like Horizon Forbidden West) - and that's fine: a full 4K image can be wasteful if power is limited, and you want to focus on different visual effects. If Drake can actually use 4K DLSS, that'd be an amazing performance already imo. It's not going to be similar in visuals as the high-end systems, but the comparison with the XSS could be quite favourable if DLSS is applicable in this way (i.e. 4K output pixels vs. XSS' 1080p-1440p visuals to counteract potentially lower graphical settings). XSS is still a stationary device, so matching or even getting close to such a device is quite a feat for handheld-based hardware. So yeah, I guess I wanted to say that the scenario you mention is the exact sales pitch I would have if I wanted to say why DLSS is almost magic heh. And I'm not sure @Alovon11 is trying to tell us much more than that (but correct me if I'm wrong!).

Edit: As for the 'close to PS5': I think Alovon makes a reasonable case that you can get an image that looks close enough for practical purposes (not similar, or indistinguishable, of course) to the 4K devices using the correct techniques. One thing we don't have enough info about is how well RTX GI would work on a Drake chip, so speculating on that is a bit hard. And ultra performance 4K has noticeable drawbacks compared with native or close-to-native rendering. So we should be careful with calling the images competitive. But there is a good case to be made that Drake has the technology that puts it in a sweet spot where it can dominate the low-end console market in terms of visual quality, i.e. outperforming the XSS depending on the potency of the RT cores in docked mode (for the lighting quality in RTX GI). An NVIDIA chip with high raw power (like the Drake has - for the thing it is trying to be) can probably punch well above its weight thanks to DLSS and RT core support in lighting, if a game takes the time to produce a visual profile that is fully optimised (best DLSS setting for raw rendering vs. upscaling quality, lighting supported by the RT cores). The NVIDIA software/hardware integration has become very impressive with the introduction of RTX, we shouldn't underestimate this imo. If Drake is indeed a 3.7 TF chip (12 SMs, at 1.2 GHz), and DLSS and RT core-accelerated lighting is used by developers, then the chip is in a position to output some remarkably high quality visuals I think. Not PS5 or XSX level, but quite possibly something that looks better than the XSS visuals in certain cases. Now, there remain limitations, most pressingly due to the RAM bandwidth but also the CPU performance could be slower than XSS by a clear margin depending on the precise specs. But even if it fails to outperform the XSS in most cases, that's still an impressive showing imo.
 
Edit: As for the 'close to PS5': I think Alovon makes a reasonable case that you can get an image that looks close enough for practical purposes (not similar, or indistinguishable, of course) to the 4K devices using the correct techniques. One thing we don't have enough info about is how well RTX GI would work on a Drake chip, so speculating on that is a bit hard. And ultra performance 4K has noticeable drawbacks compared with native or close-to-native rendering. So we should be careful with calling the images competitive. But there is a good case to be made that Drake has the technology that puts it in a sweet spot where it can dominate the low-end console market in terms of visual quality, i.e. outperforming the XSS depending on the potency of the RT cores in docked mode (for the lighting quality in RTX GI). An NVIDIA chip with high raw power (like the Drake has - for the thing it is trying to be) can probably punch well above its weight thanks to DLSS and RT core support in lighting, if a game takes the time to produce a visual profile that is fully optimised (best DLSS setting for raw rendering vs. upscaling quality, lighting supported by the RT cores). The NVIDIA software/hardware integration has become very impressive with the introduction of RTX, we shouldn't underestimate this imo. If Drake is indeed a 3.7 TF chip (12 SMs, at 1.2 GHz), and DLSS and RT core-accelerated lighting is used by developers, then the chip is in a position to output some remarkably high quality visuals I think. Not PS5 or XSX level, but quite possibly something that looks better than the XSS visuals in certain cases. Now, there remain limitations, most pressingly due to the RAM bandwidth but also the CPU performance could be slower than XSS by a clear margin depending on the precise specs. But even if it fails to outperform the XSS in most cases, that's still an impressive showing imo.
I agree with what you say.
But you should add that all of that might only work at 30 FPS. If you want your game to run at 60 FPS, there might be a range of scenarios in which you'd prefer to forgo DLSS and RTXGI in favour of raw rasterisation. In itself, the Drake you describe is capable of running something like Mario Odyssey at 60 FPS at 4K in docked mode, after all.
 
I agree with what you say.
But you should add that all of that might only work at 30 FPS. If you want your game to run at 60 FPS, there might be a range of scenarios in which you'd prefer to forgo DLSS and RTXGI in favour of raw rasterisation. In itself, the Drake you describe is capable of running something like Mario Odyssey at 60 FPS at 4K in docked mode, after all.
Yes indeed. Expecting 4K 60FPS on gen 8 level games is likely too much to ask, and in those circumstances going for a lower target resolution like 1440p (perhaps 720p -> 1440p) could offer a performance compromise at 60 fps, or you could target 30 fps instead. In the end, the hardware/software integration can't give us infinite performance, and limits need to be imposed. We should of course all be aware of that fact, and realise that for hardware based on a handheld form factor, getting to a PS4 Pro level of visual quality is already very impressive, let alone getting to XSS or beyond.
 
I agree with what you say.
But you should add that all of that might only work at 30 FPS. If you want your game to run at 60 FPS, there might be a range of scenarios in which you'd prefer to forgo DLSS and RTXGI in favour of raw rasterisation. In itself, the Drake you describe is capable of running something like Mario Odyssey at 60 FPS at 4K in docked mode, after all.
I feel like with regards to 4K@60 via DLSS, developers will still inevitably rely on, as DF would put it, "nips and tucks" to graphical settings. There's also the fact that games optimised for it from the start will have a relatively easier time achieving that goal. Likewise, I could see third parties devising the same "late game port" strategy of using the Switch version to iron out any issues a title might have originally shipped with, along with using the optimisations they've gained to further "patch" any existing PS5/XBS versions.
 
I guess now's as good a time as any to bring this back up: I've mentioned several times that Ampere has concurrency features that should allow the system to do DLSS in parallel with other tasks. Here's what I've based that interpretation on (source - 10 MB download). We start with a Turing frame analysis:

[Image: frame-time graph for Turing from the NVIDIA Ampere whitepaper, showing DLSS (purple) at the end of the rendering pipeline]


As you can see in the bottom frame buffer graph, DLSS (purple) runs at the very end of the rendering pipeline, with only a relatively small amount of post-processing after it finishes. In contrast, here's what Ampere looks like:

[Image: frame-time graph for Ampere from the NVIDIA Ampere whitepaper, showing tensor core (DLSS) work running concurrently, mid-frame]


As you can see, the purple part indicating tensor core (and therefore DLSS) evaluation runs concurrently within the frame, and furthermore is not slotted at the end of the rendering pipeline but instead precisely in the middle. Here is my interpretation of this: Ampere allows the developer to evaluate DLSS operations while the rest of the GPU is busy evaluating the next frame. This would induce a 1-frame latency, but would allow DLSS to take much less runtime away from the frame buffer as compared to the Turing situation where it slots in at the end of each frame buffer, and therefore is sequential with the rendering of the frame.

If this interpretation is correct, then it means that the contribution of DLSS to the rendering time is averaged away completely by the fact that it runs concurrently with the rest of the GPU. This would be a feature that the RTX30 cards have, but not the RTX20 (Turing), so Alex' video did not show this effect, and instead predicted the additional render time for the Turing case, which makes complete sense (and this is also why I'd love to see someone with a 30-series card repeat Alex' experiment of trying to guess the DLSS cost from comparing DLSS on/off frame rates). If this interpretation is correct, then the impact of DLSS on the render time would be dependent only on the amount of power draw it takes away from the other components (i.e. do we need to clock the GPU lower to accommodate parallel DLSS and frame computations), not by the runtime of the algorithm itself.

I'm curious what people think: is this a valid interpretation of what the white paper says about these figures? The white paper is not explicit about tensor core evaluation, so it is necessary to put in some of your own interpretation of these figures.
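To make the trade-off concrete, here's a toy sketch of sequential vs. overlapped scheduling under that interpretation; the millisecond figures are purely illustrative assumptions, not measurements:

```python
def dlss_scheduling(render_ms, dlss_ms):
    """Toy comparison of Turing-style sequential DLSS vs. the overlapped scheduling
    described above, where DLSS for frame N runs while frame N+1 is rendering."""
    sequential_frame_ms = render_ms + dlss_ms      # DLSS appended to the end of each frame
    overlapped_frame_ms = max(render_ms, dlss_ms)  # DLSS hidden behind the next frame's work
    return sequential_frame_ms, overlapped_frame_ms

seq, ovl = dlss_scheduling(render_ms=14.0, dlss_ms=2.5)
print(seq, ovl)  # 16.5 ms vs. 14.0 ms per frame, at the cost of roughly one extra frame of latency
```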
 
There is good news for those among us who want to see DLSS in portable mode. According to an Igor's Lab experiment (I didn't know the source, but the tests seem sensible), running Shadow of the Tomb Raider with DLSS actually saves energy per frame rendered:

[Image: Igor's Lab power-per-frame measurements for Shadow of the Tomb Raider, DLSS on vs. off]


In this example, the savings amount to 20% of the power per frame. That is substantial when we consider the impact this can have on battery life. It's not gargantuan, mind you, but this might have caught Nintendo's attention.

I should say though that all the caveats regarding DLSS in handheld mode still apply: we have less power at our disposal compared to docked mode, and because of this DLSS will likely eat up a lot of the time budget per frame. However, if it consistently saves energy, then it increases the likelihood of Nintendo trying to make it work.

And of course, the power savings would scale up in docked mode as well.
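As a back-of-the-envelope illustration of why this matters for battery life, here's a minimal sketch; the total system draw and the GPU's share of it are assumptions I've picked for illustration, only the 20% per-frame saving comes from the measurement above:

```python
def battery_life_multiplier(total_draw_w, gpu_share, gpu_energy_saving):
    """Relative battery life if the GPU's energy per frame drops by gpu_energy_saving,
    assuming the GPU accounts for gpu_share of total system draw (both assumptions)."""
    new_draw_w = total_draw_w * (1 - gpu_share * gpu_energy_saving)
    return total_draw_w / new_draw_w

print(battery_life_multiplier(total_draw_w=8.0, gpu_share=0.5, gpu_energy_saving=0.20))  # ~1.11x
```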
 
I feel like adding a little note about how things play out on the industry side: as John Linneman said a while ago in his review of Alien Isolation for the Switch, the industry should aspire to find ways to make individual pixels look prettier instead of pushing more of them.

By this logic, DLSS might not be a main focus of Nintendo at all.

Realizing that, I wonder if the only scenario in which DLSS can really be of value for Nintendo is in the context of handheld gaming in which power consumption is paramount. That would be, say, a 360p -> 720p scenario.

And there is some good news on that front, going at least by an Igor's Lab experiment (I didn't know the source, but the tests seem sensible): running Shadow of the Tomb Raider with DLSS actually saved energy per frame rendered:

[Image: Igor's Lab power-per-frame measurements for Shadow of the Tomb Raider, DLSS on vs. off]


In this example, you save an average of 20% power on each frame. That is substantial when we consider the impact this can have on battery life. It's not gargantuan, mind you, but this might have caught Nintendo's attention.

I should finally say though that all the caveats regarding DLSS in handheld mode still apply: we have less power at our disposal compared to docked, and because of this DLSS will likely eat up a lot of the time budget per frame. However, if it consistently saves energy, then it can increase the likelihood of Nintendo trying to make it work.

And of course, the power savings would scale up in docked mode as well.
I'm not sure that John would have meant you should just leave it at 1080p or something like that with that statement. More so that the plethora of upscaling techniques (checkerboard rendering, DLSS, etc.) would allow you to make up the difference that way. In that sense, DLSS is very much something that fits the bill, and it helps that DLSS is the best upscaling method in the business, allowing you to start at low native resolutions and produce better quality images than most other upscaling methods that start at higher resolutions. So DLSS very much has a place in there, and just leaving the image at 1080p is a bad idea when you have DLSS handy.

Starting at 360p in undocked mode might lead to poor output, because DLSS probably needs more input information to produce a good image. That said, in the most extreme case it could make sense, I feel.
 
Two things:
  1. These graphs don't show the control group - what's the frametime and volume of GPU work with RT off? If you knew the true cost of RT you might not do it at all (eg. in portable mode).
  2. Concurrency definitely does produce higher frame rates, but this comes at the cost of having more silicon active at any one time. The Y-axis on these graphs shows the computational load but another way to label it is in terms of power consumption or heat generated. You can get away with this on a desktop GPU but it's harder with a hybrid console.
 
I'm not sure that John would have meant you should just leave it at 1080p or something like that with that statement. More so that the plethora of upscaling techniques (checkerboard rendering, DLSS, etc.) would allow you to make up the difference that way. In that sense, DLSS is very much something that fits the bill, and it helps that DLSS is the best upscaling method in the business, allowing you to start at low native resolutions and produce better quality images than most other upscaling methods that start at higher resolutions. So DLSS very much has a place in there, and just leaving the image at 1080p is a bad idea when you have DLSS handy.

Starting at 360p in undocked mode might lead to poor output, because DLSS probably needs more input information to produce a good image. That said, in the most extreme case it could make sense, I feel.
Big brain fart. I edited the part of my message that made no sense. I should stop hanging around here while working :s

Good find again, by the way, with the Ampere/Turing DLSS implementation comparison graphs! I too am interested in how much time is invested in DLSS in both architectures.
Two things:
  1. These graphs don't show the control group - what's the frametime and volume of GPU work with RT off? If you knew the true cost of RT you might not do it at all (eg. in portable mode).
  2. Concurrency definitely does produce higher frame rates, but this comes at the cost of having more silicon active at any one time. The Y-axis on these graphs shows the computational load but another way to label it is in terms of power consumption or heat generated. You can get away with this on a desktop GPU but it's harder with a hybrid console.
I wanted to point that out. I increasingly have the feeling that implementing DLSS for a low-powered device is actually arduous.
 
Two things:
  1. These graphs don't show the control group - what's the frametime and volume of GPU work with RT off? If you knew the true cost of RT you might not do it at all (eg. in portable mode).
  2. Concurrency definitely does produce higher frame rates, but this comes at the cost of having more silicon active at any one time. The Y-axis on these graphs shows the computational load but another way to label it is in terms of power consumption or heat generated. You can get away with this on a desktop GPU but it's harder with a hybrid console.
Yeah, these are good points. The comparison is definitely not ideal, and the level of activity of the RT cores kind of obscures the power draw that would be needed for overlapped DLSS. There are two things we need to establish:
1) Is it possible in principle to overlap the DLSS computation with the rest of the frame rendering? I'd say the above images indicate that it is.
2) How much more power does it take to run tensor cores and ALU cores concurrently? The tensor cores will be used for only part of the frame (DLSS shouldn't take the full 16 or 33 ms), so in the end we are interested in the AUC (area under the curve) per ms averaged over a fixed frame, with the RT core activity taken out (edit: probably need a square in there to get to power consumption). Unfortunately, that info does not appear to be available anywhere.

The latter consideration influences the choice of whether to use DLSS concurrently or in sequence, which is a trade-off between performance and peak heat generation. And you're right that the choice of whether to use RT cores depends on how much power they draw, which is especially important for handheld mode. The use of DLSS could allow you to set the GPU frequency lower because you render at a lower native resolution, so in that sense you could argue for DLSS even in that situation, but it remains a question how everything scales down on the RT core side of things (for example, RTX GI, which uses the RT cores, is said to be independent of resolution, but other RT techniques are not).

Docked mode is probably where DLSS will be used most, and there a small increase in wattage is less problematic than in handheld, while the need to mitigate the DLSS overhead is larger (because DLSS to 4K is significantly more expensive than DLSS to 1080p or 720p).
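On that last point, a minimal sketch of why 4K output is so much heavier, assuming DLSS runtime scales roughly with the number of output pixels (an approximation; the actual cost also depends on clocks and tensor throughput):

```python
def relative_dlss_cost(output_w, output_h, base_w=1280, base_h=720):
    """Rough relative DLSS cost, assuming runtime scales with output pixel count."""
    return (output_w * output_h) / (base_w * base_h)

for res in ((1280, 720), (1920, 1080), (3840, 2160)):
    print(res, relative_dlss_cost(*res))  # 1.0, 2.25, 9.0 relative to a 720p output
```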
 
I’ve never heard of anyone having final silicon/close to final silicon devkits nearly 4 years out.

something went so horribly bad for that to happen.

I mean, it's possible that with Covid, chip shortages, rising costs, the fact that the Switch keeps selling great in any case (so they don't need new hardware so soon), and now the war in Ukraine and further cost increases... Nintendo simply delayed, and changed (for instance to a different manufacturing process), what is basically next-gen Switch production and launch.

For instance, a few months ago people here were sure we were talking about 8nm, but now people keep mentioning 5nm, 6nm and 7nm as possible processes.

Personally, at this point I would be surprised to see this next-gen Switch hardware announced and launched this year.
 
I mean, it's possible that with Covid, chip shortages, rising costs, the fact that the Switch keeps selling great in any case (so they don't need new hardware so soon), and now the war in Ukraine and further cost increases... Nintendo simply delayed, and changed (for instance to a different manufacturing process), what is basically next-gen Switch production and launch.

For instance, a few months ago people here were sure we were talking about 8nm, but now people keep mentioning 5nm, 6nm and 7nm as possible processes.

Personally, at this point I would be surprised to see this next-gen Switch hardware announced and launched this year.
Delaying a hardware launch is not something that can be done "simply". You have product and component orders from dozens of sources, fabrication capacity reserved at foundries that is in extremely hot demand, and assembly lines reserved at places like Foxconn and their other factories in Malaysia and Vietnam; all of those moving pieces would be extremely difficult to coordinate if they wanted to delay ~8 months out from launch due to recent events or continued high sales.

It's extremely difficult to do, nearly unfeasible. That's not to say it has to be this coming fiscal year, but if it isn't, I don't think that would be down to a decision made these past few months.
 
Delaying a hardware launch is not something that can be done "simply". You have product and component orders from dozens of sources, fabrication capacity reserved at foundries that is in extremely hot demand, and assembly lines reserved at places like Foxconn and their other factories in Malaysia and Vietnam; all of those moving pieces would be extremely difficult to coordinate if they wanted to delay ~8 months out from launch due to recent events or continued high sales.

It's extremely difficult to do, nearly unfeasible. That's not to say it has to be this coming fiscal year, but if it isn't, I don't think that would be down to a decision made these past few months.

First, there is a difference between having a hard launch date (for instance, after a reveal you give a public release date) and having an internal plan for when the new hardware could launch (for instance "we will launch it somewhere between the end of 2022 and the first half of 2023", so you basically have a possible 9-month launch window in the initial plan).
Second, no one said doing that would be simple, only that it could be a reason; it's possible that all the things I mentioned internally contributed to a possible delay or a change of manufacturing process.

Remember, you were saying it's nearly impossible for them to change the 8nm process, and now most people here think it's possible that the manufacturing process has changed.
An internal plan change could have happened further back than the last few months while, at the same time, leakers (like Nate or Bloomberg) still don't have new information.
There is no need to act like everything about this new hardware is certain; nothing is 100% certain until an official announcement.
 
First, there is a difference between having a hard launch date (for instance, after a reveal you give a public release date) and having an internal plan for when the new hardware could launch (for instance "we will launch it somewhere between the end of 2022 and the first half of 2023", so you basically have a possible 9-month launch window in the initial plan).
Second, no one said it's simple, but it's possible that all the things I mentioned internally contributed to a possible delay or a change of manufacturing process.

Remember, you were saying it's nearly impossible for them to change the 8nm process, and most people here think it's possible that the manufacturing process has changed.
I think you misunderstood: nobody really thinks the process node changed mid-development; that would be extremely difficult and unlikely. What people are now saying is that it's possible the info about it being 8nm was wrong in the first place.

And again, the amount of coordination they need for a hardware launch makes it extremely likely that they have had a fairly rigid launch window in mind for the past several months, if not a year or more. Just because it's not announced doesn't mean it's not exceedingly difficult to change, since they'd need to re-coordinate all of those efforts again, which can wind up costing some of these companies a lot of time and money.
 
I think you misunderstood: nobody really thinks the process node changed mid-development; that would be extremely difficult and unlikely. What people are now saying is that it's possible the info about it being 8nm was wrong in the first place.

And again, the amount of coordination they need for a hardware launch makes it extremely likely that they have had a fairly rigid launch window in mind for the past several months, if not a year or more. Just because it's not announced doesn't mean it's not exceedingly difficult to change, since they'd need to re-coordinate all of those efforts again, which can wind up costing some of these companies a lot of time and money.

So you think that, for instance, Nate could be wrong about 8nm, but at the same time you think he's accurate about the release time frame?

That really depends mostly on timing (how much of the initial launch window was left when they maybe decided to delay the internal plan for the Switch launch window), but also on costs; maybe they realised some time last year that 8nm would cost more in the long run than they thought in 2020, and that despite an internal delay they could have lower costs in the long run with a smaller manufacturing process.

I'm trying to stay open-minded about possible changes here, saying that nothing is certain until Nintendo announces the new hardware;
on the other hand, you were 100% certain a month ago that it's the 8nm process, and now you are not so certain any more.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

