Great job! You've got a good podcast voice, relaxing to listen to. My first podcast appearance ever. Hope people enjoy it.
I imagine a hypothetical scenario would be this: the lowest performance profile setting would allow you to run MK8D at 720p 60 fps in handheld mode, boost some textures, and still have enough performance overhead to utilise the DLSS algorithm to internally upres to 1440p, and then let downsampling provide a better 720p image. I believe that was the original purpose of DLSS, right? Before they figured out the algorithm was so powerful that it was the best-in-biz real-time super-resolution algorithm. As such, I would imagine that that could have a profound effect on image quality. I remember someone (maybe DF?) doing a comparison for this use case, and they were quite impressed with the improvement to the sub-pixel detail like hairs and such. So yeah, if that is possible, I would say definitely do it. How would you all feel about a game AI downsampling from a DLSS output resolution on a portable screen?
Just asking for science...
EDIT:
For clarity, I'm talking about something like DLSS + DLDSR on portable hardware.
I imagine a hypothetical scenario would be this: the lowest performance profile setting would allow you to run MK8D at 720p 60 fps in handheld mode, boost some textures, and still have enough performance overhead to utilise the DLSS algorithm to internally upres to 1440p, and then let downsampling provide a better 720p image. I believe that was the original purpose of DLSS, right? Before they figured out the algorithm was so powerful that it was the best-in-biz real-time super-resolution algorithm. As such, I would imagine that that could have a profound effect on image quality. I remember someone (maybe DF?) doing a comparison for this use case, and they were quite impressed with the improvement to the sub-pixel detail like hairs and such. So yeah, if that is possible, I would say definitely do it.
If the choice is between this and a noticeably better battery life (like costing more than an hour of battery life), I think the extra hour of battery could be preferred if it is a difference between 3 or 4 hours. If the difference is more like 7 vs. 8 hours, I'd say do the DLSS downsampling.
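The arithmetic behind that preference is easy to sanity-check. A quick sketch; the one-hour cost and the baseline runtimes are just the hypothetical figures from the post above, not measured numbers:

```python
# Relative battery impact of spending ~1 hour of runtime on extra image quality.
# The 1-hour cost and the 4h/8h baselines are the hypothetical figures from the post above.
def relative_loss(baseline_hours: float, cost_hours: float = 1.0) -> float:
    """Fraction of total runtime given up."""
    return cost_hours / baseline_hours

print(f"4h baseline: {relative_loss(4):.0%} of runtime lost")   # 25%
print(f"8h baseline: {relative_loss(8):.1%} of runtime lost")   # 12.5%
```

Losing an hour out of four is a quarter of the session; out of eight it is barely an eighth, which is why the trade reads differently in the two cases.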
I think Insomniac Games' post from years back is revealing: it basically states that focusing on visual quality over frame rate had a better market reception than going for frame rate. Now I think that pendulum has swung back to some degree: more people have become interested in performance, hence why I think the 60 fps modes are rapidly becoming an option in multiple games. That said, it does indicate that visuals are important, and I would think that many would appreciate an option like the one you laid out, even at the cost of performance. As I mentioned before, this is not my proposal, but if users want the option to use DLSS in this way, then I will do my best to provide it as an option.
I'm only talking about super sampling and then downsampling (starting from native resolution, upsampling, and then downsampling the result), which was the actual original purpose of DLSS and it's how the neural networks were trained from the beginning.
I just want to know how much of an appetite there is for image quality conscious gamers to sacrifice a little performance (and subsequently battery life) for significantly improved image quality. Upsampling from a sub-native resolution would produce inferior image quality results when compared against its natively upsampled counterpart.
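For anyone trying to picture the "DLSS up, then downsample back" idea being discussed, here is a minimal sketch of the stages and resolutions involved. It is purely illustrative: the function names and structure are made up, and only the general shape (render at or near native, AI-upscale to a higher internal resolution, filter back down to the 720p panel) comes from the posts above.

```python
# Hypothetical DLSS + downsample ("DLDSR-style") pipeline for a 720p handheld screen.
# All names are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Resolution:
    width: int
    height: int

NATIVE   = Resolution(1280, 720)   # what the game renders
INTERNAL = Resolution(2560, 1440)  # DLSS output used as a supersampling source
SCREEN   = Resolution(1280, 720)   # the handheld panel

def render_frame() -> Resolution:
    # Normal 720p raster pass.
    return NATIVE

def dlss_upscale(src: Resolution, dst: Resolution) -> Resolution:
    # Tensor-core reconstruction from src to dst; this is the extra GPU cost
    # being debated, and it scales with the output resolution.
    return dst

def downsample(src: Resolution, dst: Resolution) -> Resolution:
    # Filtered downscale back to the panel; comparatively cheap.
    return dst

frame = downsample(dlss_upscale(render_frame(), INTERNAL), SCREEN)
print(frame)  # Resolution(width=1280, height=720), built from a 1440p reconstruction
```

The payoff described above is better sub-pixel detail on the 720p panel; the cost is the tensor-core time spent reconstructing to 1440p instead of outputting 720p directly.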
@brainchild, to give my few cents on this matter: it's difficult to say whether it's something I'd like, as I haven't actually witnessed it executed in a way that makes me want it. In other words, I haven't seen many well-executed attempts, or seen it executed much at all.
So a supersampled image that gets AI-downsampled to the screen's resolution with smooth scaling sounds good in theory, but I'm unsure how it would play out in practice. I believe there's a performance penalty as well for running a setup like this?
Lots of good answers to brainchild's question.
By the way brainchild, do you know how much time is needed to apply DLSS, DLDSR and RTXGI to a frame when running a game on an RTX 3080 Ti?
Do you happen to know how much time each step of the process would take?
I'd say Alex Battaglia from Digital Foundry might have attempted this "downsampling from an AI upscaled image" in one of his videos. How would you all feel about a game AI downsampling from a DLSS output resolution on a portable screen?
Just asking for science...
EDIT:
For clarity, I'm talking about something like DLSS + DLDSR on portable hardware.
I think the proliferation of performance vs quality modes is something of a side effect of games being stretched across a wide array of hardware combined with diminishing returns in resolution. Most games on PS5/XS (or even PS4 Pro/X1X) right now aren't really being designed with that hardware in mind, and thus have an excess of performance which is really pretty debatable what to do with. I think Insomniac Games' post from years back is revealing: it basically states that focusing on visual quality over frame rate had a better market reception than going for frame rate. Now I think that pendulum has swung back to some degree: more people have become interested in performance, hence why I think the 60 fps modes are rapidly becoming an option in multiple games. That said, it does indicate that visuals are important, and I would think that many would appreciate an option like the one you laid out, even at the cost of performance.
Of course, it will depend on the game as well I think: If you are making Ratchet & Clank like Insomniac, then you can get perfectly coherent and fluid gameplay at 30 fps. If you're making Mario Kart, please don't sacrifice a solid 60 fps for better visual quality: you need the accuracy that 60 fps provides in order to play well at 200cc.
Anyway, these are my personal thoughts on the matter. I'm sure other people will agree or disagree.
It is much appreciated that you take some of your precious time to educate a few forum dwellers. Seriously, it's awesome. I don't know their contributions to the rendering budget off the top of my head but I'll get the numbers for you when I have some time.
On PC it's easy to patch; you only need to put the new DLSS files in the old version's folder. I wonder how Nintendo/Nvidia will solve DLSS 2.x if Nvidia comes out with new versions that increase IQ and performance for a few %.
If release games are running on an old DLSS version and the new one would fix performance/ghosting/image quality I wonder if the games could look up at the OS to see if a newer one is installed?
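For context on the PC-side practice being referenced: DLSS 2.x ships with games as a redistributable library (nvngx_dlss.dll on Windows), and swapping in a newer build is literally a file replacement. A minimal sketch, with hypothetical paths and file names:

```python
# Sketch of the manual DLSS "upgrade" people do on PC: drop a newer
# nvngx_dlss.dll over the one the game shipped with. All paths are hypothetical.
import shutil
from pathlib import Path

new_dll  = Path(r"C:\Downloads\nvngx_dlss.dll")           # newer DLSS library
game_dll = Path(r"C:\Games\SomeGame\bin\nvngx_dlss.dll")  # the copy the game loads

shutil.copy2(game_dll, game_dll.with_name(game_dll.name + ".bak"))  # keep a backup
shutil.copy2(new_dll, game_dll)                                     # replace with the newer version
```

The question in the thread is whether a console with a system-managed SDK would ever allow anything comparable, since the platform holder, not the user, controls what library a game loads.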
They could, but don't create any expectations. Devs almost certainly don't want untested builds which could introduce new bugs to fix, and Nvidia will also likely avoid the responsibility of guaranteeing that new DLSS versions don't break anything in old games they don't own. I wonder how Nintendo/Nvidia will solve DLSS 2.x if Nvidia comes out with new versions that increase IQ and performance for a few %.
If release games are running on an old DLSS version and the new one would fix performance/ghosting/image quality I wonder if the games could look up at the OS to see if a newer one is installed?
Either the OS will provide a new one with OS updates or Nintendo will update the DLSS version dev-side and devs choose to use the one that fits the development timeline of their project. I wonder how Nintendo/Nvidia will solve DLSS 2.x if Nvidia comes out with new versions that increase IQ and performance for a few %.
If release games are running on an old DLSS version and the new one would fix performance/ghosting/image quality I wonder if the games could look up at the OS to see if a newer one is installed?
Thanks. I actually hate my voice, but I'm self-conscious. Great job! You've got a good podcast voice, relaxing to listen to.
I'm a voice actor. We all hate our voices. Thanks. I actually hate my voice, but I'm self-conscious.
Based on how the Switch currently manages things, I wouldn't expect any attempt to update the DLSS library of a game that's already been released. I wonder how Nintendo/Nvidia will solve DLSS 2.x if Nvidia comes out with new versions that increase IQ and performance for a few %.
If release games are running on an old DLSS version and the new one would fix performance/ghosting/image quality I wonder if the games could look up at the OS to see if a newer one is installed?
Anything that can improve IQ without killing the frame rate is a good thing as far as I'm concerned. How would you all feel about a game AI downsampling from a DLSS output resolution on a portable screen?
Just asking for science...
EDIT:
For clarity, I'm talking about something like DLSS + DLDSR on portable hardware.
60fps is a choice rather than a hardware limitation. If you aim for 60fps with DLSS, you'll get it. Are there thoughts on how DLSS fits in with 60fps games? Is there something in the leak that indicates that Nintendo has an implementation that runs in a timeframe amenable to 60FPS?
Maybe, maybe not. So far, DLSS 2.x versions have been backwards-compatible. And rather than forcing developers to have their games run against updated versions, the Nintendo SDK can simply provide a configuration option to enable roll-forward. Presumably it would be disabled by default, but the developer could enable it to consume minor version upgrades for artifacting fixes and so on, with the downside that it won't be something they've explicitly tested. They could, but don't create any expectations. Devs almost certainly don't want untested builds which could introduce new bugs to fix, and Nvidia will also likely avoid the responsibility of guaranteeing that new DLSS versions don't break anything in old games they don't own.
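To make the roll-forward idea concrete, here is one way such a policy could behave. This is a hypothetical sketch, not anything from the leak or from a real SDK: the game records the DLSS version it was tested against, and the system substitutes a newer minor revision only when the developer has opted in.

```python
# Hypothetical roll-forward policy for a system-provided DLSS library.
# Nothing here reflects a real Nintendo/Nvidia API; it's just the policy in code form.

def pick_dlss_version(shipped: tuple[int, int, int],
                      installed: list[tuple[int, int, int]],
                      roll_forward: bool) -> tuple[int, int, int]:
    """Return the library version the game should be given."""
    if not roll_forward:
        return shipped  # default: exactly what the developer tested against
    # Opt-in: newest installed build with the same major version (assumed compatible).
    candidates = [v for v in installed if v[0] == shipped[0] and v >= shipped]
    return max(candidates, default=shipped)

print(pick_dlss_version((2, 2, 1), [(2, 2, 1), (2, 4, 0), (3, 1, 0)], roll_forward=True))
# (2, 4, 0): picks up the newer 2.x build, skips the major-version jump
```

The opt-in default matches the concern quoted above: developers keep the tested behaviour unless they deliberately accept untested minor updates.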
I expect that it, for the most part, will be resolved in a similar way that the Series consoles resolved the performance issues of a game like Control, which was handled by the platform holder side via an update to the system. I wonder how Nintendo/Nvidia will solve DLSS 2.x if Nvidia comes out with new versions that increase IQ and performance for a few %.
If release games are running on an old DLSS version and the new one would fix performance/ghosting/image quality I wonder if the games could look up at the OS to see if a newer one is installed?
You are likely correct. It was always just a theory. Look, I get that you seem very attached to this "T239 is actually a binned T234" theory, but the leaked files from the Nvidia hack directly contradict it. We know that they're two separate chips beyond a reasonable doubt.
I don't see how Nintendo working with Nvidia to accomplish the following would not also involve an "increased R&D expenditure": Also, one would have to reconcile the idea of increased R&D expenditure at Nintendo with unfounded theories about binned chips.
The alternative is that the dev kits have been big boxes with floorswept desktop Ampere GPUs and who knows what CPU and other components in them. Not impossible, but Orin was supposed to be sampling in 2021, and I think the timing could have worked out to give a few partners a devkit in the form of an Orin prototype board. Of course there's no actual evidence to support this, it's just speculation based on the presence of GA10B in NVN2. TTBOMK is on another level where abbreviations are concerned
I think your idea makes sense, @Q-True; it could work if they put out T234s as dev kits. The only question I still have regarding that is the fact that dev kits have apparently been out in the wild since at least March of last year. I wonder if the T234 chip was as available back then.
Fair, but in the DF video on the subject a DLSS pass could potentially take almost half the rendering time available for a 60FPS title. 60fps is a choice rather than a hardware limitation. If you aim for 60fps with DLSS, you'll get it.
Fair, but in the DF video on the subject a DLSS pass could potentially take almost half the rendering time available for a 60FPS title.
I'm curious what the thoughts and/or details are about how that might actually work on the next Switch.
I know I don't know enough to have an actual opinion on the topic.
Dakhil & ShaunSwitch have made the excellent point that full Orin (T239) contains AV1 encode support!
Hahaha. Good to know. This helps actually. I'm a voice actor. We all hate our voices.
I mostly agree with this. Dakhil & ShaunSwitch have made the excellent point that full Orin (T239) contains AV1 encode support!
Features like hardware AV1 encode/decode could be used by a future Nvidia Shield TV product!
I expect we will see future Nvidia Shield products, and I think it's also reasonable that they could contain the same chip that ends up in the next Switch (same die, unique configuration).
The contract between Nintendo and Nvidia clearly allowed Nvidia to sell the same chip and use it for other devices. We see that the T214 chip in all the newer Switches is also used in the newer Nvidia Shield. Same chip.
If Nintendo was concerned about secrecy and competition, they would put a "non-compete" clause into their contract with Nvidia. Along the lines of "
A) Nvidia shall not compete directly with Nintendo in the portable video game space while our shared manufacturing contract is in place
B) Nvidia shall not sell the chip we have designed together to other parties for use in a competing portable video game console
C) Nvidia will not publicly share details about their involvement in our future video game products before Nintendo has publicly announced said products."
TTBOMK, the Switch remains the only portable gaming system to use the Tegra X1 chip.
Also, the Nvidia Shield TV contains less RAM than the Switch. I don't think that is a complete accident. I think Nintendo would be scared that if Nvidia sold a cheaper mass consumer product with the identical configuration to Switch, someone might figure out how to run Switch games on it, bypassing the need for their hardware.
I think there is a proven track record of Nvidia and Nintendo working together to each ship their own products based on the same die, and I think it is reasonable that this can continue with the die for the T239. Nintendo can continue to feel safe that Nvidia will not directly compete, and they are rewarded with lower per-chip costs.
More, actually: 2/3rds by Alex's analysis. This is a really good video on the topic, but Alex is doing a lot of back-of-the-envelope math, both for the cost of DLSS and for the potential power of a Switch Pro SoC. Fair, but in the DF video on the subject a DLSS pass could potentially take almost half the rendering time available for a 60FPS title.
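Put in frame-time terms, the "half" and "two thirds" figures being thrown around work out roughly as follows. The fractions are the ones cited in the posts above; the millisecond results are just arithmetic, not measurements of any real hardware:

```python
# At 60 fps a frame has ~16.7 ms. If the DLSS pass costs "about half" to
# "about two thirds" of that on low-power hardware, little raster budget remains.
FRAME_BUDGET_MS = 1000 / 60  # ~16.67 ms per frame at 60 fps

for label, dlss_fraction in [("~half", 0.5), ("~two thirds", 2 / 3)]:
    dlss_ms = FRAME_BUDGET_MS * dlss_fraction
    print(f"DLSS at {label}: {dlss_ms:.1f} ms for upscaling, "
          f"{FRAME_BUDGET_MS - dlss_ms:.1f} ms left to render the frame")
```

That is why the question of how fast the tensor cores can run the DLSS pass matters so much more for 60 fps targets than for 30 fps ones, where the frame budget is twice as large.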
Here is a high level explanation you didn't actually ask for. I'm curious what the thoughts and/or details are about how that might actually work on the next Switch.
There are a lot of costs involved in manufacturing, and there is an order of operations that the above does not fully take into consideration. I was curious, but do any of you think that the current world issues (not COVID or the semiconductor shortages) would force Nintendo to sell the product at a loss? Shipping to certain regions like the EU would be difficult this time around.
It would pay for itself more in the long run.
Not directed at you, but just in general: People are focusing too much on the upfront cost in R&D and not the long term cost of this product. I’m sure Nintendo is willing to pay more upfront for something that would make more financial sense in the long run than to cheap out and end up paying more in the long term for their envisioned 7-9 year timeframe like their previous consoles (sans the Wii U).
Either A) take a stock Orin NX, just turn off the car features or repurpose them, spend more on wafers to produce a desirable amount that can be shipped, and work out the engineering of having Orin fit into the form factor of the Switch, including the cooling, the RAM, the storage, etc.
Or B) pay to have a customized Orin (NOT BINNED), set for your own requirements and feature sets, with no "wasted silicon" for features that have no purpose in a gaming console even with an extravagant gimmick attached to it. You are able to produce more per wafer and the die could be smaller as it removes several unnecessary features, and you can fit it into the form factor of the Switch and have appropriate cooling without deviating too much.
Over time you pay less for silicon per chip in the latter case, but you pay a lot more upfront for a customized chip that is for you and you only (maybe in this case). You aren't waiting on binned chips for your product, which may not be available in large quantities; you more often have a supply of working chips that suit your needs, rather than binned chips that do not.
They surely crunched the numbers and saw: "hm, spending more now can land us in X outcome, versus spending less to very little (relatively) now but having to spend more per product, and even more per chip over the long term (plus other costs)."
Order things happen in | Scenario A | Scenario B |
1. The chip is taped out and tested | Nintendo is the only customer, pays for 100% of this -$$$ | Nvidia pays for most of this |
2. The wafers are made in the fab based on this design | Nintendo is the only customer, they pay for 100% of the wafer -$$$ | Nvidia pays and takes on risk for their customers |
3. Each wafer is cut up into chips and the binning process happens. With a 50% yield, half of the chips in each wafer have defects and can not be sold as a FULL chip. | Half of the chips on each wafer do not meet the minimum bar for a Nintendo T239 and as a result are worthless industrial waste without a buyer. They are written off at a total loss and thrown out -$$$. Risk = HIGH, as the yield is an unknown. Sometimes it will be worse. No way to calculate exactly. | 50% of the chips in each wafer have at least one defect and can not be sold as a high-end Tegra product. As many as possible of the remaining 50% are binned and sold as other products with lower clocks and fewer TPCs. As much $$$ is extracted per wafer as possible. |
4. Sorted chips are sold off | | Nintendo pays a lower price because: the die has other customers, they do not need the top-tier clocks, and they do not need all TPCs enabled. Risk = LOW, as costs per final chip are fixed. |
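A toy version of the cost comparison in that table, with made-up numbers: the wafer price, dies per wafer, and salvage value below are all illustrative assumptions; only the 50% yield figure comes from the discussion.

```python
# Illustrative cost-per-usable-chip under the two scenarios in the table.
# All dollar figures and die counts are hypothetical.
WAFER_COST = 6000        # assumed cost per wafer
DIES_PER_WAFER = 200     # assumed candidate dies per wafer
YIELD_RATE = 0.5         # the "50% yields" figure from the discussion

good = int(DIES_PER_WAFER * YIELD_RATE)
defective = DIES_PER_WAFER - good

# Scenario A: Nintendo funds the whole wafer; defective dies are written off.
cost_per_chip_a = WAFER_COST / good

# Scenario B: defective-but-salvageable dies are binned and sold as lower-tier
# parts to Nvidia's other customers, recovering part of the wafer cost.
SALVAGE_VALUE = 15       # assumed recovery per binned die
cost_per_chip_b = (WAFER_COST - defective * SALVAGE_VALUE) / good

print(f"Scenario A: ${cost_per_chip_a:.2f} per usable chip")
print(f"Scenario B: ${cost_per_chip_b:.2f} per usable chip")
```

The exact numbers are invented, but the shape of the argument in the table is visible: whoever can monetise the defective dies ends up with a lower effective cost per good die.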
Actually, the base Tegra X1 had only 3GB of RAM to begin with. I believe it was Capcom who convinced Nintendo to up the specs to 4GB. TTBOMK, the Switch remains the only portable gaming system to use the Tegra X1 chip.
Also, the Nvidia Shield TV contains less RAM than the Switch. I don't think that is a complete accident. I think Nintendo would be scared that if Nvidia sold a cheaper mass consumer product with the identical configuration to Switch, someone might figure out how to run Switch games on it, bypassing the need for their hardware.
They got arrested. Whatever happened to the Nvidia hackers? Did they just quit and act like nothing ever happened?
Really? I missed that. They got arrested.
They got arrested.
A 16-year-old from Oxford has been accused of being one of the leaders of cyber-crime gang Lapsus$.
The teenager, who is alleged to have amassed a $14m (£10.6m) fortune from hacking, has been named by rival hackers and researchers.
City of London Police say they have arrested seven teenagers in relation to the gang but will not say if he is one.
The boy's father told the BBC his family was concerned and was trying to keep him away from his computers.
Under his online moniker "White" or "Breachbase" the teenager, who has autism, is said to be behind the prolific Lapsus$ hacker crew, which is believed to be based in South America.
Lapsus$ is relatively new but has become one of the most talked about and feared hacker cyber-crime gangs, after successfully breaching major firms like Microsoft and then bragging about it online.
Oh wow, Nate with the burn lol. But yeah, I stopped following SMD after his reaction to the Nintendo Switch reveal; the guy is a complete charlatan. He loves to imply without outright saying it but won't hesitate to claim credit if it actually happened. When wrong, however, he defaults to "speculation" as a shield.
Actually, Orin is T234. Drake is T239. Dakhil & ShaunSwitch have made the excellent point that full Orin (T239) contains AV1 encode support!
Features like hardware AV1 encode/decode could be used by a future Nvidia Shield TV product!
I expect we will see future Nvidia Shield products, and I think it's also reasonable that they could contain the same chip that ends up in the next Switch (same die, unique configuration).
The contract between Nintendo and Nvidia clearly allowed Nvidia to sell the same chip and use it for other devices. We see that the T214 chip in all the newer Switches is also used in the newer Nvidia Shield. Same chip.
If Nintendo was concerned about secrecy and competition, they would put a "non-compete" clause into their contract with Nvidia. Along the lines of "
A) Nvidia shall not compete directly with Nintendo in the portable video game space while our shared manufacturing contract is in place
B) Nvidia shall not sell the chip we have designed together to other parties for use in a competing portable video game console
C) Nvidia will not publicly share details about their involvement in our future video game products before Nintendo has publicly announced said products."
TTBOMK, the Switch remains the only portable gaming system to use the Tegra X1 chip.
Also, the Nvidia Shield TV contains less RAM than the Switch. I don't think that is a complete accident. I think Nintendo would be scared that if Nvidia sold a cheaper mass consumer product with the identical configuration to Switch, someone might figure out how to run Switch games on it, bypassing the need for their hardware.
I think there is a proven track record of Nvidia and Nintendo working together to each ship their own products based on the same die, and I think it is reasonable that this can continue with the die for the T239. Nintendo can continue to feel safe that Nvidia will not directly compete, and they are rewarded with lower per-chip costs.
Any murmurs coming from GDC 2022 probably won't be reported on until a couple of weeks after GDC 2022. How about GDC? No rumours have come of that, I suppose.
Nvidia already has a way to sell binned T234s; it's called Orin NX. There are a lot of costs involved in manufacturing, and there is an order of operations that the above does not fully take into consideration.
We also know that Nintendo likes to still try and launch their new consoles at a profit, or as close to break even as possible.
Nintendo is risk/cost averse. They will compromise and look for a solution that allows them to break even at launch.
We hear talk of 50% yields per wafer on the Samsung 8nm process.
Simplified order of operations for this example:
Scenario A: Nintendo hires Nvidia to manufacture a wholly unique chip where they are the only customer and pays all associated costs at the foundry
Scenario B: Nintendo puts an order in for a chip they were consulted on, where the die has multiple uses, and only pays based on the quantity of end product they receive
Order things happen in | Scenario A | Scenario B |
1. The chip is taped out and tested | Nintendo is the only customer, pays for 100% of this -$$$ | Nvidia pays for most of this |
2. The wafers are made in the fab based on this design | Nintendo is the only customer, they pay for 100% of the wafer -$$$ | Nvidia pays and takes on risk for their customers |
3. Each wafer is cut up into chips and the binning process happens. With a 50% yield, half of the chips in each wafer have defects and can not be sold as a FULL chip. | Half of the chips on each wafer do not meet the minimum bar for a Nintendo T239 and as a result are worthless industrial waste without a buyer. They are written off at a total loss and thrown out -$$$. Risk = HIGH, as the yield is an unknown. Sometimes it will be worse. No way to calculate exactly. | 50% of the chips in each wafer have at least one defect and can not be sold as a high-end Tegra product. As many as possible of the remaining 50% are binned and sold as other products with lower clocks and fewer TPCs. As much $$$ is extracted per wafer as possible. |
4. Sorted chips are sold off | | Nintendo pays a lower price because: the die has other customers, they do not need the top-tier clocks, and they do not need all TPCs enabled. Risk = LOW, as costs per final chip are fixed. |
It is also very possible to have a long term plan that could involve a mix of B and A after time. A way to both keep initial costs down and save money in the long term.
For example your dev kits and console launch units could ship with a binned T234 and then have large volume orders of T239s come in and replace them as you move through the cycle.
Now I bet you're telling me that sounds crazy: sell two totally different chip designs, with different transistor counts, as the identical product, just based on the supply of these chips vs. the demand!
NVIDIA Ampere Example:
NVIDIA GeForce RTX 3060 GA104
NVIDIA GeForce RTX 3060 GA106
Here we have NVIDIA selling a product to customers, with the name "NVIDIA GeForce RTX 3060", with the core specs 3584 CUDA cores, 112 tensor cores and 28 RT cores.
Inside you could have either the chip:
-GA106 with 13 Billion transistors
-GA104 with 17.4 Billion transistors
This is yet another example of how Nvidia designs chips to be subsets of other chips. It's truly incredible that they have designed them so they have the option to take a binned GA104 and sell it as a GA106!
This opens up lots of options, so that they have more flexibility! Ultimately this allows Nvidia to be less wasteful, and sell more of what is in demand quickly.
We should all celebrate binning, as it's better for the environment, allowing us to make more final products from the same amount of natural resources!
Again, binning is a bit of an unfounded idea in relation to Drake, as GA10F IS its own GPU die. There are a lot of costs involved in manufacturing, and there is an order of operations that the above does not fully take into consideration.
The chip designation doesn't change when it gets binned. A GA104 is still a GA104 regardless of how much of it is enabled. There are a lot of costs involved in manufacturing, and there is an order of operations that the above does not fully take into consideration.