• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

What do you consider more likely to happen than hardware specs pulled from Nvidia's own database? Do you have an alternative to suggest that is more reliable than Nvidia itself?
more likely is that this is still deep in development, was canned, or something of that ilk. basically I think it's more likely it just doesn't become a product in the near future
 

The Company also announced that it has begun construction of a new production line in Pyeongtaek, Korea, which is expected to be completed in the second half of 2022. The state-of-the-art facility equipped with the latest technology, P3, will produce 14-nanometer DRAM and 5-nanometer logic semiconductors, both based on extreme ultraviolet (EUV) lithography technology.

I wonder if this has anything to do with Nvidia?🤔
 
Let's not forget that Nvidia just spent $9B for securing 5nm supply. I expect Nintendo's total budget for the system to be a fraction of that.
That was to secure TSMC 5nm capacity for Lovelace(RTX 4000) and Hopper GPUs(HPC, Data Science, etc), not Samsung 5nm(Which is the one we're speculating)
 
In my humble opinion, if Nintendo is smart, 5nm just makes more sense.

Sure, it costs more initially, but in the long run it would end up cheaper because they wouldn't have to redesign/upgrade too soon; a 5nm Switch would last at least 5 years.
 

The Company also announced that it has begun construction of a new production line in Pyeongtaek, Korea, which is expected to be completed in the second half of 2022. The state-of-the-art facility equipped with the latest technology, P3, will produce 14-nanometer DRAM and 5-nanometer logic semiconductors, both based on extreme ultraviolet (EUV) lithography technology.

I wonder if this has anything to do with Nvidia?🤔
Being completed doesn't mean it will start production immediately; that will probably take another year. Though it would help.
 
more likely is that this is still deep in development, was canned, or something of that ilk. basically I think it's more likely it just doesn't become a product in the near future
It's not canned when it's been updated as recently as this February. And why would it be deep in development when the chip it's based on is releasing this year, with boards going out in the next couple months?
 
It's not canned when it's been updated as recently as this February. And why would it be deep in development when the chip it's based on is releasing this year, with boards going out in the next couple months?
If the node change theory is correct, and they had a near-finished chip at 8nm but the tea table was upended for some reason and they went for a major redesign, it could still be deep in development.

Personally I don't believe this theory, there isn't enough evidence to support it imo.
 
more likely is that this is still deep in development, was canned, or something of that ilk. basically I think it's more likely it just doesn't become a product in the near future

More likely based on what evidence is what I’m curious to know.
 
I wonder if this has anything to do with Nvidia?🤔
I don't think so, since the process from planning where to build new semiconductor fabs, or expanding an existing semiconductor fab, to having a semiconductor fab go live requires several years of advance planning. Just a lucky coincidence, I think.
 
I wasn't even aware of this when I responded, so thanks for that.

Is the likelihood here that Orin's tensor cores are just less power-efficient and offer more deep learning strength than any use case Nintendo would ever put them to? After all, the tensor cores in the standard Orin configs are designed for mission-critical use cases (medical robotics, autonomous vehicles, etc.), and DLSS (or frankly any AI usage for an entertainment product) doesn't demand the kind of rigorous precision Orin's typical use cases do to achieve the desired result. So it would stand to reason, in my mind, that they draw more power and offer way more performance than Nintendo would ever need, while maybe also costing more.

Basically, would it make sense that they went with desktop Ampere tensor cores for the same reason we're all but certain the SoC won't use an A78AE CPU: because it would add needless expense and/or power draw to a product meant to entertain?

Orin's tensor cores don't offer higher precision, and Nvidia doesn't make any claims that they're more reliable in any way; they're pretty much the same thing, but run twice as many operations in one cycle. Their autonomous driving systems typically pair Orin with standard GPUs anyway, so if standard Ampere weren't precise or reliable enough for autonomous driving, then that would certainly be a problem.

One possible issue is that DLSS couldn't effectively use that extra performance because of other bottlenecks (probably on the cache/memory end of things), but I'd imagine the solution there would be to alleviate those bottlenecks with increases in cache size, etc., which they've already done on Orin, and seemingly on Drake too.
I remember we were discussing this back and forth with the A100 and the performance of its tensor cores over the GA10x series of cards, and the only difference-maker seemed to be A100's larger cache.
So I do wonder if we'll see the same uplift in performance with Lovelace and its massive cache increases over Ampere...
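For a rough sense of scale, here's a parameterised back-of-the-envelope calc. The per-SM FMA rate below is an assumed, illustrative figure; the only thing carried over from the discussion is that Orin-style tensor cores run at roughly double the per-SM rate of desktop Ampere's:

```python
# Back-of-the-envelope tensor throughput. The per-SM FMA/clock figure is an assumed
# placeholder, not a confirmed spec.
def tensor_tops(sms: int, clock_ghz: float, fma_per_sm_per_clk: int) -> float:
    """Dense tensor throughput in TOPS (an FMA counts as 2 ops)."""
    return sms * fma_per_sm_per_clk * 2 * clock_ghz * 1e9 / 1e12

DESKTOP_RATE = 256               # assumed dense FP16 FMA per SM per clock (illustrative)
ORIN_RATE = 2 * DESKTOP_RATE     # double rate, per the discussion above

print(tensor_tops(12, 1.0, DESKTOP_RATE))  # ~6.1 TOPS with desktop-Ampere-style cores
print(tensor_tops(12, 1.0, ORIN_RATE))     # ~12.3 TOPS with Orin-style cores
```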
 
More likely based on what evidence is what I’m curious to know.


tales from my ass
 
It's not just a question of "8nm" vs "5nm", there are quite a few different possible processes they could be using, and there's a big difference between Samsung 5nm and TSMC 5nm. As I see it, there are basically four different plausible manufacturing processes they could be using:

Samsung 8N
Pros:
Cheap
Same process as desktop Ampere and Orin
Nvidia will be migrating away from it soon for desktop GPUs, which should free up capacity
Presumably pretty good yields

Cons:
Power hungry, which will mean reduced clocks in both modes and potentially disabled SMs in portable mode

Samsung 5LPE/5LPP
Pros:
Around 45% smaller die area and 28% lower power consumption than 8N
Same foundry as 8N process, so possibly easier to work with architecture which was originally on 8N
Likely cheaper than TSMC equivalent
Likely easier to get capacity than on TSMC

Cons:
More expensive than 8N
Worse power consumption than equivalent TSMC processes
Reportedly low yields
Nvidia seemingly have no other products planned for Samsung 5nm

TSMC N6
Pros:
Very mature process (evolution of N7)
Nvidia already released an Ampere product (A100) on N7, so there's familiarity with the design rules
N7/N6 family is the highest-volume process currently in production
At least as good efficiency as Samsung 5nm
Very high yields

Cons:
More expensive than 8N
Higher demand than Samsung processes, although balanced against TSMC's far higher production capacity

TSMC N5/N5P
Pros:
Best density and efficiency of any plausible process (TSMC claim 40% reduced power consumption vs N7)
Rumours suggest every other Nvidia chip launched over the next couple of years will be on TSMC N5
Nvidia have paid a very large amount to secure N5 capacity from TSMC
N5 will likely overtake N7/N6 as the highest-volume process in production in the next year or two
Very high yields

Cons:
Most expensive of the realistic options
High demand, although again balanced against high and growing production capacity, and Nvidia securing a large allocation


I've left out Samsung's 7LPP process, as their 5nm processes are evolutions within the same family (similar to 10nm to 8nm), so I think their 5nm processes are more likely. Similarly with TSMC's N7 or other 7nm processes, as N6 is a direct evolution, and it seems like TSMC is heavily encouraging N6 over N7. I also left out both Samsung and TSMC's 4nm processes, as I just don't think either are particularly realistic.


Samsung 8N is the default for obvious reasons, but the 12 SM GPU would be pretty power-hungry on it, meaning not just low clocks in docked mode, but also likely having to disable SMs in portable mode to maintain battery life. There's a temptation to say that, if 8N isn't enough, then they would use Samsung's 5LPE or 5LPP, as they're moving to a better node from the same foundry, but the fact that Nvidia have nothing else planned on Samsung 5nm makes me doubt that a bit. They're not even using it for low-end Ada cards, with the entire Ada lineup reportedly on TSMC N5. This suggests to me that, even if the price per wafer is good, it's just not a good value node from Nvidia's perspective.

Yields play an important part in this, and on Samsung's 5nm processes they're supposed to be pretty bad. A report from last year suggested that yields on Samsung 5nm are only around 50%. This isn't a particularly meaningful measure without knowing the size of the chip in question, but given the largest dies manufactured on Samsung 5nm thus far are smartphone SoCs which are likely in the 100mm2 range, a 50% yield is pretty awful. A report from a couple of weeks ago suggests that Samsung is investigating whether their own yield reports were fraudulent, and that the Snapdragon 8 Gen 1, manufactured on Samsung 4LPE, has even worse yields of only 35%.

The upshot of this is that, even if Samsung's 5nm or 4nm processes are cheaper per-wafer than TSMC's equivalent processes, if yields are bad enough, they could still end up being a more expensive option per usable chip.
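To make that concrete, here's a rough sketch of the arithmetic. The defect density is backed out of the reported ~50% yield on a ~100mm2 die using a simple Poisson yield model, and the die size, wafer prices and TSMC yield below are pure guesses for illustration:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Approximate gross dies per wafer (standard edge-loss formula)."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A converted to cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Back out a defect density from the reported ~50% yield on a ~100 mm^2 mobile SoC.
d0 = -math.log(0.50) / (100.0 / 100.0)    # ~0.69 defects per cm^2 (assumption)

# Purely illustrative numbers: die size, wafer prices and N6 yield are guesses.
die_area = 160.0                          # mm^2, assumed
wafer_price_samsung_5nm = 12000.0         # USD, assumed
wafer_price_tsmc_n6 = 10000.0             # USD, assumed
yield_tsmc_n6 = 0.90                      # assumed "very high yield"

gross = dies_per_wafer(die_area)
good_samsung = gross * poisson_yield(die_area, d0)
good_tsmc = gross * yield_tsmc_n6

print(f"Gross dies per 300mm wafer: {gross}")
print(f"Samsung 5nm cost per good die: ${wafer_price_samsung_5nm / good_samsung:.0f}")
print(f"TSMC N6 cost per good die:    ${wafer_price_tsmc_n6 / good_tsmc:.0f}")
```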

Meanwhile, I'd imagine that TSMC's N6 process is probably the best balance of cost and performance if you're looking at things in isolation, and obviously has the benefit of being mature and very high yield (ie a very safe option). However Nvidia haven't used N7 or N6 for anything since the A100 that launched almost two years ago. It looks like AMD will be using a mix of N6 and N5 going forward, but Nvidia seems very gung-ho in moving onto N5.

The TSMC N5 process, then, is the best option in terms of density and efficiency, but also the most expensive. I would have said this was very unlikely a few weeks ago, but now I'm not so sure. The density and efficiency are such that, if you were to start a design for an SoC for a Switch-like device on TSMC N5, then 12 SMs really wouldn't be pushing the envelope very much. Nvidia are also pushing very heavily on migrating their full lineup of desktop and server GPUs onto N5, and have committed to purchasing a very large allocation from TSMC.

Personally, my current expectation is a Samsung 8N chip, with relatively low clocks (say 800MHz docked) and 6 SMs disabled when portable. I wouldn't rule out any of the other options, though.
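For what that scenario would mean in raw numbers, a quick sketch using Ampere's 128 FP32 cores per SM; the portable clock below is purely an assumption for illustration:

```python
CUDA_PER_SM = 128   # FP32 cores per SM on Ampere (GA10x)

def fp32_tflops(active_sms: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: cores * 2 ops (FMA) per clock."""
    return active_sms * CUDA_PER_SM * 2 * clock_ghz / 1000.0

print(fp32_tflops(12, 0.8))    # ~2.46 TFLOPS docked (12 SMs at ~800MHz)
print(fp32_tflops(6, 0.46))    # ~0.71 TFLOPS portable (6 SMs at an assumed ~460MHz)
```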
 
It's not just a question of "8nm" vs "5nm", there are quite a few different possible processes they could be using [...]

Personally, my current expectation is a Samsung 8N chip, with relatively low clocks (say 800MHz docked) and 6 SMs disabled when portable. I wouldn't rule out any of the other options, though.
Again, I feel the idea of turning off SMs is inherently flawed, as there is zero evidence indicating it outside of the clock-step mentioned in the NVN2 driver.
Disabling SMs outright does far more than what a clock-stepper would do, and would need a completely different profile in the driver, as I doubt they can hot-swap NVN2 drivers between portable and docked.

And the NVN2 driver has zero mentions of any SM, RT core, or Tensor core count below what 12 SMs would bring.

If you expect them to disable the SMs in portable mode, that is more or less having to program for a completely different system in regard to CUDA count, RT core count, Tensor core count (and therefore DLSS's power), and bandwidth, because the cache itself would get disabled at the L1 level.

EDIT: Also, NVIDIA did mention keeping Ampere around for a bit longer, so Samsung 8nm may not be cleared out.
So at best it would likely be GA107-GA103 staying active with GA102 deprecated, with the GA GPUs deprecated as their equivalent AD GPUs come out for both desktop and laptop.
Not to mention we know the main Orin family is on 8nm as well, so that means even less supply to fit Drake into.
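As a rough sketch of why that amounts to a different hardware target, here's the arithmetic using Ampere's per-SM resources (128 CUDA cores, 4 tensor cores, 1 RT core per SM); purely illustrative:

```python
# Quick arithmetic on what halving the active SM count would mean.
def gpu_resources(active_sms: int) -> dict:
    return {"CUDA cores": active_sms * 128,
            "Tensor cores": active_sms * 4,
            "RT cores": active_sms * 1}

print(gpu_resources(12))  # full config:    {'CUDA cores': 1536, 'Tensor cores': 48, 'RT cores': 12}
print(gpu_resources(6))   # 6 SMs disabled: {'CUDA cores': 768, 'Tensor cores': 24, 'RT cores': 6}
```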
 
The curious thing is that at a Morgan Stanley event on Monday, Nvidia's CFO mentioned the possibility of keeping the RTX 3000 series in production alongside 4000. I'm not sure about the probability of such, but it does make the assumption of the Samsung 8nm allocation being cleared up a bit murkier than before.
 
It's not just a question of "8nm" vs "5nm", there are quite a few different possible processes they could be using, and there's a big difference between Samsung 5nm and TSMC 5nm. [...]

Personally, my current expectation is a Samsung 8N chip, with relatively low clocks (say 800MHz docked) and 6 SMs disabled when portable. I wouldn't rule out any of the other options, though.

Maybe Nintendo had Peach bake Intel a cake and asked them really really nicely if they could share their 7nm wafers.
 
more likely is that this is still deep in development, was canned, or something of that ilk. basically I think it's more likely it just doesn't become a product in the near future
The files in the leak had literally just been updated, they aren’t being thrown out.
 
Lying to investors and/or shareholders is literally illegal. And they can collectively sue Nintendo for securities fraud if they reasonably suspect Nintendo's lying. (I imagine the last thing Nintendo wants is to anger investors and shareholders to the point where they decide to stop investing in Nintendo.)

The media's obviously a different story.

You guys anchor to this “illegality” argument too much. Management lies or misleads investors all the time. ALL the time.
 
You guys anchor to this “illegality” argument too much. Management lies or misleads investors all the time. ALL the time.
Citation Needed?

Weasel words sure, clear factual statements that can be audited by a simple majority vote by shareholders? No, that's a big no-no.

If you want to commit fraud, you say things are MORE profitable than they are to encourage shady investment. If you want to launder money, you say you are LESS profitable, are we accusing Nintendo execs of money laundering?

Nintendo could charge substantially more for the OLED and it would sell - scalpers demonstrate that the market would bear a much higher price. Switches are in demand and there is a severe chip shortage limiting production. To commit securities fraud to control a minor PR kerfuffle on a product you could sell for a higher price anyway is a pretty ludicrous accusation which needs... like any amount of data. The Bloomberg article didn't even have an inside source - it was an estimate by an analyst entirely based on the few announced changes, before people got their hands on the device and discovered the build and materials upgrades. The $10 number quoted is the minimum not the total.

I would not put it past Nintendo, or any company, to include one-time R&D costs and manufacturing investments in the amortized cost of each piece of hardware. But the $10 number has come out of thin air and stuck around like a bad meme.
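For illustration of the difference, with completely made-up numbers (nothing below is a real figure beyond the $10 quoted in the article):

```python
# Hypothetical illustration of why a quoted "$10" component-cost delta and a per-unit
# figure that folds in one-time costs are not the same thing. All numbers are invented.
bom_increase_per_unit = 10.00        # the minimum component-cost delta quoted
one_time_costs = 50_000_000.00       # assumed R&D + tooling / line investment
expected_units = 10_000_000          # assumed lifetime sales of the revision

amortized_per_unit = bom_increase_per_unit + one_time_costs / expected_units
print(f"Effective per-unit cost once one-time spend is amortized: ${amortized_per_unit:.2f}")
# -> $15.00 in this made-up example, i.e. materially more than the headline figure
```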
 
What?!??!
New ways to play using a Switch really sounds like a Nintendo thing... this is totally out of left field.....
I don't know how accurate this person/account is!
EDIT: according to Dakhil the Twitter account is not really reliable! So... take this information with big skepticism!



 
4 is the new Intel brand marketing for what used to be 7nm now right? And 3 is the next step denser node? Huh. That must have been some really good cake.
Intel 4 is the rebrand of old 7nm, yes.
3 should be what would've been 7nm+, or maybe even 7nm++ based on projected perf/watt gains.
20A would be the old 5nm, 18A would then be 5nm+
 
What?!??!
New ways to play using a Switch really sounds like a Nintendo thing... this is totally out of left field.....
I don't know how accurate this person/account is!







Sounds like AR.

Like playing a game docked, then switching to your phone to use the camera system and AR to find items around your home or something.
 
Here are some shipment records (of imports) of Nintendo of America, covering the last 7 years. (source)
I can't vouch for the accuracy of the data, as it's my first time using this specific site, but their source seems to be official US customs records, so I'll go with that.

The records include suppliers that manufacture different parts/modules of (current and past?) Nintendo hardware.
For example, Amperex Technology seems to be the manufacturer of the Switch's battery, but others are not so easy to "match" to specific hardware parts, at least for me. (I'm pretty sure Foxconn is somewhere in there under a different name.)

I tried looking for patterns in the graphs (hence the sloppy vertical "year" lines), correlating shipment spikes with "significant" events, e.g. console launches, devkit shipments(!) etc, but it's getting a bit late here so I'll leave it at that for now.

I bet Nintendo of Japan's records would look even more interesting, but I don't know if those are publicly available for free, so NoA's records will have to do for now. Maybe someone can get something out of it.

For those interested, I recommend checking the records from the site itself as my screenshots are probably missing the 2nd page of the results, and that page might include a new/interesting supplier that's listed towards the end just because of sorting.

The two lists are sorted by:
  • Number of sea shipments
  • Weight in kg
[Screenshot: NoA imports sorted by number of sea shipments]
[Screenshot: NoA imports sorted by weight]
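If anyone wants to poke at this programmatically, here's a rough sketch of the kind of event-overlay described above. The CSV and column names are placeholders for whatever export the customs-records site actually provides:

```python
# Sketch: overlay hardware-launch dates on monthly shipment counts per supplier.
import pandas as pd
import matplotlib.pyplot as plt

shipments = pd.read_csv("noa_imports.csv", parse_dates=["arrival_date"])  # hypothetical export
monthly = (shipments
           .groupby([pd.Grouper(key="arrival_date", freq="M"), "supplier"])
           .size()
           .unstack(fill_value=0))

events = {"Switch launch": "2017-03-03",
          "Switch Lite launch": "2019-09-20",
          "Switch OLED launch": "2021-10-08"}

ax = monthly.plot(figsize=(12, 5), title="NoA import shipments per month by supplier")
for label, date in events.items():
    ax.axvline(pd.Timestamp(date), linestyle="--", alpha=0.5)
    ax.annotate(label, (pd.Timestamp(date), ax.get_ylim()[1]), rotation=90, va="top")
plt.tight_layout()
plt.show()
```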
 
I want to clarify something I said a while back about clock gating that led to the speculation around disabling SMs in portable mode. Clock gating seems to be a very broad area of the drivers that can be used for many different components, and it's a feature common to all Nvidia SoCs/GPUs, not unique to T239/GA10F. What is unique to GA10F is that it's the only Ampere GPU to support FLCG, which stands for first-level clock gating, differentiated from second-level clock gating or SLCG. And I think the reason for this is simply because it's a new feature, with wide support only added in Ada. (While there are older references to FLCG, including one for T210, aka TX1, at least the way it's being exposed is new). There was already some hay made about how T239 would borrow Ada features, so maybe this is one of them.

It's not clear to me which is the higher/broader level of control, first or second. First-level would seem to imply the broadest layer, but there are open-source drivers from Nvidia that refer to SLCG as "master clock gating," making it sound like the higher level. In any event, it seems to me that clock gating is used more as a dynamic system to minimize power consumption, rather than a high-level feature switch you would use to turn off SMs for the duration the system was in portable mode. Even if SMs are being disabled -- which is still a possibility, as I really have no idea one way or the other if that could be done or needs to be done for Drake -- I don't think the one mention of FLCG in GA10F I brought up before is a strong reason to start thinking along those lines.
 
Are we that desperate for (leaked) information that we resort to frauds? It's like how some people reporting on the Nvidia leak used SamusHunter as a source.

I'm kinda shocked how bad some of the enthusiast reporting has been on this. I have now come across multiple outlets stating the specs of the T234, instead of the T239, as the leaked Switch hardware. Like, all you have to do is directly regurgitate the thing; there is nothing fancy to mess up.

The digital foundry video posted in here being the most high profile whoopsiedoodle.
 
Citation Needed?

My ears.

Weasel words sure, clear factual statements that can be audited by a simple majority vote by shareholders? No, that's a big no-no.
“Simple majority vote” 😂😭

If you want to commit fraud, you say things are MORE profitable than they are to encourage shady investment. If you want to launder money, you say you are LESS profitable, are we accusing Nintendo execs of money laundering?
.

I am not accusing Nintendo of money laundering. That's ridiculous. Companies lie or mislead all the time, for reasons other than fraud or money laundering: to manage investor expectations, to avoid disclosures that are competitively sensitive, to protect brand perception, etc…


I would not put it past Nintendo, or any company, to include one-time R&D costs and manufacturing investments in the amortized cost of each piece of hardware. But the $10 number has come out of thin air and stuck around like a bad meme.

Right. It could be an accounting assumption or a few other things.

...how much of that is coming from personal experience as an investor/analyst? :p

Speaking of which, at the next meeting/presentation in which you can ask questions, is stuff relating to the Nvidia cyberattack within bounds for you to ask about?

Lots of personal experience. I’m not an outlier. Everyone lies/misleads at some point.

Yea, I can ask Nvidia whenever I talk with them. Nvidia isn’t a company of interest to me but I think someone else on my team is reaching out.
 
Just because they're making more money doesn't mean they have money to waste and are gonna make it rain for the sake of it. If we want to talk about actual un-Nintendo behaviour, that ranks really high up there. Making more money doesn't make the amount you're spending immaterial, especially if the money you spend isn't being, or won't be, adequately made back commensurate with that expenditure.

Who is saying they are throwing away money?

All I'm saying is that any company spending 50% of their yearly revenue on new hardware R&D is taking a far, FAR greater risk on investment than a company spending 5% of their yearly revenue on R&D. Doesn't matter if the latter R&D amount is more than the former; the above is still true. This shouldn't be a controversial statement.

One would also say that if a company at one time was spending 50% of their revenue on R&D, and is now spending 5%, we would call that a company being far more conservative in investment spending, actually.

So, Nintendo's spending on R&D for Drake is arguably Nintendo being more conservative lol. They will actually waste less of their profits on Drake R&D than they did for Mariko.

And this is the entire point of that ZhugeX graph I was referring to and you posted.

So, this is the only reason I just dismiss anyone trying to claim R&D for Drake is somehow out of the ordinary for Nintendo or it must represent generational hardware…it doesn’t.

As for “wasted money”, as you say, it won’t be. It will pay huge dividends in terms of engagement of Switch software 2022-2026 that wouldn’t be there otherwise. Nintendo knows what can happen to the latter half of a successful consoles lifecycle when it shows age. They lived it with the Wii.

Even with Lite-type sales, this Drake model will have been worth it. Think of all the millions of people who would have checked out of Switch software over the next 4 years without it. Heck, I'd argue even the OLED model increased Switch userbase purchases in 2021/2022 that might have been a bit more stagnant without it. I know I've been more engaged with it.

An SoC with a Volta-based GPU would give them headroom, too, even accounting for the need to scale back the standard Xavier SoC config it would be based on to reduce die size (its base config is frankly more than what is necessary and, like the base configurations of Orin, Xavier features too many autonomous vehicle components that they could gut out and/or replace). As a matter of fact, my first post in this thread was confusion about the mention of Orin, as I expected a Xavier derivative alone to be next-gen enough for Nintendo’s next hardware when compared to the TX1 and (at the time) seemed a more realistic choice were it not for the persistent rumours of Orin.
Volta features Tensor cores that enable DLSS.

What was the Volta-based SoC Nvidia was working on? I'm willing to bet you couldn't have a performant version of one that could offer proper gaming DLSS usage with the size and efficiency/power draw required for the Switch form factor. From what I understand, the Ampere tensor cores are also different from previous gens'.

Someone with much better knowledge than me can correct me on that though.

I think if that weren't the case, and Nvidia was OK still making Volta architecture chips through 2027 and that was cost effective… we would have seen that. I'm guessing it isn't. Heck, people here have even intimated that maybe fewer SMs and 8nm might not have been enough to function well enough overall.

We can agree to disagree, but I’m positive the hardware usage that they ended up with was more out of necessity to get everything DLSS related working adequately with the caveats Nintendo creates for themselves with Switch hardware.

But even if we concede to the idea that Orin was somehow the only choice or the easiest/cheapest method to get what Nintendo wanted, this is going to be a custom SoC either way, so the core count on the GPU that's being bandied about (1536, 6 times the GPU cores of the TX1) is, given what your expectations are, extravagant to the point of being grotesque.

This hardware is bordering on grotesque?
lol I think you are laying it on a bit too much :p

Look, the RTX 2060 is the lowest CUDA offering by Nvidia to attempt DLSS. That's 1920 CUDA cores running at 1.37 GHz, and it draws 175W.

That can play a 2019 game at 4K/60fps…but only in performance mode. Starts to suffer beyond that.

And you want me to think Drake's 1536 CUDA cores at 1GHz with a 25W power draw are somehow bordering on grotesque overkill to play games at 4K/60fps? Why?

Go look at the 3050 Ti DLSS-capable laptop GPU. That's supremely grotesque compared to Drake lol

But again, that's not what we appear to be getting. There was a cheaper (and in some respects, better and more consumer-positive) way to achieve everything you're suggesting, no matter which way you look at it. The spec information we're gleaning from the leak is incredibly excessive for such requirements, with cheaper and more battery-efficient options Nintendo could easily take on a custom design, since no one (not 3rd parties, not Nintendo) is going to jump at the chance to use all that horsepower making more than the tiniest handful of exclusive titles for some iterative revision with the constant install base cap of 20% of all hardware sold that will be outmoded in 3-4 years.

The proof that Nintendo/Nvidia couldn’t make a cheaper, more efficient SoC to get 4K/60fps DLSS gaming in is in the fact that it never materialized. If there were such a possibility, it would have happened.

Your hyperbole suggests that Nvidia had the power to demand Nintendo make a chip with far more power than its intended goals, when it's the other way around. Nintendo is the client in this scenario, in a business relationship Nvidia wants to keep for another 15 years, and there are ways to give them what they want if your suggestion is all they wanted it to be. So if they wanted a device that just upscaled Switch games to 4K, and Drake as we understand it is what they're getting from Nvidia, then Nintendo's not that good of a hardware designer and Nvidia swindled them like a bunch of rubes.

*shrugs* You say demand, I say incentivize.

I'm pretty sure Nvidia thoroughly informed Nintendo of their best options to get Switch mobile DLSS gaming workable and sustainable for 2022-2027 or so.

Nintendo went for the overall best option. They can afford to spend a bit more on a model to future-proof it better, given their current success and the knowledge that investing in this same ecosystem is important to maintaining that success.

I’m sure Nvidia convinced Nintendo on the value of DLSS.

They weren’t bamboozled by Nvidia on anything lol

With what we are hearing about the leaks, and knowing this would be a custom SoC either way, the design they're getting is a waste of money, an unnecessary increase to power draw and too big in die size if it's only to achieve what you believe Nintendo wants this new hardware to be.

I don’t see it. You aren’t convincing me by just saying that.

Nintendo have control of what they're getting from Nvidia. So if what we are seeing from the leak is what Drake is? That's not happenstance or some design quirk, it's completely on purpose. And this purposeful design is way more than what's necessary for an iterative revision.

Eh, I'm saying something like trying it with fewer SMs, on a bigger node, and with older architecture probably did occur… and the prototypes weren't cutting it (for whatever reason: performance, heat, poor yields, etc.).

So the "happenstance" would be resigning themselves to spending a little more and changing something like more SMs, a smaller node, or newer architecture… to get the resultant chip acceptable.

That's extremely plausible. Nintendo walking in in 2018/2019 demanding Nvidia give them a custom variation of their most cutting-edge SoC… not so much. I'm sure there was an entire process of settling on what was best.

A few weeks ago I would have absolutely said that Nintendo would choose the minimal viable GPU for DLSS, but now I don't think that's the case. It's not just the size of the GPU, but their choice of tensor cores. Specifically, when Nvidia designed Orin, which is heavily focussed on machine learning, they designed tensor cores which operate at double the performance of the desktop Ampere variety. When Drake was designed they obviously would have had an option to use the same double-rate tensor cores from Orin, but we know from the leak that they didn't, and are using tensor cores with the same performance as desktop Ampere. That suggests to me that they weren't designing this around maximising tensor core performance for DLSS, but rather were designing a chip with a substantial increase in graphics performance in general, with tensor cores (and therefore DLSS) on top.

What's the minimum number of CUDA cores running at 1 GHz with 25W needed to get… say… Death Stranding running at 4K/DLSS in performance mode? I genuinely don't know.

As I mentioned above, the RTX 2060 barely does this.
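Just to put the raw numbers side by side, a peak-FP32-only sketch (which ignores tensor throughput, bandwidth and everything else that actually matters for DLSS); the Drake clock is the rumoured/assumed 1GHz from above:

```python
# Back-of-the-envelope FP32 comparison using the figures quoted in the posts above.
def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000.0   # 2 FP32 ops (FMA) per core per clock

rtx_2060 = fp32_tflops(1920, 1.37)   # ~5.3 TFLOPS at the ~1.37 GHz, ~175W quoted above
drake_est = fp32_tflops(1536, 1.0)   # ~3.1 TFLOPS at an assumed 1 GHz, ~25W budget

print(f"RTX 2060 ~{rtx_2060:.1f} TFLOPS vs rumoured Drake config ~{drake_est:.1f} TFLOPS")
```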
 
Oh right, didn't you find last year, based on Nvidia's inventory of the Mariko TX1, that it wasn't dwindling down at all, Vash?
 
Any new info outside of the leak?

I saw the clock comparisons vs size, and from what I could gather we are approaching 3050 performance (but obviously not quite). But I'm curious whether NVN2 leaked any other details about the system, including inputs for the controllers.
 
Any new info outside of the leak?

I saw the clock comparisons vs size, and from what I could gather we are approaching 3050 performance (but obviously not quite). But I'm curious whether NVN2 leaked any other details about the system, including inputs for the controllers.
NVN2 is a graphics API, so it doesn't have things like that. The API lets you configure the GPU and tell it what shaders, models, textures, etc. to use when rendering frames. There is some level of integration with the OS and therefore the Nintendo Switch SDK, but only for graphics purposes, and that integration is not Nvidia's property and wasn't part of the leak.
 
More likely based on what evidence is what I’m curious to know.
Let @Raccoon go lol, he's just having trouble reconciling the specs with Nintendo's usual strategies. There's a lot of people that refuse to trust these specs, regardless of how sound they know the source to be. It's more of an emotional argument than a logical one, but they know that.

Even I'm not fully committing to it yet, despite the fact that I fully expect both the leaked specs and Nate's timeline to be correct. The emotional side of me is refusing to let the logical side imagine BotW 2, XC3, etc. on this hardware because it can't bear to be let down. I'm normally not even a 'tempered expectations' kinda guy, because I can avoid letting my expectations for something affect my feelings for the final product. But the difference between the regular Switch and this is just too big to avoid that.
 
The files in the leak had literally just been updated, they aren’t being thrown out.
my post was predicated on the idea that I think 8nm 12sm hybrid is basically impossible and any silly alternative is more likely

but as pointed out I don't really have any basis for the former assertion
 

HotGirlVideos69's a complete fraud if I recall correctly.
Someone's been reading my fanfic on ERA 😒
The Switch platform's two main differentiators are IMHO a) the ease of switching between modes of consumption (handheld/tabletop/TV), and b) the staunch support of multiplayer gameplay in a social setting [...] the new gimmick (if any) of the next Switch should be chosen to strengthen these differentiators. [...] A few possible implementations:
  1. Multiple Switches streaming to one dock simultaneously
  2. One Switch streaming to one or more Switches (license permitting)
  3. One Switch streaming to one or more mobile devices (license permitting)
  4. A premium XL model (a monitor/wireless dock/console hybrid) that can do all of the above
[...]
[Mock-up image: double-ui-pc.jpg]
 
Here are some shipment records (of imports) of Nintendo of America, covering the last 7 years. (source) [...]
I have no idea how to glean any useful information from this but I just want to thank you for the hard work and let you know it's not being ignored.
 
my post was predicated on the idea that I think 8nm 12sm hybrid is basically impossible and any silly alternative is more likely

but as pointed out I don't really have any basis for the former assertion
Honestly, I don't think the former assertion is true, but it's certainly pushing the limit. There's definitely some room for a bigger chip in the Switch.
 
Honestly, I don't think the former assertion is true, but it's certainly pushing the limit. There's definitely some room for a bigger chip in the Switch.
Yeah I really think the size itself isn't really a problem.

Heat too, generally a wider die will be better at spreading heat. Especially if it's clocked low, I'd expect there to be less cooling needed, not more.

The biggest issues will be cost per die (aka dies per wafer) and power consumption required. Neither of those are things we can really do even rough estimates of.
 
Heat too, generally a wider die will be better at spreading heat. Especially if it's clocked low, I'd expect there to be less cooling needed, not more.
Heat is a question of whether they can rein in the power consumption of the chip as low (or nearly, anyway) as the TX1 without tanking the performance. That's potentially the biggest obstacle, but Nvidia and Nintendo are the only ones who have a clue as to where they are on the power curve. There's a chance it's not a problem at all.
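As a toy illustration of why the operating point matters so much: dynamic power scales roughly with C·V²·f, so dropping both clock and voltage compounds quickly. The voltages below are invented for illustration:

```python
# Toy dynamic-power model (P ~ C * V^2 * f), relative to an arbitrary baseline point.
def relative_power(freq_ghz: float, volts: float, base_freq=1.0, base_volts=1.0) -> float:
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

# Dropping from 1.0 GHz @ 1.00 V to 0.6 GHz @ 0.75 V (made-up operating points):
print(relative_power(0.6, 0.75))   # ~0.34, i.e. roughly a third of the baseline power
```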
 
A small question.

Do you play with your Nintendo Switch, or do you just enjoy talking about it? I mean... are you guys even going to play with your Switch Pro/2/Super/Ultra?

😅
More towards the latter, since I started my new job a little more than a week ago. I do try to play on my Nintendo Switch from time to time whenever I'm not too tired or busy.
 
A small question.

Do you play with your Nintendo Switch, or do you just enjoy talking about it? I mean... are you guys even going to play with your Switch Pro/2/Super/Ultra?

😅

I've played Switch more than any Nintendo system I've ever owned. It's really not even close. However, for the last year or so I've been playing games that I felt suited handheld only, 'cause I just don't like how most Switch games look blown up on the TV anymore.

So yeah, if Super Switch delivers on good resolutions and framerates you can bet I'll be playing the heck out of it.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

