
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Honestly, as anyone who has ever worked retail or customer service can tell you, a lot of consumers can’t.
At the design shop I used to work at we had little boxes of stickers on the cashier's counter labeled $1 and $2, and we'd get people asking us "What are these?" and we'd say "Dollar-stickers and two-dollar stickers," and as I'm sure you've already guessed, yes, there were people who would then sincerely ask "How much for a dollar sticker?"
 
The reveal-to-release pipeline has gotten shorter this gen with Nintendo.

Also October Switch reveal to launch was just under 6 months. I would argue they needed to reveal in October to have the concept in people’s minds before blowing the lid off in January.

People understand the Switch concept now. There is no need to get it into people’s minds. They can focus solely on showing off why someone would want this particular version of the Switch.

And finally, if they reveal in January and launch it with Zelda that’s about 5 months, one less month than the original Switch reveal to launch window. I think they’ll be fine.
The Xbox 360 was officially announced (on a star-studded MTV special, which is the most 2005 thing ever) in May and released in November, about half a year later.

The PS4 Pro was officially announced in September and released in November. The DSi and New 3DS systems were also released very quickly after their announcements.

So there’s precedent. Nintendo’s partners know about the system in advance, and that’s the most important thing.
December 31 and January 1 are also in different year halves technically. But Switch went from initial announcement (0 months) to big blowout (2.8 months) to release (4.4 months). Something like... January 15, April 10, May 29 would match that exactly.
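Those dates line up if you project the original Switch's spacing forward. A quick sketch of the arithmetic (the 2023 year is purely my placeholder assumption for illustration):

```python
from datetime import date, timedelta

# Original Switch milestones (public record):
announce = date(2016, 10, 20)   # first teaser trailer
blowout  = date(2017, 1, 13)    # January presentation
release  = date(2017, 3, 3)     # launch day

d_blowout = (blowout - announce).days   # 85 days (~2.8 months)
d_release = (release - announce).days   # 134 days (~4.4 months)

# Project the same cadence onto a hypothetical January 15 announcement
new_announce = date(2023, 1, 15)
print(new_announce + timedelta(days=d_blowout))  # 2023-04-10
print(new_announce + timedelta(days=d_release))  # 2023-05-29
```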
To be fair, the time between announcement and release for consoles keeps getting shorter,
and here we don't have a completely new platform arriving while the current one is dying or already dead.
Current Switch models will keep selling, and they will continue getting Nintendo support (at least for some time) regardless of the launch of new hardware,
so Nintendo doesn't need more than a few months from reveal to launch; it would probably be sold out for the whole first year in any case.

The current Switch was in a totally different situation: when it was announced, the Wii U was already dead and was discontinued only a few months after the Switch reveal. On the other hand, current Switch units are still selling very well; I mean, it's still selling better than the PS5.
To be clear, I’m not saying the cadence is short. I’m saying that announcing in the first half of the year and then releasing within that same half sounds bizarre to me for a video game console. It’s not the same thing as the Xbox 360, which was announced in the first half of the year and released near the end of the second half. It just sounds weird, when you think about it, to release a console in March, April, or May. Just those three months. It’s not impossible, just weird.

Granted, they’ve released console hardware before in, say, June or July, so I’m not going to exclude the possibility of them doing that, but that was so long ago that I don’t know if it should be relevant to today’s Nintendo.

And it was only for one specific region.


Also, I remember seeing that 360 special on TV. I was very young, but I remember because I was yelled at for being on MTV. Scolded, too. But I was only flicking through channels, dammit.

As someone who's worked directly with product and marketing teams on billion-dollar consumer electronics products, witnessed first-hand the sheer amount of time and diligence that goes into coming up with a product's name, and worked on a product whose name ended up being <product> Pro (and therefore had this exact discussion in a professional context), this is just wrong. Even ignoring that, I would argue that "Switch Pro" is, as you put it, an obviously stupid-ass name. There are absolutely millions of consumers who will wander into Best Buy, see a game case that says Switch Pro, equate that to PS4 Pro/iPhone Pro/etc., and think it will work on their kid's Switch. The only way to mitigate that would be to use up valuable box art real estate to say something like "ONLY on Nintendo Switch Pro", which marketing would want to avoid.

Maybe I'm wrong, and again it's Nintendo so who knows, but using "Pro" or any other term that's commonly used to designate a product as an upgraded version of a base model goes against way too many industry rules.
If you knew how many meetings went into the naming and branding of…basically everything you come into contact with, you wouldn’t say it “literally doesn’t matter.” The name is part of the communication to the customer, and is honestly the most important part.

Like, let’s take the new iPhones, just as an example. The iPhone Pro has long come in two sizes, the 6.1-inch “regular” Pro and the 6.7-inch Pro Max. This year, the regular, non-Pro iPhone comes in a larger size for the first time too: there’s the 6.1-inch standard size and a new 6.7-inch model. So, what do they call this model? iPhone 14 Max, lining up with nomenclature used for the Pro phone of the same size? Nope! They ended up calling it the iPhone 14 Plus, harkening back to the iPhone Plus of the iPhone 6-8 era (even though that has a smaller screen than even the standard iPhone now…).

How many meetings do you think Apple had about that decision? I’m willing to bet good money that they debated it for years, ever since this model was greenlit and entered development. I’m sure that there were some people who thought it absolutely should be called iPhone Max to match the iPhone Pro Max and others who thought it should absolutely be called iPhone Plus, because Max is “above” Pro in Apple parlance and this new model slots below the Pro. I’m willing to bet there were metaphorical (and potentially literal) screaming matches about this decision. It’s not an exaggeration to say that there are literally hundreds of millions of dollars on the line with a branding decision like that.

Publishers fret endlessly over the titles of, like, individual books. “Will this title sell? Does it seem like it ‘fits’ the type of book this is? Would adding a subtitle help?” You can bet a company as large as Nintendo has been very carefully considering the name of this new hardware.


Honestly, as anyone who has ever worked retail or customer service can tell you, a lot of consumers can’t.
I don’t think the point @Gotdatmoneyy was making really came across: it’s not that marketing matters absolutely zero, it’s that marketing will not change the chip inside the machine.

They can market it as a Switch Pro or they can market it as a Switch 2; that doesn’t mean the chip inside is going to change all of a sudden, because it’s going to be used regardless of how they position it. It’s going to be a device that just plays your Switch games better in an official capacity, without you having to use an emulator, pirate the game, or overclock to do so.

Again, I repeat, it’s not that marketing matters absolutely zero, it’s that it absolutely does not matter for the chip inside a device because it’s still going to be the same chip, regardless of what happens.


The way it is received by the public is entirely dependent on how Nintendo does it. If it fails to resonate with the public, that is their fault for not articulating it correctly for the public audience, regardless of the name or the position this device is supposed to take. If they do a good job of it, they will be rewarded with money, support, and more years of success.
 
Last edited:
It literally doesn't matter what they name it (obviously excluding stupid ass names like the Wii U) as long as the communication is simple and to the point.
General gamers don't think like we do here. Confusion must be avoided. Naming is important because it's part of the marketing. Use a familiar Nintendo name, like before.
 
They can market it as a Switch Pro or they can market it as a Switch 2; that doesn’t mean the chip inside is going to change all of a sudden, because it’s going to be used regardless of how they position it. It’s going to be a device that just plays your Switch games better in an official capacity, without you having to use an emulator, pirate the game, or overclock to do so.
My argument is that they can't, or at least, shouldn't, brand/market it as a Switch "Pro", because nowadays that nomenclature conveys to consumers ideas that are contradictory to what this device actually is.
 
My argument is that they can't, or at least, shouldn't, brand/market it as a Switch "Pro", because nowadays that nomenclature conveys to consumers ideas that are contradictory to what this device actually is.
They weren’t really ever going to do Switch Pro or Switch 2 though

So I don’t really get the point of this argument lol

It’s not Nintendo’s thing. That’s Sony.


Nintendo and Microsoft make it a bigger uphill battle because they go with anything but a straightforward number thing.


Then again, one is a game maker and the other is a service company, only hardware companies know how to label things in a straightforward manner.
 
At the design shop I used to work at we had little boxes of stickers on the cashier's counter labeled $1 and $2, and we'd get people asking us "What are these?" and we'd say "Dollar-stickers and two-dollar stickers," and as I'm sure you've already guessed, yes, there were people who would then sincerely ask "How much for a dollar sticker?"

10 years in retail (mostly a grocery store) and I have had the same experiences. You try to explain that it is a buy one get one free deal and some people will think you are ripping them off.
 
Leak: In the Linux commit there was a list of possible names for T239. They are
Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
 
Leak: In the Linux commit there was a list of possible names for T239. They are
* Hidden text: cannot be quoted. *
Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
Woof indeed
 
Hmm, what are my current guesses for docked anyway...
First off, I'm excluding Samsung 8nm; I'm not sold that node can deliver a handheld mode with sufficient battery life given what we know of Drake.
Second, I'm excluding this year's version of TSMC's N3; basically, it sounds like a dud. TSMC won't be offering a direct migration path from N3 to N3E, so it seems to be a dead end. And really, given the way TSMC's talked about it recently, I don't expect base N3 to hang around long.

Alrighty then...
CPU clocks; this one is predominantly influenced by node. Probability descriptions will be pulled out of the rear/thought up on the spot.
Samsung 5LPP: 1.1 to 1.3 Ghz. Not an even distribution; think of a bell curve with 1.2 in the middle.
TSMC N7/N6: 1.2 to 1.4 Ghz. Bell curve with 1.3 in the middle.
TSMC N5/N5P/N4: 1.4 to 1.6 Ghz. For N5, bell curve with 1.5 in the middle. For N5P/N4, lower the odds for 1.4 by some amount and evenly shift them over to 1.5 and 1.6.

Docked GPU clocks; not as influenced by node. I'm actually more concerned with memory bandwidth here.
Base assumption of 102.4 GBps and 8 MB L3 cache: 768 to 1,024 Mhz. Not an even distribution, but not necessarily a bell curve either? Probabilities are tweaked according to the node.
-> Samsung 5LPP: start with a bell curve with 896 in the middle, then shift some from 1,024 to 768. 896 should still have the highest odds.
-> TSMC N7/N6: reverse 5LPP; start with a bell curve, then shift some from 768 to 1,024. 896 still #1.
-> TSMC N5/N5P/N4: see the preceding, but tweak such that 896 and 1,024 end up being even.

Next scenario is 120 GBps (7500 MT/s LPDDR5X) and 8 MB L3 cache: 896 to 1,152 Mhz. Lots of copy and pasting, then replacing numbers. Separately, I progressively get less sold on Samsung 5LPP on delivering a quiet docked mode.
-> Samsung 5LPP: start with a bell curve with 1,024 in the middle, then shift some from 1,152 to 896. 1,024 should still have the highest odds.
-> TSMC N7/N6: reverse 5LPP; start with a bell curve, then shift some from 896 to 1,152. 1,024 still #1.
-> TSMC N5/N5P/N4: see the preceding, but tweak such that 1,024 and 1,152 end up being even.

Then there's the ~136.5 GBps (8533 MT/s LPDDR5X) and 8 MB L3 cache scenario: 1,024 Mhz to 1,280 Mhz. Yadda yadda yadda.
-> Samsung 5LPP: start with a bell curve with 1,152 in the middle, then shift some from 1,280 to 1024. 1,152 should still have the highest odds.
-> TSMC N7/N6: reverse 5LPP; start with a bell curve, then shift some from 1,024 to 1,280. 1,152 still #1.
-> TSMC N5/N5P/N4: see the preceding, but tweak such that 1,152 and 1,280 end up being even.

As for the L3 cache, any theoretical expansion of it doesn't drastically change my expected ranges; it serves more to shuffle probabilities from the lower end towards the upper end.
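For anyone who wants to play with the scheme above, here's a toy sketch of the "start with a bell curve, then shift some mass" idea for the 102.4 GBps scenario. Every weight is invented on the spot for illustration, same as the post admits its own odds are:

```python
# Toy illustration of the "bell curve, then shift mass" clock-odds scheme.
# All weights are made-up illustrations, not real estimates.
def shift(dist, src, dst, amount):
    """Move probability mass from the src clock bin to the dst bin."""
    d = dict(dist)
    d[src] -= amount
    d[dst] += amount
    return d

# 102.4 GBps scenario: rough bell curve centred on 896 MHz
base = {768: 0.25, 896: 0.50, 1024: 0.25}

samsung_5lpp = shift(base, 1024, 768, 0.10)  # tilt toward the low end
tsmc_n7_n6   = shift(base, 768, 1024, 0.10)  # tilt toward the high end
# N5/N5P/N4: tweaked directly so 896 and 1024 end up even
tsmc_n5_n4   = {768: 0.10, 896: 0.45, 1024: 0.45}

for name, dist in [("5LPP", samsung_5lpp), ("N7/N6", tsmc_n7_n6), ("N5/N4", tsmc_n5_n4)]:
    print(name, dist)
```

In both shifted cases 896 MHz keeps the highest odds, matching the description above.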
Awesome post... but...
For simple people, can you translate this to just "Node = X TF" for those people, who may or may not include myself? :)
 
I'm not even going to entertain the large number of posts conflating market positioning with what the software development and inclusion cycle is going to look like. I've stated it about 20 times in the last 24 hours, so it's not ambiguous what my point is.

I will say lol at homing in on me saying the name doesn't matter to go "well actually it does," as if my point was to skip the entire marketing and advertising cycle.
Honestly, as anyone who has ever worked retail or customer service can tell you, a lot of consumers can’t.
When people say this, it's like they forget that B2C organizations operating on face-to-face sales (service industries, basically) and brick-and-mortar retailers aren't hiring and training employees for shits and giggles. People go to retail employees literally to have information parsed for them. That's a methodology, and a valid one, as much as retail employees bitch and complain about clueless customers. Marketers and advertisers know this and rely on it; that's part (not all, before someone starts doing the nit-picking shit) of the value retailers provide.

I don't know why so many people think customers themselves are so dumb/uninformed/can't figure it out and that this is why you get dumb questions. You get dumb questions because a big chunk of your uninformed buying base doesn't give a fuck about actually learning, compared to the value extracted from the purchase. So they let front-facing employees spoon-feed it to them. Yes, these are actual marketing principles (and yes, I have in fact studied marketing, specifically consumer behaviour, at the masters level).

Again, not that this is the point I was making but anyhow.
 
Awesome post... but...
For simple people, can you translate this to just "Node = X TF" for for those people that may or may not include myself? :)

768 Mhz = 12 (SM count) * 128 (shader cores per SM) * 2 (floating point ops per clock cycle?) * 768 (clock rate) = 2.359 TFlops. Also, this is six times the size of OG Switch's GPU and the same docked clock, thus, six times the raw grunt, before taking into account architecture updates and features.
896 Mhz = 2.752 TFlops. Six times the size multiplied by 7/6 times the clock = seven times the raw grunt. I did gravitate towards multiples of 128 for a reason :p
1,024 Mhz = 3.145 TFlops. Six times the size, 8/6 times the clock = eight times the raw grunt.
1,152 Mhz = 3.538 TFlops. Nine times the raw grunt.
1,280 Mhz = 3.932 TFlops. Ten times the raw grunt.

So, if bandwidth == 102.4 GBps, then my range converts to 2.359-3.145 TFlops
If bandwidth == 120 GBps, then my range converts to 2.752-3.538 TFlops
If bandwidth == 136.5 GBps, then my range converts to 3.145-3.932 TFlops
For GPU clocks, I think of node more as a secondary constraint than the main one, so node's more about tilting towards one end of a given range than determining the ranges themselves.
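Putting the arithmetic above into a couple of lines, in case anyone wants to try other clocks. Same assumptions as the post: 12 SMs, 128 shader cores per SM, 2 floating-point ops per clock:

```python
# TFLOPS = SMs × CUDA cores per SM × 2 ops per clock × clock rate
def tflops(clock_mhz, sms=12, cores_per_sm=128):
    return sms * cores_per_sm * 2 * clock_mhz * 1e6 / 1e12

for mhz in (768, 896, 1024, 1152, 1280):
    print(f"{mhz} MHz -> {tflops(mhz):.3f} TFLOPS")
# 768 MHz -> 2.359 TFLOPS, up through 1280 MHz -> 3.932 TFLOPS
```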
 
So correct me if I'm wrong, but there seems to be a consensus about the GPU being too big for 8nm considering we'll be limited by the battery (and now with those 8 A78 in the picture).

If that's the case, would TSMC's 7nm or 6nm make more sense for a GPU that big inside a portable console? (Especially from Nintendo, who probably wants to deliver at least the OG Switch's battery life.)

Or do we really need TSMC's 5nm (4N) to achieve that?

I do think the current specs from Nvidia's hack seem very solid, but I still have doubts about the node with said specs. It's really hard for me to imagine Nintendo going for that sweet 4N, so the question is if the 7nm would already make sense for that 12SM.

I'm still thinking maybe it's not about the node, but the specs. Maybe 8SM (1024 CUDA = Orin NX) would make more sense for Nintendo. Like I said, I do think the specs from the hack are solid... but I can't avoid thinking about the possibility of lower specs.
 
So correct me if I'm wrong, but there seems to be a consensus about the GPU being too big for 8nm considering we'll be limited by the battery (and now with those 8 A78 in the picture).

If that's the case, would TSMC's 7nm or 6nm make more sense for a GPU that big inside a portable console? (Especially from Nintendo, who probably wants to deliver at least the OG Switch's battery life.)

Or do we really need TSMC's 5nm (4N) to achieve that?

I do think the current specs from Nvidia's hack seem very solid, but I still have doubts about the node with said specs. It's really hard for me to imagine Nintendo going for that sweet 4N, so the question is if the 7nm would already make sense for that 12SM.

I'm still thinking maybe it's not about the node, but the specs. Maybe 8SM (1024 CUDA = Orin NX) would make more sense for Nintendo. Like I said, I do think the specs from the hack are solid... but I can't avoid thinking about the possibility of lower specs.
It can be achieved on TSMC 7nm or its extensions like 7nm+ or even 6nm.

It can also be achieved on TSMC 5nm and its derivatives like 4N (which is Nvidia-exclusive).


They could also work with Samsung 5nm, but Nvidia has zero products there, so the odds of that happening seem really low, to me anyway.

It would be at an existing place where it’s possible with other nvidia products imo.

Just not 8nm.


As for the likeliest place, it's one that has a clear progression path, and I think 6nm is that one. It can be shrunk later to introduce a Lite 2 with power savings on a 4N process, or another battery-saving model. Things like that.
Yes, the TX1 was on 20nm, but it followed other Nvidia products over time to 16nm despite being Nvidia's only product on the 20nm node.

So Drake is strung along by where nVidia decides to go.


As food for thought, someone here did a test using Orin: 2 SMs @ 420MHz + 4 A78 cores @ 960MHz consumes around 10W.

i don’t really think it’s 8nm

But you never knowwwwwww, we could be so off the mark that it scales better.


That said, the SoC would be between 160-180mm^2 at the 8nm node.

This post is all over the place I’m sorry 😛
 
That is my understanding. Nvidia docs call them "Ampere-A" and "Ampere-B", but the only differences I can see are in how the driver needs to be implemented.

Update: There is a difference! Ampere-B added AV1 decode to NVDEC

My question regarding A100 in comparison to what T239 will utilize is mostly due to the GA100 SM diagram not having RT cores at all, and having FP64 units allocated to the die.

A100 SM


GA102 SM

It can be achieved on TSMC 7nm or its extensions like 7nm+ or even 6nm.

It can also be achieved on TSMC 5nm and its derivatives like 4N (which is Nvidia-exclusive).


They could also work with Samsung 5nm, but Nvidia has zero products there, so the odds of that happening seem really low, to me anyway.

It would be at an existing place where it’s possible with other nvidia products imo.

Just not 8nm.


As for the likeliest place, it's one that has a clear progression path, and I think 6nm is that one. It can be shrunk later to introduce a Lite 2 with power savings on a 4N process, or another battery-saving model. Things like that.
Yes, the TX1 was on 20nm, but it followed other Nvidia products over time to 16nm despite being Nvidia's only product on the 20nm node.

So Drake is strung along by where nVidia decides to go.


As food for thought, someone here did a test using Orin: 2 SMs @ 420MHz + 4 A78 cores @ 960MHz consumes around 10W.

i don’t really think it’s 8nm

But you never knowwwwwww, we could be so off the mark that it scales better.


That said, the SoC would be between 160-180mm^2 at the 8nm node.

This post is all over the place I’m sorry 😛

My main concern with TSMC 7nm is that we have the Steam Deck, and we know what kind of efficiency it gets on that node.
An 8-CPU-core, 12 SM GPU part just screams absolutely terrible battery life, and we know the Switch family of systems will never include a battery anywhere close in size to the Steam Deck's (and that's before factoring in Ampere being less efficient than RDNA2)...
 
My question regarding A100 in comparison to what T239 will utilize is mostly due to the GA100 SM diagram not having RT cores at all, and having FP64 units allocated to the die.

My main concern with TSMC 7nm is that we have the Steam Deck, and we know what kind of efficiency it gets on that node.
An 8-CPU-core, 12 SM GPU part just screams absolutely terrible battery life, and we know the Switch family of systems will never include a battery anywhere close in size to the Steam Deck's (and that's before factoring in Ampere being less efficient than RDNA2)...
The efficiency is really about the same.
 
So you’re saying for docked clocks (and these numbers seem high, so maybe I’m misunderstanding), something like:

Samsung 5LPP: 1.2 = 3.5 TF
TSMC N7/N6: 1.3 = 4 TF
TSMC N5/N5P: 1.45 = 4.5 TF ???
TSMC N4: 1.6 = 5 TF ???


768 Mhz = 12 (SM count) * 128 (shader cores per SM) * 2 (floating point ops per clock cycle?) * 768 (clock rate) = 2.359 TFlops. Also, this is six times the size of OG Switch's GPU and the same docked clock, thus, six times the raw grunt, before taking into account architecture updates and features.
896 Mhz = 2.752 TFlops. Six times the size multiplied by 7/6 times the clock = seven times the raw grunt. I did gravitate towards multiples of 128 for a reason :p
1,024 Mhz = 3.145 TFlops. Six times the size, 8/6 times the clock = eight times the raw grunt.
1,152 Mhz = 3.538 TFlops. Nine times the raw grunt.
1,280 Mhz = 3.932 TFlops. Ten times the raw grunt.

So, if bandwidth == 102.4 GBps, then my range converts to 2.359-3.145 TFlops
If bandwidth == 120 GBps, then my range converts to 2.752-3.538 TFlops
If bandwidth == 136.5 GBps, then my range converts to 3.145-3.932 TFlops
For GPU clocks, I think of node more as a secondary restraint than the main one, so node's more about tilting towards which end of a given range, then determining the ranges themselves.
 
So you’re saying for docked clocks (and these numbers seem high, so maybe I’m misunderstanding), something like:

Samsung 5LPP: 1.2 = 3.5 TF
TSMC N7/N6: 1.3 = 4 TF
TSMC N5/N5P: 1.45 = 4.5 TF ???
TSMC N4: 1.6 = 5 TF ???
There's no real way to tell with any accuracy because we don't know the power curve of T239. It's all just spitballing.
 
They weren’t really ever going to do Switch Pro or Switch 2 though

So I don’t really get the point of this argument lol

It’s not Nintendo’s thing. That’s Sony.


Nintendo and Microsoft make it a bigger uphill battle because they go with anything but a straightforward number thing.


Then again, one is a game maker and the other is a service company, only hardware companies know how to label things in a straightforward manner.
Hardware companies aren’t the only ones, and even they can’t always get it right, sometimes not at all.
 
I take full credit for this new Switch. I told y’all! I bought an OLED Switch last month and this happens!!! Fuck!
I’m stoked as hell for this new Switch.
 
[image: Paul Rand's NeXT logo]
How about Switch Up?
I kind of like Switch Ultra
I don't mind Switch+


But I'm just going to accept New Nintendo Switch... cause i'm a sad person who lives in a sad world

Its gonna be Switch The Fuck Up!

Or maybe I got confused with what I'd like to tell some people around here.
 
My main concern with TSMC 7nm is that we have the Steam Deck, and we know what kind of efficiency it gets on that node.
Well, Van Gogh has four x86-64 CPU cores running at a frequency range of 2.4 - 3.5 GHz.
x86-64 CPUs are inherently less power efficient than Arm CPUs.
And I don't think Nintendo's going to have eight CPU cores on Drake running at a frequency higher than 2 GHz, especially when taking into account the CPU frequency in handheld mode and TV mode are going to be the same, like with the Nintendo Switch.

So I don't think TSMC's 7 nm** process node is the reason for the Steam Deck having mediocre battery life.

** → a marketing nomenclature used by all foundry companies
 
My main concern with TSMC 7nm is that we have the Steam Deck, and we know what kind of efficiency it gets on that node.
An 8-CPU-core, 12 SM GPU part just screams absolutely terrible battery life, and we know the Switch family of systems will never include a battery anywhere close in size to the Steam Deck's (and that's before factoring in Ampere being less efficient than RDNA2)...
As @Dakhil points out, the Steam Deck is a giant slab of beef: its power draw is ~15W, and it throws an alarming amount of that power at its operating system.

My suspicion is that clocks will be keyed to battery life and chip yield rather than the other way around.
 
So you’re saying for docked clocks (and these numbers seem high, so maybe I’m misunderstanding), something like:

Samsung 5LPP: 1.2 = 3.5 TF
TSMC N7/N6: 1.3 = 4 TF
TSMC N5/N5P: 1.45 = 4.5 TF ???
TSMC N4: 1.6 = 5 TF ???
Oh, for the approach I'm taking, the node isn't that important for docked GPU clocks. I'm using memory bandwidth to dictate the expected range of GPU clocks/flops. Node serves more for adjusting the odds within a given range.

My rationale is that when docked, you theoretically have some leeway to just simply throw more power at raising clocks, to an extent. But memory bandwidth is a harder limit (assuming no overclocking, PC smartasses out there reading this :p). So I'm treating bandwidth as the primary/main constraint.
Then I looked at the desktop Geforce 30 series cards to get a rough grasp on how bandwidth was balanced against SM count and clock, then derived some soft guiding rails. Add an arbitrary amount set aside for the CPU, then add my bias towards multiples of 128 Mhz, stir, simmer, and voila, I get ranges for each RAM scenario.
I also organize things that way as I do think that the selection of RAM carries with it an implication of the rejection of other choices. For example, I think that a bandwidth of 102.4 GBps (128-bit regular LPDDR5 at full speed) implies intent to utilize an amount higher than ~68.26 GBps (128-bit LPDDR4X or 64-bit LPDDR5X).
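The bandwidth tiers being tossed around fall straight out of bus width × transfer rate. A quick check, using the figures from the post above:

```python
# Peak LPDDR bandwidth in GB/s = (bus width in bits / 8) × MT/s / 1000
def gbps(bus_bits, mts):
    return bus_bits / 8 * mts / 1000

print(gbps(128, 6400))  # 102.4   -> 128-bit LPDDR5
print(gbps(128, 7500))  # 120.0   -> 128-bit LPDDR5X-7500
print(gbps(128, 8533))  # ~136.5  -> 128-bit LPDDR5X-8533
print(gbps(64, 8533))   # ~68.3   -> 64-bit LPDDR5X, the rejected lower tier
```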
 
I take full credit for this new Switch. I told y’all! I bought an OLED Switch last month and this happens!!! Fuck!
I’m stoked as hell for this new Switch.
I'll do you one better! I bought a Splatoon 3 OLED last week since it was 20% off. My god, the screen is beautiful and it's built so solidly. I don't regret it, but I can't wait for the new Switch.
 
I personally don't like the Pro name either. But for a potential "Switch Ultra" or "Super Switch" exclusive, I'd imagine the logo and coloring on the box would be different, like the Game Boy Advance and New 3DS box art (which also had "only for" on them). So it wouldn't necessarily take up more space.
Okay, I'm just going to put it out there: the only appropriate name for a Nintendo Pro model other than "New" is unironically "Super Nintendo" + (Product Name). Super Nintendo 3DS, Super Nintendo Entertainment System.

And of course...

Super Nintendo Switch which sounds much more corporate and direct than "Super Switch", "Ultimate Nintendo Switch" and the other garbage names people have come up with.



On another note, DLSS 3.0 would be an absolute blessing if it made its way to a Super Nintendo Switch. While the latency is the same as what you are interpolating from, I'd rather developers target 30fps + DLSS 3.0 than target 60fps and reduce visual quality, unless it's a competitive game like COD or Rocket League.
 

This information is pretty great. If we look at possible clocks for the SoC with a power limit similar to Erista's, we'd get 5-6 watts for the SoC. The extra power here comes from battery improvements, which have increased by about 5% year over year for the last 6 years; we are possibly looking at a 6000mAh battery vs the 4315mAh one found in current Switch models (outside of the Lite).

For CPU, I'd suggest the sweet spot here is 1728MHz. If that is across 7 cores it's ~2.2w; otherwise it's ~2.5w. For Erista it was 1.83w, so this is the most likely, though 1.5w to 2w is definitely on the table for 8nm. This is also why, if they did shrink the die to 6nm or 4N, it would be above 2GHz. But certain people in this thread think backwards and try to eliminate options without logically thinking out why they would do so; that's the wrong way to talk about engineering.

For GPU, I think 460MHz for portable mode would be on the table at around ~3.6w for 1.41TFLOPs, this would line up with what people have heard (including Nate) that the device in portable is like a PS4 with DLSS on top. I'd simply add 5watts for the docked GPU clock, so 1GHz at ~5.3w increase makes a lot of sense, offering 3.172TFLOPs before DLSS. This would produce better graphics than Xbox Series S thanks to DLSS.

Moving down to 6nm, I would expect a ~20% increase in these clocks, so 1.7TFLOPs portable, 3.8TFLOPs docked, with a CPU at ~2GHz. 4N would offer another 20% increase IMO, so 2TFLOPs portable and ~4.5TFLOPs docked and 2.4GHz for the CPU. These are the only real process nodes I could see being used; 5nm Samsung would offer less performance than 6nm, but still a noticeable bump up.

With these power estimates I'd suggest these theoretical clocks, the specs would look like this on 8nm:

CPU: 8*A78C @ 1728MHz with 7 cores available for developers
GPU: 1536 cuda cores, 48 tensor cores, 12 RT cores @ 460MHz portable (1.41TFLOPs) @1.032GHz docked (3.172TFLOPs)
RAM: 8GB or 12GB at 102GB/s
Storage: 128GB @ 400MB/s+
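As a sanity check on the budget claimed above (all inputs are the post's own rough estimates, not measurements):

```python
# Rough portable-mode power ledger on 8nm, using the estimates above
cpu_w = 2.2   # 8×A78C @ 1728 MHz, 7 cores exposed to developers
gpu_w = 3.6   # 12 SMs @ 460 MHz portable

total_w = cpu_w + gpu_w
print(round(total_w, 1))  # 5.8 W, inside the proposed 5-6 W SoC envelope

# Battery growth check: ~5%/year compounding for 6 years from 4315 mAh
projected_mah = 4315 * 1.05 ** 6
print(round(projected_mah))  # ~5783 mAh, in the ballpark of the 6000 mAh guess
```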
 
This information is pretty great, if we looked at possible clocks for the SoC with a power limit similar to Erista's, we'd get 5-6 watts for the SoC.
This is great info; where did you get it? Is this someone using the power tools on Orin to generate these numbers? Are they coming out of tegrastats? Just wondering about the sampling rate of the PMU and the degree to which this can isolate the GPU draw.

I’ll say my estimates for the TPC power draw were high!


The extra power here comes from battery improvements, which have increased by about 5% year over year for the last 6 years; we are possibly looking at a 6000mAh battery vs the 4315mAh one found in current Switch models (outside of the Lite).

For CPU, I'd suggest the sweet spot here is 1728MHz, if that is across 7 cores it's ~2.2w,
You can’t really compare AE cores to C cores. I haven’t been able to find any data on the difference, but it’s reasonable to assume the C cores will be more efficient without lockstep.


 
Tangmaster posted it in Discord.
 
It should be noted that the pictured information is from Nvidia's power estimation tool for T234. The CPU cores on T234 are A78AE in different cluster configurations from the single cluster of A78C we believe T239 will have. And the GPU numbers were obtained by subtracting the values for a 4 SM configuration from those for a 16 SM configuration, since a 12 SM configuration wasn't available. Additionally, I think the fact that GA10F is singled out as the only Ampere GPU to support FLCG (first-level clock gating) suggests that parts of the T239 SoC design were reconsidered from T234 with improved power consumption in mind.

All that said, I think the numbers are useful for rough comparison. But all the caveats are still important to remember.
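The subtraction trick described above can be sketched as follows; the wattage values below are illustrative placeholders, not the tool's actual output:

```python
# Nvidia's power estimation tool for T234 offers 4 SM and 16 SM GPU
# configurations but not the 12 SM count expected for T239, so the 12 SM
# draw is approximated as (16 SM draw) - (4 SM draw).

def estimate_12sm_power(power_16sm_w: float, power_4sm_w: float) -> float:
    """Approximate a 12 SM GPU's draw by differencing the two available configs.

    Differencing also cancels most fixed (non-SM) power in the readings, which
    is one reason this is only a rough figure for a real 12 SM part at the
    same clock and voltage.
    """
    return power_16sm_w - power_4sm_w

# Hypothetical readings at some fixed clock (placeholder numbers):
print(estimate_12sm_power(8.0, 2.5))  # 5.5
```

This is why the caveats matter: the difference of two estimator outputs inherits the error of both, on top of the A78AE-vs-A78C and T234-vs-T239 differences already noted.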
 
Yes, these are estimations of clocks, but they should be in the ballpark. If the estimates from Nvidia's tools are accurate, T239 should be more power efficient per clock than T234, so the ~6W SoC limit assumes T234's configuration and T239 should come in below it.
 
I imagine the I/O rate, RAM frequency, and memory bandwidth are probably going to be reduced in handheld mode, like with the Nintendo Switch.
Perhaps 5500 MT/s (a 2750 MHz memory clock) for 88 GB/s in handheld mode?
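Both bandwidth figures fall out of the transfer rate times the bus width; a quick sketch assuming the 128-bit LPDDR5 bus implied by the 102GB/s docked figure:

```python
# Peak memory bandwidth from transfer rate, assuming a 128-bit bus
# (16 bytes per transfer), as implied by the thread's 102GB/s figure.

BUS_WIDTH_BYTES = 128 // 8  # 128-bit bus -> 16 bytes per transfer

def bandwidth_gbs(mts: float) -> float:
    """Peak bandwidth in GB/s from a transfer rate in MT/s."""
    return mts * BUS_WIDTH_BYTES / 1000

print(bandwidth_gbs(6400))  # 102.4 GB/s, the docked "102GB/s" figure
print(bandwidth_gbs(5500))  # 88.0 GB/s, matching the handheld guess
```

So the 88 GB/s handheld guess corresponds to downclocking the memory from 6400 MT/s to 5500 MT/s on the same bus, mirroring how the current Switch reduces memory clocks in handheld mode.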
 
Well, this makes me think 8nm is the most likely scenario, and I ain't even mad. What do you think the battery life on 8nm would be? Which do you think is more likely, 8nm Samsung or 5nm TSMC? Also, I thought Samsung 8nm was much less performant than even higher-nm TSMC nodes; is that not true?

But to put your numbers in my monkey-brain chart:

8 nm Samsung = 3.1 TF
7 nm TSMC = 3.5 TF
5 nm Samsung = 3.6 TF
6 nm TSMC = 3.8 TF
5 nm TSMC = 4.1 TF
4 nm TSMC = 4.5 TF
3 nm TSMC = ???
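Restating that chart with each estimate's gain over the 8nm baseline (the TF values are the thread's guesses, not measured figures):

```python
# The node-vs-TFLOPs chart above, with each entry's gain over 8nm Samsung.
# All figures are this thread's estimates.

estimates_tf = {
    "8nm Samsung": 3.1,
    "7nm TSMC": 3.5,
    "5nm Samsung": 3.6,
    "6nm TSMC": 3.8,
    "5nm TSMC": 4.1,
    "4nm TSMC": 4.5,
}

baseline = estimates_tf["8nm Samsung"]
for node, tf in estimates_tf.items():
    print(f"{node}: {tf} TF (+{(tf / baseline - 1) * 100:.0f}%)")
```

Framed this way, the jump from 8nm Samsung to 4N is roughly +45% docked throughput, while the intermediate nodes land between +13% and +32%.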
 