
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

This information is pretty great. If we look at possible clocks for the SoC with a power limit similar to Erista's, we'd get 5-6 watts for the SoC. The extra headroom comes from battery improvements, which have increased capacity by about 5% year over year for the last 6 years; we are possibly looking at a 6,000mAh battery vs. the 4,315mAh one found in current Switch models (outside of the Lite).

For the CPU, I'd suggest the sweet spot is 1728MHz. Across 7 cores that's ~2.2W (otherwise ~2.5W); for Erista it was 1.83W, so this is the most likely, though 1.5W to 2W is definitely on the table for 8nm. This is also why, if they did shrink the die to 6nm or 4N, the clock would be above 2GHz. Some people in this thread work backwards and try to eliminate options without thinking through why they'd be eliminated, which is the wrong way to reason about engineering.

For the GPU, I think 460MHz for portable mode would be on the table at around ~3.6W for 1.41TFLOPs. This lines up with what people have heard (including Nate): that the device in portable mode is like a PS4 with DLSS on top. I'd simply add 5 watts for the docked GPU clock, so 1GHz at a ~5.3W increase makes a lot of sense, offering 3.17TFLOPs before DLSS. This would produce better graphics than Xbox Series S thanks to DLSS.

Moving down to 6nm, I would expect a ~20% increase in these clocks: 1.7TFLOPs portable, 3.8TFLOPs docked, with the CPU at ~2GHz. 4N would offer another 20% increase IMO, so 2TFLOPs portable, ~4.5TFLOPs docked and 2.4GHz for the CPU. These are the only realistic process nodes I could see being used; Samsung 5nm would offer less performance than 6nm, but still a noticeable bump over 8nm.

With these power estimates, the theoretical specs would look like this on 8nm:

CPU: 8× A78C @ 1728MHz, with 7 cores available to developers
GPU: 1536 CUDA cores, 48 tensor cores, 12 RT cores @ 460MHz portable (1.41TFLOPs), @ 1.032GHz docked (3.17TFLOPs)
RAM: 8GB or 12GB at 102GB/s
Storage: 128GB @ 400MB/s+
The only thing left that it needs is DLSS 3.0 to complete the package
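For anyone who wants to sanity-check the TFLOPs arithmetic above, here's a minimal sketch using the standard Ampere throughput formula (2 FP32 FLOPs per CUDA core per clock); the core count and clocks are this post's speculation, not confirmed hardware:

```python
# FP32 throughput = 2 FLOPs per CUDA core per clock (one FMA = two ops).
# 1536 cores and these clocks are speculative figures from the post above.
CUDA_CORES = 1536

def tflops(clock_mhz: float, cores: int = CUDA_CORES) -> float:
    return 2 * cores * clock_mhz * 1e6 / 1e12

print(f"portable @ 460 MHz:  {tflops(460):.2f} TFLOPs")    # ~1.41
print(f"docked   @ 1032 MHz: {tflops(1032):.2f} TFLOPs")   # ~3.17
# The post's speculative ~20% clock bump per node step (8nm -> 6nm -> 4N):
for node, scale in [("6nm", 1.2), ("4N", 1.2 * 1.2)]:
    print(f"{node} docked: {tflops(1032 * scale):.2f} TFLOPs")
```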
 
Can you imagine? Something around 3 TF x "4" = 12 TF... This thing pumping out visuals on par or, gasp, surpassing a PS5?

Nonsense! Impossible! It'll never happen!

dumb-and-dumber-jimcarrey.gif
 
Well, the rumours are saying it will "have a few Lovelace features". If that's true, it's possible this could include DLSS 3. Don't get your hopes up, but god, it would be so perfect. You could literally upscale 480p to 720p, then interpolate to 60fps to increase fidelity in handheld mode.

Also, the interpolation feature alone would destroy the XBSS, even without upscaling.

DLSS 3.0 looks so good, especially in Digital Foundry's demo of Marvel's Spider-Man:

 
"If" aren't just technical stuff, you know. For example there's the question if Nintendo sees the need to something like this, contrary to hobbyists who have an active desire to do it.

Then there's stuff like how DLSS is actually integrated in Nintendo's pipeline and so on.

Just because some people in their free time can do something doesn't mean companies doing the same.

All i say is, don't expect Switch games without patches seeing any big improvements or any at all, outside of those with unlocked framerate who benefit from the power jump alone.
We know (through Nate's podcast) that they already have to put in a fair amount of work for backwards compatibility in the first place. They already have to do the much harder work of converting the Maxwell driver stack found in the code of every game to work with a custom Orin chip, so intentionally avoiding the far easier task that gains the marketing benefit of "your Switch games run in 4K" (for new hardware, and for existing Switch software in the eyes of new hardware owners) doesn't seem sensible. So every Switch game is going to have a patch of some kind regardless; it's just a matter of how (a universal Maxwell stack converter, conversion on a per-game basis, or a combo of both).
If backwards compatibility were a labourless endeavour for Nintendo, you'd have a point, but it's absolutely not. Adding a minor labour component to the mandatory labour already needed for BC, in exchange for another marketing benefit that drives interest in new hardware and pre-existing software, is the biggest "no duh" ever.
They're willing to visually optimize and provide previously nonexistent online play for their emulated back-catalogue, which is a hell of a lot more complex than what's been proposed, and it's a feature no one was expecting and only a small few were likely asking for to begin with. But a feature that customers would reasonably ask for, and that's far easier to achieve, is a no?
So again, there's no "if" to it.
There is a reason Apple launches their phones the way they do. Time between announcement and purchase is short to minimize the window of depressed sales. Launches are on a regular clock, so buyers aren't surprised by launch announcements. Previous gen is repositioned as an introductory model, accessing a new line of customers at the same time that margins on the old product are improving.
iPhones are not dedicated game hardware, for reasons that you acknowledge for yourself later on. The biggest being that Apple are not trying to sell exclusive software along with their new hardware that requires its own promotion cycle.
Well, it'll be holiday season in the US in 64 days, and right now there are people in this very thread - the most informed place on the matter on the English-speaking Internet - saying "2024 at the earliest", so I'm not sure that's particularly relevant.

Nintendo is unlikely to make marketing decisions on a dime. They've got partners who have been briefed on marketing plans, software developers with products to launch who have been promised a marketing push, and retailers who have allocated physical space for marquees that still need to be printed and shipped to them. Even if a major leak puts the device in the news, Nintendo is still unlikely to move the announcement from, say, January to October because of it.

Here is what I believe. The primary driver of when Nintendo announces isn't the holiday season, or leaks about the device, or anything public at all but internal information about production and the readiness of partners.
Saying this is the "most informed place on the matter" isn't saying much, really, especially when one considers how little has actually leaked thus far. We're literally trying to spin gold from trash and leavings right now. And such a statement implies that no further (and more definitive) information can come through credible leaks disseminated through channels with MUCH further reach than Famiboards or ResetEra.
Back to the prior point, the average timeline from Nintendo/3rd party game announcement in a Direct to release is longer than 3-4 months (leaving out oddballs like TotK, Prime 4 and Bayo3, in the interest of fairness), settling somewhere closer to 5. Nintendo Switch benefitted from 2 of its launch window games already being mostly blown out prior to the hardware announcement due to being Wii U carry-overs (MK8DX only had to show off Inklings and the new battle mode). ARMS and Splatoon 2 released 5 and 6 months after announcement, respectively, never mind Odyssey’s 9 months from announcement (and this is without acknowledging that Odyssey was teased in October 2016, which I also ignore in the interest of fairness). And this is without discussing 3rd-parties. Unless we’re predicting a similarly anemic first 3 months of software as we saw with Switch, we must acknowledge that it’s not just hardware that needs room made for it. So if we’re discussing “readiness of partners”, it seems odd to grossly short-change 3rd-parties in terms of marketing time.
If Nvidia wasn't going to be able to buy ARM, like hell they'd be able to sell to Samsung.
 
Huh, that is very interesting information. And very impressive work by the engineers working on Orin. They got way more out of the A78AE cores than I expected, compared to Arm's claims on more advanced nodes (A77 on N7 and A78 on N5). Even the Ampere SMs seem to be more efficient compared to Thraktor's 30xx card (3070?).
 
So Samsung 8nm is back on the menu? Very interesting. But does the fact that it's a dead-end node with few options to shrink later make a difference to Nvidia's node choice for Drake? They will be making these chips for some time, and Nintendo is going to want a die-shrunk version at some point.

Either way, it seems like no matter the node, Drake is going to be seriously impressive.
 
Not really. Switch 2 will not be a completely different platform or a clean break from the current Switch models. Nintendo will keep releasing games for current models for at least 2 years after the "Drake" Switch launches, so they could keep selling the older hardware that also runs those games.
Releasing only the most expensive model ($400+) and dropping the current, more affordable models would not be the best business decision. Not only will the "Drake" hardware be very hard to find in its first year (it would be sold out in any case), but Nintendo always looks to have different price options on the market (especially now that they have only one platform). So they will probably keep current Switch models on sale until they start releasing Drake revisions (a "Drake Lite" model, for instance).
So an imminent new product completely fades from memory after the quarter it was announced in? Seems... unlikely. And summer quarters always tend to fall off in the later years of a hardware cycle. Even now, Switch dipped to #2 in NPD hardware units sold this past August for the first time in a LONG time.

Besides, even if I accept that this would cause a major sales collapse, with how far ahead hardware launches have to be mapped out, it's not like they can wait all that long to announce new hardware anyways. Either you don't risk sales in the holiday period and cut your marketing time as thin as possible or you take the risk and give yourself some breathing room to drive interest in new hardware. So even if I were to believe there'd be a sales collapse, I don't consider their current hardware sales to be something inviolable to begin with.

Because the long-term success of their future hardware might mean more to them than undisturbed sales the last holiday period of their current hardware so long as software sales stay solid.
I'm really not sure why that possibility isn't in consideration.

You bring up a decent point: new hardware might basically be all-but-known in the holiday season regardless, with or without an official announcement, and then what good does holding the announcement do?

Who says they're rushing out tentpoles?
And apparently BC isn't a thing, neither is offering DLC for games released on old hardware on the new hardware. Besides, if DLC shouldn't have been put forward due to new hardware arriving and Nintendo were apparently now required to wait for their DLC pipeline to be exhausted, we'd be looking at a 2024 release date for new hardware at the earliest, and the info we have already basically means we can chuck such notions into the bin.

Who says they've not held anything back?

[Images: NES-101 console set; SNES Model 2 console set]


The GBA SP was introduced the year prior to the DS launch and sold alongside the DS at retail, and the GBA actually outsold the DS worldwide in both 2004 and 2005. But that says more about the GBA, since DS worldwide sales were no slouch even before the launch of the DS Lite.

Nintendo has NO qualms about selling hardware from their last cycle alongside new hardware. None. It's honestly expected behaviour from them. These are just examples of "slim" versions of imminently-succeeded hardware released close to a new hardware release cycle.

Also, as an aside, every handheld since GBA has seen a Pokemon release at the dawn of a new hardware cycle (FireRed/LeafGreen for the GBA-to-DS transition, B2/W2 released AFTER the 3DS launch, Sun/Moon released 4 months before the launch of Switch). Just some food for thought.

Looking at first-week sales of Splatoon 2 vs. first-week sales of Splatoon 3, I could think of one reason not to release a live game as a launch window exclusive on new hardware. Especially when the option to patch/enhance the game for said new hardware is on the table.

If it were a handful of dlc releasing next year, I could see it. But there's a LOT of Nintendo dlc dropping in 2023. M+R2, Xenoblade 3, Splatoon 3, Pokemon SV, Mario Kart, Mario Strikers, and potentially Switch Sports, as well as possibly unannounced titles. Even with BC, that seems like a line-up they want to focus on before moving to an entire new generation. Seems like their software is enough to carry them through next year.

Another point is the fact that we know GB/GBA was/is being worked on for the current Switch. I don't see them holding that back for the Switch 2, and with it not out yet it makes more sense to release before new hardware.
 
I'm already imagining the level of damage MonolithSoft can do with that kind of kit.
Or heck, just what Mario's next big adventure would look like running on that.
 
Referring to memory speed, 102 GB/s seems so low, considering even the PS4 was 176 GB/s. CPU and GPU power aside, won't this low speed be a pretty big bottleneck in getting PS5/XSX ports? Or even some PS4 ports? I'm not very tech literate, so hopefully someone can explain if I'm wrong or not.

edit

Speed? Bandwidth? I'm not even sure which is which. 😳
 

It's always going to be the biggest bottleneck due to the limits of operating with mobile hardware.

I doubt it will be a barrier though; Nvidia is pretty efficient with its use of available memory bandwidth, and older architectures like the one the old HD twins were based on (GCN) were particularly bandwidth-hungry.

The Switch only had 25GB/s but still got plenty of PS4 ports. Drake will likely have at least triple that, plus an architecture that is more bandwidth-efficient, so if anything it will be better positioned than the Switch was.

Then, with regard to memory bandwidth, we have DLSS as well: inferring a higher-resolution frame should be less memory-intensive than actually rendering at that higher resolution, which lowers the bar further.
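If it helps, the bandwidth numbers being compared all fall out of bus width × data rate. A quick sketch; the Drake line assumes the often-rumoured 128-bit LPDDR5-6400 setup, which is speculation rather than a confirmed spec:

```python
def bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    # bytes per second = (bus width in bytes) * (transfers per second)
    return bus_bits / 8 * mt_per_s * 1e6 / 1e9

print(bandwidth_gb_s(64, 3200))   # ~25.6 GB/s  - original Switch (LPDDR4-3200, 64-bit)
print(bandwidth_gb_s(256, 5500))  # ~176 GB/s   - PS4 (GDDR5, 256-bit)
print(bandwidth_gb_s(128, 6400))  # ~102.4 GB/s - Drake, assuming LPDDR5-6400, 128-bit
```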
 
tabletop mode is good

split joy-con is good

please stop trying to wish all of the fun out of nintendo switch

Not sure what prompted this but if anything Nintendo needs to figure out more ways to “Switch” without breaking or dropping anything they’ve already got.

OLED model has given tabletop mode life. Splatoon 3 is a great experience on and off TV now.
 

I've thought about this. One thing they could do to enable off-TV play while still having a TV experience is to create a dock with a second WiFi chip in it, either WiFi 6E or WiFi 7.

They could then create a glasses-style device that streams video from the dock; all that hardware would need is the displays, the WiFi chip and hardware to decode compressed video.
 
Evergreens don't wither at the presence of new hardware, just like they don't wither at the presence of newer software releases as Switch more than adequately proves. DLC/live services do not stop progression, Nintendo can walk and chew gum at the same time, just as they have with the stream of releases while supporting DLC for existing titles.

The only way this argument has teeth is if you believe new hardware hurts software sales for old hardware. While one can argue new hardware announcements slow sales of existing hardware, that does not track for software coming off of successful and thriving hardware lifecycles in the slightest. GBA software outsold DS software in FY2005 (the fiscal year of the DS launch) AND FY2006. DS software outsold 3DS software in FY2011 (the fiscal year of the 3DS launch) AND FY2012, and the numbers weren't even close, FY2012 saw twice as much DS software sold as 3DS.
Any YoY software sales decline on the preceding hardware is far more attributable to progressively fewer software releases overall following a new hardware release than to any decline in software demand.

So all that DLC and Splatoon 3's live service output will absolutely have its audience in 2023, there's legitimately no reason to think it won't.
 
If it's a Switch 2 they're not gonna sell the other Switch consoles alongside it. That defeats the entire purpose. You want people to buy the new hardware, not the old hardware. It's different when it's just another model like a Lite or an OLED, those aren't aiming to replace their predecessors.

Funnily enough, you can still walk into game stores worldwide and pick up a PS4 and Sony continue to release major games on it as well. Maybe you haven’t thought this through?
 
They weren’t really ever going to do Switch Pro or Switch 2 though

So I don’t really get the point of this argument lol

It’s not Nintendo’s thing. That’s Sony.

Nintendo and Microsoft make it a bigger uphill battle because they go with anything but a straightforward number thing.

Then again, one is a game maker and the other is a service company; only hardware companies know how to label things in a straightforward manner.

But this technically is a Switch 2 (a next-gen Switch), and iterative naming is the best thing Nintendo could do, with appropriate marketing:
name it "Switch 2", then later you have a "Switch 2 Lite", then a Switch 2 "something"; in 6-7 years, when there's a new chip and new technology in new hardware, name it Switch 3 (assuming Nintendo sticks with the hybrid platform). It's all very clear and simple.

What Sony and Apple do with their naming is basically the best approach: it's very clear, simple and informative.
Nintendo and Microsoft, on the other hand, have more or less struggled with their console naming.
It's too late for Microsoft to start iterative naming now (with, for instance, just an "Xbox 2"),
but Nintendo has a new and great brand (Switch), and there is no better time for them to start iterative naming.
 

Yes, 102GB/s (memory bandwidth) will probably be a bottleneck again, but not as big a one as on the current Switch;
the Drake hardware should be much more well-rounded overall than the current Switch, which had quite low memory bandwidth and a weak CPU, for instance.

So there won't be problems with PS4 ports, which also means no problems with PS5/XSX cross-gen games (the Drake Switch will most likely get ports of the PS4 versions),
but who knows exactly how things will look when devs stop making cross-gen games and start really pushing the next-gen PS5/XSX hardware.
 

Agree completely. Nintendo’s naming of consoles has been very poor in past years in terms of getting a message out there as to what the hardware actually is.

Many people thought the 3DS was just a DS with a 3D screen at first and the Wii U was just a controller for the Wii. This wasn’t helped by the fact that both consoles look very similar to their predecessors at a basic level.
 

Thanks for the explanation. I’m glad it’ll still be able to get PS4/cross gen ports. The one next gen game I’m really hoping comes to Drake is Elder Scrolls 6. 🤞Skyrim was great on the go. I’ll bet ES6 is the same.
 

Yes, I also think it's very impressive that we can get PS4/PS4 Pro-level games from mobile hardware.
 

I don't think TES6 is leaving the Xbox ecosystem for reasons unrelated to power.
 

ES6 being Xbox exclusive is more likely going to be a blocker than Switch 2 performance I’m guessing. Otherwise 100% with you. I’ve been thinking for quite some time that around PS4 Pro quality on Switch is my dream hardware for at least another 5-10 years.
 
Aren't these the peak power consumption figures for the PVA and DLA? Are both a requirement for Drake?
What exactly do you mean by "other settings"? Also, what about the remaining components on a Drake-based Switch, such as the display, speakers, etc.?
That is Jetson Orin's power estimation tool. Since Drake doesn't use those components, you only want to pay attention to the CPU and GPU clocks and their effect on max power draw. The actual power draw in something like Zelda, where nothing is loaded at 100% all of the time, is the average draw; but building the device around the max power available is the only way to manage heat and keep the components within their power limits.
 
Oh jeez I forgot Bethesda was acquired by MS. 😔 I need to go to bed.
 

But how do we deduce this information, and why do we think it can be so? What clues do we have?
I want to ask the reasons before getting excited, since I don't want to be disappointed later :cry:.
 

I wouldn't take Nintendo's NSO efforts as an example of what they will or won't do. After all, it's pretty much the main selling point of their subscription outside of online play and cloud saves. And there's a full team dedicated to this stuff.

I don't get what the issue is here; reading your post, we both agree patches are needed. I say they won't do many outside of their evergreens, as there's pretty much no financial incentive to do it for games that aren't selling anymore; as long as they run without issue on Drake, they will likely say "good enough".

Ending this topic for me now; ultimately we'll soon see what they do and what they don't.
 
And... what, new hardware and its associated software capabilities are being developed by some ragtag group of nobodies inside Nintendo? Or is it more likely that they have every available engineer on deck right now?
Besides, you work against your own argument by calling online play in retro games a selling point, because... what do you think enhancements to Switch games on new hardware would be, if not a selling point? So clearly, if developing something drives consumers to part with their cash, they'll do it, especially when the labour to achieve it is known to be relatively minimal compared to other endeavours they've undertaken for the same reason.
Few things:
Calling a handful of kilobytes a "patch" is overselling it.
What I said was that this was the absolute minimum possible effort that could achieve the goal, but don't rule out that the company building the hardware could do something more. For example, instead of patching every game individually, given how few bytes these rescaling profiles take up, they could synthesize a "rescaling profile" for every game released on Switch and simply include that as a reference table in the OS itself. Et voila, no patches. But for all I know (I'm not a hardware or software engineer), there could be an even more elegant solution available, and we should expect the people who make the hardware and know the software inside out to be more than capable of finding it.
And I'd think driving early adoption to new hardware (so as to generate more opportunity to sell more software exclusives in its earlier years) counts as a "financial incentive".
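Purely to illustrate the shape of that rescaling-profile idea (nothing like this is confirmed to exist), an OS-resident table really could be tiny; every name and field below is hypothetical:

```python
# Hypothetical OS-side rescaling table, a few bytes per title.
# All identifiers and fields here are invented for illustration only.
RESCALE_PROFILES = {
    "title-id-a": {"docked_target": (3840, 2160), "use_dlss": True},
    "title-id-b": {"docked_target": (2560, 1440), "use_dlss": False},
}

def profile_for(title_id: str) -> dict:
    # Unknown titles fall back to running exactly as on original hardware.
    return RESCALE_PROFILES.get(title_id, {"docked_target": None, "use_dlss": False})
```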
 
8nm is seriously disappointing.

Wait what? Do I need a "while you were asleep" update? What happened?


Online play as a selling point was meant generally, as you need NSO to play any Switch game online (outside of F2P, iirc).

Also, early adoption of this thing will be driven purely by "it's a more powerful Switch", as the first people to fight scalpers for it will be enthusiasts. And the part of that group that needs more than just "it's more powerful" will probably have a new 1st-party title (a remake or a new game) and a couple of Drake-exclusive 3rd-party games, maybe even some that are already available as cloud games right now.

Finally, the biggest incentive for Drake has already been announced, titled and given a date as of last week.

I also think doing patches for evergreen games, and maybe more recent releases like Xenoblade 3 or Bayonetta 3, is more than enough for the average customer.

Let's agree to disagree and move on.
 
The Steam Deck is 88 GB/s and Apple's best-in-industry M2 is 100GB/s. 102GB/s is not low at all. The PS4 is a 200W console.
 
I'd really be wary of assuming they'll increase the battery size. I think it's safer to stick with the ~4-5W range Erista had for estimations.

This Switch should see improvements in screen power efficiency at the very least. I'm not sure about RAM or storage efficiency.

Just pessimism it seems, nothing happened.
Not quite true, look at the post I quoted.
 
All I can tell you is that your understanding is incomplete. Even with BC requiring extra work, it's not nearly that simple to run games at higher resolutions/framerates than intended. It's just asking for trouble and a whole host of issues you'd never have run into otherwise.

The best case scenario is game specific patches and something equivalent to the PS4 Pro's boost mode for everything else.
 
Those are the PVA and DLA peak power consumption figures; both can be fully disabled. "Other settings" covers USB, PCIe, Ethernet, camera, and the video decode and encode hardware blocks.

And there is about a 13W difference between Orin Nano power consumption and AGX board power consumption at Orin Nano settings.
 
PS4 is built on 2011 technology; its CPU wasn't very bandwidth-efficient, and its GPU was even worse. With the ARM A78C, 102GB/s is one of the best bandwidths available for this CPU; it will sip that bandwidth and leave a lot of room for the 2020-era Nvidia GPU architecture. Then there are tools like DLSS, which should use less bandwidth than natively rendering at the higher resolutions it outputs. There is also the idea that Drake will have fast storage attached and use RTX I/O to get maximum speeds from it, giving the device extra memory space to address if needed. I'd also suggest that the Tegra cache could be particularly large; since the GPU and CPU share cache, we could see something like 9-20MB of L2/L3 cache available, which would ease a lot of the dependence on main memory bandwidth. Even if it's just 102GB/s, I'd say it's comparable to PS4's bandwidth, with an edge to Drake thanks to the GPU tricks added over the 11 years since PS4's GPU was designed.
A brand-new poster found T239 in the public Linux kernel, which gave us the CPU type and core count. We've had the GPU type, core count and memory bandwidth since March 1st, as they were in the Nvidia hack that leaked the DLSS source code. We also got confirmation that the chip is complete, since a public Linux kernel update wouldn't exist for a virtual chip. Finally, we got Orin's CPU/GPU power draw at various clocks, and we know Erista's original SoC power draw. Basically, we have everything we need to estimate clocks, and we know the architectures, so these specs should be close to the actual Switch "2" spec, like very close.
We don't know that it is 8nm. But these clocks being feasible is actually a big surprise; they're in line with what we've been discussing for specs for a while and above the minimum 2.36TFLOPs docked we were working with. 3TFLOPs+ looks very likely even on 8nm.
I didn't really explain this well, but I'm using 6W as the max power draw for the SoC, i.e. 100% CPU and 100% GPU usage at the same time. Erista's ~5.3W was measured during Zelda BotW; Erista should draw at least 6W with both CPU and GPU at 100%. Sorry, the post was getting long and I didn't think I needed to add this, since the average power draw for those clocks was in the list, and that should be closer to Erista's draw during Zelda: ~5.2W average for Orin at those clocks. Remember Drake should be a little more power-efficient too, so call it about 5W average at these clocks.
Thanks again for posting the Orin power chart stuff, it made the spec estimates easy to arrive at.
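Putting those figures side by side, here's the budget check implied by the last few posts; the 6W ceiling, ~2.2W CPU max (1728MHz across 7 cores) and ~3.6W GPU max (460MHz) are all estimates read off Orin's tool, not measured Drake numbers:

```python
SOC_BUDGET_W = 6.0  # assumed Erista-like SoC ceiling from the posts above

max_draw_w = {
    "CPU @ 1728 MHz (7 cores)": 2.2,  # estimate from Orin's power tool
    "GPU @ 460 MHz (12 SM)":    3.6,  # estimate from Orin's power tool
}
total = sum(max_draw_w.values())
print(f"worst case: {total:.1f}W of {SOC_BUDGET_W:.1f}W "
      f"({SOC_BUDGET_W - total:.1f}W headroom)")
# Average in-game draw (the Zelda-style case above) should land nearer ~5.2W,
# with Drake expected to do slightly better than Orin at the same clocks.
```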
 
It should be noted that the pictured information is from Nvidia's power estimation tool for T234. The CPU cores on T234 are A78AE in different cluster configurations from the single cluster of A78C we believe T239 will have. And the GPU numbers were obtained by subtracting the values for a 4 SM configuration from those for a 16 SM configuration, since a 12 SM configuration wasn't available. Additionally, I think the fact that GA10F is singled out as the only Ampere GPU to support FLCG (first-level clock gating) suggests that parts of the T239 SoC design were reconsidered from T234 with improved power consumption in mind.

All that said, I think the numbers are useful for rough comparison. But all the caveats are still important to remember.

Thanks for posting these, the power consumption is lower than I'd thought. However, I don't think "subtracting the values for a 4 SM configuration from those for a 16 SM configuration" is quite right to get an estimated power consumption for a 12 SM configuration, as you're subtracting out everything that doesn't scale with the SM count, including the command processor, ROPs, etc. It would probably make more sense to take the halfway point between the 16 SM power consumption and the 8 SM power consumption (if available). This would still include some extra ROPs over T239, and some extra GPC logic I believe, but useful for a rough comparison.
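In other words, the suggestion is to interpolate over SM count rather than subtract, so the fixed costs (command processor, ROPs, GPC logic) stay in the estimate. A sketch with placeholder wattages; the real 8 SM and 16 SM readings would come from Nvidia's tool:

```python
def power_12sm(p8_w: float, p16_w: float) -> float:
    # Linear interpolation between the 8 SM and 16 SM readings keeps the
    # non-SM fixed costs in the total, unlike subtracting the 4 SM figure.
    return p8_w + (p16_w - p8_w) * (12 - 8) / (16 - 8)

# Placeholder inputs purely for illustration:
print(power_12sm(p8_w=2.8, p16_w=4.4))  # -> 3.6 (the halfway point)
```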
 
So Yuzu devs are expected to clown Nintendo? Huh. Neat.
 
So the long-awaited (by me at least) Jetson Nano board turned out to be just another binned Orin chip... should have known.
The memory is curiously downtuned to 34GB/s for the 64-bit version and 68GB/s for the 128-bit bus version.

edit: don't mind me, I'm just catching up on things
 
This is actually why I used the max power draw: the CPU and GPU won't both be at 100% usage at the same time, Drake should be more power-efficient than Orin, and a single cluster should consume less power than two clusters. With 6W as the limit, and the GPU taking only ~3.6W at 460MHz under max load, it should be safe for a rough estimate. I'd also point out that the average CPU power draw at 1728MHz is 1.8W, while 4 A57 cores on 20nm drew 1.83W under load in AnandTech's testing. And that 1.8W is with all 8 cores clocked at 1728MHz, where one will be reserved for the OS and could be clocked much lower, freeing up ~0.3W.
 

Really enjoy reading your posts, so thanks for your insight.

We are currently working on the assumption that the docked power draw will be similar to Erista's, but how likely is it that Nintendo could use a more efficient cooling solution and push higher wattage in docked mode? Or are they already at the limit, given the Switch's form factor?

I think when there was the Splatoon-model funcle leak, the other backplate being tested had different vent placement, which implies the cooling has been redesigned to some degree. Just thinking that if they can get a better cooling solution, they could squeeze a bit more out of that big, wide GPU.
 
That depends on how fast they want you to charge the battery. There is room for a little over 30 watts TDP, but if they let you charge the battery at 15 watts while gaming, that would be the entire TDP budget. They could limit charging to 5 watts like the current Switch, but they probably wouldn't use the entire TDP in that case, and you'd be left with maybe a 20-watt power draw when docked. However, because of memory bandwidth, pushing past say 4TFLOPs is only going to get you so far, and the CPU is locked to portable-mode clocks too, so that shouldn't really increase. We will see, but I'd suggest they will probably just increase charging power to 15 watts when docked and keep the system itself drawing 15 watts or less.
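The trade-off being described is just one wall budget split two ways. A tiny sketch using the post's own rough figures (a ~30W dock budget, and either 5W or 15W reserved for charging):

```python
WALL_BUDGET_W = 30.0  # the post's rough "a little over 30 watts" dock budget

for charge_w in (5.0, 15.0):
    left_w = WALL_BUDGET_W - charge_w
    print(f"{charge_w:.0f}W charging -> {left_w:.0f}W left for the system")
# 15W charging leaves 15W for the system (the split suggested above);
# 5W charging leaves 25W on paper, though the post expects ~20W actual draw.
```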
 

