• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I've tried to comprehend all the tech details but I'm somewhat lost.

Do the experts here still believe this leak is stronger than what was expected, or are expectations down again because it might not hit good performance and can't use DLSS to its fullest due to presumably low clock speeds?
The silicon seems to be much larger. 8 SMs were the most optimistic projection; we are getting 12 SMs. As for power, we need information about the clocks, but the silicon seems to be much better than expected.
 
The only reason I'd want 1080p is if it'd make it easier for Nintendo to enable a technology like VRR, which would be both a performance and battery saving measure, as apparently there haven't been 720p VRR screens up to this point (though that doesn't mean it's impossible either).

I also doubt Nintendo is seriously considering VR at this point, as it would be a massive undertaking to support beyond the gimmicky releases they've made so far (Labo and additional modes for first-party titles).
 
Then again, the Steam Deck is showing us what's possible with SD cards by having a faster I/O bus to fully utilize the storage medium and what it's capable of...
Don't expect that to last for the next 5 years, though. In fact, considering how Steam Deck puts limiters and warnings on what will run on it based on optimal usage, expect a number of future games to only be deemed fit to run on faster internal storage.
 
In theory, could the 12 SMs point towards a VR device that would have all its guts embedded in it (as in, no supplementary device needed)?
If that is the case, I would like to see the hardware side of things being leaked (Sorry for being a dick) so we can see the screen's resolution (or if there is a screen at all).

If there is a screen, and if it is 4K (or 1440p), then that would be a clear indication that Nintendo is gunning for VR. DLSS is already confirmed, so they might be tempted to go this way. A 4K or 1440p screen would not be a problem for displaying games at a native 720p in non-VR scenarios, because both are integer multiples of 720p. So no scaling artefacts.
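A quick sanity check of the integer-multiple point, for anyone who wants to see the numbers (panel resolutions here are just the common standards, nothing leaked):

```python
# Which common panel resolutions are clean integer multiples of a
# native 720p image (i.e. no scaling artefacts when upscaling)?
base_w, base_h = 1280, 720

panels = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

for name, (w, h) in panels.items():
    scale_w, scale_h = w / base_w, h / base_h
    clean = scale_w.is_integer() and scale_w == scale_h
    print(f"{name}: {scale_w:g}x per axis, integer scale: {clean}")
```

1080p is the odd one out at 1.5x per axis, which is exactly where the scaling artefacts would come from.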
 
I don't think VR will be much of a consideration for the screen choice when it comes to this new device.

If Nintendo wants to seriously attempt VR, I think they would release an accessory to support it rather than go via Labo VR again.

There was a job listing a while ago, which I think Dahkil posted and Thraktor commented on, that alluded to Nintendo playing with wireless video streaming again. Maybe they could produce a VR add-on using something akin to Wii U wireless video streaming, but obviously at much higher fidelity. You really need cameras or sensors to do VR well, hence I believe they would just produce hardware specific for it in the form of a Switch 2 VR pack or something.

For screen choice, I am team 720p OLED. There's not much to gain from going up to 1080p in handheld, but there is a lot more to lose in the form of battery life, cost to produce the unit, ease of development across the two Switch SKUs, etc.

You could even argue a loss of visual fidelity as well, since the extra grunt required to do 1080p could be better used pushing effect, model, and texture quality.
 
As far as I'm concerned, a Series S version with minimal/no downgrades, besides rendering at 540~480p before DLSS to 720p, is the sweet spot for portable. If there's headroom for more, then I'd rather they use it to boost CPU and storage speed, to reduce bottlenecks.
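For reference, here's where that 480p figure comes from, using NVIDIA's publicly documented per-axis DLSS scale factors (Quality 1/1.5, Balanced ~1/1.724, Performance 1/2, Ultra Performance 1/3); the target resolutions are just the ones being discussed in this thread:

```python
# Per-axis render resolution for each DLSS mode, using NVIDIA's
# documented scale factors.
MODES = {
    "Quality": 1.5,
    "Balanced": 1.724,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w, out_h, factor):
    """Input resolution DLSS renders at for a given output and mode."""
    return round(out_w / factor), round(out_h / factor)

for name, factor in MODES.items():
    handheld = render_resolution(1280, 720, factor)
    docked = render_resolution(3840, 2160, factor)
    print(f"{name}: {handheld} -> 720p, {docked} -> 4K")
```

Quality mode to a 720p output renders at 853x480, which lines up with the 480p end of the range above; Ultra Performance to 4K renders at exactly 1280x720.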

If they want to take VR beyond Labo VR, they should make a standalone headset model rather than strap a tablet to our faces.
 
I guess it’s safe to say that it will be 1440/60 or 4K/30 with DLSS?

Will devs go with a performance mode (1440p/60), quality mode (1440p/30) and resolution mode (4K/30)?

Also is it possible for Nintendo to do what Microsoft is doing with BC running Switch games above 1080p/30? Or is it impossible due to patents and unwillingness to do so?
 
I guess it’s safe to say that it will be 1440/60 or 4K/30 with DLSS?

Will devs go with a performance mode (1440p/60), quality mode (1440p/30) and resolution mode (4K/30)?

Also is it possible for Nintendo to do what Microsoft is doing with BC running Switch games above 1080p/30? Or is it impossible due to patents and unwillingness to do so?
It’s not safe to say anything regarding DLSS performance just yet.


Enhanced BC on Xbox is just patching games.
 
I guess it’s safe to say that it will be 1440/60 or 4K/30 with DLSS?

Will devs go with a performance mode (1440p/60), quality mode (1440p/30) and resolution mode (4K/30)?
I think it can do 4K/60 using Ultra Performance/Performance DLSS.

It’s not safe to say anything regarding DLSS performance just yet.
True, but I don't think DLSS is that inflexible regarding upscaling resolution. The end image quality will be affected, however.
 
I guess it’s safe to say that it will be 1440/60 or 4K/30 with DLSS?

Will devs go with a performance mode (1440p/60), quality mode (1440p/30) and resolution mode (4K/30)?

Also is it possible for Nintendo to do what Microsoft is doing with BC running Switch games above 1080p/30? Or is it impossible due to patents and unwillingness to do so?
If you are referring to FPS Boost, I don't think we should expect BC as fully featured as MS's. That's a tough act to follow.

At best, framerate and dynamic res will max out at their targets for unpatched games.

Features like FPS Boost/Auto HDR are probably technically possible if Nintendo really wants to invest in them, but MS really went above and beyond, and that's not something that should be expected.
 
Which TOPS number in that one post is the relevant one for DLSS? Because 20 was the napkin math number Digital Foundry came up with last year for a 4K Switch, and that was when we assumed the system had a much smaller GPU. So if the TOPS number that's almost 50 is the important one, then using that DF video as reference, 4K60 would be very doable.
 
Which TOPS number in that one post is the relevant one for DLSS? Because 20 was the napkin math number Digital Foundry came up with last year for a 4K Switch, and that was when we assumed the system had a much smaller GPU. So if the TOPS number that's almost 50 is the important one, then using that DF video as reference, 4K60 would be very doable.
It is the near 50 TOPS value for DLSS, since it uses INT8 and INT4. And at those frequencies, 4K60 would be possible.
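For anyone who wants to check the napkin math: here's a rough sketch of where a near-50 figure could come from, assuming GA10x-style SM rates from the Ampere whitepaper (512 dense INT8 MACs per SM per clock, doubled for INT4, doubled again with structured sparsity). The 1 GHz clock is purely a placeholder, since the leak says nothing about clocks:

```python
# Napkin math for Drake's tensor throughput, assuming Ampere (GA10x)
# SM rates: 512 dense INT8 MACs/SM/clock, 2x for INT4, 2x for
# structured sparsity. Each MAC counts as 2 ops.
def tensor_tops(sms, clock_ghz, macs_per_sm_clk=512, sparsity=1):
    ops_per_sec = sms * macs_per_sm_clk * sparsity * 2 * clock_ghz * 1e9
    return ops_per_sec / 1e12

CLOCK = 1.0  # GHz -- hypothetical placeholder, not a leaked number

print(f"INT8 dense:  {tensor_tops(12, CLOCK):.1f} TOPS")
print(f"INT8 sparse: {tensor_tops(12, CLOCK, sparsity=2):.1f} TOPS")
print(f"INT4 sparse: {tensor_tops(12, CLOCK, macs_per_sm_clk=1024, sparsity=2):.1f} TOPS")
```

At that placeholder clock, INT4 with sparsity lands around 49 TOPS, which is in the ballpark of the "almost 50" number being discussed; the real figure obviously shifts linearly with whatever clock Nintendo actually picks.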
 
Qualcomm has apparently validated Samsung's LPDDR5X for use on future Snapdragon SoCs. So whether LPDDR5X is used on the DLSS model* depends on how soon Micron or Samsung start mass manufacturing LPDDR5X. (I'm leaning towards LPDDR5 being used for the DLSS model* so far.)

On an off topic note, if anyone has any thread title change suggestion, please send me a conversation since I won't have much time to check every post here.
 
Speaking from my technical noob perspective, is it maybe possible that this next Switch has to be slightly larger to fit everything in? It also sounds to me like it would be heavier.
OLED = Switch Wide
New = Switch Thicc

If they want to maintain compatibility with current docks, it doesn't look like they can do much, though, looking at my original dock at least. Maybe an extra millimeter or two of thickness between the rails on the back side. Of course, when something is only so thick in the first place, every millimeter counts.
Yeah, as was mentioned by @fwd-bwd, we're not talking about some exotic storage method, we're talking about the removable card version of the storage solution used in half of the Android smartphones released in the past few years.
Also, there's no gambling here; there's already a market for UFS Cards and their readers, and it's the auto industry. Nvidia's Jetson AGX Xavier and up include a dedicated UPHY I/O lane (lane #9 in the Orin model, specifically) on the default carrier board, so auto makers can install on-board computer software and store black box data on a removable card. This option has been available to that industry since 2020, and no one would go through with adding an I/O option unless it would be used by a sizeable part of their customers.

If Nintendo wants to do with UFS Card 3.0 what Sony did with DVD and Blu-Ray (make it mass marketable by providing one of the first major use cases for a new technology) but for a whole hell of a lot less R&D and manufacturing investment, who are we to say no? We can't lambast Nintendo for not being up on technology and then wince when it's suggested that they implement a functional technology that ticks every box (small form factor, fast read/write, rock-bottom power consumption, great price per GB for the consumer, royalty-free tech) to improve user experience for the better.
UFS cards would be a more believable option if the one party that made them, which also happens to make 200M+ phones a year, actually made an effort to push and support them.
 
UFS cards would be a more believable option if the one party that made them, which also happens to make 200M+ phones a year, actually made an effort to push and support them.
UFS is an open standard. So long as you follow the spec and certify it, anyone can make the cards and card readers. If SanDisk felt they made really good money as Nintendo's officially licensed microSD card supplier and Nintendo wanted to go UFS, you best believe they'd see money to be made and produce UFS cards.
Never mind that Western Digital, the parent company of SanDisk, has been rapidly increasing embedded UFS production; it's not a stretch that they could fabricate UFS cards to the 3.0 spec as well, if they knew the size of the market for them and it justified doing so (which they would know).

Also, the market for UFS 1.0 cards =/= the market for UFS 3.0 cards, so comparing those two is rather like comparing the 1st-gen SD card market to the SD card market three years later, when major speed improvements had been made and it was more widely desired by manufacturers.
 
If Nintendo is going to have a substantial period of cross-gen games, I feel like it would make a lot of sense for them to push the faster loading angle, in addition to graphics upgrades. As great as BotW is, the loading screens are definitely one of the biggest negatives and can break up the flow of the game. Imagine a Super Switch BotW 2 with loading screens that are a couple seconds long instead of 10.
 
Talking about internal memory and memory cards, I am pretty sure that this new Nintendo hardware will also have microSD support for memory expansion,
and I am willing to bet that internal memory will be 128GB.
 
What are the spec differences between Orin and Drake? Both are Tegra chips, right?
IIRC Orin has 16 SMs while Drake has 12. Orin will also use A78AE CPU cores which are large and primarily for automotive purposes, Drake will probably use A78C or just plain A78s.
 
This thing is sounding like a beast. I think the screen should be 720p, but considering how much more powerful this device seems to be than expected, plus DLSS, I wouldn't be surprised if Nintendo pushed for a 1440p screen in order to do VR better. If Nintendo does decide to go the $500+ price point route for a SKU, the VR capabilities would be a great way to justify the price increase. They would then be able to market four form factors (docked, handheld, tabletop, and VR mode).

$399 Switch 4K/2-720P screen
$599 Switch 4K VR/2 VR-1440P screen, VR face mask, more storage, Pro joycons etc.
 
I do agree Samsung's advanced process nodes (7LPP and more advanced) are the more likely choice in comparison to TSMC's advanced process nodes (N6 and more advanced). Another reason I've mentioned TSMC's N6 process node is that, assuming Drake's a custom variant of Orin, I imagine Nintendo and Nvidia probably need to pay more money to redesign Drake with EUV lithography in mind, regardless of which foundry they choose to work with for the fabrication of Drake, considering Orin's probably fabricated using Samsung's 8N process node.

Huh, interesting. I previously thought Nvidia could use TSMC's N5 process node for the high-end and mid-range Ada GPUs and Samsung's 5LPP process node for the entry-level Ada GPUs, especially with demand for TSMC's N5 process node being absurdly high. But with kopite7kimi saying that all Ada GPUs are fabricated using TSMC's N5 process node, I don't know if the capacity Nvidia managed to secure for TSMC's N5 process node is even enough, even with all the premiums Nvidia has to pay to TSMC.


And smaller form factor M.2 NVMe SSDs also consume more power in comparison to UFS and SD Express cards.

Yeah, I had expected the same, with lower-end Ada GPUs going to Samsung, similar to Pascal, but apparently not. I'd still expect there to be a slow roll out of 4000 series cards so they can keep selling 3000 series for as long as possible. I'd say the initial release will probably only be the 4080/4080Ti/4090, which will be absurdly expensive, and they'll keep the 3000 lineup around for sub-$1000 cards for a while yet, only gradually releasing the rest of the 4000 range.

Thanks. I'm aware of that, and of the fact that Orin also contains RT cores. The question is whether these RT cores would be useful enough within the power envelope that Nintendo would be targeting. So there's a (hopefully small) chance of them not using it to the full extent but limiting it to 3D audio, for example.

I'm a lot more confident of usable RT now than I was before. Firstly, obviously it's a lot more RT cores than I'd expected, even at a very low clock. Secondly it seems like we're getting a big increase in GPU L2 cache. There's a reason AMD introduced their "infinity cache" at the same time as they introduced ray tracing support, as BVH traversal, unlike pretty much anything else on a GPU, is heavily latency-bound when it comes to memory. Having a big cache can significantly reduce stalls while waiting for chunks of BVH data to be returned from RAM, so you get better utilisation of your RT hardware. Of course the claimed 4MB of L2 isn't nearly as much as AMD's RX6000 series, but it's not small either, it's the same amount of L2 as the RTX 3070Ti, and almost as much as the Xbox Series X's GPU (5MB).

In fact, do we have any reliable info on the GPU caches on Drake? I'm sure I've seen someone post 4MB L2 and ~2.3MB L1 somewhere in this thread, but was that speculation, or something that's leaked?

Not sure if this has been discussed before. The OLED model's dock is capable of supplying more power to the Switch console than the previous dock. As you can see in the photo below, the old dock takes in 39W and passes 18W to the Switch; the remaining 21W is reserved for the USB ports. The new dock, however, is capable of passing the full 39W to the console—USB be damned.

[image: power spec labels on the original dock vs. the OLED model's dock]


Since the OLED model doesn't require more power than the OG or red box Switch, this seems to be a future proofing measure. It might indicate that Nintendo is considering a higher power envelope for the next Switch model, and/or a fast charge feature. 12 SMs? High energy density battery? Sure, why not. 🤤

Well spotted! I'm still of the opinion that the new model will use the same dock as the OLED model, and this definitely points in that direction, and also suggests a higher power draw.

So do we know if the 12 SMs on the T239 are going to actually be used by the system itself? The API wouldn't even bother mentioning 12 if they weren't all accessible and usable, right?

Or is it likely Nintendo will have some of them disabled?

This is a good point. Actually we don't know how many SMs will be enabled, because as far as I'm aware these drivers only deal with full GPU dies (eg GA102, GA104, etc.) and the actual products based on those dies often have SMs disabled. I hadn't really considered this before, because I was expecting a chip of a similar size to the original TX1, but with 12 SMs we're possibly looking at a large enough die that they may have to disable a couple of SMs for yields. The Series S, for example, also has a GPU with 1536 cores on the die, but disables a few for yields, leaving it with 1280 usable. We could see a similar situation here, with 2 SMs disabled and 10 usable. Of course this depends on manufacturing process, and as we don't even know the clock speeds it's hard to say how much impact this would have on actual performance.

It is the near 50 TOPS value for DLSS, since it uses INT8 and INT4. And at those frequencies, 4K60 would be possible.

Do we have confirmation of this? I don't think Nvidia have ever confirmed what precision is used for DLSS.
 
Great to read that the specs are better than anticipated.
I do hope it also means this new system (successor or Pro) will release this year or early 2023 at the latest.

A late 2023/2024 release would downplay those specs, as they'd be a little dated by then.
 
This thing is sounding like a beast. I think the screen should be 720p, but considering how much more powerful this device seems to be than expected, plus DLSS, I wouldn't be surprised if Nintendo pushed for a 1440p screen in order to do VR better. If Nintendo does decide to go the $500+ price point route for a SKU, the VR capabilities would be a great way to justify the price increase. They would then be able to market four form factors (docked, handheld, tabletop, and VR mode).

$399 Switch 4K/2-720P screen
$599 Switch 4K VR/2 VR-1440P screen, VR face mask, more storage, Pro joycons etc.
I feel like this would complicate development for portable mode though, having two screen targets to worry about.
 
IIRC Orin has 16 SMs while Drake has 12. Orin will also use A78AE CPU cores which are large and primarily for automotive purposes, Drake will probably use A78C or just plain A78s.
Just to add, there's practically no difference between the Cortex-A78 and the Cortex-A78C, outside of the Cortex-A78 supporting a max of 4 CPU cores per cluster vs a max of 8 CPU cores per cluster for the Cortex-A78C, and the Cortex-A78 having a max of 4 MB of L3 cache vs a max of 8 MB of L3 cache for the Cortex-A78C.

In fact, do we have any reliable info on the GPU caches on Drake? I'm sure I've seen someone post 4MB L2 and ~2.3MB L1 somewhere in this thread, but was that speculation, or something that's leaked?
I think that's based on Nvidia's diagram of Orin's GPU from the Jetson AGX Orin technical brief.
[image: Orin GPU diagram from the Jetson AGX Orin technical brief]
 
Do we have confirmation of this? I don't think Nvidia have ever confirmed what precision is used for DLSS.
I haven't found anything official; I deduced it from Dictator's video on Switch DLSS, where he uses the INT8 performance of the RTX 2060 as a test case.
 
Part of me is now wondering if the system might have a screen higher res than 720p, both because it’s a more capable device than previously thought and also because a higher res screen could potentially be used for better VR (which the system would be capable of now).

I'm of the mind that we've already seen the screen for the Switch 2 out in the open. I feel like the main purpose of the Switch OLED's screen was securing screens for the next Switch at a great deal while it was on the table, with the OLED model itself being a secondary result while the main product is prepared.
 
This is a good point. Actually we don't know how many SMs will be enabled, because as far as I'm aware these drivers only deal with full GPU dies (eg GA102, GA104, etc.) and the actual products based on those dies often have SMs disabled. I hadn't really considered this before, because I was expecting a chip of a similar size to the original TX1, but with 12 SMs we're possibly looking at a large enough die that they may have to disable a couple of SMs for yields. The Series S, for example, also has a GPU with 1536 cores on the die, but disables a few for yields, leaving it with 1280 usable. We could see a similar situation here, with 2 SMs disabled and 10 usable. Of course this depends on manufacturing process, and as we don't even know the clock speeds it's hard to say how much impact this would have on actual performance.



Do we have confirmation of this? I don't think Nvidia have ever confirmed what precision is used for DLSS.

1.5GHz for the Series S is still a pretty high clock on 7nm, though; would Nintendo even need to disable SMs if the GPU were clocked well under the theoretical maximum clock?

Also, I would have to dig through the whitepapers again, but I thought I remembered Nvidia discussing that the Ultra Performance DLSS mode was made possible because of sparsity calculations.
 
In fact, do we have any reliable info on the GPU caches on Drake? I'm sure I've seen someone post 4MB L2 and ~2.3MB L1 somewhere in this thread, but was that speculation, or something that's leaked?
4 MB L2 is stated in the leak, although it's from a device modeling/simulation layer and not a spec sheet or anything. The L1 is not stated anywhere that I've seen but I guess was based on Orin.
This is a good point. Actually we don't know how many SMs will be enabled, because as far as I'm aware these drivers only deal with full GPU dies (eg GA102, GA104, etc.) and the actual products based on those dies often have SMs disabled. I hadn't really considered this before, because I was expecting a chip of a similar size to the original TX1, but with 12 SMs we're possibly looking at a large enough die that they may have to disable a couple of SMs for yields. The Series S, for example, also has a GPU with 1536 cores on the die, but disables a few for yields, leaving it with 1280 usable. We could see a similar situation here, with 2 SMs disabled and 10 usable. Of course this depends on manufacturing process, and as we don't even know the clock speeds it's hard to say how much impact this would have on actual performance.
If they were going to disable SMs for yields, would all the drivers be using a constant of 12? I guess it's not impossible they adjust those down later if manufacturing goes a certain way, but the GPC/TPC/SM/etc. numbers show up often and consistently in the leak.
 
If they were going to disable SMs for yields, would all the drivers be using a constant of 12? I guess it's not impossible they adjust those down later if manufacturing goes a certain way, but the GPC/TPC/SM/etc. numbers show up often and consistently in the leak.

No idea. Although the GPC count should never change; it's just 1. Say they go by Thraktor's example of disabling 2 SMs. Isn't this a random odds thing? They don't control which SMs are lost to manufacturing defects, and just draw the line at the biggest yield at an acceptable performance level: toss any die with more than 2 defective SMs, keep the 10s and 11s, and set all the 12s and 11s down to 10 SMs for parity?

So it could be 2 SMs on the same TPC, resulting in 5 functional TPCs, but it could also be one SM on each of two different TPCs, so there would still be 6 of them.

Or maybe instead of tossing them, save them for a portable-only Switch 2 Lite.
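That "random odds" intuition can be put into numbers with a simple binomial model. Everything here is hypothetical: assume each of the 12 SMs independently fails with some defect probability p, and ask what fraction of dice survive with at most 2 bad SMs:

```python
# Toy binning model: if each of 12 SMs independently fails with
# probability p, what fraction of dice are perfect, and what fraction
# can still ship as a 10-SM part (at most 2 defective SMs)?
from math import comb

def die_yield(total_sms, max_defects, p):
    # Binomial: probability of at most max_defects failures.
    return sum(
        comb(total_sms, k) * p**k * (1 - p)**(total_sms - k)
        for k in range(max_defects + 1)
    )

for p in (0.02, 0.05, 0.10):
    perfect = die_yield(12, 0, p)
    with_spares = die_yield(12, 2, p)
    print(f"p={p:.2f}: perfect 12-SM dice {perfect:.1%}, "
          f"usable 10-SM parts {with_spares:.1%}")
```

Even a modest per-SM defect rate makes the 10-SM bin noticeably easier to fill than a perfect 12-SM part, which is the whole argument for disabling a couple of SMs; which TPC the bad SMs land on is a separate roll of the dice.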
 
No idea. Although the GPC count should never change; it's just 1. Say they go by Thraktor's example of disabling 2 SMs. Isn't this a random odds thing? They don't control which SMs are lost to manufacturing defects, and just draw the line at the biggest yield at an acceptable performance level: toss any die with more than 2 defective SMs, keep the 10s and 11s, and set all the 12s and 11s down to 10 SMs for parity?

So it could be 2 SMs on the same TPC, resulting in 5 functional TPCs, but it could also be one SM on each of two different TPCs, so there would still be 6 of them.

Or maybe instead of tossing them, save them for a portable-only Switch 2 Lite.
Wouldn't the software and documentation then simply refer to 10 SMs? The source code itself probably isn't able to refer to each specific SM on a die (like A1 through D3, or however they're arranged).
 
Wouldn't the software and documentation then simply refer to 10 SMs? The source code itself probably isn't able to refer to each specific SM on a die (like A1 through D3, or however they're arranged).

I would think so. But I don't even know what manufacturer or node they are going with, or the expected yield for whatever manufacturer or node it's gonna be.

Maybe they don't quite know yet either?
 
My argument against a higher screen res is image quality of games in backward compatibility mode. Even in docked profile, many games top out at 720p, and playing those on such a screen will suffer from visual artifacts due to upscaling. Of course some developers will go back and patch their current Switch titles to take advantage of the new hardware, but it's hardly guaranteed.

There's also the option of a half-step up in resolution. 1600x900 at the size of the OLED's screen would mean pixel-perfect BC renders at about the same size as the Switch Lite's screen, and it would be only ~1.5x the pixels to draw for new games to reach native res. With an OLED screen it would work really well, since the parts of the screen not in use would just look like a large bezel rather than dark grey backlight city.

I agree 720p is enough and the most likely, but if they do want to upgrade the resolution 900p seems like a good compromise over going all the way to 1080p.
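The pixel math behind the 900p compromise, for reference:

```python
# Pixel counts of candidate handheld panels relative to 1280x720.
BASE = 1280 * 720  # 921,600 px

for name, (w, h) in {
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
}.items():
    pixels = w * h
    print(f"{name}: {pixels:,} px = {pixels / BASE:.4g}x the pixels of 720p")
```

900p is ~1.56x the pixels of 720p, versus 2.25x for 1080p, so the extra GPU cost of rendering natively at 900p is a lot gentler than the jump all the way to 1080p.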
 
I feel like this would complicate development for portable mode though, having two screen targets to worry about.

I expect the devs to just target 720p, since it upscales evenly to a 1440p screen.

Also, I wouldn't be surprised if in 2024 Nintendo launched a stationary console using the new specs, as a cheaper option or to allow the full extent of the chip to be used for more performance.
 
I'm much, much more in favor of 1080p for the new model, but for VR, 1080p is also really low. Also, anything that holds a Switch-sized unit in front of the face is going to be more front-heavy than is comfortable.

Some day soon, all those helium-balloon-powered weight-mitigating mounts for face-worn electronics patents I'm squatting on are gonna pay off!!!!
 
I'm much, much more in favor of 1080p for the new model, but for VR, 1080p is also really low. Also, anything that holds a Switch-sized unit in front of the face is going to be more front-heavy than is comfortable.

There are third-party headsets that exist for the Switch already for VR purposes, so hopefully Nintendo would find a way to make one that is comfortable for most. A Metroid Prime remake with VR support would be a great way to sell the VR-focused SKU as well.
 
Yeah, I think we need to get rid of this idea that somehow we're gonna strap a tablet to our faces for Nintendo VR.
It's too heavy and uncomfortable, and Nintendo knows this... that's why Labo VR makes you hold it up to your face with your hands.
The Switch unit itself is not going to be used for VR.

If Nintendo wanted to explore VR further, they have many other options, including some things others have already mentioned:
Standalone headset with hardware built in
Wireless streaming to the headset
Wired headset connecting to the dock

I think now that VR has sort of been around for a bit, they could make a cheaper headset, new Joy-Cons could be used for input, and they could have VR modes for their big titles and really do some experimenting with Nintendo Land-style VR titles.

I think Nintendo doing VR is an interesting concept, and if they're ready to take the next step in VR development, I think they would just go ahead and make a headset.
 

I think the difference between A78 and A78C is basically academic in this case, as even the cluster and L3 cache size limitations are only going to have an impact if you use ARM's core interconnect, cluster and cache implementation, which Nvidia likely aren't using (they didn't on TX1, TX2 or Xavier, and although there isn't any public confirmation on Orin, I'd suspect it's a custom interconnect too). The distinction between A78 and A78AE is even a bit blurry, as Nvidia aren't using the main feature of A78AE (the ability to run pairs of cores in lock-step), and they seem to switch back and forth between calling Orin's CPU cores A78 or A78AE.

1.5GHz for the Series S is still a pretty high clock on 7nm, though; would Nintendo even need to disable SMs if the GPU were clocked well under the theoretical maximum clock?

Also, I would have to dig through the whitepapers again, but I thought I remembered Nvidia discussing that the Ultra Performance DLSS mode was made possible because of sparsity calculations.

I wouldn't say 1.5GHz is that high for RDNA2, as the desktop cards all comfortably boost over 2GHz, and even the PS5 can hit up to 2.2GHz. Although I'm more thinking about binning for physical defects, rather than binning for clocks.

4 MB L2 is stated in the leak, although it's from a device modeling/simulation layer and not a spec sheet or anything. The L1 is not stated anywhere that I've seen but I guess was based on Orin.

If they were going to disable SMs for yields, would all the drivers be using a constant of 12? I guess it's not impossible they adjust those down later if manufacturing goes a certain way, but the GPC/TPC/SM/etc. numbers show up often and consistently in the leak.

Thanks. Yeah, we can't necessarily guarantee L1 would be the same as Orin (192KB/SM), but it'll likely be either that or the 128KB/SM of desktop Ampere. The L2 is more what I was curious about, and 4MB is really good.

On the number of SMs in drivers, I'm basing this on the data we're seeing about Ada GPUs, where all the info is about full GPU chips like AD102, AD104, etc., not actual consumer GPU products based on binned versions of those chips, like RTX 4080, etc. I assume one of the reasons why the drivers would be developed in this way is to maximise flexibility in choosing bins based on yields once the chips go into production, rather than deciding them before you know what the yields look like.
 
So what exactly can tensor cores do for AI applications that wouldn't be doable (or easily doable) otherwise? More and more I feel like Nintendo is going to experiment with AI stuff, what type of features and applications could we reasonably expect? Do we think it'll heavily tie into AR/cameras?
 
I think the difference between A78 and A78C is basically academic in this case, as even the cluster and L3 cache size limitations are only going to have an impact if you use ARM's core interconnect, cluster and cache implementation, which Nvidia likely aren't using (they didn't on TX1, TX2 or Xavier, and although there isn't any public confirmation on Orin, I'd suspect it's a custom interconnect too). The distinction between A78 and A78AE is even a bit blurry, as Nvidia aren't using the main feature of A78AE (the ability to run pairs of cores in lock-step), and they seem to switch back and forth between calling Orin's CPU cores A78 or A78AE.



I wouldn't say 1.5GHz is that high for RDNA2, as the desktop cards all comfortably boost over 2GHz, and even the PS5 can hit up to 2.2GHz. Although I'm more thinking about binning for physical defects, rather than binning for clocks.



Thanks. Yeah, we can't necessarily guarantee L1 would be the same as Orin (192KB/SM), but it'll likely be either that or the 128KB/SM of desktop Ampere. The L2 is more what I was curious about, and 4MB is really good.

On the number of SMs in drivers, I'm basing this on the data we're seeing about Ada GPUs, where all the info is about full GPU chips like AD102, AD104, etc., not actual consumer GPU products based on binned versions of those chips, like RTX 4080, etc. I assume one of the reasons why the drivers would be developed in this way is to maximise flexibility in choosing bins based on yields once the chips go into production, rather than deciding them before you know what the yields look like.
Well, I doubt they'd pre-bin Drake down from 12 SMs to anything less than 8.
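To put a rough number on the binning argument: here's a back-of-the-envelope sketch of how much a salvage bin lifts usable yield on a 12-SM die. The per-SM defect rate is entirely made up (real defect densities aren't public); this just illustrates why deciding bins after seeing yields is attractive.

```python
# Hedged sketch: usable yield of a 12-SM die under independent per-SM defects.
# The 5% defect probability is a hypothetical placeholder, not a real figure.
from math import comb

def p_at_least_good(total_sm: int, needed: int, p_defect: float) -> float:
    """Probability that at least `needed` of `total_sm` SMs are defect-free."""
    p_good = 1.0 - p_defect
    return sum(
        comb(total_sm, k) * p_good**k * p_defect**(total_sm - k)
        for k in range(needed, total_sm + 1)
    )

# With a hypothetical 5% per-SM defect rate:
full_12 = p_at_least_good(12, 12, 0.05)  # die ships only if all 12 SMs work
binned_8 = p_at_least_good(12, 8, 0.05)  # die is usable with >= 8 working SMs
print(f"12/12 good: {full_12:.3f}, >=8/12 good: {binned_8:.4f}")
```

Even with those toy numbers, the share of dies that are usable jumps from roughly half to nearly all once an 8-SM salvage bin exists, which is why waiting on real yield data before fixing the bins makes sense.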
 
So what exactly can tensor cores do for AI applications that wouldn't be doable (or easily doable) otherwise? More and more I feel like Nintendo is going to experiment with AI stuff, what type of features and applications could we reasonably expect? Do we think it'll heavily tie into AR/cameras?
AR cameras would be my first guess, as Orin chips primarily handle camera input for all that car stuff. Seems like a natural fit.
 
AR cameras would be my first guess, as Orin chips primarily handle camera input for all that car stuff. Seems like a natural fit.
So my question is: what could a theoretical Face Raiders 2 do with tensor cores that the original on 3DS couldn't? What kind of AI enhancements could be made?
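For what it's worth, what tensor cores actually accelerate is dense low-precision matrix math, which is the inner loop of the neural nets behind face detection, pose tracking and the like. A hedged NumPy sketch of the arithmetic pattern (FP16 operands, FP32 accumulation; purely illustrative, no actual tensor cores involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected layer: 64 input features -> 32 outputs, the kind of
# matmul a face-detection net runs thousands of times per frame.
x = rng.standard_normal((1, 64)).astype(np.float16)   # activations stored in FP16
w = rng.standard_normal((64, 32)).astype(np.float16)  # weights stored in FP16
b = np.zeros(32, dtype=np.float32)                    # bias kept in FP32

# Tensor cores consume FP16 operands but accumulate in FP32; we emulate that
# here by upcasting before the multiply so the sums are computed in FP32.
y = np.maximum(x.astype(np.float32) @ w.astype(np.float32) + b, 0.0)  # + ReLU

print(y.shape, y.dtype)
```

So the practical difference versus the 3DS original would be running real-time neural inference (face landmarks, expression tracking, depth estimation from the camera) instead of the simple classical image processing Face Raiders used.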
 
Yeah, I think we need to get rid of this idea that we're going to strap a tablet to our faces for Nintendo VR.
It's too heavy and uncomfortable, and Nintendo knows this; that's why Labo VR makes you hold it up to your face with your hands.
The Switch unit itself is not going to be used for VR.

If Nintendo wanted to explore VR further, they have many other options, including some things others have already mentioned:
Standalone headset with hardware built in
Wireless streaming to the headset
Wired headset connecting to the dock

I think now that VR has been around for a bit, they could make a cheaper headset; new Joy-Cons could be used for input, and they could have VR modes for their big titles and really experiment with Nintendo Land-style VR titles.

I think Nintendo doing VR is an interesting concept, and if they're ready to take the next step in VR development, I think they would just go ahead and make a headset.
How I've always envisioned a VR Switch is just that: a VR headset with a dock that allows it to switch to a regular TV console. The same as the current Switch, but "handheld mode" now means VR.

Seems like a great idea to me. You'll get people getting their 2nd (or 3rd?) Switch to experience VR and people who don't have one but may be interested for a killer VR app (Metroid? Zelda?) get the value proposition of not just a new VR headset but a new Nintendo Console.

I also wonder if after this Switch 2 or Pro launches Nintendo will start dabbling again with asynchronous gameplay like we got with Nintendoland and some other Wii U games. Would be great for VR as well.
 
How I've always envisioned a VR Switch is just that: a VR headset with a dock that allows it to switch to a regular TV console. The same as the current Switch, but "handheld mode" now means VR.

Seems like a great idea to me. You'll get people getting their 2nd (or 3rd?) Switch to experience VR and people who don't have one but may be interested for a killer VR app (Metroid? Zelda?) get the value proposition of not just a new VR headset but a new Nintendo Console.

I also wonder if after this Switch 2 or Pro launches Nintendo will start dabbling again with asynchronous gameplay like we got with Nintendoland and some other Wii U games. Would be great for VR as well.
The problem with async gameplay is that it's next to impossible to port forward unless the new hardware supports exactly the same features.

And Nintendo themselves have said that each account's eShop library will carry over to future systems.
 
Apologies if this seems like a silly question, since I’m not a techy person:

To allow older Switch models to play the more intensive games made for the Pro/2, maybe they'd have the portable mode of those games run at docked-mode clocks.

This would cause an issue with the Lite, but maybe Nintendo would allow developers to overclock the Lite or something.

Would that be possible?

Battery life would become an issue for the Lite, but at least the option would be there.
 