
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

My favourite (infomercial) tech channel did a video with the RTX A500 :p.
RTX A500 has been mentioned in this thread before, so I'll just put the details in a spoiler:

  • Ampere (8nm)
  • 2048 CUDA cores
  • 64 Tensor Cores
  • 16 RT Cores
  • 4GB GDDR6
  • 25W TGP
  • Base Clock: 435 MHz, Boost Clock: 1335 MHz



This lil (compute) eGPU is bandwidth-constrained: it can't output directly to an external monitor, so everything has to go over the Thunderbolt/USB-C connection (~40.5 GiB/s ≈ 43 GB/s as per CUDA-Z in the video). But hey, some numbers at least, although you'll see that it gets quite constrained in some titles.
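For anyone wanting to double-check that unit conversion, here's the arithmetic (my own, not from the video):

```python
# GiB/s (binary, as CUDA-Z reports) vs GB/s (decimal).
# 1 GiB = 1024**3 bytes; 1 GB = 10**9 bytes.
gib_per_s = 40.5
gb_per_s = gib_per_s * 1024**3 / 1e9
print(f"{gib_per_s} GiB/s ≈ {gb_per_s:.1f} GB/s")  # ≈ 43.5 GB/s
```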

Overall, it's just some snippets of tests (video starts at 4:58), so don't draw too many conclusions from it; it's also quite barebones on settings/details. I'll just share the most interesting result 😅:
  • Shadow of the Tomb Raider benchmark @ 1080p medium (preset): ~93 FPS
    • He says that's an uplift of ~35 FPS compared to the Z1E
Maybe if he'd tested at 720p it could've run better, although he does enable upscaling, so I figure even that would be too constrained. I'd just skim through the video if you're interested; that's what I often do :p.

Also happy holidays.
 
This is helpful information for seeing how die shrinks don't scale fully when cache and I/O are included. Makes me wonder how much die surface area the cache will take up on Drake, assuming it's on 4N. Tegra X1 is already a small chip; maybe Drake can be under 100 mm².
1. Cowboys for life, conference traitor
2. Others have made those calculations, and it'd be in the 90 mm² range.
 
1. Cowboys for life, conference traitor
2. Others have made those calculations, and it'd be in the 90 mm² range.
At first I thought you were talking about Dallas, but then I realized you meant OSU. I didn't want OU to leave the Big XII, but it's all due to money. I didn't realize the calculations had been made on Drake assuming 4N; appreciate the info.
 
Nintendo's current approach to voice chat is total bullshit. And it's not like they aren't aware of the limitations; they had an official Splatoon-branded monstrosity for using phone and Switch audio at the same time.

I'm hoping they've wised up to system-level voice chat (even if it's friends only) or Discord, and just opted not to pivot for the current generation. It would absolutely be a selling point for me on a new system. It's just so effortless to jump into a channel with my close friends halfway across the world on Xbox (or cross-platform via Discord).
Agreed. It's crazy; it's been a step down from Wii and Wii U.

Though Wii Speak on Wii was trash. That headbanger headset was fire though. I get the whole protecting-kids thing, but there has to be a way around it. Parental controls are one way, but parents aren't gonna be with their kids 24/7, so shit is gonna happen. It's not like kids don't have Xbox or Microsoft consoles either.
Controversial take: Unless it’s a Sim-based Racer, analog triggers aren’t needed.

Don't get me wrong, I think analog triggers provide more flexibility and such, but if we're talking about arcade-style racers and typical shooters, digital triggers work fine.
F-Zero GX, the ultimate arcade racer, was great with it
This is helpful information for seeing how die shrinks don't scale fully when cache and I/O are included. Makes me wonder how much die surface area the cache will take up on Drake, assuming it's on 4N. Tegra X1 is already a small chip; maybe Drake can be under 100 mm².
Agreed. Interestingly, the 7nm SD SoC area is 166.05 mm², while the 6nm revision is 132.65 mm². Going from 7nm to 6nm gives an 18% increase in logic density, but the overall area reduction is ~20% (the 7nm die is ~25% larger than the 6nm one). The uploader who does the breakdown thinks the CVIP cores were removed to account for this.
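As a rough sanity check of that logic, here's my own back-of-the-envelope math, using TSMC's headline ~18% N7-to-N6 logic-density figure:

```python
# If the 6nm die were a pure optical shrink, 18% higher logic density
# would give at best area/1.18, and in practice less, since SRAM and I/O
# barely shrink between N7 and N6.
area_7nm = 166.05  # mm², 7nm Steam Deck SoC (from the die breakdown)
area_6nm = 132.65  # mm², 6nm revision

best_case = area_7nm / 1.18            # ~140.7 mm², logic-only best case
reduction = 1 - area_6nm / area_7nm    # ~20% actual reduction

print(f"best-case shrink: {best_case:.1f} mm², actual die: {area_6nm} mm²")
print(f"actual reduction: {reduction:.0%}")
# The real die is ~8 mm² below even the best case, which is why the
# uploader concludes that blocks (the CVIP cores) were removed.
```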


Hopefully, Switch 2/T239 can be around 100 mm² with an 8 MB L3 cache and 512 KB of L2 per core, or up to the SD OLED's area if it needs that much to accommodate the cache... I can't wait for a die teardown of Switch 2!
 
Ironically

F-Zero GX, the ultimate arcade racer, was great with it

Agreed. Interestingly, the 7nm SD SoC area is 166.05 mm², while the 6nm revision is 132.65 mm². Going from 7nm to 6nm gives an 18% increase in logic density, but the overall area reduction is ~20% (the 7nm die is ~25% larger than the 6nm one). The uploader who does the breakdown thinks the CVIP cores were removed to account for this.


Hopefully Switch 2/T239 can be around 100 mm² with an 8 MB L3 cache and 512 KB of L2 per core, or up to the SD OLED's area... I can't wait for a die teardown of Switch 2!
I can't wait for the teardown either. How much do you think the cache size will be for the Ampere GPU? I saw a document on Orin saying the L2 is 4 MB, but idk if Nintendo will go with this size.
 
Really wish sarcasm were saved for April Fools' instead of Christmas/Christmas Eve. With English as a second language, it sometimes pains me to parse it.
Take it as if he's saying/doing this:

oh-sure-john-candy.gif
 
I can't wait for the teardown either. How much do you think the cache size will be for the Ampere GPU? I saw a document on Orin saying the L2 is 4 MB, but idk if Nintendo will go with this size.

TBQH I don't know how it works with the cache, because the CPU will have its own cache, but the GPU can have its own dedicated cache as well.


The Tegra Orin AGX modules use A78AE cores, which are much bigger than A78C cores, and according to Nvidia's website/spec sheet, the 64GB AGX, 32GB AGX, and 16GB NX have 3, 2, and 2 MB of L2 cache for the CPU.
But apparently the AGX models have 4 MB for the GPU (page 5). I can't find how much cache is dedicated to the GPU on the NX models.


A78C only goes from 256 to 512 KB for the CPU's L2 cache, but it offers more L3 cache (up to 8 MB) than A78AE (which offers 4-6 MB on the Orin AGX models). I don't know how much cache they could put on the GPU, but cache in general takes up a lot of space and can be very expensive. I'm not expecting something like 4 MB on the T239 for the GPU.
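To put numbers on the CPU side, here's a quick sketch; the 8x A78C configuration for T239 is the rumoured one, not confirmed:

```python
# Possible CPU L2 totals for a hypothetical 8x Cortex-A78C cluster.
CORES = 8
for l2_kb in (256, 512):  # A78C's supported per-core L2 sizes
    print(f"{l2_kb} KB/core x {CORES} cores = {CORES * l2_kb / 1024:.0f} MB total L2")
# -> 2 MB or 4 MB of total L2, plus a shared L3 configurable up to 8 MB.
```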
 
My favourite (infomercial) tech channel did a video with the RTX A500 :p.
RTX A500 has been mentioned in this thread before, so I'll just put the details in a spoiler:

  • Ampere (8nm)
  • 2048 CUDA cores
  • 64 Tensor Cores
  • 16 RT Cores
  • 4GB GDDR6
  • 25W TGP
  • Base Clock: 435 MHz, Boost Clock: 1335 MHz



This lil (compute) eGPU is bandwidth-constrained: it can't output directly to an external monitor, so everything has to go over the Thunderbolt/USB-C connection (~40.5 GiB/s ≈ 43 GB/s as per CUDA-Z in the video). But hey, some numbers at least, although you'll see that it gets quite constrained in some titles.

Overall, it's just some snippets of tests (video starts at 4:58), so don't draw too many conclusions from it; it's also quite barebones on settings/details. I'll just share the most interesting result 😅:
  • Shadow of the Tomb Raider benchmark @ 1080p medium (preset): ~93 FPS
    • He says that's an uplift of ~35 FPS compared to the Z1E
Maybe if he'd tested at 720p it could've run better, although he does enable upscaling, so I figure even that would be too constrained. I'd just skim through the video if you're interested; that's what I often do :p.

Also happy holidays.



This really kind of shows how AMD's marketing of the Z1E's GPU performance is untruthful, or at least not the whole truth. They claim 8.6 TFLOPS, yet this pocket eGPU with an RTX A500, supposedly in the 7 TFLOPS range, is able to provide a ~35 FPS uplift in SotTR over the Z1E. The 8.6 TFLOPS figure, as most of us understand, comes from dual-issue execution, but nothing AMD says clarifies how/when dual-issue can be used. Only a handful of instructions make use of it, and it's not something developers can control: the compiler decides when it can be used, and it does a very poor job of it. And this pocket GPU looks heavily bandwidth-starved too, with only around 112 GB/s of RAM bandwidth for that ~7 TFLOPS performance number.
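For reference, here's where those headline TFLOPS numbers come from. The shader counts and clocks are the commonly cited ones, so treat this as a sketch rather than gospel:

```python
# Theoretical FP32 throughput = shaders * 2 ops/clock (FMA) * clock,
# doubled again if RDNA3's dual-issue is counted.
def tflops(shaders: int, ghz: float, dual_issue: bool = False) -> float:
    return shaders * 2 * ghz * (2 if dual_issue else 1) / 1000

# Z1 Extreme: 12 RDNA3 CUs = 768 shaders at ~2.8 GHz boost.
print(f"Z1E, single-issue: {tflops(768, 2.8):.1f} TFLOPS")        # ~4.3
print(f"Z1E, dual-issue:   {tflops(768, 2.8, True):.1f} TFLOPS")  # ~8.6, AMD's number

# RTX A500: 2048 CUDA cores at the listed 1335 MHz boost.
print(f"A500 @ 1.335 GHz:  {tflops(2048, 1.335):.1f} TFLOPS")     # ~5.5
# Hitting ~7 TFLOPS would need clocks up around 1.7 GHz.
```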
 
I edited my post a couple of times and found it on the AGX models. But I don't see one for Orin NX modules.
Regarding the diagram you just posted, what does the "4 MB System Cache" refer to?

Oh, I remember those conversations about NVN2. Yeah, it's pretty confusing. I'm assuming it's referring to GPU cache, right?
 
I edited my post a couple of times and found it on the AGX models. But I don't see one for Orin NX modules.

Oh, I remember those conversations about NVN2. Do those refer to the GPU, or is it some system cache?
Although I couldn't find a block diagram for Jetson Orin NX, I did find a blog post from Nvidia with a block diagram for Jetson Orin Nano, the most cut-down variant of Orin, which shows 4 MB of L2 cache for the GPU.
Block-diagram-of-Jetson-Orin-Nano.png


I believe LiC's findings on NVN2 were referring to the GPU's L2 cache.
 
Although I couldn't find a block diagram for Jetson Orin NX, I did find a blog post from Nvidia with a block diagram for Jetson Orin Nano, the most cut-down variant of Orin, which shows 4 MB of L2 cache for the GPU.
Block-diagram-of-Jetson-Orin-Nano.png


I believe LiC's findings on NVN2 were referring to the GPU's L2 cache.
That's a nice find!

Dumb question: what does the "4 MB System Cache" refer to? I can see it's a shared cache between the CPU and GPU, but what kind of cache is it: L1, L2, or L3?
 
Is it wrong for me to assume Switch successor games began development in 2019, 2020, and 2021?

I assume Nintendo quickly saw the value of having titles like Breath of the Wild and Odyssey near launch, so having projects starting 5-6 years in advance would put them at 2019/20. Seems reasonable.
 
Is it wrong for me to assume Switch successor games began development in 2019, 2020, and 2021?

3D Mario, definitely. Nintendo planning for TOTK to release on Switch probably meant they didn't think it was worth putting a new 3D Mario on Switch, and that they wanted to make sure a big IP would be available at launch.
 
I assume Nintendo quickly saw the value of having titles like Breath of the Wild and Odyssey near launch, so having projects starting 5-6 years in advance would put them at 2019/20. Seems reasonable.
Didn't T239 start around that time too, given it was the release year of the Tegra X1+?

Talk about an integrated hardware-software platform. I imagine the HD-to-UHD development transition won't be as disastrous as SD-to-HD was, with them planning around it from the start and having experience from the first transition and the transition to 3D.
 
Didn't T239 start around that time too, given it was the release year of the Tegra X1+?

Talk about an integrated hardware-software platform. I imagine the HD-to-UHD development transition won't be as disastrous as SD-to-HD was, with them planning around it from the start and having experience from the first transition and the transition to 3D.
T239 was defined around the end of 2019, so they already knew what to make.
 
Is it wrong for me to assume Switch successor games began development in 2019, 2020, and 2021?
Given how long game development takes these days, some probably started before that, even if they didn't know at the time that it was a Switch 2 game they were working on.
 
Last couple weeks? What? This thing is going to continue selling well into most of 2024 and the successor is a holiday 2024 system at the latest, September at the earliest.
I believe she means that whenever the new thing gets announced in the new year, the current Switch will effectively be coexisting/competing with a new phantom sibling.
 
Doesn't the Tegra X1 have 2MB L2 cache? I would think it odd for the T239 to have anything less than that.
Yeah, it's a little bit confusing. Tegra X1 has 2 MB of L2 cache (and I think 128 KB as well) for its CPU.

According to TechSpot, A78C can have 256 KB or 512 KB of L2 cache per core, so theoretically it will have at least 2 MB and up to 4 MB of total L2 cache for 8 CPU cores.


When you add in the factor of the GPU having its own cache, it gets more complicated. TX1 has 256 KB of cache for the GPU, but so does Tegra Orin AGX; the latter has 16 SMs, and (someone correct me if I'm wrong) I believe it's 256 KB per SM. So it makes sense that the Orin AGX (the one that has 2048 cores) has 4 MB of total L2 cache for the GPU. And if that's the case, then I believe TX1 has 4 SMs and 1 MB of cache for the GPU total?

Now if the T239 really does have 12 SMs, and it's 256 KB per SM, perhaps we could get 3 MB of L2 cache total for the GPU? But that wouldn't line up with what dakhil mentioned about T239 having either 1 MB or 4 MB of cache here
Last level cache, which is also known as system level cache (SLC). In Orin's case, the 4 MB of system level cache is detected by the CPU as L4 cache, and by the GPU as L3 cache.
L4 is pretty uncommon from what I've read. In some cases it basically acts as DRAM.
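To make that speculation concrete, here's the same arithmetic as a sketch; the 256 KB-per-SM scaling is this post's assumption, not a confirmed design rule (on desktop Ampere, L2 is tied to the memory partitions rather than the SMs):

```python
# GPU L2 totals IF cache scaled at 256 KB per SM (assumption, not confirmed).
KB_PER_SM = 256
for name, sms in (("Orin AGX", 16), ("T239 (rumoured)", 12)):
    print(f"{name}: {sms} SMs -> {sms * KB_PER_SM / 1024:.0f} MB L2")
# Orin AGX: 4 MB (matches the spec sheet); T239: 3 MB under this assumption.
```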
 
F-Zero GX, the ultimate arcade racer, was great with it

You're not wrong there, though the original F-Zero had the same feature, and that was with digital triggers.

I have dabbled with F-Zero GX on Dolphin with the Wii U Pro Controller (it has become my primary PC controller), and while you do lose the "finesse" of the GCN controller's analog L & R, you can still drift-turn with it, and effectively, I think.

One particular case where the analog trigger worked effectively with a Nintendo IP was Mario Sunshine. And it worked in a similar vein to how MGS2 handles aiming/shooting with the pressure-sensitive buttons.

Again, there are great use cases for analog triggers, and if Switch 2 has them, I won’t be complaining.
 
"Listen listen, I know I know, you're tired of these switch 2 rumors. I am too, believe me! But I have to report the news."

Then there are the safe bets.

"I knew about this for weeks, but I tried to stay quiet. I just got confirmation.... the Switch 2 had buttons. Yes, yes, I know, it is crazy!"
RGT, my favourite Nintendo YouTuber covering the new Nintendo hardware news 😁
 
You're not wrong there, though the original F-Zero had the same feature, and that was with digital triggers.

I have dabbled with F-Zero GX on Dolphin with the Wii U Pro Controller (it has become my primary PC controller), and while you do lose the "finesse" of the GCN controller's analog L & R, you can still drift-turn with it, and effectively, I think.

One particular case where the analog trigger worked effectively with a Nintendo IP was Mario Sunshine. And it worked in a similar vein to how MGS2 handles aiming/shooting with the pressure-sensitive buttons.

Again, there are great use cases for analog triggers, and if Switch 2 has them, I won’t be complaining.
If they're analogue triggers in the traditional sense, I absolutely will. Pressure sensitive, sure, but Splatoon needs that instant click, as do a lot of shooters.
 
Shooters seem to be fine on PlayStation and they have analog triggers.
PlayStation doesn't have Splatoon. And really, are they? On other platforms I see people remap fire to RB/R1 for better responsiveness, because it's digital, or buy "Elite" controllers with short stops on the triggers for exactly that reason.

Shooters do "fine" with analogue triggers: they function, but they're not desirable, and in a game like Splatoon, where you're already dealing with multiple dimensions of analogue input (two analogue sticks and gyro), having the two most common actions, "swim" and "shoot", on nice solid clicky buttons works well.

Why would they make that work... worse? Especially when it would take up space in a space-constrained controller, doesn't benefit their first-party catalogue, and increases complexity? If they want an analogue input up there among the shoulder buttons, pressure sensitivity AFTER the click, on L/R or ZL/ZR, is an option with fewer of these caveats.
 
PlayStation doesn't have Splatoon. And really, are they? On other platforms I see people remap fire to RB/R1 for better responsiveness, because it's digital, or buy "Elite" controllers with short stops on the triggers for exactly that reason.

Shooters do "fine" with analogue triggers: they function, but they're not desirable, and in a game like Splatoon, where you're already dealing with multiple dimensions of analogue input (two analogue sticks and gyro), having the two most common actions, "swim" and "shoot", on nice solid clicky buttons works well.

Why would they make that work... worse? Especially when it would take up space in a space-constrained controller, doesn't benefit their first-party catalogue, and increases complexity? If they want an analogue input up there among the shoulder buttons, pressure sensitivity AFTER the click, on L/R or ZL/ZR, is an option with fewer of these caveats.

I haven't seen anyone complain about this except you. Yes, my DualSense Edge does include the short-stop feature, but I've never used it so far and never felt a need for it even in shooters, especially with stuff like Ratchet & Clank and Returnal, where the analogue triggers can be exploited in gameplay for things like alt-fire modes. The analogue trigger functionality in something like GT7, on the other hand, is utterly critical to the gameplay, far more than digital triggers are to shooters.

The fact that the Joy-Con are too small is a better argument, but I honestly like the idea of being able to play a Mario Kart with those kinds of triggers. It would be nice if they could just go with the full short-stop on every Joy-Con.
 
PlayStation doesn't have Splatoon. And really, are they? On other platforms I see people remap fire to RB/R1 for better responsiveness, because it's digital, or buy "Elite" controllers with short stops on the triggers for exactly that reason.

Shooters do "fine" with analogue triggers: they function, but they're not desirable, and in a game like Splatoon, where you're already dealing with multiple dimensions of analogue input (two analogue sticks and gyro), having the two most common actions, "swim" and "shoot", on nice solid clicky buttons works well.

Why would they make that work... worse? Especially when it would take up space in a space-constrained controller, doesn't benefit their first-party catalogue, and increases complexity? If they want an analogue input up there among the shoulder buttons, pressure sensitivity AFTER the click, on L/R or ZL/ZR, is an option with fewer of these caveats.
Splatoon can change itself to suit the controller.
 
I wonder if you could feasibly put the DualSense Edge's short-stop feature into a Joy-Con and not have it cost the same amount as a Switch Lite. You could certainly do it in a Pro Controller.
 
I usually find fan mockups hit or miss, but this fanmade Switch 2 UI is actually really nice, and I wouldn't mind if it looked like this.


The mockup is well made, and if Nintendo asked me to help them market the Switch 2, I'd probably consider using the same Apple-like approach this video uses. Though there is one unrealistic thing about this mockup that I'm surprised I haven't seen anyone else bring up: I doubt the Switch 2's Joycons will be the same size and shape as the previous ones. I'm predicting that Nintendo will want people to have zero difficulty distinguishing the old and new consoles at a glance, and that would require the "Gen 2 Joycons" to be a different shape.
 
This thing skyrocketed DS sales but sank software sales. In the last year of the Switch, this thing could really help keep the system selling, and put some speed on the new hardware release.
See, that's what I was under the assumption of. Well, I didn't know how much R4 carts kept older systems alive, but I did expect it to sell well to those aware. It's why the "cringe" reaction didn't make too much sense to me.
 
New hardware reference is in Grubb's tweet
No, I just mentioned new hardware being news that anyone would find exciting, unironically. The "exciting news" reference was in the original tweet about the flash cart; Jeff was just joking about news coming because of this.
 
No, I just mentioned new hardware being news that anyone would find exciting, unironically. The "exciting news" reference was in the original tweet about the flash cart; Jeff was just joking about news coming because of this.
Yes, but the user posted Grubb's tweet thinking it was a serious reference.
 
Yeah, it's a little bit confusing. Tegra X1 has 2 MB of L2 cache (and I think 128 KB as well) for its CPU.

According to TechSpot, A78C can have 256 KB or 512 KB of L2 cache per core, so theoretically it will have at least 2 MB and up to 4 MB of total L2 cache for 8 CPU cores.


When you add in the factor of the GPU having its own cache, it gets more complicated. TX1 has 256 KB of cache for the GPU, but so does Tegra Orin AGX; the latter has 16 SMs, and (someone correct me if I'm wrong) I believe it's 256 KB per SM. So it makes sense that the Orin AGX (the one that has 2048 cores) has 4 MB of total L2 cache for the GPU. And if that's the case, then I believe TX1 has 4 SMs and 1 MB of cache for the GPU total?

Now if the T239 really does have 12 SMs, and it's 256 KB per SM, perhaps we could get 3 MB of L2 cache total for the GPU? But that wouldn't line up with what dakhil mentioned about T239 having either 1 MB or 4 MB of cache here

L4 is pretty uncommon from what I've read. In some cases it basically acts as DRAM.
Ampere GPUs have both an L1 and an L2 cache. The L2 cache for Orin will be 4 MB, from what dakhil posted earlier. Tegra X1 has 2 SMs (2 SMs = 256 CUDA cores), which makes T239 fascinating for having such a big GPU for a mobile/tablet APU (12 SMs = 1536 CUDA cores, 48 tensor cores, 12 RT cores). T239 will definitely have more cache on the die than the X1, given an 8-core CPU and a 12 SM Ampere GPU at the minimum.
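For anyone double-checking those per-SM counts, the standard Ampere ratios are 128 CUDA cores, 4 tensor cores, and 1 RT core per SM:

```python
# Standard Ampere per-SM ratios applied to the rumoured 12 SM T239 GPU.
def ampere_counts(sms: int) -> dict:
    return {"CUDA cores": sms * 128, "Tensor cores": sms * 4, "RT cores": sms}

print(ampere_counts(12))  # {'CUDA cores': 1536, 'Tensor cores': 48, 'RT cores': 12}
# Maxwell (Tegra X1) also used 128 CUDA cores per SM: 2 SMs -> 256 cores.
```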
 
See, that's what I was under the assumption of. Well, I didn't know how much R4 carts kept older systems alive, but I did expect it to sell well to those aware. It's why the "cringe" reaction didn't make too much sense to me.
It doesn't make sense to you that software sales would tank, or be shittier than they should be, because it kept hardware from dying a little bit longer?
 