StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

So


What now
 
Predicting Switch 2 performance, part 2: The AMD raster advantage.

Here is a similar graph from before, except now it's for AMD cards. Same benchmark suite, same CPU setup.

Card        | 6500 XT | 6600 | 6600 XT | 6700 XT
Average FPS | 44.8    | 82.4 | 99.2    | 120
TFLOPS      | 5.7     | 8.9  | 10.6    | 13.2
FPS/TFLOP   | 7.86    | 9.26 | 9.36    | 9.09

You'll notice a couple of things. First off, AMD starts out similarly consistent in its FPS/TFLOP numbers, but totally falls apart at the bottom of the stack. Something about the 6500 XT is wrong, and it is vastly less efficient than the other cards in the stack.

However, these performance numbers are much higher than Nvidia's. If you've heard about the AMD raster advantage, there it is. With the exception of the obviously broken 6500 XT, AMD is performing about 38% better per TFLOP than Nvidia. This is why you can't compare Series S to Switch 2 by TFLOPS alone.
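
If anyone wants to poke at the arithmetic, here's a minimal sketch of it (Python). The Nvidia baseline is back-derived from the 38% figure rather than taken from the previous post, so treat it as an assumption:

```python
# Recomputing the table above. The Nvidia baseline is back-derived from the
# 38% claim, not taken from the earlier Nvidia post, so it's an assumption.
amd = {
    "6500 XT": (44.8, 5.7),
    "6600":    (82.4, 8.9),
    "6600 XT": (99.2, 10.6),
    "6700 XT": (120.0, 13.2),
}

per_tflop = {card: fps / tflops for card, (fps, tflops) in amd.items()}
for card, value in per_tflop.items():
    print(f"{card}: {value:.2f} FPS/TFLOP")

# Drop the broken 6500 XT before averaging.
healthy = [v for card, v in per_tflop.items() if card != "6500 XT"]
amd_avg = sum(healthy) / len(healthy)     # ~9.24
nvidia_baseline = amd_avg / 1.38          # ~6.70 (assumed, see above)
print(f"AMD advantage: {amd_avg / nvidia_baseline - 1:.0%}")   # 38%
```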

An interesting but open question: does the Series S have the same problem affecting the 6500 XT? In making my performance predictions, I assume it does not. Going by that assumption, even if we get 4 TFLOPS of performance out of Switch 2, it'll be substantially behind the Series S and will require dropping resolution or frame rate by a significant amount, with DLSS needing to fill in the gap.
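
To put rough numbers on "dropping res with DLSS filling the gap", here's a quick sketch under big assumptions - 4 TFLOPS on both sides, the 1.38 raster factor from above, and frame cost scaling linearly with pixel count:

```python
# Rough, assumption-laden sketch of the resolution-drop math above.
series_s_tflops = 4.0
switch2_tflops = 4.0                 # the optimistic docked scenario
amd_raster_factor = 1.38             # AMD raster advantage from above

perf_ratio = switch2_tflops / (series_s_tflops * amd_raster_factor)  # ~0.72

base_w, base_h = 2560, 1440          # a hypothetical Series S 1440p target
scale = perf_ratio ** 0.5            # per-axis scale for equal frame time
render_w, render_h = round(base_w * scale), round(base_h * scale)
print(f"~{perf_ratio:.0%} of Series S raster -> render at ~{render_w}x{render_h}")
# ~72% -> roughly 2179x1226, with DLSS reconstructing the rest
```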

But we've got some significant advantages over the Series S that reverse this situation, in some cases. That's the next post.
My understanding of AMD's rasterization advantage compared to Nvidia, starting with the 20xx-series cards, is that Nvidia devoted some of the silicon in those cards to tensor cores for DLSS while AMD did not. Is that a fair understanding?
 
Happy New Year to the most sane, neutral, and tech-focused thread.
 
The most straightforward non-techy reason why I never believed in 8nm is simply that it doesn't match the overall "theme" of this console. All the parts that have been procured are solidly "middle of the road" or "above average" specs. 256 gigs of UFS 3.1, which is second only to UFS 4.0. 12 gigs of the newer LPDDR5X RAM -- smaller than 16 gigs, but of a newer type. Probably a high-quality LCD that's better than what the OG Switch got.

Nothing particularly premium, but nothing garden-variety or bland either. If anything, with every reveal, people were positively surprised when Nintendo went higher than expected. The decompression hardware, a custom SoC with backported features from Lovelace, the tensor cores, and so on.

5nm simply lines up with the rest of the console. It's a solid "middle of the road" process that's neither bleeding-edge 4nm nor outdated 8nm. 8nm is neither solid nor middle of the road. It's virtually abandoned, because we're in between Nvidia producing Ada Lovelace/GeForce 4000-series cards, which are on 5nm, and Blackwell/GeForce 5000-series, which presumably will go bigger (or rather, smaller).

I don't want to delve into the techy reasons because those have been discussed ad nauseam. I'm just saying: all the specs on the Switch 2 feel well "above average". 5nm simply feels like it fits the big picture, and 8nm just doesn't.

In addition to all this, the final nail in the coffin for me personally was finding out how old the 8nm process actually is. I mean, in terms of nanotech, that thing is practically ancient. Nintendo may not always be cutting edge, but with every console they've used a fairly recent node - even the dang Wii!
 
My understanding of AMD's rasterization advantage compared to Nvidia, starting with the 20xx-series cards, is that Nvidia devoted some of the silicon in those cards to tensor cores for DLSS while AMD did not. Is that a fair understanding?
That's my guess - that Nvidia spent die area on tensor and RT cores, while AMD spent it on more ROPs and Infinity Cache.
 
Not to console war, but

How do Series S and Switch 2 stack up?

Can someone give their best comparison?

Processor: 8-core Custom Zen 2 CPU at 3.6 GHz

GPU: 4 TFLOPS, 20 CUs at 1.565 GHz Custom RDNA 2 GPU

Memory: 10 GB GDDR6 with 8 GB at 224 GB/s and 2 GB at 56 GB/s

Storage: 512 GB or 1 TB Custom NVMe SSD

Video: 1440p gaming resolution, up to 120 FPS performance, HDMI 2.1 port, Auto Low Latency Mode, HDMI Variable Refresh Rate, AMD FreeSync

Sound: L-PCM, up to 7.1, Dolby Digital 5.1, DTS 5.1, Dolby TrueHD with Atmos

Ports: 3x USB 3.1 Gen 1 ports, 802.11ac dual band wireless, 802.3 10/100/1000 Ethernet

Design: 6.5 cm x 15.1 cm x 27.5 cm, 4.25 lbs

Features: HDR, all-digital, Quick Resume, DirectX ray tracing
How do the 2 devices compare?
 
Not to console war, but

How do Series S and Switch 2 stack up?

Can someone give their best comparison?

[Series S spec list snipped]

How do the 2 devices compare?
The main disadvantage the Switch 2 will face compared to the Series S is the CPU performance. However, it will have more RAM as well as a more robust upscaling/ray-tracing implementation.
 
In addition to all this, the final nail in the coffin for me personally was finding out how old the 8nm process actually is. I mean, in terms of nanotech, that thing is practically ancient. Nintendo may not always be cutting edge, but with every console they've used a fairly recent node - even the dang Wii!

Honestly, I think we've become a bit desensitized to how cutting-edge the Switch 2 is in its own way. I mean, DLSS and the whole venture of AI upscaling, ray tracing, and the like is still pretty hecking new tech. DLSS v1 was in 2019. In a few short years, it's become so radically powerful that pure rasterization is beginning to go the way of the dinosaur. Now, Nintendo is the first company to have it built into their own console. Not Sony or Microsoft, but the House of Mario. They knew as early as only two years into the Switch's lifespan, when they were raking in the big money, that Nvidia was onto something. As early as 2019, they were thinking about the future when they could have (rightfully) just focused on swimming in a giant pile of money like Scrooge McDuck.

You'd think this sort of cutting-edge adoption would be in the PS5 Pro, and indeed it has its own version of upscaling. It could well be that that's because Sony knows Nintendo will use DLSS, which has become a bit of an open secret. Regardless, the fact that Nintendo is using DLSS at all should frankly shut down any and all "Because Nintendo" arguments.

They didn't have to use tensor cores or the very modern technology Nvidia has rapidly built up.
They could have gone for 128gigs of storage,
8gigs of RAM,
no decompression hardware,
no tensor cores or RT cores,
an off-the-shelf chip like the Tegra X1.

...A "Switch Pro" like everyone expected.

They really, really, could have done that if they wanted to be cheapskates and save money.

Instead, they are the first company to build an extremely modern and powerful new form of rendering right into the guts of their hardware. They've given it the same world-class features the other big boys have: decompression hardware for super-fast load times, I/O with speeds comparable to an SSD, beefier storage, a completely custom SoC made exclusively for them (unlike the off-the-shelf chip last time), and a partnership with Samsung for the microSD Express cards. When you take a few steps back and look at the forest instead of the trees, that all seems like a lot of effort if they were looking to cut corners.

That's not to deny they've "cut corners" by not splurging on bleeding-edge tech everywhere in sight. No 16 gigs of RAM or 512 gigs of storage. I doubt there's an OLED screen. Who knows, maybe there really is an 8nm chip in there. But it goes back to my initial point: the more you let it sink in just how cutting-edge the Switch 2 actually is, the less any "Because Nintendo" arguments make sense.
 
Not to console war, but

How do Series S and Switch 2 stack up?

Can someone give their best comparison?

[Series S spec list snipped]

How do the 2 devices compare?
A lot of this we don't exactly know (e.g. we lack CPU and GPU clocks).
As far as our inferences go, the CPU is probably around 50-75% of the current-gen consoles' (we know it's 8 cores, ARM A78C).
The GPU is probably somewhat weaker for regular rasterization; even if it's 4 TFLOPS (quick math below), that'd be about 30-ish% less performance, but it'd take a far smaller hit from ray tracing, so when that's on, it'd close the gap (both because of the different GPU architecture; we know it has 1536 cores, 12 RT cores, and 48 tensor cores).
Storage is 256 GB UFS 3.1; it also has a decompression block to take the load off the CPU and aid loading speeds (I believe the Series S has that too).
RAM is 12 GB LPDDR5X, which gives 120 GB/s of bandwidth (as far as I know this shouldn't cause any issues; different CPU/GPU architectures need different amounts of bandwidth to work efficiently, and as far as we know this should be good).
As far as features go, we have no clue for the most part, lol. It does support ray tracing, and it's built to run DLSS for upscaling (that's on the tensor cores I mentioned, and another thing that could help close the GPU gap with the Series S).
We know a surprising amount for a console that hasn't been revealed, but it's really not everything. We'll have to wait and see for the most part, so there isn't a super solid answer for a lot of this.
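
To make the 4 TFLOPS and 120 GB/s figures concrete, here's the usual back-of-envelope math; the GPU clock and memory speed are assumptions, since clocks haven't leaked:

```python
# Back-of-envelope for the figures above. Core count and memory type are from
# the leak; the GPU clock and memory transfer rate are assumptions.
cuda_cores = 1536            # known from the T239 leak
gpu_clock_ghz = 1.3          # hypothetical docked clock

# Ampere does 2 FP32 ops (one fused multiply-add) per core per clock.
tflops = cuda_cores * 2 * gpu_clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS at {gpu_clock_ghz} GHz")    # ~4.0

# 120 GB/s falls out of a 128-bit LPDDR5X bus at an assumed 7500 MT/s.
bus_bits, mt_per_s = 128, 7500
print(f"{bus_bits / 8 * mt_per_s / 1000:.0f} GB/s")     # 120
```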

How will more RAM but less CPU translate into porting games?
Different games have different limitations, so it'd depend, I think. For a game that's already giving the PS5/Series X/S CPUs trouble, it might be tough to get it running on Switch 2. That's not a ton of games to my knowledge, but it does matter.
More RAM is really nice though; to my knowledge, the amount plus the odd split configuration on Series S is a common gripe developers have when making a Series S version. More RAM helps with texture quality, keeping assets loaded (I believe this was a problem for Baldur's Gate 3 on Xbox), and ray tracing (RT tends to eat up RAM). I'd guess this should stop ports from being a gigantic headache, though I am just guessing on that.
 
Of course it does - it means there's been a 4-year period ahead of Switch 2's launch where multiplatform developers have been targeting a baseline that is already in Switch 2's ballpark. That completely changes how business decisions are made, from "will paying for the engineering effort to get a PS5/Series X game running on Switch 2 provide a return on investment?" to "we've already spent all this money to fit within Series S; with a little more we can also reach Switch 2, which will almost certainly be a larger customer base".
They aren't targeting anything. Series S doesn't affect the Switch.
 
I'm not sure I follow you.
I think the argument is "if it will run on Xbox, then it will run on Switch 2."
Remember, all Series games are required to run on both Series consoles. Microsoft inadvertently helped Nintendo in this regard.

Check out @Goodtwin's take on that here
I agree with Goodtwin. I suspect that going forward we'll see occurrences of this actually going the other way around: Switch 2 ends up being the "base target", and from there the game gets "up-ported" to XSX and PS5 ("side-ported" to XSS). Or they can ignore Switch 2 again at their own peril and risk missing out on making money yet again, even if the Switch 2 userbase ends up only half as big as the Switch's was.
 
Check out @Goodtwin's take on that here
I agree with Goodtwin. I suspect that going forward we'll see occurrences of this actually going the other way around: Switch 2 ends up being the "base target", and from there the game gets "up-ported" to XSX and PS5 ("side-ported" to XSS). Or they can ignore Switch 2 again at their own peril and risk missing out on making money yet again, even if the Switch 2 userbase ends up only half as big as the Switch's was.
That's a solid take. I didn't think of this before, but if Microsoft were smart, they would leverage Game Pass ports to finally find a market in Japan.
For example, a game like Forza or Starfield would never get big sales in Japan on Xbox hardware. Now the game lands on a console with a potential 70-100 million install base. I didn't see it before, but it seems Switch 2 will help Microsoft stay relevant rather than the other way around.
 
Predicting Switch 2 performance, part 4: What if big, though?
So, I'm one of the thread pessimists. I generally assume that Nintendo will clock their hardware down for power and heat reasons, giving us a nice, compact experience. But secretly, I've got an outrageous situation I've been modeling. Let's take a look.

Card             | 3050 | 3060 | 3060 Ti | 3070 | 3070 Ti | T239/3.5 | T239/4
TFLOPS           | 9.1  | 12.7 | 16.2    | 20.3 | 21.7    | 3.5      | 4
Bandwidth (GB/s) | 224  | 360  | 448     | 448  | 608     | 120      | 120
L2 (MB)          | 2    | 3    | 4       | 4    | 4       | 1        | 1
Bandwidth/TFLOP  | 24.6 | 28.3 | 27.6    | 22.0 | 28.0    | 34.2     | 30
Cache/TFLOP      | 0.21 | 0.23 | 0.24    | 0.19 | 0.18    | 0.28     | 0.25
kilo Cache/Width | 8.9  | 8.3  | 8.9     | 8.9  | 6.5     | 8.3      | 8.3

We've got T239, at both 3.5 and 4 TFLOPS, up against the low-to-mid-range cards in the RTX 30 stack. And something looks off; if you're a long-time follower of the thread, you'll see what it is.

Bandwidth/TFLOP is crazy high, even in the 4 TFLOP scenario. And we have an explanation for this - some of that bandwidth needs to serve the CPU, so we need to be over-provisioned on bandwidth. That makes sense, and if you're like me, you turn away from the data and ignore the huge bandwidth situation for a while.

But let's look at that weird metric at the bottom: kilobytes of cache per GB/s of bandwidth. It's a dumb way of naming the metric, but it's basically how much cache you get for all the bandwidth you've got. And the T239 number looks totally normal.
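
For anyone checking my arithmetic, here's how the derived rows fall out of the raw ones; expect occasional last-digit differences versus the table, which truncates instead of rounding:

```python
# Recomputing the derived rows of the table above from the raw inputs.
cards = {
    #            TFLOPS  GB/s  L2 (MB)
    "3050":     ( 9.1,   224,  2),
    "3060":     (12.7,   360,  3),
    "3060 Ti":  (16.2,   448,  4),
    "3070":     (20.3,   448,  4),
    "3070 Ti":  (21.7,   608,  4),
    "T239/3.5": ( 3.5,   120,  1),
    "T239/4":   ( 4.0,   120,  1),
}
for name, (tflops, bw, l2) in cards.items():
    print(f"{name:9s} BW/TFLOP={bw / tflops:5.1f}  "
          f"cache/TFLOP={l2 / tflops:.2f}  "
          f"kB cache per GB/s={l2 * 1000 / bw:.1f}")
```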

Except it shouldn't, because some of that bandwidth isn't being used by the GPU. It's being used by the CPU, which has its own cache! T239's cache is unusually beefy for how small the GPU is. And maybe it's just... beefy! Maybe that's it.

But what if it's not? Look at the similarly weird cache/TFLOP. The number is unusually high. Once again, we see too much cache. We don't know how much bandwidth is used by the CPU, but what if we dialed up the TFLOPS until the cache number made sense? Well, you'd wind up with 5 TFLOPS of performance.

Which is too high! It stresses the bandwidth too much. But it does imply that the memory subsystem can handle more performance than we've been estimating. I play primarily handheld, so none of this affects me directly, but if someone suggested 4.3-4.5 TFLOPS, I'd have to admit that the GPU could handle it.
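
Putting rough numbers on that back-solve - the 0.20 MB-per-TFLOP "typical" figure is just my eyeball of the RTX 30 column, so this is illustrative, not predictive:

```python
# Back-solving the cache argument. The "typical" ratio is an eyeballed
# average of the RTX 30 cache/TFLOP column above, i.e. an assumption.
t239_l2_mb = 1.0
typical_mb_per_tflop = 0.20
implied_tflops = t239_l2_mb / typical_mb_per_tflop
print(f"implied GPU size: {implied_tflops:.1f} TFLOPS")      # 5.0

# Bandwidth sanity check at a few candidate sizes (and remember the CPU
# still needs its cut of the 120 GB/s).
for tflops in (3.5, 4.0, 4.4, 5.0):
    print(f"{tflops} TFLOPS -> {120 / tflops:.1f} GB/s per TFLOP")
# 5.0 leaves just 24 GB/s per TFLOP before the CPU's share - tighter than
# almost every card above - while ~4.4 lands back in the desktop range.
```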

Doesn't Nintendo have a history of being bandwidth-starved with their hardware?

Not to mention, with DLSS and RT enabled, don't those start to eat into memory, plus available bandwidth? Maybe it's Nvidia and Nintendo's way of designing a chip that can adequately feed everything you throw at it.

Plus, if you're targeting 4K resolution in docked mode, whether native or upscaled, I know that'll also eat into your memory pool.

Then again, there could be something else at play we haven't considered yet. As you mentioned, the amount of cache per TFLOP seems unusually high. Would more available bandwidth also help increase power efficiency, and thus allow the chip, and the whole system, to run with potentially better battery life?

Unlike the other RTX cards, T239 is designed very specifically for low power (~15 watts docked, possibly). Even the 2050 still targets 30-45 watts, give or take.

Just a thought I had though. Nothing concrete, or definitive.
 
I'm seeing BEE printed on there 👀 Holy smokes, what a good start to the new year. Hope the big tech heads here have fun diving into it all and sifting through what's visible.
 
I'm seeing BEE printed on there 👀 Holy smokes, what a good start to the new year. Hope the big tech heads here have fun diving into it all and sifting through what's visible.
I'll let the smart folks here chime in to make sure this is the real deal - but I can immediately see "BEE-CPU-01" in there, which was what I was looking for.

Comparing this w/the prototype photos (CMB-CPU-X6) now

I'm out of the loop - what does "BEE" represent here?
 
I'm seeing a HAC chip on there too: B2349 GCBRG HAC STD T2010423. Wonder if that's one of the chips (forgive me if my lingo isn't correct here!) we expect to be reused from the Switch 1. I forget which one, but I feel like I recall one of the HAC pieces being assumed to be shared with BEE units as well.

Edit: also worth noting that the whole motherboard is around 14 cm wide. The person who took the photo kindly put a ruler below the full pic.
 
Seems the card reader's in place here. Really incredible seeing the prototype kit next to what's hopefully the retail motherboard!
 
 

ILikeWaffles64 posted this link over in the speculation thread. Switch 2 motherboard leaked?




What information can we get from this leak? Famitechies Assemble!!!


Censored info on the NVIDIA chip is a red flag, no? Conveniently left the codename of the console (which we've long known) visible, though.
 
I'll let the smart folks here chime in to make sure this is the real deal - I see "BEE-CPU-01" in there, which was what I was looking for.

Comparing this w/the prototype photos (CMB-CPU-X6) now


94J3vPr.jpeg

LJ4bedk.jpeg


wny9YD2.jpeg

Is the letter "G" next to the pixelated area the start of "GML"?

For reference, CMB-CPU-X6 prototype photo:
3MlOLvS.png
This is full confirmation of 4nm. It simply can't be Samsung 8nm.
 
Happy New Year! Looks big to me, but I'll let the experts chime in.

Still would love to know the hardware β€œgimmick” this time around. Nintendo always has something up their sleeve.

Using the ruler in the first pic, and taking my literal nail file as a very crude attempt at making something somewhat accurate, the chip looks to be about 15mm x 10mm, which would be 150mm^2.

That would indicate to me it's NOT 8nm.


EDIT: I should be clear that my non-scientific attempt at measuring could be littered with errors, but using the two screw-mount locations as a guide, which appear to be about 33 mm apart, the long side of the chip appears to be at most half that, so I went from there.

Someone with some better detective work can go deeper into the weeds.
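
If anyone wants to redo the measurement less crudely from the photo, the shape of the calculation is simple; the pixel counts below are placeholders, not actual measurements:

```python
# Shape of the ruler-based die-size estimate. All pixel counts here are
# hypothetical placeholders - plug in your own from the photo.
ruler_px_per_cm = 118            # hypothetical: pixels spanning 1 cm of ruler
die_w_px, die_h_px = 177, 118    # hypothetical die-edge measurements

mm_per_px = 10.0 / ruler_px_per_cm
die_w_mm = die_w_px * mm_per_px
die_h_mm = die_h_px * mm_per_px
print(f"~{die_w_mm:.0f} mm x {die_h_mm:.0f} mm = "
      f"~{die_w_mm * die_h_mm:.0f} mm^2")
# ~15 mm x 10 mm = ~150 mm^2 with these placeholder numbers - the range
# that points away from 8nm for this core count
```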
 
Would it be possible to fake a motherboard with that code?
It would have to be a very elaborate fake - I cannot rule out that possibility.

There have been analyses done here on Fami of the CMB-CPU-X6 prototype photos - there are way too many details in there that would be obscure knowledge but turned out to be "right".

This BEE-CPU-01 looks pretty legitimate to me right now.
 
Predicting Switch 2 performance, part 2: The AMD raster advantage.

[intro and table snipped]

You'll notice a couple of things. First off, AMD starts out similarly consistent in its FPS/TFLOP numbers, but totally falls apart at the bottom of the stack. Something about the 6500 XT is wrong, and it is vastly less efficient than the other cards in the stack.

[snip]
The 6500 XT is badly hobbled by its 64-bit bus and 4 GB of memory, so some games tend to perform worse than expected, which contributes to a lower-than-expected average. Also, that 16 MB Infinity Cache does not help at all.
 
It would have to be a very elaborate fake - I cannot rule out that possibility.

There have been analyses done here on Fami of the CMB-CPU-X6 prototype photos - there are way too many details in there that would be obscure knowledge but turned out to be "right".

This BEE-CPU-01 looks pretty legitimate to me right now.
It’s not even the new year in my time zone yet. This is insane.
 
It would have to be a very elaborate fake - I cannot rule out that possibility.

There have been analyses done here on Fami of the CMB-CPU-X6 prototype photos - there are way too many details in there that would be obscure knowledge but turned out to be "right".

This BEE-CPU-01 looks pretty legitimate to me right now.
This would be more elaborate than any gaming hoax in history. It's real.
 