
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

They are switching from a tablet GPU to a car GPU, which gives a substantial advantage in docked mode because a car GPU is designed to use more power, meaning they can increase the gap between handheld and docked.
They aren’t actually using a “car” GPU; the config follows the desktop GPU more than the “car” one.

They are still all Ampere, and truthfully there's no car or tablet GPU.
 
As good as something like Rift Apart on PS5 looks, it's still constrained by running on a 10-teraflop GPU. The 3090 Ti is around 40 teraflops. By the time the PS6 comes around, it will be around 50 teraflops.

There's still so much visually that companies like Sony and Rockstar (who are willing to push the envelope and spend crazy amounts of money on development) can achieve: everything from character models, the geometric complexity of worlds, and lighting, to texture detail, material shading quality, and real-time ray tracing for all lighting, shadows and reflections. Then crazy-sized worlds like AC Valhalla's with full BotW-like physics and the micro details of a GTA or RDR game.

I said not long ago that if you up the textures and the resolution of their current games, get the 30fps games like BotW/Xenoblade up to 60fps, and maybe add 120fps for multiplayer-heavy games like Kart, Smash and Splatoon, then I think Nintendo are good on their current visual front for a while, just because most of the games they publish don't go for realism or massive photorealistic open worlds like GTA or Spider-Man. Super Mario Odyssey looks astounding at 4K on YouTube, for instance.

We do not need 50 teraflops dedicated to rendering when technologies like Nanite and RTXGI exist.

As a developer, I can say that we have reached an inflection point with technology like Nanite where needing more power to render more complex geometry is a thing of the past. Nanite assets are not lacking in this area and it would be a waste of resources to render detail beyond the human eye's ability to resolve. We are also fast approaching a similar inflection point with dynamic diffuse-interreflection and other lighting solutions. Same with virtualized texturing.

Going forward, more power should be going to physics (especially for soft body and granular/fibrous assets), simulations, and AI. And even then, there is only so much the consumer will be able to appreciate.

What we need are better energy efficiency solutions, better storage solutions, and better compression techniques, as those are still far behind where we need them to be.

EDIT:

We need to do away with the notion that more power is always better. It's important to focus on efficiency. We should only strive for more power when it's absolutely necessary, and we're quickly reaching a point where more power will be not only unnecessary but also wasteful.
 
They aren’t actually using a “car” GPU; the config follows the desktop GPU more than the “car” one.

They are still all Ampere, and truthfully there's no car or tablet GPU.
I mean, you're really splitting hairs here. It's true the GPU was not solely made for cars (they use Orin in other things as well), but the point is that Orin was designed to use a lot more power than a tablet GPU is; that's why it can have features like DLSS and RT cores, which a tablet probably isn't gonna have.
 
I think that people who are trying to temper expectations are simply allowing pessimism and fear of disappointment to blind them to the actual data. Unless Nintendo severely underclocks this (as in, much more than the X1) or they decide to cut features last minute, there's no reason to think this thing won't run laps around the Switch. There are two things to remember here when you consider the advantage we see with Drake:
1) The X1 was already 2 years old when Nintendo launched their console. This was because Nintendo bought a largely off-the-shelf product with only very minor modifications, because they were broke and got a good deal. So there will be more hardware advancement than you would normally expect from a console leap just from this.

2) They are switching from a tablet GPU to a car GPU, which gives a substantial advantage in docked mode because a car GPU is designed to use more power, meaning they can increase the gap between handheld and docked.

These two reasons are why it's natural to expect a larger than normal improvement in hardware.

Also, just to be clear, the massive leap I am expecting is only in docked mode. I expect a much more standard leap in handheld mode. This is primarily because I expect docked resolution to improve from 900p on average to 4K, which is a 5.7x increase. Even if handheld increases from 720p to 1080p, which I believe to be unnecessary, that's still only around a 2.25x increase. Docked also probably makes more use of the RT cores. So while the gap between handheld and docked on the Switch was about 70% or so, the gap between handheld and docked on the Switch 2 should be more like 3x.
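
A quick sanity check on those pixel-count ratios (purely illustrative, using the resolutions assumed above):
Code:
# Pixel-count ratios for the resolution jumps assumed above.
def pixels(width, height):
    return width * height

docked_jump = pixels(3840, 2160) / pixels(1600, 900)    # 900p -> 4K, ~5.76x
handheld_jump = pixels(1920, 1080) / pixels(1280, 720)  # 720p -> 1080p, 2.25x

print(round(docked_jump, 2), round(handheld_jump, 2))   # 5.76 2.25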
1) When Nintendo selected the X1 it was top of the line. Even when the console launched 2 years later, there were only a few things that could even match it that Nintendo could have picked.
Broke? How? What source do you have there?
Also, it wasn't very minor customization, since it mainly went to the software side of the chip. And it's been 5+ years in the mobile sector, which has only advanced more and more. That is the reason why the next chip is gonna perform better, more than whatever other reason we have.
 
I mean, you're really splitting hairs here. It's true the GPU was not solely made for cars (they use Orin in other things as well), but the point is that Orin was designed to use a lot more power than a tablet GPU is; that's why it can have features like DLSS and RT cores, which a tablet probably isn't gonna have.
That's not how things are designed. Drake is a bespoke chip.
 
Based on rumors and speculation, how does the Switch Pro compare to the Steam Deck in terms of power?
In docked, much stronger, which is unsurprising both because of the addition of DLSS and because the Steam Deck will be around 2 years old when this comes out. In handheld it's hard to say, but the Steam Deck probably won't be hugely behind.
 
We do not need 50 teraflops dedicated to rendering when technologies like Nanite and RTXGI exist.

As a developer, I can say that we have reached an inflection point with technology like Nanite where needing more power to render more complex geometry is a thing of the past. Nanite assets are not lacking in this area and it would be a waste of resources to render detail beyond the human eye's ability to resolve. We are also fast approaching a similar inflection point with dynamic diffuse-interreflection and other lighting solutions. Same with virtualized texturing.

Going forward, more power should be going to physics (especially for soft body and granular/fibrous assets), simulations, and AI. And even then, there is only so much the consumer will be able to appreciate.

What we need are better energy efficiency solutions, better storage solutions, and better compression techniques, as those are still far behind where we need them to be.

EDIT:

We need to do away with the notion that more power is always better. It's important to focus on efficiency. We should only strive for more power when it's absolutely necessary, and we're quickly reaching a point where more power will be not only unnecessary but also wasteful.
Honestly, I feel this mentality is likely the thing that got Nintendo on board with Drake in the first place, willing to go all the way to 12 SMs.

As shown with my TFLOP calculations here:
I have done it
I HAVE FOUND A WAY TO CONVERT TFLOPS ACROSS uARCHS


Code:
---GPU FLOP Comparison Method---
Ampere: 2 * (SM Count * (128 * Clock Speed))

Turing/Vega/GCN: 2 * (SM/CU Count * (64 * Clock Speed))

RDNA1/IC-less RDNA2, converted to GCN TFLOPs: (2 * (CU Count * (64 * Clock Speed))) + 25%

Calculate the FP32 GFLOPS of a GPU with the equation (clock in GHz), then look at the TFLOP value of that GPU as rated by the manufacturer; the "efficiency" of the TFLOPs is exposed by the difference between the calculated result and the rated value.

Factor in % additions to extrapolate back to a weaker/older uArch if the % difference is properly known
(EX: RDNA1/IC-less RDNA2 is 25% better in IPC than GCN, so it would be the GCN equation + 25%)

And with this and some extrapolation, Z0m3le and I have more or less determined desktop Ampere to be roughly equivalent to Polaris, and therefore not so far off from the rest of GCN in regards to FLOP efficiency.

And that is not considering all the features that even Ampere has over GCN, like tile-based rasterization, mixed-precision FP, the Tensor and RT cores, primitive and mesh shaders, and variable rate shading (which can boost effective GPU perf by up to 20% in some reports).

Which, considering Drake is running pretty much "Ampere+" with the extra L2 cache, and RDNA2 reports massive IPC uplifts with Infinity Cache beyond the 25% from GCN to RDNA1, can likely tell us what clocks Drake needs to hit to match the PS4 Pro or even the Series S under different assumed IPC uplifts over Ampere, since we can convert IC-less RDNA2 to GCN and Ampere and GCN are similar FLOP-for-FLOP.

(Note, this is actually lowballing Ampere as modern games that take advantage of Ampere's features will outperform GCN, but this gives us a more level comparison between Ampere, GCN (PS4 Pro/One X), and IC-less RDNA2 (Series S). So technically these numbers are a lowball for modern effectiveness)


So for example

Code:
-PS4 Pro (GCN): 2 * (36 * (64 * 0.911)) = 4197.888 FP32 GFLOPS, rated at 4.2 GCN TFLOPs

-Series S (IC-less RDNA2): 2 * (20 * (64 * 1.565)) = 4006.4 FP32 GFLOPS, rated at 4 RDNA"2" TFLOPs; + 25% = 5008 FP32 GFLOPS, or 5 GCN TFLOPs

---Drake based on Default Ampere for Reference---
-Drake (Pure Ampere, OG Switch docked clocks): 2 * (12 * (128 * 0.768)) = 2359.296 FP32 GFLOPS, or 2.35 Ampere TFLOPs
-Drake (Pure Ampere, 1GHz): 2 * (12 * (128 * 1.0)) = 3072 FP32 GFLOPS, or 3 Ampere TFLOPs
-Drake (Pure Ampere, 1.5GHz): 2 * (12 * (128 * 1.5)) = 4608 FP32 GFLOPS, or 4.6 Ampere TFLOPs
-Drake (Pure Ampere, 1.63GHz, aka matching Series S): 2 * (12 * (128 * 1.63)) = 5007.36 FP32 GFLOPS, or 5 Ampere TFLOPs
----------------------------------------------------------
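
If it helps, here's a rough Python sketch of the same napkin math (the per-SM/CU lane counts, the 25% RDNA-over-GCN figure, and any Drake IPC uplift are just the assumptions from this post, nothing official):
Code:
# Rough sketch of the FLOP comparison method above.
# FP32 GFLOPS = 2 ops per FMA * units * FP32 lanes per unit * clock in GHz.
FP32_LANES = {"ampere": 128, "gcn": 64, "rdna2_console": 64}

def gflops(arch, units, clock_ghz):
    return 2 * units * FP32_LANES[arch] * clock_ghz

def gcn_equivalent(arch, units, clock_ghz, ipc_uplift=0.0):
    # ipc_uplift is the assumed IPC advantage over GCN (0.25 for console RDNA2).
    return gflops(arch, units, clock_ghz) * (1 + ipc_uplift)

ps4_pro  = gcn_equivalent("gcn", 36, 0.911)                  # ~4198 GFLOPS
series_s = gcn_equivalent("rdna2_console", 20, 1.565, 0.25)  # ~5008 GFLOPS
drake    = gcn_equivalent("ampere", 12, 0.768)               # ~2359 GFLOPS at OG docked clocks

# Clock (GHz) a 12SM Ampere GPU needs to hit a GCN-equivalent target,
# for an assumed IPC uplift of Drake over desktop Ampere.
def drake_clock_to_match(target_gflops, ipc_uplift=0.0):
    return target_gflops / (2 * 12 * 128 * (1 + ipc_uplift))

print(round(drake_clock_to_match(ps4_pro, 0.25), 2))   # ~1.09 GHz
print(round(drake_clock_to_match(series_s, 0.25), 2))  # ~1.3 GHz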

Now, assuming even just a marginal 10% increase in IPC over Ampere, those values effectively become:
-Drake (768MHz): 2.6 Ampere TFLOPs
-Drake (1GHz): 3.3 Ampere TFLOPs
-Drake (1.5GHz): 5 Ampere TFLOPs
-Drake (1.63GHz): 5.5 Ampere TFLOPs
Even a 10% increase over Ampere due to that cache is enough to make a 1.5GHz Drake match the Series S's GCN equivalent!

Now, assuming Drake gets the 25% boost AMD did just from going GCN to RDNA1:
Code:
--Drake: 25% better than Ampere calculation--
-Drake (768MHz): 2.95 Ampere TFLOPs
-Drake (1GHz): 3.8 Ampere TFLOPs
-Drake (1.5GHz): 5.7 Ampere TFLOPs
-Drake (1.63GHz): 6.2 Ampere TFLOPs

You'd only need to hit 1.1GHz to match the PS4 Pro if they pull off a 25% IPC increase through the cache (which, considering reports of the 4070 having the same core count as the 3090 or fewer, with the only major difference in raster perf seemingly being the cache, and an up to 30% increase over the 3090 being reported, may very well be the case).

And only 1.3GHz to match the Series S!
And my very loose RT TFLOP extrapolation here:

Hey, it's just math 😉

But yeah, in a similar but looser vein, I did calculate the RTFLOP performance, as we know the effective RTFLOPs of the Series X and NVIDIA states the RTFLOPs of the Ampere cards.


And the math per RT core / per IC-less ray accelerator does work out such that a 12SM Drake would be stronger than the PS5.

The Series X is 13 RTFLOPs (they say over 25 TFLOPs when ray tracing, so 12 FP32 TFLOPs + 13 RTFLOPs (at best) = 25 TFLOPs total).
That works out to 13/52 = 0.25 RTFLOPs per ray accelerator for IC-less RDNA2.

0.25 * 20 = 5 RTFLOPs for the Series S
0.25 * 36 = 9 RTFLOPs for the PS5

Ampere's RTFLOP calc is far easier, as NVIDIA just gives us the pure-RT workload (aka path tracing) that Ampere cards can do, in RTFLOPs.

EX:
RTX 3090: 69 RTFLOPs / 82 RT cores = 0.84 RTFLOPs per Ampere RT core

So Ampere is 0.84 RTFLOPs/RT core versus IC-less RDNA2 at 0.25 RTFLOPs/ray accelerator.

So Drake at 12 SMs would be ~10 RTFLOPs.

So yes, it ray traces (in raw acceleration) better than the PS5, and that's before DLSS or any prediction of how performance could improve if an RT method really likes Ampere's BVH traversal.
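
Sketching that out in the same hedged spirit (13 RTFLOPs spread over the Series X's 52 ray accelerators, NVIDIA's 69 RTFLOPs figure for the 3090's 82 RT cores, 1 RT core per Drake SM, and clock differences between the source GPUs and Drake ignored, as above):
Code:
# Very loose per-core RT throughput extrapolation using the figures above.
rdna2_rt_per_ra    = 13 / 52  # Series X: ~0.25 RTFLOPs per ray accelerator
ampere_rt_per_core = 69 / 82  # RTX 3090: ~0.84 RTFLOPs per RT core

series_s_rt = rdna2_rt_per_ra * 20     # ~5 RTFLOPs
ps5_rt      = rdna2_rt_per_ra * 36     # ~9 RTFLOPs
drake_rt    = ampere_rt_per_core * 12  # ~10 RTFLOPs, assuming 1 RT core per SM

print(round(series_s_rt, 1), round(ps5_rt, 1), round(drake_rt, 1))  # 5.0 9.0 10.1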
It is very likely that Nintendo and NVIDIA looked to find the "min spec" for a modern system that could use DLSS, RTXGI, VRS, mesh shading, virtual textures, etc. to their most efficient degree in the hybrid form factor.


While a world where I could run a game with graphical quality akin to R&C Rift Apart on my smartphone may exist in the future, that day isn't now. There is currently a floor for sustaining the "punch way above its weight via smart and efficient rendering techniques" route, and 12 SMs seems to be where NVIDIA and Nintendo figured that floor sits at the moment.

No need to comment on that btw (but TBH it is public info after the hack, so the 12SM GPU is the concrete thing; I am just mathing out the potential processing power of the Drake SoC here, as I have no clue what clocks it will run at or what IPC improvements it will have over Ampere, thus the ranges).
 
Based on rumors and speculation, how does the Switch Pro compare to the Steam Deck in terms of power?
my expectation is still that it's as powerful as the Deck in docked mode
Well....
I have done it
I HAVE FOUND A WAY TO CONVERT TFLOPS ACROSS uARCHS


Code:
---GPU FLOP Comparison Method---
Ampere: 2 * (SM Count * (128 * Clock Speed))

Turing/Vega/GCN: 2 * (SM/CU Count * (64 * Clock Speed))

RDNA1/IC-less RDNA2, converted to GCN TFLOPs: (2 * (CU Count * (64 * Clock Speed))) + 25%

Calculate the FP32 GFLOPS of a GPU with the equation (clock in GHz), then look at the TFLOP value of that GPU as rated by the manufacturer; the "efficiency" of the TFLOPs is exposed by the difference between the calculated result and the rated value.

Factor in % additions to extrapolate back to a weaker/older uArch if the % difference is properly known
(EX: RDNA1/IC-less RDNA2 is 25% better in IPC than GCN, so it would be the GCN equation + 25%)

And with this and some extrapolation, Z0m3le and I have more or less determined desktop Ampere to be roughly equivalent to Polaris, and therefore not so far off from the rest of GCN in regards to FLOP efficiency.

And that is not considering all the features that even Ampere has over GCN, like tile-based rasterization, mixed-precision FP, the Tensor and RT cores, primitive and mesh shaders, and variable rate shading (which can boost effective GPU perf by up to 20% in some reports).

Which, considering Drake is running pretty much "Ampere+" with the extra L2 cache, and RDNA2 reports massive IPC uplifts with Infinity Cache beyond the 25% from GCN to RDNA1, can likely tell us what clocks Drake needs to hit to match the PS4 Pro or even the Series S under different assumed IPC uplifts over Ampere, since we can convert IC-less RDNA2 to GCN and Ampere and GCN are similar FLOP-for-FLOP.

(Note, this is actually lowballing Ampere as modern games that take advantage of Ampere's features will outperform GCN, but this gives us a more level comparison between Ampere, GCN (PS4 Pro/One X), and IC-less RDNA2 (Series S). So technically these numbers are a lowball for modern effectiveness)


So for example

Code:
-PS4 Pro (GCN): 2 * (36 * (64 * 0.911)) = 4197.888 FP32 GFLOPS, rated at 4.2 GCN TFLOPs

-Series S (IC-less RDNA2): 2 * (20 * (64 * 1.565)) = 4006.4 FP32 GFLOPS, rated at 4 RDNA"2" TFLOPs; + 25% = 5008 FP32 GFLOPS, or 5 GCN TFLOPs

---Drake based on Default Ampere for Reference---
-Drake (Pure Ampere, OG Switch docked clocks): 2 * (12 * (128 * 0.768)) = 2359.296 FP32 GFLOPS, or 2.35 Ampere TFLOPs
-Drake (Pure Ampere, 1GHz): 2 * (12 * (128 * 1.0)) = 3072 FP32 GFLOPS, or 3 Ampere TFLOPs
-Drake (Pure Ampere, 1.5GHz): 2 * (12 * (128 * 1.5)) = 4608 FP32 GFLOPS, or 4.6 Ampere TFLOPs
-Drake (Pure Ampere, 1.63GHz, aka matching Series S): 2 * (12 * (128 * 1.63)) = 5007.36 FP32 GFLOPS, or 5 Ampere TFLOPs
----------------------------------------------------------

Now, assuming even just a marginal 10% increase in IPC over Ampere, those values effectively become:
-Drake (768MHz): 2.6 Ampere TFLOPs
-Drake (1GHz): 3.3 Ampere TFLOPs
-Drake (1.5GHz): 5 Ampere TFLOPs
-Drake (1.63GHz): 5.5 Ampere TFLOPs
Even a 10% increase over Ampere due to that cache is enough to make a 1.5GHz Drake match the Series S's GCN equivalent!

Now, assuming Drake gets the 25% boost AMD did just from going GCN to RDNA1:
Code:
--Drake: 25% better than Ampere calculation--
-Drake (768MHz): 2.95 Ampere TFLOPs
-Drake (1GHz): 3.8 Ampere TFLOPs
-Drake (1.5GHz): 5.7 Ampere TFLOPs
-Drake (1.63GHz): 6.2 Ampere TFLOPs

You'd only need to hit 1.1GHz to match the PS4 Pro if they pull off a 25% IPC increase through the cache (which, considering reports of the 4070 having the same core count as the 3090 or fewer, with the only major difference in raster perf seemingly being the cache, and an up to 30% increase over the 3090 being reported, may very well be the case).

And only 1.3GHz to match the Series S!
TL;DR: Anywhere between "around 1.5x the PS4 before DLSS" at worst and "matching the Series S" at best, depending on the clock speeds, utilization of features the PS4/PS4 Pro never had even outside DLSS, and whatever architectural improvements the system has over desktop Ampere.

As for how it relates to the Steam Deck, funnily enough we can actually calculate that with the same method as in my quoted post.

The Steam Deck's GPU is Infinity Cache-less (shortening that to "console") RDNA2, with 8 CUs rated between 1GHz and 1.6GHz.
So plugging that into the equation:

2 * (8 * (64 * 1.0)) = 1024 FP32 GFLOPS, or ~1 TFLOP of console RDNA2
to
2 * (8 * (64 * 1.6)) = 1638.4 FP32 GFLOPS, or ~1.6 TFLOPs of console RDNA2

With the equation, you can just add 25% to those numbers to get the GCN/Ampere TFLOP equivalent, and therefore how much stronger Drake will be versus the Steam Deck when docked.

So 1024 + 25% = 1280, or ~1.3 GCN/Ampere TFLOPs, and 1638.4 + 25% = 2048, or ~2.05 GCN/Ampere TFLOPs.
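
Same napkin math in sketch form, with Drake's floor from the table above for comparison (the +25% figure and the clocks are the same assumptions as before, nothing official):
Code:
# Steam Deck GPU (console RDNA2, 8 CUs, 1.0-1.6 GHz) converted to the
# GCN/Ampere-equivalent GFLOPS figure used above, vs Drake at OG Switch docked clocks.
def deck_gcn_equivalent(clock_ghz):
    return 2 * 8 * 64 * clock_ghz * 1.25  # +25% assumed console-RDNA2-over-GCN IPC

deck_low    = deck_gcn_equivalent(1.0)  # ~1280 GFLOPS
deck_high   = deck_gcn_equivalent(1.6)  # ~2048 GFLOPS
drake_floor = 2 * 12 * 128 * 0.768      # ~2359 GFLOPS, bog-standard Ampere at OG docked clocks

print(deck_low, deck_high, drake_floor)  # 1280.0 2048.0 2359.296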

Meaning Drake, when docked, will absolutely kick the Steam Deck's shins in, and that is assuming it's bog-standard Ampere, even at OG Switch clocks, which is the absolute minimum it can go, because Drake has to match OG Switch docked clocks for B/C on the GPU.
 
How do we think Nintendo will announce it? A Twitter drop, or will they give a short heads up to tune in for a livestream reveal? Maybe a 30-60 min show?
Typically Nintendo is extremely low-key about announcing their consoles. For the actual unveiling they'll give a proper press conference, but before that they'll usually just drop a "we're working on it and we'll talk about it later" at an investor meeting or something.
 
1) The X1 was already 2 years old when Nintendo launched their console. This was because Nintendo bought a largely off-the-shelf product with only very minor modifications, because they were broke and got a good deal. So there will be more hardware advancement than you would normally expect from a console leap just from this.
There's evidence that Nintendo and Nvidia were in talks about using the Tegra X1 since at least 2013. So I don't know if the idea that Nintendo decided to use the Tegra X1 because Nvidia gave them a very good deal necessarily holds.

Senior HW Applications Engineer
Jul 2013 - Dec 2014 · 1 yr 6 mos
Santa Clara, CA, USA
-Worked with Google on Project Tango tablet from concept to production
-Engaged with customer on system architect, vendor and component selection, schematic review, PCB layout review, power delivery network simulations, system thermal stack up and ID reviews, system bring up, interface validation, manufacture runs, interface debugs, and mass production.
-Gave a power consumption related demo to Nintendo team during sales process.

Associate Software Engineer
Nintendo Technology Development
Sep 2014 - Mar 2015 · 7 mos
Redmond

Porting Graphics demos
- Port proprietary nintendo sdk graphics sample tutorial apps from Wii U to Switch
Benchmark parallel processing
- Created linux apps using OpenMP to stress test components on SoC Nvidia Tegra X1



:unsure:
 
You are correct that Nintendo was in talks with Nvidia long before the Switch. In fact, it was well reported that Nintendo was considering a Tegra for the 3DS. But the X1 was definitely chosen for its price, considering Pascal was an option and they elected for the older version.
 
You are correct that Nintendo was in talks with Nvidia long before the Switch. In fact, it was well reported that Nintendo was considering a Tegra for the 3DS. But the X1 was definitely chosen for its price, considering Pascal was an option and they elected for the older version.
I thought one of the prototypes ended up in the hands of a collector, but I might be wrong.
 
I mean, you're really splitting hairs here. It's true the GPU was not solely made for cars (they use Orin in other things as well), but the point is that Orin was designed to use a lot more power than a tablet GPU is; that's why it can have features like DLSS and RT cores, which a tablet probably isn't gonna have.
I mean, maybe. I don't really think that applies in this case, though. A GPU is still a GPU regardless; how it's used and how it is supported is what determines its use case. Orin uses a modified version of the Ampere GPU microarchitecture, but this architecture is also similar to the one found in the datacenter GPUs. So it's not really a "car" GPU in this case.

With a tablet GPU? Well, there's not really such a thing as a tablet GPU. To pass as something that can function in a tablet, what matters is that it hits the power targets. The industry has more or less homogenized, with one size trying to fit all. Apple, for example, uses the same GPU and CPU cores across a variety of processors. A newer generation gets a new name for its efficiency cores and performance cores of course, but what works in the iPhone can work in the iPad and has worked before. Hell, if I'm not mistaken, Apple is putting their A13 Bionic in the new display!

In the case of Drake, which is the name of the chip that will be in the next Switch, if you want to get into the nitty gritty, it has more in common with the desktop GPUs than with Orin, while Orin has more in common with the datacenter GPUs. All are Ampere of course, but with slight modifications that aid it, or trim an unnecessary aspect of it, to make it work in its use case.

The reason Orin uses more power is really just down to being clocked so much higher and/or having so much more on the whole board that demands it. For example, the chip has DLAs, which are dedicated deep learning (inference) accelerators; it's like a mini supercomputer. A dedicated gaming console doesn't really need this. It has 12 CPU cores; consoles have at most 8 for area and efficiency concerns, and the Switch doesn't need 12. Orin has a PVA, which is a programmable vision accelerator, and these all have their own individual clock frequencies. It's a whole different beast, and this is separate from the GPU itself!

But anyway, here's why Orin can be a confusing product, or rather I should say, why Nvidia is so confusing:



Officially, Orin had 17B transistors when it was revealed in 2019. Then later on it had 21B transistors. The gentleman in the video is using an Orin AGX, which has the full feature set if I'm not mistaken, just in two configs of 32 or 64GB of RAM. Anyway, here we see that he mentions the Orin AGX has 13 billion transistors.

Of course I should mention these are pre-production OEM units that Nvidia provided; I don't know if that means anything in this case.

And I'm unsure whether Orin is 21B transistors in total and the person in this video is just referring to the GPU + CPU + caches, which alone come to 13 billion transistors, excluding the automotive-only features that take up their own allotment of silicon, or whether Orin is actually 13 billion transistors instead of 21B.

Needless to say, this is confusing and throws a whole wrench in the conversation.



Actually, if the 2048 GPU cores + caches + 12 CPU cores = 13B transistors, then Nintendo, with 1536 cores + caches + 8 CPU cores, would be very much below 13B, and obviously below the 21B transistors.

You are correct that Nintendo was in talks with Nvidia long before the Switch. In fact, it was well reported that Nintendo was considering a Tegra for the 3DS. But the X1 was definitely chosen for its price, considering Pascal was an option and they elected for the older version.
Pascal wasn't even out when they selected the TX1.
 
I totally get what you're saying. I obviously extremely oversimplified Orin to make the point that it's not a GPU designed for a tablet, and I apologize for that, but I'm sure you can see my actual point: a GPU designed primarily for tablet use (even if it gets used in other things as a side benefit) is going to lack major features that can benefit a device that can connect to the wall and potentially draw a lot more power. This is the main reason why the Switch is getting features like DLSS and RT cores. It's hard to say if you can even give Nintendo credit for this, since this is just what their partner happened to be working on anyway, but the benefits exist regardless.
 
How do we think Nintendo will announce it? A Twitter drop, or will they give a short heads up to tune in for a livestream reveal? Maybe a 30-60 min show?
My guess is probably the same way they have done it with all Switch hardware reveals: a quick heads up or a shadow drop on Twitter or uploaded to their YT.
Things they will probably do:
  • ~5 mins
  • Show off the main feature(s)
  • Show off some games on the screen
  • Show people taking it out into the world
  • Potentially have some press try it a little before or after, under NDA
  • Some trailers before or after that may show off DLSS
  • Show off date & price
Things they probably won't do:
  • Livestream (probably only near E3 w/ Treehouse, or Gamescom)
  • Press conference
  • 30-60 min feature
  • Specific Direct
 
Nintendo obviously selected Orin years ago; was Orin out?
It just released to the public.
I totally get what you're saying. I obviously extremely oversimplified Orin to make the point that it's not a GPU designed for a tablet, and I apologize for that, but I'm sure you can see my actual point: a GPU designed primarily for tablet use (even if it gets used in other things as a side benefit) is going to lack major features that can benefit a device that can connect to the wall and potentially draw a lot more power. This is the main reason why the Switch is getting features like DLSS and RT cores. It's hard to say if you can even give Nintendo credit for this, since this is just what their partner happened to be working on anyway, but the benefits exist regardless.
Perhaps not RT cores, because Nvidia created theirs for their silicon, Intel created theirs for their silicon, and AMD did… a thing for their silicon. But ML hardware, which is what DLSS runs on? While Tensor cores are much more robust and performant, ML hardware exists in smartphones. Apple even has their Neural Engine on their silicon. Arm offers special AI hardware to their clients.
 
You are correct that Nintendo was in talks with Nvidia long before the Switch. In fact, it was well reported that Nintendo was considering a Tegra for the 3DS. But the X1 was definitely chosen for its price, considering Pascal was an option and they elected for the older version.
Although I don't usually cite DigiTimes, DigiTimes mentioned that Nintendo delayed mass production of the Nintendo Switch from mid 2016 to early 2017, though I don't think Nintendo wanting to add VR to the Nintendo Switch is the reason. And Takashi Mochizuki, who previously worked for the Wall Street Journal and now works for Bloomberg, mentions that Nintendo originally planned to launch the Nintendo Switch by the end of 2016.

So I imagine Nintendo needed to decide which SoC to use for the Nintendo Switch by mid 2015 at the absolute latest if it was targeting a holiday 2016 launch. And going by how Nintendo confirmed it was working on the Nintendo Switch (codenamed NX) on 17 March 2015, I imagine Nintendo made the final decision to use the Tegra X1 somewhere in the range of late 2014 to early 2015. And I don't think the Tegra X2 was ready when Nintendo made that decision, considering the Tegra X2 probably wasn't ready until around late 2015, and Nvidia didn't formally announce an Arm-based SoC with a Pascal-based GPU until CES 2016, albeit indirectly via the Drive PX 2.

And considering that the Tegra X2 doesn't have DP4a instruction support, despite Nvidia mentioning that Pascal GPUs have DP4a instruction support, the Tegra X2 is probably very similar to the Tegra X1 in terms of the GPU.
 
I wonder, with us being further into the realm of diminishing returns, will devs focus more now on the physics and AI of things in games?

Visuals are one thing; AI and physics are another. The latter aren't so obvious, but the former definitely is obvious and easy to sell.

So it makes me doubt it :/
 
We do not need 50 teraflops dedicated to rendering when technologies like Nanite and RTXGI exist.

As a developer, I can say that we have reached an inflection point with technology like Nanite where needing more power to render more complex geometry is a thing of the past. Nanite assets are not lacking in this area and it would be a waste of resources to render detail beyond the human eye's ability to resolve. We are also fast approaching a similar inflection point with dynamic diffuse-interreflection and other lighting solutions. Same with virtualized texturing.

Going forward, more power should be going to physics (especially for soft body and granular/fibrous assets), simulations, and AI. And even then, there is only so much the consumer will be able to appreciate.

What we need are better energy efficiency solutions, better storage solutions, and better compression techniques, as those are still far behind where we need them to be.

EDIT:

We need to do away with the notion that more power is always better. It's important to focus on efficiency. We should only strive for more power when it's absolutely necessary, and we're quickly reaching a point where more power will be not only unnecessary but also wasteful.
Totally agree with regards to Nanite and Lumen, but my point was that the PS6 will almost certainly have in the realm of 40-50 TFLOPs of GPU compute if it releases in late '27. Developers are still facing massive limitations even on PS5, where they're using 1440p or dynamic 4K as their output resolution, as well as other compromises such as quarter-resolution RT reflections and rendering far-off objects at half framerate. Current-gen-only UE5 games on PS5 which push the visual envelope will more than likely be 30fps due to being GPU bound (same for the Series X).

Native 4K @ 60fps with full RT GI, RT AO, and full-resolution RT reflections, with no visual compromises on models, will be the visual goal for a lot of the big AAA studios next gen, imo.

Nintendo will of course be much more limited, but probably far more efficient, as using much weaker hardware basically requires that.
 
Something like January->March would definitely be unprecedented for as big a step as this Drake hardware, but precedent's gotta start somewhere. Previous games and hardware revisions have happened in less time, though, so I'm not sure there'd be any extra problems with pre-ordering that haven't happened before.

Well, the huge difference would be that current Switch models would keep selling alongside the Drake hardware and have continued support, and that's the most likely scenario in any case;
this is not a Wii U-to-Switch launch situation.


The people who have gone over five years without buying a Switch aren't going to be swayed from buying it coming up on year six by the announcement of a new system that costs significantly more. Nintendo could announce new hardware this year and their Switch sales this holiday would be marginally impacted.

Thing is, Nintendo always has a very strong holiday quarter, and announcing new hardware before the holiday season while releasing it only after the holiday season is a bad business decision, because it would affect holiday sales in any case.
 
Nothing has been spoken of yet. Nate's still in the midst of doing follow-up research before recording his podcast (no estimated release yet, iirc), but as he's said before, the devs still have their new kits in hand, are making games, and some of those will be exclusive to the new console.

I think we're still gonna need to wait a week or so for any information from GDC to make its way out onto the internet for more info on the hardware.

With the Nvidia leak and now the Zelda delay, I think he has more than enough content for a new episode; I'm expecting one in around a week.
 
New 3DS was 6x more powerful, GBC was similar. Not sure about DSi.

Worth mentioning that the New 3DS was 6x more powerful only when it comes to the CPU.
When you say just "6x more powerful", people get the impression that it's 6x stronger overall (counting CPU, GPU and RAM).

So the New 3DS has a 6x more powerful CPU, 2x more system RAM and 40% more VRAM compared to the OG 3DS; the GPU is basically the same.
 
When you say just "6x more powerful", people get the impression that it's 6x stronger overall (counting CPU, GPU and RAM).

So the New 3DS has a 6x more powerful CPU, 2x more system RAM and 40% more VRAM compared to the OG 3DS; the GPU is basically the same.
That, and it uses the same generation of hardware as the original.

Drake is several Nvidia/Arm generations ahead of the TX1.
 
Worth mentioning that the New 3DS was 6x more powerful only when it comes to the CPU.
When you say just "6x more powerful", people get the impression that it's 6x stronger overall (counting CPU, GPU and RAM).

So the New 3DS has a 6x more powerful CPU, 2x more system RAM and 40% more VRAM compared to the OG 3DS; the GPU is basically the same.
The context I replied to was in comparison to other "pro" style upgrades being only 2 or 3x, which I assume refers to the PS4 Pro that only had that increase on the GPU side. It's a valid comparison in that respect.
 
Is there any information about the RAM configuration? Are we expecting LPDDR5 or is LPDDR5X possible? More than 100 GB/s bandwidth?

I remember people speculating if Nintendo could go for two 6 GB DIMMs, but it’s safer to assume they’ll go with 2x4 GB right?

I worry that there’s going to be a big improvement in CPU and GPU but RAM is still gonna be too slow to keep up…
 
Is there any information about the RAM configuration? Are we expecting LPDDR5 or is LPDDR5X possible? More than 100 GB/s bandwidth?

I remember people speculating if Nintendo could go for two 6 GB DIMMs, but it’s safer to assume they’ll go with 2x4 GB right?

I worry that there’s going to be a big improvement in CPU and GPU but RAM is still gonna be too slow to keep up…
There is no information, but LPDDR5 is the most likely candidate.

Based on that being what Orin has, and on price.
 
Is there any information about the RAM configuration? Are we expecting LPDDR5 or is LPDDR5X possible? More than 100 GB/s bandwidth?

I remember people speculating if Nintendo could go for two 6 GB DIMMs, but it’s safer to assume they’ll go with 2x4 GB right?

I worry that there’s going to be a big improvement in CPU and GPU but RAM is still gonna be too slow to keep up…

It looks like LPDDR5 on a 128-bit bus, so 102GB/s is most likely. There aren't any reliable reports on capacity, but my guess is either 8GB or 12GB.
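
For what it's worth, that bandwidth figure just falls out of the bus width and transfer rate; a minimal check assuming LPDDR5-6400 (which is what ~102GB/s on a 128-bit bus implies):
Code:
# Peak theoretical bandwidth = bus width in bytes * transfer rate.
bus_width_bits = 128
transfer_rate_mtps = 6400  # LPDDR5-6400 (assumption)

bandwidth_gb_s = (bus_width_bits / 8) * transfer_rate_mtps / 1000
print(bandwidth_gb_s)  # 102.4 GB/s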
 
The context I replied to was in comparison to other "pro" style upgrades being only 2 or 3x, which I assume refers to the PS4 Pro that only had that increase on the GPU side. It's a valid comparison in that respect.

Context doesn't matter too much here, because saying just "the New 3DS is 6x stronger hardware" is simply wrong (I also think this is not the first time you've said that);
counting the whole hardware, the New 3DS could actually be said to be 2-3x stronger than the OG 3DS.

Also, the PS4 Pro didn't only have an increase on the GPU side; it also has 50% more RAM, higher memory bandwidth and a higher CPU clock compared to the PS4 Slim.
If we compare the Xbox One X vs the Xbox One, we're talking about a 4.5x stronger GPU, 4.5x higher memory bandwidth, 50% more RAM and an around 20% higher CPU clock,
so generally a bigger power difference than even New 3DS vs 3DS.
 
Context doesn't matter here; saying just "the New 3DS is 6x stronger hardware" is simply wrong (I also think this is not the first time you've said that);
counting the whole hardware, the New 3DS could actually be said to be 2-3x stronger than the OG 3DS.

Also, the PS4 Pro didn't only have an increase on the GPU side; it also has 50% more RAM, higher memory bandwidth and a higher CPU clock compared to the PS4 Slim.
That doesn't really matter when the person they quoted, with respect to that subject, called the other pro systems 2x to 3x stronger… when only one part was that and the rest were paltry increases. The 3DS had its CPU be 6x stronger; it fits the exact same context the other person was talking in, even if it's only the CPU, because the other platform also only had one thing be 2-3x stronger. How are you going to call skittzo out on this but not call out the other? Come on now.

I think full ray tracing is the next goal
I could see path tracing, but not anytime soon on consoles.

Doesn't help that AMD's RT efforts have been… mediocre at best.
 
Context doesn't matter too much here, because saying just "the New 3DS is 6x stronger hardware" is simply wrong (I also think this is not the first time you've said that);
counting the whole hardware, the New 3DS could actually be said to be 2-3x stronger than the OG 3DS.

Also, the PS4 Pro didn't only have an increase on the GPU side; it also has 50% more RAM, higher memory bandwidth and a higher CPU clock compared to the PS4 Slim.
If we compare the Xbox One X vs the Xbox One, we're talking about a 4.5x stronger GPU, 4.5x higher memory bandwidth, 50% more RAM and an around 20% higher CPU clock,
so generally a bigger power difference than even New 3DS vs 3DS.
That's kinda my point, the idea that "pro versions are only 2-3x stronger" is generally wrong, and in the only instance where it was not exactly wrong (PS4 Pro) that increase was mainly only on one side. New 3DS also had a RAM increase, and there was a debate a while back if the GPU clocks increased but I'm not sure what came of that.

Either way, the point I was making was that "pro versions are 2-3x stronger" is factually incorrect for the vast majority of actual "pro" revisions we've seen before.
 
My biggest issue with the concept of a revision in early 2023 is how does Nintendo keep up excitement for their product with a revision? You can definitely feel that the Switch "fad" is dying off. Games are selling great due to the massive install base and consoles are selling well due to enthusiasts rebuying the OLED, but the actual mainstream hype around the Switch is not as high as it used to be. If you go to online discussion forums, watch content creators, and of course just interact with real people, you'll find that there isn't the flair that there used to be. A lot of people have tuned it out. This is completely normal when a console is 5 years old; a lot of people who bought in the first two years are getting bored. And this is why the 6-7 year console generation has been the standard for such a long time. Usually about 5 years in, when people start getting tired, the console manufacturer announces a new one and people get excited for their products again.

But if we assume people are right and this is merely a revision, to extend the switch life to around 10 years, will this bring back the hype and excitement that a new console would? Will this reignite the spark in Nintendo's fanbase to excitedly start talking and speculating and raving about Nintendo stuff again? If this is just playing Switch games in better quality, I don't think so. What gets people excited are new features, things like HD rumble 2.0 or VR support, maybe an improved NSO with gamecube games in 4k or something. And of course new iterations on the biggest games. Mario Kart, 3D Mario, Animal Crossing, Smash. But if this is just an improved switch, and these games still need to run on the original console, then they're not going to be the impressive and exciting generational leap that sets the internet on fire. People would compare Mario Kart 8 and Mario Kart 9 and say, isn't this just the same game in 4k?

These console manufacturers are so dependent on "natural marketing" from people talking excitedly online or with friends and family about these devices, and I don't think a revision would give Nintendo the buzz they're rapidly losing right now.
 
My biggest issue with the concept of a revision in early 2023 is how does Nintendo keep up excitement for their product with a revision? You can definitely feel that the Switch "fad" is dying off. Games are selling great due to the massive install base and consoles are selling well due to enthusiasts rebuying the OLED, but the actual mainstream hype around the Switch is not as high as it used to be. If you go to online discussion forums, watch content creators, and of course just interact with real people, you'll find that there isn't the flair that there used to be. A lot of people have tuned it out. This is completely normal when a console is 5 years old; a lot of people who bought in the first two years are getting bored. And this is why the 6-7 year console generation has been the standard for such a long time. Usually about 5 years in, when people start getting tired, the console manufacturer announces a new one and people get excited for their products again.

But if we assume people are right and this is merely a revision, to extend the switch life to around 10 years, will this bring back the hype and excitement that a new console would? Will this reignite the spark in Nintendo's fanbase to excitedly start talking and speculating and raving about Nintendo stuff again? If this is just playing Switch games in better quality, I don't think so. What gets people excited are new features, things like HD rumble 2.0 or VR support, maybe an improved NSO with gamecube games in 4k or something. And of course new iterations on the biggest games. Mario Kart, 3D Mario, Animal Crossing, Smash. But if this is just an improved switch, and these games still need to run on the original console, then they're not going to be the impressive and exciting generational leap that sets the internet on fire. People would compare Mario Kart 8 and Mario Kart 9 and say, isn't this just the same game in 4k?

These console manufacturers are so dependent on "natural marketing" from people talking excitedly online or with friends and family about these devices, and I don't think a revision would give Nintendo the buzz they're rapidly losing right now.
The easy answer is that it's not binary. There's a whole spectrum of product placements between "revision" and "new gen" that this could fall into. New 3DS was not a simple revision because it replaced the original 3DS, which was no longer sold or produced. GBC was absolutely not a simple revision either.

The people who keep talking about "pro" and "revision" usually refer to Sony's way of doing this, which makes no sense to me when Nintendo has been doing these kinds of iterative upgrades far longer than anyone else has.
 
That doesn't really matter when the person they quoted, with respect to that subject, called the other pro systems 2x to 3x stronger… when only one part was that and the rest were paltry increases. The 3DS had its CPU be 6x stronger; it fits the exact same context the other person was talking in, even if it's only the CPU, because the other platform also only had one thing be 2-3x stronger. How are you going to call skittzo out on this but not call out the other? Come on now.

Because we actually had "Pro" models that were 2-3x stronger; the New 3DS and Xbox One X are good examples. Saying "New 3DS hardware is 6x stronger than the regular 3DS", on the other hand, is wrong.


That's kinda my point, the idea that "pro versions are only 2-3x stronger" is generally wrong, and in the only instance where it was not exactly wrong (PS4 Pro) that increase was mainly only on one side. New 3DS also had a RAM increase, and there was a debate a while back if the GPU clocks increased but I'm not sure what came of that.

Either way, the point I was making was that "pro versions are 2-3x stronger" is factually incorrect for the vast majority of actual "pro" revisions we've seen before.

Yeah, it's wrong to say "revisions have always been around 2-3x stronger" because they are usually 1-2x stronger,
even if it could be said that the New 3DS and Xbox One X really are 2-3x stronger. But you also can't really say that the New 3DS is 6x stronger in any case,
and that's what really got my attention, because saying "6x stronger hardware" makes people picture hardware that's 6x stronger overall, not just one part of it (in this case only the CPU).
 
I think the N3DS comparisons are unhelpful. Sure, technically the CPU was however many times stronger. But to the average consumer the changes in that system were almost entirely physical/external (battery, screen, sticks, stable 3D). Power-wise it got a few releases that maybe couldn't have run on the previous model, but still looked like 3DS games, better performance or unlocked 3D support in a handful of titles, and SNES emulation. Whichever side of the revision argument you're on, I think the N3DS is a terrible example of what we should expect or hope for from a new model. The N3DS equivalent of the Switch would basically be the OLED with marginally higher clocks enabling it to run Korok Forest stably and emulate more GameCube games.
 
Also, here's some actual evidence for the drop in hype.
Here's the peak on Google Trends for each year's holiday season since the Switch came out:
2017: 64
2018: 100 (Smash hype baby)
2019: 96
2020: 93
2021: 85

I would guess 2019 was probably the actual peak of excitement for the console, with Smash creating an outlier in 2018, but regardless it has been steadily dropping for the last 3 years. It's impressive the Switch has held on as well as it has, but the excitement is definitely fading.

EDIT: btw obviously the retention in hype is probably aided in huge part by the pandemic, otherwise I'd guess it would be much lower by now.
 
I think the N3DS comparisons are unhelpful. Sure, technically the CPU was however many times stronger. But to the average consumer the changes in that system were almost entirely physical/external (battery, screen, sticks, stable 3D). Power-wise it got a few releases that maybe couldn't have run on the previous model, but still looked like 3DS games, better performance or unlocked 3D support in a handful of titles, and SNES emulation. Whichever side of the revision argument you're on, I think the N3DS is a terrible example of what we should expect or hope for from a new model. The N3DS equivalent of the Switch would basically be the OLED with marginally higher clocks enabling it to run Korok Forest stably and emulate more GameCube games.

Before all these rumors and leaks about what is basically a new chip and next-gen hardware,
I always thought the Switch would get some kind of New 3DS type of upgrade: higher CPU/GPU clocks, 2GB more RAM and more internal storage.
But I guess that kind of revision would have made the most sense in 2019, basically getting something like that instead of the V2 Switch.
Almost 6 years after the Switch launched, though, basically next-gen Switch hardware makes sense in any case.
 
One thing I've noticed is that no one who suggests Drake will be a revision actually has any reason for why they would make it a revision. It's just stated as if it's common sense, when no one has ever done anything like it (a revision 6 years in, and a revision that is a full generational leap in power). The closest to an argument I see is that the switch is still selling well, but so were many consoles that got replaced, like the PS2 or the DS.
 
I think the N3DS comparisons are unhelpful. Sure, technically the CPU was however many times stronger. But to the average consumer the changes in that system were almost entirely physical/external (battery, screen, sticks, stable 3D). Power-wise it got a few releases that maybe couldn't have run on the previous model, but still looked like 3DS games, better performance or unlocked 3D support in a handful of titles, and SNES emulation. Whichever side of the revision argument you're on, I think the N3DS is a terrible example of what we should expect or hope for from a new model. The N3DS equivalent of the Switch would basically be the OLED with marginally higher clocks enabling it to run Korok Forest stably and emulate more GameCube games.
We currently don't know exactly how Drake's extra power will be utilized. It's very reasonable to assume it will be reflected in games a lot more than n3DS games were but as of now we don't really know for sure what the difference will look like to the average consumer.

So comparing the actual hardware leaps is really the best, most accurate comparison we can currently make.

One thing I've noticed is that no one who suggests Drake will be a revision actually has any reason for why they would make it a revision. It's just stated as if it's common sense, when no one has ever done anything like it (a revision 6 years in, and a revision that is a full generational leap in power). The closest to an argument I see is that the switch is still selling well, but so were many consoles that got replaced, like the PS2 or the DS.
Just so we're clear, is there anybody here actually suggesting this will be positioned purely as a revision and not anything else?
 
One thing I've noticed is that no one who suggests Drake will be a revision actually has any reason for why they would make it a revision. It's just stated as if it's common sense, when no one has ever done anything like it (a revision 6 years in, and a revision that is a full generational leap in power). The closest to an argument I see is that the switch is still selling well, but so were many consoles that got replaced, like the PS2 or the DS.

Most people agree that technically it's next gen, not only in terms of hardware but in features also,
but people disagree on how it will be positioned: some say simply as a stronger Switch, others as a full next-gen console, and some think it will be something in between.

I personally think Nintendo will position it more like a next-gen console than a simple revision, so they can market it as a next-gen Switch (basically a Switch 2) that is still part of the same platform, with current models continuing to sell and be supported for around 2-3 years after the new Switch is out.

But that type of discussion will be live at least until Nintendo officially unveils the Drake Switch. :)
 
Just so we're clear, is there anybody here actually suggesting this will be positioned purely as a revision and not anything else?
I think the main difference between people who think it's a successor and people who think it's a revision is this, from what I've gathered from the discussion here.

People who expect it to be a revision expect most exclusives to be third party games, while virtually all of Nintendo's first party games are cross generation for at least 2-3 years.

People who expect it to be a successor expect most first party games to be exclusive, with maybe a few cross gen or switch only games being published after its release, similar to the 3DS getting games like Metroid Samus Returns and Luigi's Mansion after the switch came out. But games like 3D Mario or Smash would be exclusive to the next console.

I have not heard any actually good arguments for the former, or for how exactly it benefits Nintendo in any way.
 
Because we actually had "Pro" models 2-3x stronger,
And of the recent consoles, only the PS4 Pro fits that narrative.
New 3DS and Xbox One X are good examples
The Xbox One X is literally 4x the GPU and over 5x the memory bandwidth. In what world is that comparable to the paltry increase of the PS4 Pro?
while saying "New 3DS hardware is 6x stronger than regular 3DS" is wrong.
I'm sorry, but is saying that the N3DS has hardware that is 6x stronger than the base 3DS wrong? Because last I checked, it has hardware that is 6x stronger than the 3DS.

And comparing the PS4 Pro to the N3DS and XB1X and calling that a “2-3x upgrade” is really selling it short.

I think the N3DS comparisons are unhelpful. Sure, technically the CPU was however many times stronger. But to the average consumer the changes in that system were almost entirely physical/external (battery, screen, sticks, stable 3D). Power-wise it got a few releases that maybe couldn't have run on the previous model, but still looked like 3DS games, better performance or unlocked 3D support in a handful of titles, and SNES emulation. Whichever side of the revision argument you're on, I think the N3DS is a terrible example of what we should expect or hope for from a new model. The N3DS equivalent of the Switch would basically be the OLED with marginally higher clocks enabling it to run Korok Forest stably and emulate more GameCube games.
What? This is the worst comparison to make with that.

The only apt comparison would be if Nintendo doubled the amount of RAM, doubled the number of CPU cores, and raised the CPU clocks of the Switch while they were at it. The N3DS literally had a large increase in clock frequency, but you call it marginal?
 