
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

After spending hours looking at as many mobile devices with large GPUs as I could, especially anything in the realm of 17 billion transistors, one thing became very clear. Yes, it is possible today. You can make a small circuit board with a large chip. The biggest design challenge is the cooling solution for such a large chip, because the products out there have high clocks and high TDP.
This took all my dream clocks for T239 and crushed them.
I think based on what we are seeing from the other Orin products in terms of starting clocks, combined with the realities of putting anything similar into a mobile form factor, we should prepare ourselves for very, very low clock speeds.
Nintendo will:
-keep costs down and avoid any overly expensive cooling solution
-prioritize battery life over performance
-be conservative with thermals, especially in handheld mode, as they can't risk burning kids' hands

The notebook line of GeForce 30 series chips is clocked from 713 to 1740 MHz (including boost modes) and consumes 35-150 W.

I now believe max docked clocks to be under 939 MHz, possibly as low as 765 MHz.
Handheld clocks under 384 MHz. If they are able to figure out a form of back-compat and run Maxwell GPU code on Ampere, then a mode that disables 83% of the GPU and runs the rest at 384 MHz makes sense.

That works out to under 2.9 TFLOPS docked and, given I believe half of the GPU will normally be off while in handheld, under 0.6 TFLOPS undocked.

The math:
Docked 2 * 1536 * 939 / 1,000,000 = 2.884608 TFLOPS
Undocked 2 * 768 * 384 / 1,000,000 = 0.589824 TFLOPS
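For anyone who wants to plug in their own numbers, here's the same estimate as a tiny Python sketch. The 128 CUDA cores per SM and 2 FLOPs per core per cycle (FMA) are standard Ampere figures; the clocks and the "half the GPU off in handheld" split are just my guesses from above, not confirmed specs.

Code:
# Peak FP32 throughput = 2 FLOPs/core/cycle (FMA) x active cores x clock
CORES_PER_SM = 128            # Ampere SMs carry 128 FP32 CUDA cores
FLOPS_PER_CORE_PER_CYCLE = 2  # one fused multiply-add per cycle

def tflops(sm_count, clock_mhz):
    """Peak FP32 TFLOPS for a given active SM count and clock in MHz."""
    return FLOPS_PER_CORE_PER_CYCLE * sm_count * CORES_PER_SM * clock_mhz / 1_000_000

print(tflops(12, 939))  # docked guess:   1536 cores -> ~2.88 TFLOPS
print(tflops(6, 384))   # handheld guess:  768 cores -> ~0.59 TFLOPS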
 
After spending hours looking at as many mobile devices with large GPUs as I could, especially anything in the realm of 17 billion transistors, one thing became very clear. Yes, it is possible today. You can make a small circuit board with a large chip. The biggest design challenge is the cooling solution for such a large chip, because the products out there have high clocks and high TDP.
This took all my dream clocks for T239 and crushed them.
I think based on what we are seeing from the other Orin products in terms of starting clocks, combined with the realities of putting anything similar into a mobile form factor, we should prepare ourselves for very, very low clock speeds.
Nintendo will:
-keep costs down and avoid any overly expensive cooling solution
-prioritize battery life over performance
-be conservative with thermals, especially in handheld mode, as they can't risk burning kids' hands

The notebook line of GeForce 30 series chips is clocked from 713 to 1740 MHz (including boost modes) and consumes 35-150 W.

I now believe max docked clocks to be under 939 MHz, possibly as low as 765 MHz.
Handheld clocks under 384 MHz. If they are able to figure out a form of back-compat and run Maxwell GPU code on Ampere, then a mode that disables 83% of the GPU and runs the rest at 384 MHz makes sense.

That works out to under 2.9 TFLOPS docked and, given I believe half of the GPU will normally be off while in handheld, under 0.6 TFLOPS undocked.

The math:
Docked 2 * 1536 * 939 / 1,000,000 = 2.884608 TFLOPS
Undocked 2 * 768 * 384 / 1,000,000 = 0.589824 TFLOPS
Again, binning is a bit of an unfounded idea in relation to Drake, as GA10F IS its own GPU die.
The GA106 and GA104 3060s are not bins of each other; they are separate silicon branches that happen to have an overlap where the 3060 sits (same with the rumored GA107 desktop 3050, the GA103 RTX 3070, etc.)

GA102, GA103, GA104, GA106, and GA107 are separate silicon branch chips, each formed at the highest end and then binned down to their respective GPU SKUs; the fact that there is overlap doesn't mean GA107 = binned GA106.

It's the same case with GA10B and GA10F: a separate GA10x designation means that chip is designed from scratch to that rated SM count.

So all GA10Bs start at 16 SMs, and all GA10Fs (to our knowledge) start at 12 SMs.

Then you bin down from there.

So Drake/T239/GA10F physically cannot be a bin of GA10B; on top of that, GA10F and GA10B have different GPC/TPC layouts, which makes it impossible to even consider GA10F as a binned version of GA10B.

So T234 (Orin, GA10B) and T239 (Drake, GA10F) are about as related as the Tegra X1 was to desktop Maxwell: connected, but with notable feature differentiation in the design.

So when you have an unseen/unreleased GPU die in the NVN2 API data that has its own branch off from GA10B, and considering this is a game console and T239 is customized for Nintendo, it is likely that Orin was used as a devkit at some point in the development of NVN2 and that GA10F is the actual silicon that will be used in the Drake SoC.

And with the 12 SM count in the NVN2 API, that means 12 SMs, i.e. 1536 CUDA cores and all that, is the amount of processing power we have for Drake, bin or otherwise.

And the problem with a Lite using a binned T239 is that it would require a whole new optimization pass, as the cut-down silicon would need to be clocked differently to run portable-mode Drake games the way the full T239 die could.

It's unnecessary busywork.
Also, Ampere seems to bottom out at around 0.002 TFLOPS per CUDA core (extrapolating from the 3050 Laptop), so a 1 GHz Drake at 8nm with 12 SMs would actually be upwards of 3 TFLOPS of Ampere (based on the 3050 Laptop's own near-1 GHz max clock having a TFLOPS-per-CUDA-core value in that range).
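Spelled out as a quick sketch (the 2 FLOPs per core per cycle FMA rate is the standard Ampere figure; the 1 GHz clock is just the assumption from the 3050 Laptop comparison, not a leaked number):

Code:
# Per-core FP32 throughput at a given clock, assuming 2 FLOPs/cycle (FMA)
def tflops_per_core(clock_ghz):
    return 2 * clock_ghz / 1000   # GFLOPS per core / 1000 = TFLOPS per core

per_core = tflops_per_core(1.0)   # ~0.002 TFLOPS per CUDA core at 1 GHz
print(per_core * 1536)            # 12 SMs x 128 cores at 1 GHz -> ~3.07 TFLOPS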

And this is not even considering the likelihood that Drake could be on a 5nm node, due to how big it would be on 8nm even with GA10F not being a binned-down GA10B, as I mentioned in the post above.

EDIT:
Also, they would not turn off part of the GPU: NVN2 mentions NOTHING about turning off SMs in portable mode, only one GPU config, 12 SMs, so that means the 1536 CUDA cores are on at all times.
 
After spending hours looking at as many mobile devices with large GPUs as I could, especially anything in the realm of 17 billion transistors, one thing became very clear. Yes, it is possible today. You can make a small circuit board with a large chip. The biggest design challenge is the cooling solution for such a large chip, because the products out there have high clocks and high TDP.
Do you mind linking what device you are referring to here? Thanks.
 
After spending hours looking at as many mobile devices with large GPUs as I could, especially anything in the realm of 17 billion transistors, one thing became very clear. Yes, it is possible today. You can make a small circuit board with a large chip. The biggest design challenge is the cooling solution for such a large chip, because the products out there have high clocks and high TDP.
The good news is that Nvidia said during GTC 2021 (Spring 2021) that Orin has 21 billion transistors, not 17 billion transistors, which Nvidia initially said during GTC China 2019.
 
The chip designation doesn't change when it gets binned. A GA104 is still a GA104 regardless of how much of it is enabled.

This is how we know Drake isn't a binned version of Orin. The chip has a different designation (GA10F instead of GA10B).
We don't yet know exactly what is different between GA10F and GA10B.
Given that the biggest problem with putting the full Orin T234 into a Switch seems to be that the die size is too large, and that a large percentage of the die is taken up by the GPU, the most obvious change inside the GPU would be a reduction in cache size.
So, for example, the L1 and L2 caches could be cut down to save space on the die.
This change alone would likely require a different designation in code to differentiate the GPU in T234 from the GPU in T239. It is possible that outside this smaller cache, the GPUs' SM designs are identical and interchangeable.
 
We don't yet know exactly what is different between GA10F and GA10B.
Given that the biggest problem with putting the full Orin T234 into a Switch seems to be that the die size is too large, and that a large percentage of the die is taken up by the GPU, the most obvious change inside the GPU would be a reduction in cache size.
So, for example, the L1 and L2 caches could be cut down to save space on the die.
This change alone would likely require a different designation in code to differentiate the GPU in T234 from the GPU in T239. It is possible that outside this smaller cache, the GPUs' SM designs are identical and interchangeable.
But we do, though!
The GPC/TPC structure is different; that is a massive change.
 
We don't yet know exactly what is different between GA10F and GA10B.
Given that the biggest problem with putting the full Orin T234 into a Switch seems to be that the die size is too large, and that a large percentage of the die is taken up by the GPU, the most obvious change inside the GPU would be a reduction in cache size.
So, for example, the L1 and L2 caches could be cut down to save space on the die.
This change alone would likely require a different designation in code to differentiate the GPU in T234 from the GPU in T239. It is possible that outside this smaller cache, the GPUs' SM designs are identical and interchangeable.
We do know several apparent differences between the configurations of GA10B and GA10F. GA10B has 4 TPCs per GPC while GA10F has 6. GA10B has 2:1 efficiency between FP16 and FP32 instructions, while GA10F (like desktop Ampere) has 1:1, which may indicate GA10F is using desktop Ampere's tensor cores while GA10B is using newer ones to accelerate FP16. And while these aren't necessarily differences based on the GPU silicon, GA10F is also the only Ampere chip to support a certain type of clock gating called FLCG, and it's using CUDA SM version 8.8 compared to GA10B's 8.7.
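To make that layout difference concrete, here's a small sketch of how the SM and CUDA core counts fall out of those configurations. The 2 SMs per TPC and 128 CUDA cores per SM are standard Ampere; the per-chip GPC counts (2 for GA10B, 1 for GA10F) are how the leaked configs are generally read, so treat them as assumptions.

Code:
SMS_PER_TPC = 2      # Ampere: two SMs per TPC
CORES_PER_SM = 128   # Ampere: 128 FP32 CUDA cores per SM

def gpu_config(gpcs, tpcs_per_gpc):
    sms = gpcs * tpcs_per_gpc * SMS_PER_TPC
    return sms, sms * CORES_PER_SM

print(gpu_config(2, 4))  # GA10B (Orin):  (16, 2048) -> 16 SMs, 2048 CUDA cores
print(gpu_config(1, 6))  # GA10F (Drake): (12, 1536) -> 12 SMs, 1536 CUDA cores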
 
We don't yet know exactly what is different between GA10F and GA10B.
Given that the biggest problem with putting the full Orin T234 into a Switch seems to be that the die size is too large, and that a large percentage of the die is taken up by the GPU, the most obvious change inside the GPU would be a reduction in cache size.
So, for example, the L1 and L2 caches could be cut down to save space on the die.
This change alone would likely require a different designation in code to differentiate the GPU in T234 from the GPU in T239. It is possible that outside this smaller cache, the GPUs' SM designs are identical and interchangeable.
We don't know all of the differences, but we have some pretty good ideas of a lot of them from the Nvidia leak. This includes seemingly some sort of difference (albeit probably not a huge one) in the shader architecture compared to other Ampere GPUs, including Orin.

Regardless, any change necessitates a new chip, and at that point intentionally including dead silicon is a liability when space is at such a premium.
 
Didn't we only find out Switch's specs a few months before release? I believe Eurogamer revealed them in the December or January prior to launch. If it's coming this year, the blowout would be sometime this summer, around June, no?
 
Didn't we only find out Switch's specs a few months before release? I believe Eurogamer revealed them in the December or January prior to launch. If it's coming this year, the blowout would be sometime this summer, around June, no?
Probably.

Could be later than that, or even after release.

A die shot maybe some months after release.
 
Do you mind linking what device you are referring to here? Thanks.
I linked to the Wikipedia article listing all the laptop 30 series chips.
From there you can search for teardowns of tablets and ultra-slim laptops that ship with one of these chips. Some of the teardowns are only in video form. From these devices' specs (thickness, form factor) and the photos and videos of the teardowns, you can see the crazy cooling solutions that they employ: multiple fans, heat spreaders, long copper heatsinks, etc. The devices are also not cheap.

One example I cited before was the Razer Blade 14, which is sold as an "ultra-thin" 14" gaming laptop that is only 0.66"/16.8 mm thick.
https://www.razer.com/gaming-laptops/razer-blade-14
There is also the GeForce RTX 3080 Ti Laptop chip, with a die size of 496 mm², launched in February 2022.

Nintendo is not going to price in a large, expensive cooling solution like these. As a result, they are going to have to run much lower clocks.

When we first saw the 12 SMs and 1536 CUDA cores, it seemed like there was some shock in this thread, along the lines of "that is too much for Nintendo."
But maybe part of that is because we forgot to factor in that Nintendo likes to run their devices cooler and at lower clocks.

"Fewer cores at crazy high clocks" vs. "lots of cores at crazy low clocks": which one seems more consistent with Nintendo's history?
 
I linked to the Wikipedia article listing all the laptop 30 series chips.
From there you can search for teardowns of tablets and ultra-slim laptops that ship with one of these chips. Some of the teardowns are only in video form. From these devices' specs (thickness, form factor) and the photos and videos of the teardowns, you can see the crazy cooling solutions that they employ: multiple fans, heat spreaders, long copper heatsinks, etc. The devices are also not cheap.

One example I cited before was the Razer Blade 14, which is sold as an "ultra-thin" 14" gaming laptop that is only 0.66"/16.8 mm thick.
https://www.razer.com/gaming-laptops/razer-blade-14
There is also the GeForce RTX 3080 Ti Laptop chip, with a die size of 496 mm², launched in February 2022.

Nintendo is not going to price in a large, expensive cooling solution like these. As a result, they are going to have to run much lower clocks.

When we first saw the 12 SMs and 1536 CUDA cores, it seemed like there was some shock in this thread, along the lines of "that is too much for Nintendo."
But maybe part of that is because we forgot to factor in that Nintendo likes to run their devices cooler and at lower clocks.

"Fewer cores at crazy high clocks" vs. "lots of cores at crazy low clocks": which one seems more consistent with Nintendo's history?
Can you please link a mobile device, the size of the Switch or close to it, that has an SoC with 17B transistors?


There's no point entertaining this any other way if we don't address this part here.

Not to mention that Orin has 21B transistors, not 17B.
 
Can you please link a mobile device, the size of the Switch or close to it, that has an SoC with 17B transistors?


There's no point entertaining this any other way if we don't address this part here.

Not to mention that Orin has 21B transistors, not 17B.
As I have said in a couple of my follow-up posts, the size of the T234 die is clearly the biggest reason against it showing up in a portable Switch console.
I continue to acknowledge this.
The point I was trying to make now is that the cooling solutions used in these tablets and laptops are beyond what Nintendo would ever be willing to pay for.
This has caused me to lower what I believe are the highest clocks we should expect to see from the T239.
 
As I have said in a couple of my follow-up posts, the size of the T234 die is clearly the biggest reason against it showing up in a portable Switch console.
I continue to acknowledge this.
The point I was trying to make now is that the cooling solutions used in these tablets and laptops are beyond what Nintendo would ever be willing to pay for.
This has caused me to lower what I believe are the highest clocks we should expect to see from the T239.
But the premise here is inherently flawed, as T239 is removed from T234,

structure-wise and even seemingly core-wise with respect to the component that gives Orin double-rate FP16.

So using T234 for die size and heat estimations doesn't give you usable data to extrapolate from.
 
As I have said in a couple of my follow-up posts, the size of the T234 die is clearly the biggest reason against it showing up in a portable Switch console.
Let’s tackle this, one by one.

If Nintendo were to use a binned Orin, just like you said, let me make it clear that binning an Orin chip does not make it smaller.

So what does this mean? Nintendo paid the heavy upfront cost to make a smaller chip specifically for them; whatever happens to the binned versions of those chips is up to Nvidia, whether they want to make a new Shield or shove it into a product for teaching.

I've said this multiple times, but you keep coming from the premise that because the company is cheap, they are not willing to pay to make a very customized version of the chip for their own needs: a high upfront cost for a chip that will depreciate in price over the years.

If they took the cheaper route and used a stock Orin, it would be cheaper initially, but over time it would be a much more expensive product. The chip would literally be bigger than what the Xbox Series X and the PlayStation 5 have inside them.

Nintendo would need to pay for more wafers, and each wafer can only make a certain number of chips. If they go the highly custom route, they spend less on wafers to get the same number of chips as Orin would need.

Wafers are precious silicon.

Orin has several features that a game console does not really need.

If they were to disable these features, they would still be left with a gargantuan chip that is bigger than any other console chip that has ever existed (probably).

NVN2 being viable on Orin does not in any way indicate that the final product will use Orin; it indicates that NVN2 is compatible with the Orin SoC. Why? For the same reason that the PS4 Pro was used as the PS5 devkit for some time before they even got final or close-to-final silicon. That didn't mean the PS4 Pro was literally the PS5.

Orin can be used for preliminary devkits before the actual silicon gets into the final devkit that developers would be working on.


With this, there are a few possibilities to consider: Samsung 8nm, Samsung 5nm, TSMC 6nm and TSMC 5nm. 8nm would mean a large chip but good-enough yields. The 5nm and 6nm processes would mean smaller chips, higher yield rates, probably cheaper to make, and chips that can be clocked higher without needing some elaborate cooling solution.
 
@Q-True Your methodology of looking at laptops has a flaw in it, in that laptops are not in fact very power-efficient devices (compared to truly small devices), and things like PC-level RAM and CPU components in particular can really ratchet up the power draw, not so much the iGPU included in them. Mobile devices, on the other hand, tend to have much cheaper RAM and CPUs power-wise. Have a look at this chip: it is a 12 CU (AMD's equivalent of an SM) iGPU clocked at a whopping 2.4 GHz at max, and the TDP is... 15 W. This is quite a limited power budget for something running at 2.4 GHz, and clocking something like this at 1.2 GHz (which several people predict) should get you noticeably below 10 W for the GPU, which would likely be an acceptable power draw in docked mode. Note that this 15 W TDP is the same as the Jetson Tegra X1's, despite the X1 GPU's far lower maximum clock.

Of course, if it is on Samsung 8nm, then that might not be achievable (the part I referred to is on TSMC 7nm), so it remains a possibility that it is clocked lower than 1.2 GHz. I don't really think we need to worry about ultra low clocks, though.
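To show why halving the clock buys back so much more than half the power, here's a rough sketch using the usual dynamic-power relation (power scales with voltage squared times frequency, and voltage itself drops as you target lower clocks). The voltages below are made-up illustrative numbers, not real DVFS points for any chip.

Code:
# Rough dynamic power model: P ~ V^2 * f (arbitrary units, for comparison only)
def relative_power(freq_ghz, volts):
    return volts ** 2 * freq_ghz

high = relative_power(2.4, 0.95)  # hypothetical iGPU near its max clock
low  = relative_power(1.2, 0.70)  # same silicon at half the clock, lower voltage

print(low / high)  # ~0.27 -> roughly a quarter of the GPU power at half the clock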
 
It is much appreciated that you take some of your precious time to educate a few forum dwellers. Seriously, it's awesome.
Whatever project you work on, I hope the end product satisfies you. Also, I think quite a few of us would like to know more about it!

No problem.

So looking at my profilers, I can see that both DLSS and DLDSR don't have entirely fixed costs; they're relatively fixed in the sense that there are reliable averages around which I can optimize predictably, but there's just enough fluctuation that I don't feel comfortable sharing these numbers knowing that they could be taken at face value and out of context. Resolution also impacts the render costs, so even on the same GPU, the render cost will depend on what resolution is being targeted.

What I can say is that the render cost for DLSS on the RTX 3000 series cards is much less than what NVIDIA reports on the RTX 2000 series cards:

[Image: DLSS2_5-scaled.jpg (chart of DLSS render cost per frame by GPU and resolution)]


(less than 1 ms @ 4k on some RTX 3000 series cards)

but really, that's not what's driving up the computational costs and lowering the framerate for super-sampled images. It's the fact that higher resolutions = higher-quality assets being rendered (that's just how some games have their LODs set up). There's also a ceiling where the non-DLSS render time for each frame will be so fast (resulting in 300+ fps) that the DLSS overhead will be the primary factor holding back the framerate, which is why you won't see games that use DLSS reach those high frame rates. That being said, I think DLSS VRAM usage will be more of a concern than DLSS render time. The cost isn't negligible at higher resolutions.
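A quick way to see that ceiling in numbers (the millisecond costs below are placeholders for illustration, not measurements from my profilers):

Code:
def fps(render_ms, dlss_ms=0.0):
    return 1000.0 / (render_ms + dlss_ms)

# At a heavy 16 ms base frame, a ~1 ms DLSS pass barely matters:
print(fps(16.0), fps(16.0, 1.0))  # ~62.5 fps vs ~58.8 fps

# At a tiny 2 ms base frame, that same 1 ms becomes the limiting factor:
print(fps(2.0), fps(2.0, 1.0))    # 500 fps vs ~333 fps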

Also, I personally think that DLSS Quality + DSR 4x looks better than DLSS Quality + DLDSR 2.25x in many cases simply because the input resolution is higher for the former and often produces a cleaner result. If a game has DLSS support, I typically choose DSR 4X in combination with DLSS. If a game doesn't have DLSS support, I tend to choose the DLDSR 2.25x option. That being said, if I was playing portably, for performance sake, I would choose DLSS Quality + DLDSR 2.25x (or an equivalent to DLDSR 2.25x if the portable system didn't have driver support for DLDSR).
 
Any reason why Switch 2 would need to turn off 10SM for BC? Wouldn’t it be better to run it with everything on for best performance/IQ?
Concern trolling is the only reason.

Maxwell instructions and Ampere instructions are different anyway, so it's likely Nvidia will be working with Nintendo to get some sort of translation layer built to get backwards compatibility working. No reason they wouldn't have something in there to accommodate more CUDA cores, more RAM and more CPU resources too.
 
I have quite a few questions in regard to brainchild's latest answer to my question and anyone can feel free to answer, if they feel like it:

No problem.

So looking at my profilers, I can see that both DLSS and DLDSR don't have entirely fixed costs; they're relatively fixed in the sense that there are reliable averages around which I can optimize predictably, but there's just enough fluctuation that I don't feel comfortable sharing these numbers knowing that they could be taken at face value and out of context.
If I understood you right, the calculation time for DLSS can vary from scene to scene. That seems logical if, for example, we have a series of consecutive frames in which not much happens (like still images). That would be because calculating the motion vectors will be more trivial in that case. Correct?
Resolution also impacts the render costs, so even on the same GPU, the render cost will depend on what resolution is being targeted.

What I can say is that the render cost for DLSS on the RTX 3000 series cards is much less than what NVIDIA reports on the RTX 2000 series cards:

[Image: DLSS2_5-scaled.jpg (chart of DLSS render cost per frame by GPU and resolution)]


(less than 1 ms @ 4k on some RTX 3000 series cards)
Thank you. I wonder how low we can go with Ampere. Alex's example was made using an RTX 2060 (not on the graph above).
but really, that's not what's driving up the computational costs and lowering the framerate for super-sampled images. It's the fact that higher resolutions = higher-quality assets being rendered (that's just how some games have their LODs set up).
Pardon me, but are we talking here about DLSS, DSR/DLDSR, or a combination of both?
That being said, I think DLSS VRAM usage will be more of a concern than DLSS render time. The cost isn't negligible at higher resolutions.
Yeah, this is yet another thing we have no information on. I wonder how the economics will play out. But it is definitely one more concern to add to the pile.
Also, I personally think that DLSS Quality + DSR 4x looks better than DLSS Quality + DLDSR 2.25x in many cases simply because the input resolution is higher for the former and often produces a cleaner result.
Did you mean maybe "DLSS Performance + DLDSR 2.25x" for the latter?
If a game has DLSS support, I typically choose DSR 4X in combination with DLSS. If a game doesn't have DLSS support, I tend to choose the DLDSR 2.25x option. That being said, if I was playing portably, for performance sake, I would choose DLSS Quality + DLDSR 2.25x (or an equivalent to DLDSR 2.25x if the portable system didn't have driver support for DLDSR).
Awesome. It's not every day that you get into the thought process of a developer. Thanks for sharing your input. If we are tempted to throw out a crude one-size-fits-all estimate of how much time a combination of DLSS + DLDSR takes compared to an untouched frame, what could that be? I am talking about a 720p -> 1080p scenario for a device that consumes 100 W (like an undervolted RTX 3060). Between 1 and 4 ms? Above that?

Sorry for being so specific. That's my last question about this topic, I promise.
 
Any reason why Switch 2 would need to turn off 10SM for BC? Wouldn’t it be better to run it with everything on for best performance/IQ?
Switch games aren't made to adjust to new hardware automatically. You could put in a 100 TF graphics card and most of the games would look the same or slightly better without a patch.

The best you're going to get without a patch is the game hitting the max dynamic resolution and the target frame rate all the time. And they almost certainly can achieve that with 4 A78 cores and 2 SMs, so anything more is just wasting battery.

Of course, Nintendo/Nvidia could do some patches themselves like Microsoft does, but that's not something to take for granted, especially if we're talking about the entire library.
 
After spending hours looking at as many mobile devices with large GPUs as I could, especially anything in the realm of 17 billion transistors, one thing became very clear. Yes, it is possible today. You can make a small circuit board with a large chip. The biggest design challenge is the cooling solution for such a large chip, because the products out there have high clocks and high TDP.
This took all my dream clocks for T239 and crushed them.
I think based on what we are seeing from the other Orin products in terms of starting clocks, combined with the realities of putting anything similar into a mobile form factor, we should prepare ourselves for very, very low clock speeds.
Nintendo will:
-keep costs down and avoid any overly expensive cooling solution
-prioritize battery life over performance
-be conservative with thermals, especially in handheld mode, as they can't risk burning kids' hands

The notebook line of GeForce 30 series chips is clocked from 713 to 1740 MHz (including boost modes) and consumes 35-150 W.

I now believe max docked clocks to be under 939 MHz, possibly as low as 765 MHz.
Handheld clocks under 384 MHz. If they are able to figure out a form of back-compat and run Maxwell GPU code on Ampere, then a mode that disables 83% of the GPU and runs the rest at 384 MHz makes sense.

That works out to under 2.9 TFLOPS docked and, given I believe half of the GPU will normally be off while in handheld, under 0.6 TFLOPS undocked.

The math:
Docked 2 * 1536 * 939 / 1,000,000 = 2.884608 TFLOPS
Undocked 2 * 768 * 384 / 1,000,000 = 0.589824 TFLOPS
At the risk of sounding rude, you are being a tad delusional here.
Even if you believe the chip is coming out later this year, which I don't, it will still be about 7 and a half years newer than the Switch's chip, which was made in 2015. I realize TFLOPS is not a perfect comparison of performance, but do you really believe the handheld TFLOPS is only going to rise by about 2.5x vs. the Switch in 7 and a half years? To give you a comparison, I bought my laptop 2 years ago and the new stuff at the same price is now 3x faster; that's 2 years, and we're talking about 7 and a half. I think even the low-end expectation here is more like 1.2 TFLOPS undocked.
 
When we look at the leaked chip, it's clearly too large to fit into the Switch, and it would likely have serious issues with power draw and heat.
Now an unbiased person would see this and simply think: okay, well, it isn't gonna be on the 8nm process then. But if it isn't on the 8nm process then it's probably not coming out this year, and since we have to pretend that it is, we get all this wacky analysis of how it's gonna disable half the SMs or whatever.

Just look at it rationally, what exactly makes this thing look like it's releasing in 2022?
-Nintendo's sales are at record highs, they have no need for a new console (and yes, this is a new console. The PS4 Pro was 2.5x stronger; this appears to be about 12-15x)
-Nintendo has repeatedly shown BotW 2 clearly running on old hardware. If this were going to be a launch title for their next console, they would have wanted to show it first running on that device.
-There is no way Nintendo could control leaks of the software coming to it this close to launch. Even their first party games would be leaking, but especially third parties. We would be hearing of a slew of third party ports and collections coming to the new console, yet nothing.
-Nintendo would not need to run DLC for Mario Kart 8 through the end of 2023, if they could just release a Mario Kart for their new console.
-We would be hearing about mass orders of parts for this thing, screens for example. It's hard to hide the fact you're launching a new hybrid console when you're the only company making hybrid consoles.
-A game console's software release slate always slows down heavily before the next console comes out, so that a large slate of releases can land on the next console in year 1, yet this is Nintendo's busiest year of releases yet. Look at ANY of Nintendo's other consoles and their last year before the successor came out. This release strategy would make no sense if they intended to sell us a new console later this year. They would want to save all these games to release when the new console comes out, to hype it with. Even if a lot of the games would end up cross-gen, you'd still want to launch the new console with them.

But nope, it simply has to come out this year, despite all evidence to the contrary. Just like it had to come out last year, and the year before that. Obviously the insiders at places like Bloomberg were not wrong when they said it was releasing in previous years. Nintendo's plans are just very fluid and they don't see a need to release it until demand starts falling.
 
At the risk of sounding rude, you are being a tad delusional here.
Even if you believe the chip is coming out later this year, which I don't, it will still be about 7 and a half years newer than the Switch's chip, which was made in 2015. I realize TFLOPS is not a perfect comparison of performance, but do you really believe the handheld TFLOPS is only going to rise by about 2.5x vs. the Switch in 7 and a half years? To give you a comparison, I bought my laptop 2 years ago and the new stuff at the same price is now 3x faster; that's 2 years, and we're talking about 7 and a half. I think even the low-end expectation here is more like 1.2 TFLOPS undocked.
Sorry to butt in, but I think I can help prevent some misunderstandings.
There are a few reasons why we collectively believe that the jump in performance could be somewhat moderate.

First, Q-True is talking about the successor's GPU and only that. The new Switch will presumably be equipped with more RAM, a better CPU and more internal memory, which collectively will make it perform faster than whatever multiple the GPU performance alone is increased by. Laptop graphics can be three times faster than their counterparts from two years ago, but not within the same price bracket, I'm afraid.

That brings us to the second point: the next Switch cannot be much pricier than its predecessor. You can't sell 100 million machines at 600 USD. Sony tried and failed. Thus, Nintendo will make do with whatever technology is available within a certain budget.

That leads us to the conclusion. Are there technologies out there that can perform two and a half times faster than the Switch in the same form factor? Yes, there are. There were even solutions back at the Switch's launch (TX2). However, even with the recent leaks, it takes some amount of optimism to reach a factor of 10 in handheld mode. So a factor of two and a half is a prudent estimate of what we can expect. The reason? Power consumption and battery life.

After all, we don't want to overhype ourselves and eventually be disappointed when the real specs are revealed. That would be dumb.
 
Sorry to butt in, but I think I can help prevent some misunderstandings.
There are a few reasons why we collectively believe that the jump in performance could be somewhat moderate.

First, Q-True is talking about the successor's GPU and only that. The new Switch will presumably be equipped with more RAM, a better CPU and more internal memory, which collectively will make it perform faster than whatever multiple the GPU performance alone is increased by. Laptop graphics can be three times faster than their counterparts from two years ago, but not within the same price bracket, I'm afraid.

That brings us to the second point: the next Switch cannot be much pricier than its predecessor. You can't sell 100 million machines at 600 USD. Sony tried and failed. Thus, Nintendo will make do with whatever technology is available within a certain budget.

That leads us to the conclusion. Are there technologies out there that can perform two and a half times faster than the Switch in the same form factor? Yes, there are. There were even solutions back at the Switch's launch (TX2). However, even with the recent leaks, it takes some amount of optimism to reach a factor of 10 in handheld mode. So a factor of two and a half is a prudent estimate of what we can expect. The reason? Power consumption and battery life.

After all, we don't want to overhype ourselves and eventually be disappointed when the real specs are revealed. That would be dumb.
Where did I say that we would get 10x more raw power in handheld mode? Like I said, I think the low end is about 1.2 TFLOPS, and my expectation would be somewhere in the 1.6-1.8 range. I am absolutely not predicting 10x more in portable while also having DLSS and RT cores added in.

Also, I hate this mindset where we have to ignore reality to "temper expectations." Yeah, technically speaking, things can always be worse than expected. Nintendo could literally just keep using Mariko on the next console. They probably won't, but they could. But I'm not setting my expectations based on some arbitrary bad scenario that could occur; I'm basing them on technological developments in the industry. To give a comparison of where the industry is at right now, the Adreno 660 is 1.72 TFLOPS. I'm sure high-end mobile processors will be around 2.5 TFLOPS by the time the next console comes out in late 2023 to early 2024. The Switch will likely not reach this level simply because it is using more power on DLSS and RT, but 1.6-1.7 TFLOPS is by all means a reasonable expectation to have.
 
Super Switch still could launch this year.
It could, but I think it would be more likely to see it alongside Zelda's launch.
That worked wonders last time, and they know they wouldn't be able to ship enough for a holiday season (they won't ship enough for March or April either, but it will be less catastrophic).
That's just my two cents, but I think it makes sense.
 
Where did I say that we would get 10x more raw power in handheld mode? Like I said, I think the low end is about 1.2 TFLOPS, and my expectation would be somewhere in the 1.6-1.8 range. I am absolutely not predicting 10x more in portable while also having DLSS and RT cores added in.

Also, I hate this mindset where we have to ignore reality to "temper expectations." Yeah, technically speaking, things can always be worse than expected. Nintendo could literally just keep using Mariko on the next console. They probably won't, but they could. But I'm not setting my expectations based on some arbitrary bad scenario that could occur; I'm basing them on technological developments in the industry. To give a comparison of where the industry is at right now, the Adreno 660 is 1.72 TFLOPS. I'm sure high-end mobile processors will be around 2.5 TFLOPS by the time the next console comes out in late 2023 to early 2024. The Switch will likely not reach this level simply because it is using more power on DLSS and RT, but 1.6-1.7 TFLOPS is by all means a reasonable expectation to have.
For some reason I remembered that the Switch in handheld mode has 196 GFLOPS, but the sources say it has 393. I am slightly confused now because this is way more than a PS3 (230 GFLOPS) and I have always assumed both machines were comparable.
 
It could launch this year, but dropping Zelda alone in spring... :unsure:
Not the most Nintendo move ever, when even the OLED got its Metroid pairing.
We'll see anyway...
 
It could launch this year, but dropping Zelda alone in spring... :unsure:
Not the most Nintendo move ever, when even the OLED got its Metroid pairing.
We'll see anyway...
Could Pokémon not be the pairing? It wouldn't really be a showcase of the tech per se, but a big name to launch alongside it to sell the system.
 
For some reason I remembered that the Switch in handheld mode has 196 GFLOPS, but the sources say it has 393. I am slightly confused now because this is way more than a PS3 (230 GFLOPS) and I have always assumed both machines were comparable.
393 is in docked mode
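For reference, both numbers fall straight out of the original Switch's known Tegra X1 GPU: 256 Maxwell CUDA cores at the standard 384 MHz portable / 768 MHz docked GPU clocks.

Code:
CUDA_CORES = 256  # Tegra X1 (Maxwell) in the original Switch

def gflops(clock_mhz):
    return 2 * CUDA_CORES * clock_mhz / 1000  # 2 FLOPs/core/cycle (FMA)

print(gflops(384))  # handheld: ~196.6 GFLOPS
print(gflops(768))  # docked:   ~393.2 GFLOPS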
 
There was some speculation that a 3D Mario will launch in cross-promotion with the Mario movie in December. If Drake is this year, that could be a launch title.

After listening to z0mbl3's podcast, what I understand is that there's not a super huge window to delay a hardware release if they've reserved production lines. So BotW2's delay doesn't tell me much. If this comes in 2022 then we'll still have a few months to enjoy a 4K BotW1 patch. :p
 
There was some speculation that a 3D Mario will launch in cross-promotion with the Mario movie in December. If Drake is this year, that could be a launch title.

After listening to z0mbl3's podcast, what I understand is that there's not a super huge window to delay a hardware release if they've reserved production lines. So BotW2's delay doesn't tell me much. If this comes in 2022 then we'll still have a few months to enjoy a 4K BotW1 patch. :p
They could still start production at the same time but just delay the release so that they have a larger number of units at launch.
 
It already made no sense, but the BotW 2 delay effectively deconfirms a 2022 release of Drake.
It would make absolutely no sense to launch this game and then launch a new console 6 months later.

Unlike 2022, I don't think a spring 2023 launch would be crazy from Nintendo, but I still think fall 2023 is more likely. But I hope we can all give up on the 2022 nonsense now.
 
There was some speculation that a 3D Mario will launch in cross-promotion with the Mario movie in December. If Drake is this year, that could be a launch title.

After listening to z0mbl3's podcast, what I understand is that there's not a super huge window to delay a hardware release if they've reserved production lines. So BotW2's delay doesn't tell me much. If this comes in 2022 then we'll still have a few months to enjoy a 4K BotW1 patch. :p
I totally agree that Nintendo wants to launch a Mario game with the movie; however, it makes much more sense for it to be 2D Mario. For one, the people watching the movie will largely be the casual market, who prefer and have nostalgia for those games. And 2D games are more likely to tie into the story and characters of the movie as well, while 3D Mario games tend to have more unique settings.
 