
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Another poster quoted one of these two posts to answer my question about how to calculate Drake's performance. Out of curiosity, how did you calculate the entries in the second list? It looks like you took the entries in the first one and multiplied them by roughly 3. Where does this 3 come from?
It’s “SHADER COUNT * 2 * FREQUENCY”, the 2 being that each shader core can handle 2 floating-point operations per clock.

so I’ll do number 3 as an example.

We know that Drake has 12 SMs, right? And per SM it has 128 shaders or cores. So, 12*128= 1536.

1536*2 gives us 3072

3072 * 230.4 MHz = 707,788.8 MFLOPS.

For the next part, you divide by 1,000,000 and you get 0.7077888 TFLOPS. If you divide by 1,000 you get 707.7888 GFLOPS.



The user you quoted simply performed this same process for the other results.
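To make the arithmetic above easy to replay, here's a minimal Python sketch of the same peak-FLOPS formula (my own illustration, not anyone's official calculator); the 12 SMs, 128 cores per SM, and 230.4 MHz figures are the ones used above.

```python
# Minimal sketch of the peak-FLOPS arithmetic walked through above.
# The factor of 2 counts one fused multiply-add as two floating-point ops.

def peak_flops(sm_count: int, cores_per_sm: int, clock_hz: float) -> float:
    """Max theoretical FP32 throughput: shaders * 2 FLOPs/cycle * clock."""
    return sm_count * cores_per_sm * 2 * clock_hz

drake_handheld = peak_flops(12, 128, 230.4e6)
print(f"{drake_handheld / 1e12:.7f} TFLOPS")  # 0.7077888 TFLOPS
print(f"{drake_handheld / 1e9:.4f} GFLOPS")   # 707.7888 GFLOPS
```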
 
It's a basic fused multiply-accumulate (FMA) shader core FLOP calculation.

You take the number of cores, multiply it by the clock speed, and since they can perform 2 FLOPs a cycle (multiply and accumulate), multiply that by 2. It works well with modern shader cores, though it won't work right with older architectures like VLIW. Although, on second thought, with how terrible VLIW occupancy was, it's probably way more accurate than their max theoretical formula gives.

Here's a web GPU FLOPS calculator you can mess around with.

 
What are we missing then? There's ray tracing hardware on the chip, and the API lets you define RT shaders and pass acceleration structures to them. Nate's only comments that I remember are that RT was "limited" and I thought that's what you were referring to, and questioning whether there was something lacking in the RT implementation that we don't know about.

I think feet is pointing out the difference between a limited use of ray tracing, like for a particular reflective surface, vs. full-scene ray tracing.
 
I was always open to a dock-only type of revision, but not with a completely new next-gen chip with new features. I don't think Nintendo would make that big a change and investment only for a dock-only revision.

This new hardware is most likely a hybrid, but it's possible that not all GPU SMs are active in handheld mode and that DLSS won't be enabled in handheld mode.
 
Thanks for the time you took; it was all literally written out already. I would have understood it if I had read it a second time :-/

Thanks a ton. What I get from that is that:

1) At a given frequency, a CUDA core has a fixed computation capability, right? So a CUDA core in a GTX 750 and one in an RTX 3070 will deliver the same performance at the same frequency. Also, the performance of the chip scales linearly with their number. Correct?
2) A smaller node means being able to cram more cores onto a die. A transistor built on a node half the size, for example, will take a quarter of the space (since it's half as tall and half as wide). Ignoring the fact that node names are rarely tied to physical sizes nowadays, that would also be correct, right?

If the assumptions above are right, one parameter to take into account is how the electric power varies as a function of the above. Does power increase linearly with frequency? Linearly with the number of CUDA cores? Does it follow an inverse square law with regard to node size?

I am asking this because this probably isn't the last leak that will happen regarding the successor's specs. Having a formula that links all the variables we are discussing could help us reach a consensus on the credibility of a leak much faster.
 
Yeah, this is kind of my thinking now. Even at the lower end of possible clock speeds, this should be powerful enough to run almost any Switch game at close to 4K natively, which makes the tensor cores basically redundant if it's primarily designed as a "play Switch games at 4K" machine. If they wanted to design a chip around playing Switch games at higher resolutions, we would have ended up with one of two designs:

1. A straight upgrade of the TX1, with a much bigger Maxwell/Pascal GPU (i.e. 8+ SMs). This is the simplest way to get maximal compatibility.
2. A smaller Ampere-based GPU (4-6 SMs) with Orin's double-rate tensor cores to get sufficient DLSS performance. This is a more efficient approach, but requires more work on the software side to leverage.

Instead we're getting a GPU that's bigger than it needs to be to brute-force 4K resolutions, but also uses the latest Ampere architecture and includes both tensor cores and RT cores. It strikes me as massive overkill for a machine designed primarily as a "Switch games but in 4K" console, and Nintendo definitely don't have a history of that kind of overkill.

Of course that doesn't necessarily mean they'll call it the Switch 2 and make a big deal of dropping support for the original models out of the gate, but it looks to me like this is (in function if not in name) the successor to the Switch. They'll likely have around 2 years of cross-gen support, which is the norm now in any case, and I'd expect them to still sell the Switch Lite at the very least for some time. There's probably going to be a more gradual transition between generations than we've seen before from Nintendo, but to me at least this looks like a new generation.

I really think it comes down mostly to what Nvidia is offering at the time and incentivizing.

In 2019, it made sense to work with something Orin-based. That's where Nvidia was.

You talk about just doing Ampere with 8 SMs or whatever…when we had pages earlier in this thread discussing how 12 SMs at the most conservative clocks by Nintendo would barely allow 4K/60fps gaming, and how the portable-mode threshold might be too low to properly use the tensor/RT cores.

The argument that this hardware is “overkill” to play Switch games at 4K/60fps when docked…I don’t think that’s necessarily definitive yet.

I think they chose 12 SMs so they can run the bare minimum clocks on this portable system and still have enough power to render Switch games at native resolutions and 60fps. And maybe have some left over to utilize the tensor/RT cores for some extra stuff.

That’s it.

They aren’t going to be shooting for the moon with this model. It will end up efficiently doing what it is designed to do…play the Switch library with better graphics/performance.

It won’t be overkill. It is promising something to Switch gaming that will make your average Lite gamer really not care that much about what it does.

The next hardware upgrade AFTER this Drake model…that’s when Nintendo will start to shift its development focus away from the Mariko models.

Yeah, I will say Nintendo's first-party stuff will likely be "built for base Switch, then enhanced for Drake."

This actually gives them a ton of upward room to enhance things as they want, since Drake would run most OG Switch games at 4K/60fps docked natively, because of how big the CPU boost is and the 12 SM GPU.

Yep, this is how I imagine Nintendo using this new model.

I could see them making 4K Switch exclusives that utilize the tensor cores for AI or AR or something that people have theorized about earlier. That’s absolutely possible.

But they aren’t going to focus on exclusives for the new model 'cause they think their games can ONLY be played at 4K/60fps now…lol

What does this mean? BotW 2 and Prime 4 are being developed for the current Switch. If they're released on the next model it will be as ports with merely the same kinds of enhancement as BotW saw on Switch (resolution bump, higher draw distance, etc.).

Right. I’m simply saying this is how Nintendo will approach most of its games in the next 4-5 years. Because they will treat the new model more as a mid-gen revision than a successor.
 

This is getting into what the discussion a couple of pages ago was about.

What this calculation gives is the max theoretical performance. It assumes everything else is 100% perfect.

But you got the general gist of things; that is why each successive GPU has more cores than the last.

However, in order to function at that peak capability, those cores need support: they need enough memory to do their work, and they need to be kept busy by the CPU scheduling their jobs. Having to wait on memory, or sitting around unused, causes a GPU to miss that perfect number, which is why some products don't perform as well as the paper says they should, and others seem to punch above their weight.

Power draw does not increase linearly with clock frequency, but rather on a curve that requires more and more power for less and less increase in clock/performance. There was some really good discussion on this a few pages back from.... Was it redd?

Yup, the smaller the node, the more transistors you can fit in the same amount of space. You can also clock them higher with lower power draw.
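As a toy illustration of that curve (not measured data; every constant below is invented purely to show the shape): dynamic power is commonly modeled as P ≈ C·V²·f, and since voltage has to rise to sustain higher clocks, power grows much faster than frequency.

```python
# Toy illustration of why power draw rises faster than clock speed.
# Dynamic power is commonly modeled as P ~ C * V^2 * f, and the voltage V
# itself must be raised to sustain higher frequencies, so power grows
# super-linearly. All constants here are invented for illustration only.

def dynamic_power(freq_mhz: float, base_freq: float = 300.0,
                  base_v: float = 0.6, v_per_mhz: float = 0.0005,
                  c: float = 1.0) -> float:
    """Toy model: voltage rises linearly with frequency above a base point."""
    v = base_v + v_per_mhz * max(freq_mhz - base_freq, 0.0)
    return c * v ** 2 * freq_mhz

for f in (300, 600, 900, 1200):
    rel = dynamic_power(f) / dynamic_power(300)
    print(f"{f} MHz -> ~{rel:.1f}x the power")
# In this toy model, 4x the clock ends up costing roughly 12x the power.
```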
 
What does Drake being as performant as it is have to do with price? Price doesn’t always equal performance. We had this song and dance with the Switch price reveal when the PS4 was literally the same price.

It would be a device that doesn’t have a screen, doesn’t have a battery, and doesn’t have a dock. It could look similar to the Switch and have rails for Joy-Cons. Have a fan, the SoC and the RAM (plus sensors). USB ports. An expandable storage slot. 64GB of internal storage (most Nintendo games can fit in this). And an HDMI port.

But the kicker is that it’s digital-only. That can work, sold at a loss, in the $200 price range. New owners have to buy games anyway, on top of the Online subscription being a $20-50 fee (Family being $80), and these wouldn’t be produced at the same level as the Switch, just like how the PS5 DE is not made in large quantities.

They are additions to the platform ecosystem, for the areas it doesn’t reach. It is not meant to be the new platform like the Switch is, but an addition in the same way as the Switch Lite.

Like, $200-250, or $270.

Even then, this isn’t a real idea for them to actually go out of their way to do.
I would eat crow and close my account if they priced it in your range at launch. Nintendo being Nintendo, high off the Switch's success, selling $60 Wii and Wii U ports, and not price-dropping the regular Switch model when the OLED came out.. should tell you this isn't going to happen. Especially not for a new state-of-the-art Nvidia mobile chip. If I were Nintendo I sure as hell wouldn't sell it at $200. Hell, I would be hard pressed to sell it under $350 tbqh.. especially with the OLED at $350 (or $300).

Let's just agree to disagree.


If Nintendo decided to release a docked version, maybe the best time to release it would be alongside the hybrid, or a few months before. Basically have it with the docked specs of the Switch 2 but with double the storage, and make the home version $50 cheaper (or even equal). If the Switch 2 hybrid is 128 GB at $400, the docked-only version gets 256 GB at $350.
 
Do you have the link to that/those post(s)? I am interested in seeing how the poster(s) reached that conclusion. I tend to agree with you/them, since a GPU with 6 SMs running at 768 MHz would translate to a total of 1.18 TFLOPS. That's about what an Xbox One can achieve (assuming the FLOPS are comparable between AMD and Nvidia), and it doesn't do 4K.

I haven't touched on the subject of power draw since I know too little about it. An Ampere chip will sip less power since it is built on a smaller node, which could turn the argument on its head, I suppose.
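For reference, that 1.18 TFLOPS figure checks out with the same peak-FLOPS formula from earlier in the thread:

```python
# 6 SMs x 128 cores x 2 FLOPs/cycle x 768 MHz, in TFLOPS
print(6 * 128 * 2 * 768e6 / 1e12)  # ~1.18 TFLOPS
```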

That is the best explanation I have gotten on the subject. Most of us still remember the 'secret sauce' expression people threw around semi-seriously in the WUST era. Thank you.
 
This doesn't make sense to me. This system will be so powerful that it would be able to run a lot of Switch games at 4K/60 even without DLSS (assuming they patch them). It will also require a patch for every single game, and at least a translation layer even if nothing in the game is being boosted. This is a far bigger upgrade on every level than any "pro"-style system in the past. Generally with a pro system you want to boost games while preserving as much compatibility as possible, but there's honestly nothing to indicate that kind of thinking here. That's the entire reason we're talking about things like translation layers and patches: out of the box this system wouldn't even be able to play Switch games natively.
 
What exactly is the reason for that? If NVN2 is the successor API to NVN, then I assume that Nvidia has a built-in backwards-compatibility layer/function/whatever in place to ensure that. Am I misrepresenting what an API should be capable of doing? Also, Ampere and Maxwell are architectures devised by the same company, and Maxwell should be (feature-wise) a strict subset of Ampere. So perfect emulation should be possible, I assume?
 
I'm not the expert here, but from what I understand Switch games have pre-compiled shaders that won't automatically work on a newer architecture. It's not an insurmountable problem or anything, but work will have to be done on it, and it probably won't be 100% accurate in every case.
 
Backwards compatibility for consoles is never trivial, because games will use any quirk of the hardware to try to make the code run a tiny bit faster. And for the Switch, because some of the drivers are baked into every single piece of software, some dataminers have speculated it can't be done on anything newer than Maxwell. However, I'm 100% sure Nvidia will find a way.

The easiest thing Nintendo could have done for a pro-style upgrade is a more powerful Maxwell-based GPU: add a die shrink and a few more SMs to the TX1. However, this is not that. Not even close.
 
So, in general, APIs are not a 'shroud' that isolates software from the hardware and thus ensures that code which runs on one machine runs equally well on newer devices (if said newer devices are designed in a way that facilitates the transition, that is)?

While typing that, I realize also that NVN is not the only option for coding on the Switch, since Vulkan and DirectX(?) are probably options too. That makes the problem much harder for the hardware/API maker, I imagine. Is that so? If yes, I imagine Nintendo will want to push developers to use NVN2 exclusively, to facilitate backwards compatibility with subsequent hardware in the Switch family.
 
Based on current rumors and leaks, not only will this hardware be next-gen in every way compared to other "Pro" models or mid-gen revisions (so it's not just about raw power but also about features like DLSS..), it should also have some big exclusive games right away at launch. Over time there will be more and more exclusives, and sooner or later games will stop being released for the current Switch models and will only be released for this next-gen Switch and its revisions.

So I really don't agree with the point that Nintendo will treat it like a simple mid-gen revision.
 
MS didn’t have that and still doesn’t. You could argue Flight Sim, but that was a PC port that would be physically impossible to bring to last gen.

In this economy and supply situation, a long cross-gen period is inevitable imo.

Edit: I could see a big third-party exclusive happening that would be too compromised / couldn’t run on the old Switch. But nothing from Nintendo.
 

Yeah, my point is that Sony and MS didn't have that with their mid-gen revisions, and this is much more than just a mid-gen revision in any case; it will be much more comparable to PS5 vs. PS4 than PS4 Pro vs. PS4, with the difference that Nintendo will keep selling some of the current Switch models for around 2 years after this next-gen Switch launches.

At least 2 years of cross-gen games (at least in Nintendo's case) is expected in any case, but I don't see 4-5 years of cross-gen games like some people here are saying.

Also, Nate said that he knows of a couple of 3rd-party games being developed just for this new Switch hardware.
 

This is pretty much my thoughts on it as well. I think we'll still see some stuff go cross-gen for the Switch iterations here or there, if there are no complications running on the original hardware, even past that, but I expect general software to sunset within 2-3 years of release.
 
APIs are interfaces that let two different components talk to each other, and they also let you replace one component with another as long as it offers the same interface.

So, from my understanding, you're mostly correct, except that NVN is an interface between the game and the driver, not the hardware. The problem is that the driver is compiled into the game, meaning that any NVN usage is converted to Maxwell instructions in the binary.

If you have the source code, you can just compile it again with the Ampere driver, and the NVN usage will be converted to Ampere instructions this time. If you don't, then you need to translate the Maxwell instructions to Ampere instructions, or emulate Maxwell.
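A schematic way to picture those two paths - purely illustrative Python, with hypothetical names that don't correspond to any real Nvidia/NVN tooling:

```python
# Schematic sketch of the two backwards-compatibility paths described
# above. Purely illustrative: these names are hypothetical and do not
# correspond to any real Nvidia/NVN tooling.
from dataclasses import dataclass

@dataclass
class LegacyTitle:
    name: str
    has_source: bool  # do we hold the original source / shader code?

def prepare_for_ampere(title: LegacyTitle) -> str:
    if title.has_source:
        # Path 1: recompile against the Ampere driver, so every NVN call
        # is lowered to Ampere instructions ahead of time.
        return f"{title.name}: recompiled with the Ampere driver"
    # Path 2: the shipped binary embeds Maxwell instructions, so they must
    # be translated to Ampere (or Maxwell emulated) at runtime instead.
    return f"{title.name}: Maxwell -> Ampere translation at runtime"

for t in (LegacyTitle("first-party port", True),
          LegacyTitle("unpatched retail game", False)):
    print(prepare_for_ampere(t))
```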
 
Digital Foundry video is garbage. "Because Nintendo" logic in all but name. Nintendo is NOT averse to the idea of having high-performance consoles. If they're able to achieve it while being both what they deem affordable and (ideally) profitable, they'll try to make it. That's how consoles should be built, if we're keeping it 100, but we are where we are. The Switch was literally the best they could have at the time and on that schedule, but they believe Nvidia will make them something unambitious? This isn't the first time DF has shown flawed reasoning or straight-up tech illiteracy, but I guess that's a rant for another day. What trash. That video couldn't even explain WHY they thought it couldn't be what they can literally see, and they now have many subscribers and paying supporters!?
 
We're likely talking a 6x GPU upgrade. If it's 5nm (unlikely IMO), probably more, as they'd likely raise the clocks a bit; on 8nm I'd assume they'll keep the same clocks. Possibly less than 6x for handheld mode.

They did a 6x CPU upgrade for the new 3DS. Giant leaps for revisions isn't new to Nintendo.
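For what it's worth, here's one way to arrive at a ~6x figure with the same peak-FLOPS formula used earlier in the thread, assuming (my assumption, not confirmed) that Drake keeps the TX1's 768 MHz docked clock; the TX1's 256 CUDA cores (2 SMs × 128) are the original Switch's known configuration:

```python
# Rough source of a ~6x docked-GPU multiplier, using the thread's own
# peak-FLOPS formula. Assumes (not confirmed) that Drake keeps the TX1's
# 768 MHz docked clock; the TX1 has 256 CUDA cores (2 SMs x 128).

tx1_flops   = 256 * 2 * 768e6    # original Switch docked: ~0.39 TFLOPS
drake_flops = 1536 * 2 * 768e6   # 12 SMs x 128 cores:     ~2.36 TFLOPS

print(drake_flops / tx1_flops)   # 6.0
```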
 
Tired of hearing “because Nintendo”.
 
What I'm missing is the specifics of the RT implementation in the tested software. It's one thing to throw out full per-pixel, full-screen RT effects and kill the battery; what I want to know is whether something like quarter-resolution RT was implemented. We're seeing RT on other consoles that I think is too high-fidelity (it's OK for a graphics mode, though), and I think it could be the same in that RT test on Drake.
 
The N3DS had a massive CPU and RAM upgrade, yes, but it still used the exact same (pretty outdated) GPU, which I imagine made 3DS compatibility fairly easy. With this hardware we're talking about a massive upgrade on pretty much every level: CPU, GPU, RAM, probably storage too. Not even mentioning major new hardware like the RT and tensor cores, which will enable ray tracing and DLSS.
 
Agree. It was weird. I was expecting a dive into the leak but they only mentioned it in passing and then didn’t add anything to the conversation. It was strange.
 
It's a lot easier nowadays to make things compatible via software, so they can afford to use a new architecture along with all of the upgrades that come with it. It's probably a lot cheaper to do so.

Back in the 3DS days it may not have been cheaper to develop a software solution for BC, so they were stuck with the same general architecture. Same with Wii to Wii U.
 
I unsubscribed from Digital Foundry a while ago. Their videos judging the performance of existing games are well done, and you get a pretty accurate estimation of frame rates and resolutions, but that doesn't aid my decision in buying games any more than other reviews that simply state "clean visuals and smooth frame rates."

Their hardware speculation is a little lackluster and sometimes ill-informed for a tech channel.

Anyway do we have any insider knowledge on when the reveal of the release date of the podcast is going to take place?
 
My logic for why I'm not worried about backwards compatibility is as follows:
  • Nvidia provide both the SoC hardware and related software (drivers, tools, etc.) for Switch
  • Nvidia want to keep Nintendo around as a customer
  • Nvidia's biggest advantage in keeping Nintendo as a customer for future devices is that they can provide hardware and software that's backwards compatible with the Switch
  • Therefore, Nvidia has a very strong incentive to provide Nintendo with a hardware and software solution that provides a very high level of backwards compatibility
That's not to say that implementing such a solution is trivial, as both the embedded drivers and the pre-compiled shaders are issues that need to be solved, but they're definitely not insurmountable, and Nvidia is literally in the best possible position to solve them, with an incredibly strong incentive to do so. They designed both the Maxwell and Ampere architectures and ISAs, so they are intimately familiar with any points of incompatibility between them. They designed the NVN API and the driver stack used in the Switch OS. They've managed compatibility of their own software across GPU architectures for decades.

The idea that Nvidia would just dump this new chip on Nintendo with no solution for backwards compatibility just doesn't make sense to me. They may as well just give Nintendo AMD's phone number.


For both the DSi and New 3DS, though, we had big leaps in certain places, but on the same architecture as the original. If they were taking the same approach as with those, then we'd have A57 CPU cores and maybe an 8 SM Maxwell GPU. And if there were a change in philosophy around revisions, and Nintendo thought "well, Ampere exists, so we might as well use it and make use of DLSS", then we'd have a much smaller GPU, as they could leverage DLSS to get Switch games running at close-enough-to-4K resolution. To me it seems like too much for a simple revision.
 
While it's true that DF's speculation was quite poor, and "Because Nintendo" indeed, I can understand why they're wary of jumping on the hype train despite having some factual evidence that Nintendo's next SoC might be a high performer. They got burned badly by the 3DS.
Even if Nintendo and Nvidia couldn't (for some unknown reason) achieve a BC layer for Switch games, wouldn't this SoC (here I'm going by the fact that Nintendo is using a 12 SM GPU, so they're spending a lot on the GPU and there's no reason to think they might cut back in other areas) be capable of emulating the Switch? So let's say they weren't able to achieve BC through normal means. Wouldn't they be able to ship an optimized Switch emulator for past Switch games, just like Yuzu/Ryujinx do on PC? I legitimately don't understand why some people (like SciresM and MVG) think Nintendo will need to do some ungodly amount of work, or need to have a TX1 on die.
And like, what you've said is literally why nobody should fear there's no BC. Nvidia and Nintendo know what they're doing and how to circumvent any problems that may arise.
 
Where did they say that? You mean the 59-minute mark? They specifically refer to it as a big chip and thus expect it to operate at low clocks to ensure viable battery life. Alex adds that one concern he has is the memory config.

I don't think they pulled a 'Nintendo' card. They are just making the safe assumption that the specs must be balanced to ensure that the concept is viable.
 
Probably due to the dubious nature of its origins.
Yep. Rich is a very well-connected individual and probably doesn't want to rock the boat too much. He's said in the past that when there's something for him to latch onto - like the Switch leak in 2016 - he'll try to add to it. In that case DF had information about the clocks and technical specs.

Also, John is the resident Nintendo guy and he wasn't there for this DF Direct. Those are unrehearsed, so I don't hold them to the same standards as regular DF content. Their Switch DLSS video, for example, was very well-researched and informative.

DF has been PS5/XSX town for the past year, and I totally understand that that's where the discourse is. As soon as there's something to report on Switch 4K they will, and until that time I'll follow them like I follow this thread: trying to learn stuff that I'll never really understand or use in real life! xD
 
Well yeah, I don't expect this to be a "simple" revision. This is more like an iterative successor that's not treated like a traditional successor.

And regarding the architecture change, as I noted above it's a lot, lot easier in this case to do a software solution for BC, mainly for the reasons you noted. I doubt this was possible for the New 3DS or Wii U; they had to retain the same general microarchitecture because software solutions were simply too difficult or expensive.


I just don't see how they would market this as a new generation of Switch with the lineup they've announced for this year. I know it will be BC, but announcing so many high-profile games for your older model and then announcing a new generation later that launches with the biggest titles doesn't seem like good marketing. You severely diminish the selling capacity of the older models, especially if we're talking about a $400+ price.
 
I don't remember this. Can you give a brief description of what happened? That Nintendo was looking into the Tegra 2, and that there may even have been development kits using the hardware in the wild, is well known. But aside from that, I cannot recall any rumors stating that the 3DS was going to be significantly more powerful than what we got.
 
I believe that the one big case of Nintendo under-delivering with hardware is the Wii U. It was clear that the DS was aiming to be a portable N64+, the 3DS a PSP+, and the Wii was never stated to be more than a GC+. With the Wii U, there were consistent reports of a last-minute downclock of the GPU, and sticking with dead-end tech like PowerPC really gimped the CPU. For some reason they had a target of 30W for the console.
 
DF was the one who leaked/confirmed that the 3DS would use Tegra hardware. And that was the assumption until rumors floated that the Nintendo and Nvidia deal had fallen through, and DMP's PICA200 was confirmed. They suffered some mocking, and it was a small hit to their reputation (as they always double- or triple-check their info). They even had to research what happened and show a development kit with Tegra hardware (or was it references to Tegra hardware? I don't remember well).
TBF, the 3DS was messy. Nintendo delayed the hardware from 2010 to 2011 and was caught up right when mobile SoCs started to explode in performance (each quarter had a more performant SoC than the last). They could have gotten a much better solution, and even then they kept changing things until the final product (added 64 MB more RAM, went from a 133 MHz GPU to 268 MHz, increased the FCRAM to 6 MB, etc.).
Oh yeah, that’s also true. I remember there were talks that development kits were downgraded, although Shin'en said they weren't. It's possible they hit some thermal issues (I don't know how, given the machine only used ~30W), but it's true that COD: BO2 went from promising 1080p60 to 720p60, same for Assassin's Creed 3 (1080p -> 720p) and New Super Mario Bros. U (1080p60 -> 720p60).
 
It's coming together: the Online app updates, improved online, the Expansion Pass. Next may be a return of Miiverse or some community feature, but I think they will wait for the launch of the new system before we see the bigger OS improvements. I also wonder what features the new Switch OS will have that can help sell the system, besides very fast game loading.
 
F O L D E R S
 

I always thought that with the Wii U they were trying to make a Switch, but had to give up when they realized the hardware wasn't there yet, and were caught with their pants down, forced to release a weird machine with a concept even they didn't really believe in (asynchronous gameplay).
 
Nintendoland one of the GOATS, fool!
 