
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I agree, choosing battery over clocks is a decision that locks them to that lesser performance profile for the foreseeable future, as it did with Erista.

It was a necessary decision for that chipset, but it did hamstring Nintendo down the line.
In Erista's case, it was purely down to thermal throttling.

In terms of Mariko, it was likely to prevent segmenting the install base with "Mariko enhanced" titles, which would have meant devs accounting for its own set of power profiles. By using the "increased battery life" angle, the newer Mariko models get a clear, marketable improvement instead.

In the case of Drake, a new chipset with new hardware features (DLSS, potentially RT cores?) would offer a nice clean break, without it having to be "hamstrung by Erista's clocks". That does bring us back into discussing how backwards compatibility will be handled, and the merits of having a revision versus a successor and vice versa...
 
So the rumors of a Direct featuring Twilight Princess and Wind Waker have got me wondering… they used WW as a test bed for seeing what they could do with the Wii U’s power. What if they brought it back and played with how else they could push it with the Switch 2’s grunt? With that art style they could easily just crank it to 4K 60 and be done, but what if they used it to experiment with RT? They adjusted the lighting for the Wii U port, so there is precedent.
 
In Erista's case, it was purely down to thermal throttling.

In terms of Mariko, it was likely to prevent segmenting the install base with "Mariko enhanced" titles, which would have meant devs accounting for its own set of power profiles. By using the "increased battery life" angle, the newer Mariko models get a clear, marketable improvement instead.

In the case of Drake, a new chipset with new hardware features (DLSS, potentially RT cores?) would offer a nice clean break, without it having to be "hamstrung by Erista's clocks". That does bring us back into discussing how backwards compatibility will be handled, and the merits of having a revision versus a successor and vice versa...
Yeah, this is what I was getting at, though less eloquently. Decisions made due to node limitations on Erista had significant knock-on effects on how Nintendo could treat Mariko. Drake has no such limitations, and it doesn't make a lot of sense to assume that they're once again going to maximize battery life at the expense of power.
 
So the rumors of a Direct featuring Twilight Princess and Wind Waker have got me wondering… they used WW as a test bed for seeing what they could do with the Wii U’s power. What if they brought it back and played with how else they could push it with the Switch 2’s grunt? With that art style they could easily just crank it to 4K 60 and be done, but what if they used it to experiment with RT? They adjusted the lighting for the Wii U port, so there is precedent.
They were, to be concrete, trying to find out where to go next with Zelda art styles.
It's not a universally good tech demo, and it does not make sense to do it again, since they know what art style they are using, and could easily just push BotW 2's art style, since that one would benefit more from higher power.

If the ports get made, I hope for and expect 4K60... at the same time, I don't see them making such deep changes as adding RT.
 
So am I the only one that thinks that they're going to down clock the heck out of this thing way more than people are expecting? I remember the rumors of the Switch, and then the real clocks were released and people were pissed. I'm expecting the same deal again. XD
I fully understand where you are coming from with this, because we all know traditional Nintendo. But coming into Switch, Nintendo had a lot of reason to try and play it safe, as they were trying to stabilize from the Wii U failure while also combining their handheld & console development. They didn’t know how consumers felt about them anymore, which could make them reasonably apprehensive about investing heavily in super strong hardware, because the quickest way to sink would be asking consumers to buy a $399+ portable device after what was the Wii U.

That’s why, from my perspective, I think the Switch (OG, Lite, OLED) was sent out as a soft “reset button”, and a statement that the Nintendo of old is gone. When you look at the console, while it lacks BC, they managed to create a platform that is capable of playing games from every piece of hardware they’ve ever released, from GBA to 3DS / NES to Wii U. It has a touchscreen, motion controls, and portability.

I think looking at Nintendo in the way we’ve all grown accustomed to has become outdated. They’ve shown at every turn the effort and desire to shed their traditional approaches. And it’s safe to say the Switch is a phenomenon as a brand, and there’s no reason for them to be hesitant about investing more aggressively in the platform from a technical standpoint. Especially since they likely want to be sure the majority of those 120+ million owners follow over.
 
So am I the only one that thinks that they're going to down clock the heck out of this thing way more than people are expecting? I remember the rumors of the Switch, and then the real clocks were released and people were pissed. I'm expecting the same deal again. XD
Problem with this is that simply making the GPU smaller serves the same purpose and would cost much less. Using such a big GPU actually means that clocks are within expectations. Also, Switch clocks aren't much less than the TX1's spec... Shield TV had a clock of 998MHz, Switch has a top clock of 921MHz (according to Digital Foundry), and even portably it is 460MHz, which gives 235GFLOPs; expectations for a Nintendo handheld were 128GFLOPs - 157GFLOPs, so it ended up much higher. Some were hopeful that they would use 16nm and go with Pascal over TX1, and while Mariko was available to Nintendo if they paid the extra price, they still ended up with a pretty powerful device given its size... They are raising expectations well beyond what people thought with Drake: expectations were literally 768 CUDA cores, some had it less, some had it more, and it's literally twice that. Then you have DLSS on top, which doubles GPU performance, and finally spatial upscaling can be used in conjunction because of how fast DLSS works, giving even more performance... This thing will be a lot closer to PS5 than Switch was to PS4... Not to say it will be close, but it should be able to trade image quality for graphical settings on par with PS5.

In fact, a process node change will have a bigger impact on the CPU than the GPU because of upscalers.
 
So am I the only one that thinks that they're going to down clock the heck out of this thing way more than people are expecting? I remember the rumors of the Switch, and then the real clocks were released and people were pissed. I'm expecting the same deal again. XD
I think the realistic minimum we are going to see for clocks is the original switch clocks. Simply because of the size of the GPU they are using.

The clocks and number of shaders used in the original Switch were likely chosen as the optimal clock speeds to get the best perf per watt at the performance the original chip targeted. That optimal clock speed for perf per watt increases as node sizes get smaller, so where 307MHz for the GPU might have been optimal on 20nm Maxwell, it is likely higher on 8nm Ampere.

Meaning that running 1536 shader cores at anything less than that optimal clock is a waste of silicon, as using less silicon and clocking higher becomes more efficient. Just for the sake of easy math, let's say you have a chip that's set at its optimal frequency for perf per watt, that frequency is 600MHz, and it's a 768 core part. Its performance is 921GFLOPS. Now let's take a 1536 core part and clock it at 300MHz: we also get 921GFLOPS. Only, because of how the power efficiency curve works, the larger chip draws more power, and you get half the number of chips per wafer because it's larger. There are some advantages, you can have a bigger delta between handheld and docked clocks because the handheld floor is so low. But I would say you can probably do this with more of a middle ground between the two and use something like 1024 cores at 400MHz.

My expectation if it's 8nm is the standard switch boost clock for portable, about 460mhz and 900mhz in docked. This gives us:

Portable - 1.4TFLOPs
Docked - 2.75TFLOPs.

I think they could go as far as 1GHz; there are plenty of laptop Ampere GPUs running a constant 1300MHz clock. I expect this setup to give us roughly OG Switch battery life.

If it's TSMC 5nm I'm really not sure what to expect tbh, maybe slightly higher portable clocks but a much higher boost clock to give us a larger delta between docked and portable to facilitate 720p - 4k. Portable 550mhz, docked 1.3ghz.

Portable - 1.7TFLOPS.
Docked - 4 Tflops.
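If anyone wants to sanity-check those numbers, here's a rough sketch of the arithmetic, assuming the usual 2 FP32 ops per CUDA core per clock; the core counts and clocks are just the speculative scenarios above, not confirmed specs:

```python
# Rough FP32 throughput: cores * 2 FLOPs per clock * clock rate.
# Core counts and clocks are the speculative scenarios from this post,
# not confirmed Drake specs.

def tflops(cuda_cores: int, clock_mhz: float) -> float:
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

# 768 cores @ 600MHz and 1536 cores @ 300MHz land on the same number,
# which is the "waste of silicon below the optimal clock" point above.
print(tflops(768, 600))    # ~0.92 TFLOPS
print(tflops(1536, 300))   # ~0.92 TFLOPS

# 8nm guesses (Switch-like boost clocks)
print(tflops(1536, 460))   # ~1.41 TFLOPS portable
print(tflops(1536, 900))   # ~2.76 TFLOPS docked

# TSMC 5nm guesses
print(tflops(1536, 550))   # ~1.69 TFLOPS portable
print(tflops(1536, 1300))  # ~3.99 TFLOPS docked
```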
 
Don’t know about y’all, but it really feels like the last stretch. All of Nintendo’s recent software uses some sort of upscaler: Switch Sports, Xenoblade 3, and now Splatoon 3.

Surely they know their devs are struggling and inventing new ways just to make games run on this thing. The devs are reaching the limits of the hardware.
 
I wonder if this is one of the reasons Qualcomm opposed Nvidia acquiring Arm.

I suppose the question to ask isn't "Will Nvidia and Nintendo switch from Arm to RISC-V?", but rather "When will Nvidia and Nintendo switch from Arm to RISC-V?", assuming Nintendo continues working with Nvidia two decades from when Nintendo launched the Nintendo Switch. And also: "Can Arm's lawsuit with Qualcomm and Nuvia cause companies to shift from using Arm's ISA and/or IPs to using RISC-V's ISA, making Arm suffer the same fate as x86-64 of becoming more and more irrelevant, but more quickly than anticipated?"
 
It's actually gonna be hilarious how a hybrid handheld is going to have better ray tracing capability and performance than the Xbox Series X and PS5 mainly due to AMD sucking at RT (especially RDNA 2) and Nvidia having the tensor cores and better software for RT.
 
It's actually gonna be hilarious how a hybrid handheld is going to have better ray tracing capability and performance than the Xbox Series X and PS5 mainly due to AMD sucking at RT (especially RDNA 2) and Nvidia having the tensor cores and better software for RT.
Better RT relative to its profile? Yes. But not necessarily better RT in practice than those two. They’ll be able to brute force it.

Granted, we have to wait and see how that actually plays out and this is just speaking in theoreticals.


But, it’ll be better than the Series S no doubt lol
 
I think the realistic minimum we are going to see for clocks is the original switch clocks. Simply because of the size of the GPU they are using.

The clocks and number of shaders used in the original Switch were likely chosen as the optimal clock speeds to get the best perf per watt at the performance the original chip targeted. That optimal clock speed for perf per watt increases as node sizes get smaller, so where 307MHz for the GPU might have been optimal on 20nm Maxwell, it is likely higher on 8nm Ampere.

Meaning that running 1536 shader cores at anything less than that optimal clock is a waste of silicon, as using less silicon and clocking higher becomes more efficient. Just for the sake of easy math, let's say you have a chip that's set at its optimal frequency for perf per watt, that frequency is 600MHz, and it's a 768 core part. Its performance is 921GFLOPS. Now let's take a 1536 core part and clock it at 300MHz: we also get 921GFLOPS. Only, because of how the power efficiency curve works, the larger chip draws more power, and you get half the number of chips per wafer because it's larger. There are some advantages, you can have a bigger delta between handheld and docked clocks because the handheld floor is so low. But I would say you can probably do this with more of a middle ground between the two and use something like 1024 cores at 400MHz.

My expectation if it's 8nm is the standard switch boost clock for portable, about 460mhz and 900mhz in docked. This gives us:

Portable - 1.4TFLOPs
Docked - 2.75TFLOPs.

I think they could go as far as 1GHz; there are plenty of laptop Ampere GPUs running a constant 1300MHz clock. I expect this setup to give us roughly OG Switch battery life.

If it's TSMC 5nm I'm really not sure what to expect tbh, maybe slightly higher portable clocks but a much higher boost clock to give us a larger delta between docked and portable to facilitate 720p - 4k. Portable 550mhz, docked 1.3ghz.

Portable - 1.7TFLOPS.
Docked - 4 Tflops.
2 portable / 4 docked seems the most likely ballpark according to the peeps in here that know their nm / clocks / calculations. That should give at least three hours of battery life in handheld mode. DLSS is a huge game changer due to the fact that a lot of the other consoles’ teraflops are used up simply rendering the huge number of pixels needed for 1440p/4K.

Just for the record, all of my developer friends and the cowboy I got the most recent info from shook their heads and laughed when asked about Drake’s teraflop number. I guess it doesn’t even come into their thinking when developing a game. They get the engine running on the hardware, push visuals, then strip out what they don’t need until they hit their framerate target. Then a final optimisation push to see what (if anything) visually can be added back in and whether the framerate can be improved. Everything is a race against time because their deadlines are impossible from the start.

It really does seem like a thankless, impossible industry to be in unless you’re a PR person but I’m sure they face their own unique challenges like bad boys leaking dev kit information 😝
 
RDNA2 Ray Tracing is subpar compared to the latest Ray Tracing cores in the Ampere cards. AMD needs much more bandwidth and ram just to try and compete.

An RX 6600 XT, which trades blows with the RTX 3060 despite less RAM and a smaller memory bus + 32MB Infinity Cache, has RT performance at RTX 3050 levels.

I suspect that with the Series S having even less cache and RAM, Drake should be able to compete with it.
 
This post was about image upscaling, I stated it multiple times. This post was about AI temporal upscaling + spatial upscaling and how, even if Drake ends up less performant than the XBSS, it could actually outperform it, thanks to being able to render at a lower resolution and output a better final image with these technologies working together... Like if Drake's GPU is ~1GHz for ~3TFLOPs, it could beat out the 4TFLOPs XBSS thanks to rendering at lower resolutions and outputting the same or better final image.
Uh okay, this reply was also about image upscaling? DLSS is all well and good, but I don’t think NIS or other spatial upscaling should be considered when assessing relative performance of different platforms because 1) it cannot reconstruct high frequency detail and 2) it’s platform agnostic because it gets no performance advantage from tensor cores. It’s not equivalent to rendering at a higher resolution or temporal upscaling.
 
We know Zelda BotW on the Switch runs at 900p (docked) and 30 fps and occupies 13.4 GB of HD space (on a 16 GB cartridge, I presume).

If we presume BotW could run on the next Switch at 4K resolution (docked) thanks to DLSS:
- how much additional space on the HD would be needed for all the 4K assets for BotW?
- what would an estimated average physical cartridge size be for a 4K game from Nintendo?

I would presume the HD of the next Switch would be at least 2 to 3 times bigger than that presumed average game size.
I don't think 4K has much to do with it. If Switch games were already hitting some sort of 1080p maximum or ideal, sure, you'd expect a certain increase if trying to match that at 4K. But they're not. Even if Drake was permanently limited to 720p we'd still expect much higher resolution assets because it's still going to look a lot better. But yeah, if the system has 2-3x as much RAM and even more difference than that in processing capabilities, it makes sense that the best looking games would be using several times as much space on assets, too.
I feel like people are forgetting that this still has to be a good consumer product for the mass market. I understand that this is an enthusiast forum filled with people that don’t mind spending 4000 hours in a week just to complete a game and find every secret that it has, even 100% it or hell, 100% it 4 times over, etc., but most people are not those people. And a lot of people still do use the Switch as a portable device. Despite those super low clock speeds, the Switch still has games that get just two hours of battery life. Let that sink in.
I feel like you're forgetting that this device with a minimum two hour battery life launched and... was extremely popular among the masses! The first 2.5 years of Switch's life aren't remembered as some missed opportunity that Nintendo must work to avoid repeating.
 
Uh okay, this reply was also about image upscaling? DLSS is all well and good, but I don’t think NIS or other spatial upscaling should be considered when assessing relative performance of different platforms because 1) it cannot reconstruct high frequency detail and 2) it’s platform agnostic because it gets no performance advantage from tensor cores. It’s not equivalent to rendering at a higher resolution or temporal upscaling.
You are missing the point. My point is that DLSS + spatial upscaling can be used together on Drake for larger performance gains, as you can render at a lower resolution and still output "4K". The trade-off is image quality, but because DLSS is so fast in comparison to other temporal solutions, it can do both types in a similar time window with better results than PS5 or XBSX... Of course the image would be more blurry, but it should look better than native 1440p anyway. There is a performance edge when we talk about using both temporal and spatial together, that's all I'm really getting at here.

For instance, you could render at 720p and use DLSS to get to 1440p, then use a spatial upscale to get to 4K. Yes, it won't be as sharp as native 4K, but performance DLSS is pretty good, and a good spatial upscale on top should get it to 1440p+ in terms of image quality. That is 9x the pixels output versus rendered and should offer performance gains that just can't be matched on PS5.
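To put rough numbers on that pipeline (purely illustrative; the render resolution and the two upscale steps are just the example described above):

```python
# Pixel counts for the hypothetical two-stage upscale described above:
# render 720p -> DLSS (performance mode) to 1440p -> spatial upscale to 4K output.

def pixels(width: int, height: int) -> int:
    return width * height

rendered = pixels(1280, 720)   #   921,600 rendered pixels
temporal = pixels(2560, 1440)  # 3,686,400 after the DLSS step (4x the rendered pixels)
output   = pixels(3840, 2160)  # 8,294,400 after the spatial step (9x the rendered pixels)

print(temporal / rendered)  # 4.0
print(output / rendered)    # 9.0 -> the "9x the pixels output" figure above
```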
 
RDNA2 Ray Tracing is subpar compared to the latest Ray Tracing cores in the Ampere cards. AMD needs much more bandwidth and ram just to try and compete.

An RX 6600 XT, which trades blows with the RTX 3060 despite less RAM and a smaller memory bus + 32MB Infinity Cache, has RT performance at RTX 3050 levels.

I suspect that with the Series S having even less cache and RAM, Drake should be able to compete with it.
I'm pretty excited to see how far some devs can push mobile RT.



In other news, Vulkan gets mesh shader support.

 
I think the realistic minimum we are going to see for clocks is the original switch clocks. Simply because of the size of the GPU they are using.

The clocks and number of shaders used in the original Switch were likely chosen as the optimal clock speeds to get the best perf per watt at the performance the original chip targeted. That optimal clock speed for perf per watt increases as node sizes get smaller, so where 307MHz for the GPU might have been optimal on 20nm Maxwell, it is likely higher on 8nm Ampere.

Meaning that running 1536 shader cores at anything less than that optimal clock is a waste of silicon, as using less silicon and clocking higher becomes more efficient. Just for the sake of easy math, let's say you have a chip that's set at its optimal frequency for perf per watt, that frequency is 600MHz, and it's a 768 core part. Its performance is 921GFLOPS. Now let's take a 1536 core part and clock it at 300MHz: we also get 921GFLOPS. Only, because of how the power efficiency curve works, the larger chip draws more power, and you get half the number of chips per wafer because it's larger. There are some advantages, you can have a bigger delta between handheld and docked clocks because the handheld floor is so low. But I would say you can probably do this with more of a middle ground between the two and use something like 1024 cores at 400MHz.

My expectation if it's 8nm is the standard switch boost clock for portable, about 460mhz and 900mhz in docked. This gives us:

Portable - 1.4TFLOPs
Docked - 2.75TFLOPs.

I think they could go as far as 1GHz; there are plenty of laptop Ampere GPUs running a constant 1300MHz clock. I expect this setup to give us roughly OG Switch battery life.

If it's TSMC 5nm I'm really not sure what to expect tbh, maybe slightly higher portable clocks but a much higher boost clock to give us a larger delta between docked and portable to facilitate 720p - 4k. Portable 550mhz, docked 1.3ghz.

Portable - 1.7TFLOPS.
Docked - 4 Tflops.
If they kept the Mariko Switch clocks but used the Drake chip at 8nm, that device would have some insane battery life. Like 6+ hours.
I think the max GPU clock for Orin right now is 1GHz? Or 1.2GHz?

The OG Switch portable clock was 40% of docked mode (307.2mhz vs 768mhz)

But the new profile they released with Mariko was about 60% of the docked mode.

I think 900MHz is a good GPU clock for Drake when docked, and 60% of that in handheld (540MHz).

Portable - 1.65TF
Docked - 2.7TF
 
I feel like people are forgetting that this still has to be a good consumer product for the mass market. I understand that this is an enthusiast forum filled with people that don’t mind spending 4000 hours in a week just to complete a game and find every secret that it has, even 100% it or hell, 100% it 4 times over, etc., but most people are not those people. And a lot of people still do use the Switch as a portable device. Despite those super low clock speeds, the Switch still has games that get just two hours of battery life. Let that sink in.



The switch at those low clockspeeds still had games that could be two hours at most in battery life.


That’s gonna be a no from me dawg. I’m sorry. But if they have to clock it down further to get a good battery life so be it.

Imagine having a smartphone that dies after 3 hours.


Actually, don’t, it’s not fun. Having a phone that did that was pain and suffering. Never again. I’d file that under some sort of cruel and unusual punishment.🤣





That was my life at some point 💀…. Yeah… no.
There's a huge difference between a smartphone and a handheld gaming device.

A smartphone is a necessity, for one, and is meant for light stuff and lasting a day: apps, browsing the web. It's not meant for gaming.

Gaming is a lot more demanding and drains the batteries quicker. It's also not often people can be on any screen more than 2 hours on handheld without breaks. 3 hours would be good though and a nice minimum for Drake. It's been the norm since switch released.

You don't seem to directly respond to the upside of less battery life, which is more power throughput, which makes games more future proof and gives us better ports. I would gladly take OG Switch battery life for that, when we can get a revision that will match v2 battery life 2 years later and will be more than enough for most.

Do you have a v2 switch?

If they kept the Mariko Switch clocks but used the Drake chip at 8nm, that device would have some insane battery life. Like 6+ hours.
I think the max GPU clock for Orin right now is 1GHz? Or 1.2GHz?

The OG Switch portable clock was 40% of docked mode (307.2mhz vs 768mhz)

But the new profile they released with Mariko was about 60% of the docked mode.

I think 900MHz is a good GPU clock for Drake when docked, and 60% of that in handheld (540MHz).

Portable - 1.65TF
Docked - 2.7TF
I really hope they get CPU clock speeds closer to 1.5GHz. If they stay at 1GHz, Drake will be in the exact same position relative to PS5/Series S as the Switch was relative to PS4/Xbone in CPU performance, which is at least a 3.5x gap if clocked at 1GHz.

I'd gladly take 900MHz or even 768MHz on the GPU if we get a 40-50% boost in CPU speed, since the CPU will likely be Drake's biggest bottleneck for ports.
 
It really does seem like a thankless, impossible industry to be in unless you’re a PR person but I’m sure they face their own unique challenges like bad boys leaking dev kit information 😝
I feel bad for your friends. AAA does sound like a very thankless job in many places.
 
I hope Nintendo uses USB4 Version 2.0 for future hardware coming after Nintendo launches new hardware that's equipped with Drake.
I think we're ALL hoping they adopt USB4 eventually, but that'll have to wait for Nvidia to adopt it at the chip level, so we might be another generation out from that. (Edit: I realise that's what you said, so yeah, I agree. 😆)

I'd also like to reiterate, OS datamines are incredibly reliable, leaking Bluetooth Audio and the OLED Model months before we knew anything about them. Those same datamines showed their intent to use USB 3.0 for 4K output. Which makes some sense if they want to use the Nintendo Switch Dock with LAN Port, which, as explored earlier in the thread, dropped the USB 3.0 A port and has all of its IO (LAN port, USB ports) working off the USB-C port's USB 2.0 lanes, keeping the 3.0 lanes free for, presumably, 4K video; the HDMI port is the only place those lanes can be used, and the dock does have an HDMI 2.0 controller.

Unfortunately, I assume this is going to mean no matter how powerful the internals are, TV output will be limited to 2.0b speeds at absolute most.
 
I feel like you're forgetting that this device with a minimum two hour battery life launched and... was extremely popular among the masses! The first 2.5 years of Switch's life aren't remembered as some missed opportunity that Nintendo must work to avoid repeating.
One of the biggest complaints of said device was the poor battery life :p

Other than drift

There's a huge difference between a smartphone and a handheld gaming device.
You missed the point, it’s not fun having something that dies on you quickly. At all.



I guess it doesn’t even come into their thinking when developing a game.
I think it doesn’t come into consideration because that’s in flux and can change by the time it releases.

Reminder that the PS5 was 9.2TFLOP or whatever before release, then it was 10.3TFLOPs.
 
It's all going to depend on what the main board and screen will cost at the volume they're buying, as those are the 2 most expensive parts in the bill of materials for hardware like this.
If they can get a similar BoM to what the Switch had in 2016/2017 when production began for that, they could just as easily take a shave on their more-than-ample margins for the current Switch hardware to launch at a $350 price and drive adoption of their new hardware even further, while not losing any revenue from software sales.
But you are correct, the ultimate goal is to not lose money on hardware, as Nintendo has frequently in its history opted for a price somewhere closer to break-even, especially if they expect costs to quickly reduce after launch, either through die shrinks or other means.

Well, we can look at it in comparison to the OLED model. Hypothetically, it could use the same dock, screen, joy-cons, etc. as the OLED model, and have a BoM which aligns very closely to the OLED model with the exception of three components: the SoC, RAM and flash storage. Obviously there would be other smaller changes like PMICs, changes to the heatsink/fan assembly, etc., but I'd expect these three to be the main drivers of any cost increase over the OLED model.

So, we could estimate what scope they have to increase costs of these components and target a break-even price-point. Let's assume a 10% retailer margin, so $315 of revenue for Nintendo for an OLED Switch. Then, we can take my gross margin estimate, round it down to 30% (as Nintendo have stated margins are lower on the OLED model), and we get a cost of $220.50 and a profit of $94.50 for each OLED Switch. This cost includes assembly, distribution, etc, but again we can assume these don't change with the new model. If correct, that means Nintendo would have the scope to increase the BoM by almost $95 over the OLED model and break-even at $350.

To break this down further between the SoC, RAM and flash memory, we need cost estimates on both the old components and new ones. The SoC is the most difficult, so we'll leave that until last, but we can get some cost estimates for RAM and flash from smartphone teardowns. For these I'm going to use TechInsight's BoMs for the Poco C40 and the Galaxy Z Fold 5G. The Z Fold 5G in particular is last year's model, but it gives us a ballpark to work off.

On the Poco C40, a single cost of $24 is given for memory, which includes both RAM and flash storage, but the phone does use 4GB of LPDDR4X and 64GB eMMC, so is close to the OLED model. I'm going to assume a split of $14 for the LPDDR4X and $10 for the eMMC here. Neither of these quite match the costs that Nintendo would be paying, though. Firstly because the components in question are being manufactured by ChangXin Memory Tech and Yangtze Memory respectively. These are relatively new entrants to the industry and are probably undercutting the likes of Samsung or Micron on price in order to gain market share. Secondly the LPDDR4X here is a single 4GB chip, whereas Nintendo uses two 2GB chips, which would increase the cost. As such, I would adjust the estimates for the LPDDR4X to $28 (2x) and the eMMC to $15 (1.5x) to account for these factors.

On the Galaxy Z Fold 5G, the costs are explicitly provided for the LPDDR5 and UFS, which makes things a bit easier. The 12GB of LPDDR5 is estimated at $45.17, and the 256GB of UFS 3.1 is estimated at $28.91. For the RAM, while Nintendo may use 12GB of LPDDR5, it will again likely be two 6GB chips instead of one 12GB chip, so I'll adjust the cost accordingly to $67.75 (1.5x), and for the flash memory, I'd expect 128GB of slower flash (possibly UFS 2.1), so I'll split the difference and call it $22. Combine these and we have a $46.75 increase in BoM from moving from the OLED model's RAM and flash to 12GB LPDDR5 and 128GB of UFS.

Then, the question is whether the SoC upgrade could fit into the remaining $47.75 of available BoM. This is a much trickier question to answer, as Nintendo isn't buying an off-the-shelf part. I'm assuming Mariko is probably costing Nintendo less than $30 these days, which would limit the cost of Drake to around $75. Given the relatively slim margins of semi-custom parts, I don't think that's unreasonable, even if it's on a TSMC 5nm process. Back in late 2020, the 11.8 billion transistor A14 was reported to cost Apple just $40, despite being one of the earliest chips on the bleeding-edge process. Obviously this is just the manufacturing cost, but if Nvidia is manufacturing Drake on a TSMC 5nm process, likely with a smaller die size, and two and a half years later on what is now a mature process, you would expect their cost per chip to be lower than the $40 Apple was paying for the A14 in 2020. They could potentially make a 50% margin on the chips and still allow Nintendo to sell at break-even, and 50% is a big margin for semi-custom (I'd be surprised if they were making that on their consumer GPU business).

Of course I'm making a lot of assumptions here which could be (and probably are) way off, and there may be other changes to the device than just the SoC, RAM and storage, such as a higher res screen, integrated cameras for AR, etc. This is part of the reason I'm erring on the side of expecting a $400 price point, but I still wouldn't rule out $350 if Nintendo have designed around it.
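To make that arithmetic easier to poke at, here's the same estimate as a quick script; every figure in it is one of the assumptions stated above (retailer margin, gross margin, adjusted teardown costs), not a known cost:

```python
# Back-of-the-envelope BoM headroom estimate from the post above.
# All inputs are assumptions, not known figures.

retail_price = 350.00
revenue      = retail_price * 0.90   # assume ~10% retailer margin -> ~$315
oled_cost    = revenue * 0.70        # assume ~30% gross margin    -> ~$220.50
headroom     = revenue - oled_cost   # ~$94.50 of BoM headroom at break-even

# OLED model memory costs (adjusted Poco C40 teardown estimates)
oled_ram, oled_flash = 28.00, 15.00
# New model memory costs (adjusted Galaxy Z Fold 5G teardown estimates)
new_ram, new_flash = 67.75, 22.00

memory_increase = (new_ram + new_flash) - (oled_ram + oled_flash)  # ~$46.75
soc_headroom    = headroom - memory_increase                       # ~$47.75

mariko_cost_guess = 30.00
drake_budget      = soc_headroom + mariko_cost_guess               # ~$77.75

print(round(memory_increase, 2), round(soc_headroom, 2), round(drake_budget, 2))
```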
 
I think it doesn’t come into consideration because that’s in flux and can change by the time it releases.

Reminder that the PS5 was 9.2TFLOP or whatever before release, then it was 10.3TFLOPs.
TFLOPS are a way of comparing hardware, but actually developing a game is getting down into the nitty gritty of this bottleneck or that. Anything that reduces a whole system down to a single number is, at best, used for benchmarking during hardware design, and at worst, is just marketing
 
Well, we can look at it in comparison to the OLED model. Hypothetically, it could use the same dock, screen, joy-cons, etc. as the OLED model, and have a BoM which aligns very closely to the OLED model with the exception of three components: the SoC, RAM and flash storage. Obviously there would be other smaller changes like PMICs, changes to the heatsink/fan assembly, etc., but I'd expect these three to be the main drivers of any cost increase over the OLED model.

So, we could estimate what scope they have to increase costs of these components and target a break-even price-point. Let's assume a 10% retailer margin, so $315 of revenue for Nintendo for an OLED Switch. Then, we can take my gross margin estimate, round it down to 30% (as Nintendo have stated margins are lower on the OLED model), and we get a cost of $220.50 and a profit of $94.50 for each OLED Switch. This cost includes assembly, distribution, etc, but again we can assume these don't change with the new model. If correct, that means Nintendo would have the scope to increase the BoM by almost $95 over the OLED model and break-even at $350.

To break this down further between the SoC, RAM and flash memory, we need cost estimates on both the old components and new ones. The SoC is the most difficult, so we'll leave that until last, but we can get some cost estimates for RAM and flash from smartphone teardowns. For these I'm going to use TechInsight's BoMs for the Poco C40 and the Galaxy Z Fold 5G. The Z Fold 5G in particular is last year's model, but it gives us a ballpark to work off.

On the Poco C40, a single cost of $24 is given for memory, which includes both RAM and flash storage, but the phone does use 4GB of LPDDR4X and 64GB eMMC, so is close to the OLED model. I'm going to assume a split of $14 for the LPDDR4X and $10 for the eMMC here. Neither of these quite match the costs that Nintendo would be paying, though. Firstly because the components in question are being manufactured by ChangXin Memory Tech and Yangtze Memory respectively. These are relatively new entrants to the industry and are probably undercutting the likes of Samsung or Micron on price in order to gain market share. Secondly the LPDDR4X here is a single 4GB chip, whereas Nintendo uses two 2GB chips, which would increase the cost. As such, I would adjust the estimates for the LPDDR4X to $28 (2x) and the eMMC to $15 (1.5x) to account for these factors.

On the Galaxy Z Fold 5G, the costs are explicitly provided for the LPDDR5 and UFS, which makes things a bit easier. The 12GB of LPDDR5 is estimated at $45.17, and the 256GB of UFS 3.1 is estimated at $28.91. For the RAM, while Nintendo may use 12GB of LPDDR5, it will again likely be two 6GB chips instead of one 12GB chip, so I'll adjust the cost accordingly to $67.75 (1.5x), and for the flash memory, I'd expect 128GB of slower flash (possibly UFS 2.1), so I'll split the difference and call it $22. Combine these and we have a $46.75 increase in BoM from moving from the OLED model's RAM and flash to 12GB LPDDR5 and 128GB of UFS.

Then, the question is whether the SoC upgrade could fit into the remaining $47.75 of available BoM. This is a much trickier question to answer, as Nintendo isn't buying an off-the-shelf part. I'm assuming Mariko is probably costing Nintendo less than $30 these days, which would limit the cost of Drake to around $75. Given the relatively slim margins of semi-custom parts, I don't think that's unreasonable, even if it's on a TSMC 5nm process. Back in late 2020, the 11.8 billion transistor A14 was reported to cost Apple just $40, despite being one of the earliest chips on the bleeding-edge process. Obviously this is just the manufacturing cost, but if Nvidia is manufacturing Drake on a TSMC 5nm process, likely with a smaller die size, and two and a half years later on what is now a mature process, you would expect their cost per chip to be lower than the $40 Apple was paying for the A14 in 2020. They could potentially make a 50% margin on the chips and still allow Nintendo to sell at break-even, and 50% is a big margin for semi-custom (I'd be surprised if they were making that on their consumer GPU business).

Of course I'm making a lot of assumptions here which could be (and probably are) way off, and there may be other changes to the device than just the SoC, RAM and storage, such as a higher res screen, integrated cameras for AR, etc. This is part of the reason I'm erring on the side of expecting a $400 price point, but I still wouldn't rule out $350 if Nintendo have designed around it.
I don't see a scenario where Drake is less than $449 USD for the entry model, especially if they keep the OLED at $349. I am expecting $499 USD for Drake, and the Steam Deck has shown that people are OK with paying a premium for portability if the games are of console/PC quality, which Switch/Steam Deck games are and Switch Drake games will be.
 
So for the technically hindered people like myself, it seems the general consensus over the last few pages is that (with known Drake specs and DLSS) 1st party Drake titles are likely to be 4k30 (or 4k60 if it’s a remaster), while PS4/XOne ports will likely be 1080p60, and “miracle” PS5/XSX games will be 720p30 to 1080p30. This is obviously a gross oversimplification, but I’m just looking for averages here. Is this a reasonable expectation?
 
One of the biggest complaints of said device was the poor battery life :p

Other than drift


You missed the point, it’s not fun having something that dies on you quickly. At all.




I think it doesn’t come into consideration because that’s in flux and can change by the time it releases.

Reminder that the PS5 was 9.2TFLOP or whatever before release, then it was 10.3TFLOPs.
I don't think I missed the point. I know what you are saying, but smartphones and gaming devices are two different things with different purposes. It's normal for smartphones to last a day without a charge and for handheld gaming devices (including laptops) to last 3 hours.

3 hours is not bad at all for a portable gaming device on par with a handheld Wii U in 2017, and then also PS4 720p settings in 2022/2023 in handheld mode. It's the minimum to get by, and it's already blowing away the latest handheld PC portables. I personally need breaks sooner in handheld mode (can't really play it over an hour) than docked, because it's uncomfortable to hold, plus the small screen for extended periods of time.

We can agree to disagree here then. I know that Nintendo will prioritize battery over specs, but I hope they strike a balance with 3 hours (an upgrade over what you say was as low as 2 hours for the more demanding games on the OG Switch) for the most demanding games, while making games future proof as long as possible with higher clocks, until the inevitable revision that will boost battery life by 1.5-2x.

If we do somehow get 1500 CUDA cores activated, then it probably won't be an issue to get handheld PS4 performance at 720p at really low clock speeds and a good battery. This could literally be a 700 GFLOPs GPU. Though many here are hoping for more (up to 1.4?), if the rumors of, say, PS4+ are true, 0.9-1.0 TFLOPs will be solid on the Ampere architecture. 1 TFLOP is more likely than 700 GFLOPs on a newer node than 8nm Samsung.
 
So for the technically hindered people like myself, it seems the general consensus over the last few pages is that (with known Drake specs and DLSS) 1st party Drake titles are likely to be 4k30 (or 4k60 if it’s a remaster), while PS4/XOne ports will likely be 1080p60, and “miracle” PS5/XSX games will be 720p30 to 1080p30. This is obviously a gross oversimplification, but I’m just looking for averages here. Is this a reasonable expectation?
1080p 60fps games on Switch should hit 4K 60fps native without too much trouble, given the massive GPU and bandwidth increases (up to 8-9x for the former, 4x for the latter). I think 720p Switch games can at least get to 2K native. There are some games I question, like Bayonetta, which is 720p on Switch and had some weird bottlenecks from alpha particles. But yeah, every 1st party Switch game at 720p to 1080p should theoretically get to 4K 30fps with or without DLSS.

I think 1080p PS4-quality ports could get to 2K with DLSS on Drake. This is just dependent on how powerful Drake ends up being.
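For reference, the pixel-count jumps being discussed look like this (just raw pixel counts, ignoring everything else that scales with resolution):

```python
# Pixel-count ratios between common output resolutions, for context on how far
# the rumored ~8-9x GPU / ~4x bandwidth increases have to stretch.

resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

base = 1280 * 720
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:>9,} px ({w * h / base:.2f}x vs 720p)")

# 1080p -> 4K is a 4x pixel increase, 720p -> 4K is 9x, and doubling the frame
# rate (30 -> 60fps) multiplies the per-second pixel cost again on top of that.
```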
 
If the port is untouched, 60fps should be the bare minimum.
I don't think it'll reach 60. Both games drop frames in alpha heavy scenes, plus putting them up to 60 would involve some work as the physics are tied to framerate.
 
I don't think I missed the point. I know what you are saying, but smartphones and gaming devices are two different things with different purposes.
No, you clearly are not understanding what I’m saying here: I’ve experienced both scenarios, in which I had to use both devices in very short bursts at a time to make them last a while.

The phone I used would die in about three hours, so I would have to use it in very short bursts and limit my usage even when I really needed it for something crucial, like recording a lecture that I used for playback when studying.

For a game, if I get engrossed, I have to play it for only a certain amount of time, put it away, and then later play for another set amount of time that’s pretty short, basically breaking up something I could just do in, say, an hour into three or four shorter bursts per session.



That’s annoying and really immersion breaking for me, and it’s cumbersome if I have to bring a power bank just for it; I don’t like that at all. I know people are saying that it should have the highest clocks because it’ll be refreshed down the line, but I don’t think people understand that just having those super high clocks doesn’t mean it’s gonna be great early on. If it comes down to it, I’d prefer that they literally downclock it more to get it to an agreeable battery life, even if they’re going to refresh it later on with a longer battery life and a new Switch Lite model.


I’ve used 3.5-8H battery life as an example here before, that’s a nice battery life.

I’m fine with it not being like the V2/OLED, but I am certainly not fine with it being near or worse than the V1 battery at all.

Steam Deck dies after like an hour and a half for some people, how are people even fine with this.


Edit: I’m not trying to sound aggressive with this, I’m just saying my experience with this is not really fun for me.
 
So for the technically hindered people like myself, it seems the general consensus over the last few pages is that (with known Drake specs and DLSS) 1st party Drake titles are likely to be 4k30 (or 4k60 if it’s a remaster)
There is no known state for first party games. Nintendo loves frame rate (as do I), and it will be up to them to choose what to prioritize for which games. Drake/DLSS makes output resolution no longer a really useful metric to talk about.

, while PS4/XOne ports will likely be 1080p60,
"as good or better than the original games" might be a better way of thinking about it. But yes, as many of those games were topping out at 1080p60, that's a reasonable target.

and “miracle” PS5/XSX games will be 720p30 to 1080p30. This is obviously a gross oversimplification, but I’m just looking for averages here. Is this a reasonable expectation?
This might be a slightly high resolution bar for current-gen ports.
 
Problem with this is that simply making the GPU smaller serves the same purpose and would cost much less. Using such a big GPU actually means that clocks are within expectations. Also, Switch clocks aren't much less than the TX1's spec... Shield TV had a clock of 998MHz, Switch has a top clock of 921MHz (according to Digital Foundry), and even portably it is 460MHz, which gives 235GFLOPs; expectations for a Nintendo handheld were 128GFLOPs - 157GFLOPs, so it ended up much higher. Some were hopeful that they would use 16nm and go with Pascal over TX1, and while Mariko was available to Nintendo if they paid the extra price, they still ended up with a pretty powerful device given its size... They are raising expectations well beyond what people thought with Drake: expectations were literally 768 CUDA cores, some had it less, some had it more, and it's literally twice that. Then you have DLSS on top, which doubles GPU performance, and finally spatial upscaling can be used in conjunction because of how fast DLSS works, giving even more performance... This thing will be a lot closer to PS5 than Switch was to PS4... Not to say it will be close, but it should be able to trade image quality for graphical settings on par with PS5.

In fact, a process node change will have a bigger impact on the CPU than the GPU because of upscalers.

I think many of your points keep getting overlooked, and I continue to see people stuck on theoretical raw performance when you keep mentioning image quality being on another level for this new Switch... Again, this can't be stressed enough, and we don't even have the current generation consoles using graphical effects like VRS in any games yet, let alone DLSS as an IQ equalizer, punching well above what the silicon says it can do.

The GPU size and potential clocks are also very interesting, because we clearly see how Sony and Microsoft are dealing with this, and only having a few hundred megahertz between the PS5 and Series X has caused massive yield issues for Sony, on the same node no less.
Nintendo and Nvidia have clearly analyzed the benefits of both a smaller chip at higher clocks and a larger chip at slower clocks to land on Drake being the size it is.
 
So for the technically hindered people like myself, it seems the general consensus over the last few pages is that (with known Drake specs and DLSS) 1st party Drake titles are likely to be 4k30 (or 4k60 if it’s a remaster), while PS4/XOne ports will likely be 1080p60, and “miracle” PS5/XSX games will be 720p30 to 1080p30. This is obviously a gross oversimplification, but I’m just looking for averages here. Is this a reasonable expectation?
Those third party ports probably only need be that low if they're not using DLSS or are some insane miracle PS5 port doing 240p->DLSS->720p.
 
For a game, if I get engrossed, I have to play it for only a certain amount of time, put it away, and then later play for another set amount of time that’s pretty short, basically breaking up something I could just do in, say, an hour into three or four shorter bursts per session.


That's really strange. Did you contact customer support? I never had that problem with my OG Switch.
 
So for the technically hindered people like myself, it seems the general consensus over the last few pages is that (with known Drake specs and DLSS) 1st party Drake titles are likely to be 4k30 (or 4k60 if it’s a remaster), while PS4/XOne ports will likely be 1080p60, and “miracle” PS5/XSX games will be 720p30 to 1080p30. This is obviously a gross oversimplification, but I’m just looking for averages here. Is this a reasonable expectation?
There's no way you can make blanket assumptions about resolution and framerate since games are all different
 
You are missing the point. My point is that DLSS + spatial upscaling can be used together on Drake for larger performance gains, as you can render at a lower resolution and still output "4K". The trade-off is image quality, but because DLSS is so fast in comparison to other temporal solutions, it can do both types in a similar time window with better results than PS5 or XBSX... Of course the image would be more blurry, but it should look better than native 1440p anyway. There is a performance edge when we talk about using both temporal and spatial together, that's all I'm really getting at here.

For instance, you could render at 720p and use DLSS to get to 1440p, then use a spatial upscale to get to 4K. Yes, it won't be as sharp as native 4K, but performance DLSS is pretty good, and a good spatial upscale on top should get it to 1440p+ in terms of image quality. That is 9x the pixels output versus rendered and should offer performance gains that just can't be matched on PS5.
I get that you were talking about using both DLSS and spatial upscaling together, but I am not disputing the efficacy of DLSS or the performance advantage that concurrency gives the tensor cores. It is very plausible that Drake will be able to render a 1440p image with quality and performance comparable to or better than XSS.

What I am arguing is that the spatial upscaling half of that equation isn’t an advantage of the Switch. Once you have that 1440p image on either platform, you could just as easily use FSR 1.0 on Xbox as you could NIS on Switch (or FSR 1.0 on Switch).

Additionally, the output image isn’t equivalent to TAA at 4K or temporal upscaling to 4K. It is true that to some extent, the current crop of spatial upscalers can reconstruct edges equivalent to a higher resolution. Both NIS and FSR 1.0 have adaptive sharpening and FSR 1.0 also detects and tries to correct for gradient reversals in its upscaling step. Maybe that is what you mean by “1440p+”?

However, these spatial upscalers aren’t correcting for other important aliasing artifacts, like Moiré patterns or aliasing of internal texture detail where gradients are less sharp. They also don’t correct for crawling or flickering, although the DLSS/TAA pass before calling the spatial upscaler will temper most of this. To me, eliminating these artifacts is an important part of rendering an image that truly looks like 4K, more than just reaching a certain number of pixels.
 
No, you clearly are not understanding what I’m saying here: I’ve experienced both scenarios, in which I had to use both devices in very short bursts at a time to make them last a while.

The phone I used would die in about three hours, so I would have to use it in very short bursts and limit my usage even when I really needed it for something crucial, like recording a lecture that I used for playback when studying.

For a game, if I get engrossed, I have to play it for only a certain amount of time, put it away, and then later play for another set amount of time that’s pretty short, basically breaking up something I could just do in, say, an hour into three or four shorter bursts per session.



That’s annoying and really immersion breaking for me, and it’s cumbersome if I have to bring a power bank just for it; I don’t like that at all. I know people are saying that it should have the highest clocks because it’ll be refreshed down the line, but I don’t think people understand that just having those super high clocks doesn’t mean it’s gonna be great early on. If it comes down to it, I’d prefer that they literally downclock it more to get it to an agreeable battery life, even if they’re going to refresh it later on with a longer battery life and a new Switch Lite model.


I’ve used 3.5-8H battery life as an example here before, that’s a nice battery life.

I’m fine with it not being like the V2/OLED, but I am certainly not fine with it being near or worse than the V1 battery at all.

Steam Deck dies after like an hour and a half for some people, how are people even fine with this.


Edit: I’m not trying to sound aggressive with this, I’m just saying my experience with this is not really fun for me.

Your phone is 5+ years old and in need of an upgrade (mine is pushing 5 years as well), and it gets way more usage than a gaming console, so I'm not sure why you mention this. Of course it's only gonna last 3 hours of screen time or less.

I never said 1.5 hours is acceptable. I don't think anyone is okay with that. I did say multiple times that 3 hours for the most demanding games is a good absolute minimum to target, if they want to balance power with it. That's just my two cents. We are just going in circles about this. I think we can both agree that v1 and v2 battery life would be great, with me being more tolerant of v1 battery life.
 
So am I the only one that thinks that they're going to down clock the heck out of this thing way more than people are expecting? I remember the rumors of the Switch, and then the real clocks were released and people were pissed. I'm expecting the same deal again. XD
Ehh, I could get the hesitation/caution, but thanks to the Nvidia hack, there are some details that establish some rough bounds.
- the size of the GPU
Whatever the final clocks are, the smell test to pass is: "Would it have been optimal to go with a smaller GPU?"
- both the type of memory and the (implied) bus width, which informs us of likely bandwidth
The lower bound smell test here is a form of: "Would it have been optimal to go with a cheaper memory configuration?"
Conversely, the upper bound smell test is "Can this reasonably fit within this presumed bandwidth?" That's trickier, but it can exclude the more extraordinary high guesses.
 
Those same datamines showed their intent to use USB 3.0 for 4K output. Which makes some sense if they want to use the Nintendo Switch Dock with LAN Port, which, as explored earlier in the thread, dropped the USB 3.0 A port and has all of its IO (LAN port, USB ports) working off the USB-C port's USB 2.0 lanes, keeping the 3.0 lanes free for, presumably, 4K video; the HDMI port is the only place those lanes can be used, and the dock does have an HDMI 2.0 controller.
USB 3.0's also infamous for causing radio frequency interference, which I assume is the reason why Nintendo had the USB 3.0 port on the Nintendo Switch's dock running at USB 2.0 speeds.

Unfortunately, I assume this is going to mean no matter how powerful the internals are, TV output will be limited to 2.0b speeds at absolute most.
I think that hypothetically, there's a possibility that for Nintendo's new hardware, Nintendo could replace the PI3USB30532, a USB 3.2 Gen 1/DisplayPort 1.2 crossbar switch chip, which is equipped on the OLED model, with the PI3USB31532, a USB 3.2 Gen 2/DisplayPort 1.4 crossbar switch chip, on the console.
And hypothetically, Nintendo could replace the RTD2172N, a DisplayPort 1.4 to HDMI 2.0b converter chip, which is equipped on the OLED model's dock, with the RTD2173, a DisplayPort 1.4 to HDMI 2.1 converter chip, on the dock.
Of course, the HDMI 2.1 signals on the dock are hypothetically limited to 32.4 Gbps due to DisplayPort 1.4 having a max bandwidth of 32.4 Gbps. (Hypothetically, USB 3.2 Gen 2 signals are switched to DisplayPort 1.4 signals on the console before being sent to the dock when the console is connected to the dock. Afterwards, the DisplayPort 1.4 signals get converted to HDMI 2.1 signals on the dock before being sent to the TV to output an image. But that hypothetically assumes that Nintendo has all of the USB 3.0 signals on USB 3.2 Gen 2 converted to DisplayPort 1.4 signals.)
But 32.4 Gbps should be enough for 4K 60 Hz with 4:4:4/RGB chroma with a colour bit depth of 12, or 4K 120 Hz with 4:4:4/RGB chroma with a colour bit depth of 8.

Anyways, HDMI 2.0b should be enough for 4K 60 Hz with 4:4:4/RGB chroma with a colour bit depth of 8, with HDMI 2.0b having a max bandwidth of 18 Gbps.

[Attached image: HDMI format / data rate table]
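As a rough sanity check on those bandwidth figures, here's the uncompressed-video math; it assumes standard CTA-861 4K blanking totals and 8b/10b link overhead and ignores DSC, so treat it as a ballpark rather than a spec calculation:

```python
# Ballpark uncompressed video bandwidth: total pixel clock (incl. blanking) * bits per pixel.
# 4400x2250 is the standard CTA-861 total timing for a 3840x2160 active picture.
# DP 1.4 HBR3 is 32.4 Gbps raw / ~25.9 Gbps after 8b/10b; HDMI 2.0b TMDS is
# 18 Gbps raw / ~14.4 Gbps after 8b/10b. No DSC assumed.

def video_gbps(h_total, v_total, refresh_hz, bits_per_channel, channels=3):
    return h_total * v_total * refresh_hz * bits_per_channel * channels / 1e9

print(video_gbps(4400, 2250, 60, 12))  # ~21.4 Gbps -> fits in DP 1.4's ~25.9 Gbps payload
print(video_gbps(4400, 2250, 60, 8))   # ~14.3 Gbps -> just fits in HDMI 2.0b's ~14.4 Gbps payload

# 4K120 at 8-bit roughly doubles the 60Hz figure, which is why it needs a
# DP 1.4 / HDMI 2.1 class link rather than HDMI 2.0b.
```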
 
Ehh, I could get the hesitation/caution, but thanks to the Nvidia hack, there are some details that establish some rough bounds.
- the size of the GPU
Whatever the final clocks are, the smell test to pass is: "Would it have been optimal to go with a smaller GPU?"
- both the type of memory and the (implied) bus width, which informs us of likely bandwidth
The lower bound smell test here is a form of: "Would it have been optimal to go with a cheaper memory configuration?"
Conversely, the upper bound smell test is "Can this reasonably fit within this presumed bandwidth?" That's trickier, but it can exclude the more extraordinary high guesses.
Not to mention Erista is more of an “off-the-shelf” SoC. Mariko was custom made, but forced to use Erista’s power profiles to avoid any performance discrepancies. Whatever this Drake is, it’s a clean break and made specifically with Nintendo’s needs in mind.
 
There's no way you can make blanket assumptions about resolution and framerate since games are all different
I said it was a gross oversimplification. In 10 years, when all Drake titles have been released, my guess is these would be the averages between 1st party titles and ports from last and current gen. We could also go into “performance” and “resolution” modes and estimates, but at the end of the day I feel like the majority of 4K titles will be Nintendo IPs only, though I would love to be proven wrong.
 
That's really strange. Did you contact customer support? I never had that problem with my OG Switch.
The Switch works fine, it’s working as intended. But for a game like BotW I’d have to play in shorter sessions if I wanted the Switch to last a while.

Your phone is 5+ years old and in need of an upgrade (mine is pushing 5 years as well), and it gets way more usage than a gaming console, so I'm not sure why you mention this. Of course it's only gonna last 3 hours of screen time or less.
You’re not understanding the issue here: it’s not about the device being old, it’s about the actual experience of using a device that dies pretty quickly.
 
The Switch works fine, it’s working as intended. But for a game like BotW I’d have to play in shorter sessions if I wanted the Switch to last a while.


You’re not understanding the issue here: it’s not about the device being old, it’s about the actual experience of using a device that dies pretty quickly.
I think the sticking point is the definition of ~3 hours handheld as ‘quickly.’
 

