
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Well. . . Nintendo has a unique problem, one they can't solve the same way as the PS5/Xbox.

Current microSD cards (UHS-I) top out at ~90MB/s, and there aren't yet any cards in that format that are faster*. Even the current eMMC in the Switch supports up to 250MB/s.

If the [REDACTED] does have much faster internal storage, at some point Nintendo has a decision to make. Do you limit the microSD slot to fridge storage only, or do you limit the guaranteed read speed developers can count on to 40-90MB/s?

*I know previous conversations about CFexpress and SD Express have occurred, but even over the life of this thread, have any of those options become more realistic?

EDIT: To clarify, I don't claim to have a specific answer. If Nintendo uses UFS for internal storage, that only increases the speed delta between internal storage and external microSD.
If Nintendo were to move to one of the faster UFS options, I'd expect them to swap out the card format to match. For example, eUFS 2.0 for internal storage and UFS Card 3.0 for external. That should yield fairly even speeds between the two.

There isn't exactly a huge market for UFS cards at the moment, but if MS can get multiple vendors to produce semi-proprietary memory cards for Xbox, Nintendo can probably get manufacturers to ramp up production of a standardized format (especially one intentionally similar to one of the most popular embedded memory types at the moment).
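To put those read speeds in perspective, here's a quick back-of-envelope sketch of idealized transfer times at the speeds being discussed. The 8GB asset size and the UFS card figure are illustrative assumptions, not specs:

```python
# Idealized time to stream a game's assets at various read speeds.
# All figures are illustrative assumptions, not measured values.
SPEEDS_MB_S = {
    "microSD (UHS-I)": 90,
    "Switch eMMC": 250,
    "hypothetical UFS card": 1200,
}

def load_time_seconds(asset_mb: float, speed_mb_s: float) -> float:
    """Transfer time ignoring decompression, seeks, and overhead."""
    return asset_mb / speed_mb_s

for name, speed in SPEEDS_MB_S.items():
    # assumed 8 GB of assets to stream in
    print(f"{name}: {load_time_seconds(8 * 1024, speed):.0f} s")
```

Under those assumptions, the gap between card and internal storage is the difference between a ~90 second load and a ~30 second one, which is why the speed delta matters.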
 
One option is to remove the SD card slot entirely and offer models with internal storage from 128GB to 512GB (or even 1TB).

The 128GB model could be $399, while the most expensive one could be $699.

Just like what the smartphone industry did years ago.
 
I could see UFS 2.1 or 3.0 as internal storage; I don't see why 4.0 would be necessary considering the actual specs of this platform. You also have to consider that if they're going to have mandatory installs, the storage they ship with will likely need to be 256GB at minimum, and they'll need to start shipping UFS cards unless they want fridge cleaning, which I can see happening but isn't ideal. I don't see them shooting for the newest spec. Will that really be necessary?

I expect them to go cheaper on internal storage and could see eMMC again, though even that would perform better thanks to the improved CPU and the FDE.
 
I'm not "looking for dooming", whatever eventually comes may be better than speculated, I didn't say we shouldn't speculate.
My point is the use of "we know" is incorrect.
People pop in here all the time asking what the lates is, not everyone reads every post and then they read "we know this, we know that", it's misleading.
But “we know” isn’t being used incorrectly. It’s what we, as a collective group, have at hand as knowledge. “This is what we know” and “we know this and we know that” doesn’t make it untrue; policing what verbiage others should use for this is really silly.

We know what we know because that’s all the information we have about what you want to know. What we don’t know, we don’t know.

If people are saying PS4, which you contested for whatever reason as being incorrect because “we don’t know”, have you thought that, idk, maybe, just maybe, when they use “we know” it’s based off of information that we do know and can use to speculate in this thread?


We know that GCN is less performant than Ampere; we know that the PS4 is GCN-based and T239 is Ampere-based. We know roughly at what level and to what degree Ampere outperforms GCN, based on other cards and scaling. We are speculating, from what we know, that X will theoretically outperform or perform about on par with Y while being more efficient, based on actual data.
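As a toy illustration of that kind of educated speculation (not a real benchmark), you could model it as raw TFLOPS times a per-FLOP architecture factor. Both the 1.4x per-FLOP uplift and the T239 TFLOPS figure below are assumptions for the sake of the example:

```python
# Toy "educated speculation" model: effective throughput as raw TFLOPS
# scaled by a per-FLOP architecture factor. The 1.4x Ampere-per-FLOP
# uplift and the T239 TFLOPS figure are assumptions, not known specs.
def effective_perf(tflops: float, arch_factor: float) -> float:
    """Crude effective-throughput estimate: raw FLOPS x per-FLOP factor."""
    return tflops * arch_factor

ps4 = effective_perf(1.84, 1.0)    # PS4's GCN as the 1.0 baseline
drake = effective_perf(1.7, 1.4)   # assumed T239 clocks plus assumed uplift
print(drake >= ps4)
```

The point isn't the specific numbers; it's that the comparison is a deduction from known architecture data, which is what "we know" is shorthand for here.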


There’s a whole thing about this: making an educated conclusion based off of real-world data.

And as I said, it came off as doom posting even without intending to do so. Unless there’s another chip worth discussing, policing the “we know”, which is based on what we know, seems unnecessary, especially in a thread about speculating, preferably the educated kind with theory crafting.


If people start saying, “we know they’ll switch to AMD because the other consoles are AMD” but:
A) there’s no evidence that we know of this happening
B) how likely do we expect Nintendo to just break compatibility and start over based on past history?
C) related to A), but is there anything in AMD’s database that would fit Nintendo’s needs and isn’t set for retail?

We knew about the Series X and PS5 APUs well before they released and even their GFX number.

Because they gave us this information.

Thraktor previously pointed out that we don’t know if T239 will use 8 A78 cores, and that’s true. It could even use newer CPU cores.

But A) there aren’t older cores that can do this
B) Orin is based on A78, what is the likelihood of them moving away from this?
C) are we expecting a super custom CPU core?

A lot of this “we know” isn’t used lightly; it’s used with proper educated and deductive reasoning behind it.

Like, if it quacks like a duck, looks like a duck, and gives birth to baby ducks, we cannot seriously entertain the idea that we do not know what it is and it could be a cow. An extreme example, but like I said, policing the usage of “we know” in this thread seems awfully silly.



To give an example, we do not know what storage they’ll use besides flash storage. Hence why we speculate on the possibilities even the lowest one (eMMC).

We do know they will not use HDD for a system like this. I will not elaborate on why. I don’t think I need to.
You're yelling at a brick wall, really. I've had this conversation here already. Unfortunately, there is still a hostile minority in this thread; if you are out of the loop, you are either treated as an idiot or talked down to.

As for you, I’ve read your posts in the past and there was no point in engaging with them, because you came off as someone who gets triggered very easily by a post if it isn’t what you wanted to read, so there was no point in discussing if it was going to get emotional.

I remember Dakhil made a simple suggestion about reading the OP, and you rudely scoffed at it and thanked someone else for giving you a summary because “you had no time to read that”.


Old_Puck has been very courteous with you at least. Patient, too.
 
I mean, it’s not really about what Nintendo can afford so much as how much it’s going to cost consumers.
Even then, I would imagine Nintendo would not want to be entirely reliant upon software sales and subs to cover hardware that loses them money. If you have a single revenue stream, then needlessly cutting one off is just asking for trouble if software lags and/or losses are higher than sub revenue can manage.
 
Tom's Hardware did some benchmarks here, and for a point of comparison I'm going to use the difference between the RTX 3070 and RTX 4070, as they both use 46 SMs. According to the benchmarks, the RTX 4070 outperformed the RTX 3070 by 42.3% at native 1080p. The RTX 4070 runs at higher clocks, though, with a boost clock of 2,475MHz compared to 1,725MHz on the 3070. That's a 43.5% increase in clocks for a 42.3% increase in performance with the same number of SMs. Not exactly what I was expecting.
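For anyone checking the arithmetic in that comparison, the clock-vs-performance figures work out like this:

```python
# Checking the scaling arithmetic from the benchmark comparison above.
def pct_increase(new: float, old: float) -> float:
    """Percentage increase from old to new."""
    return (new / old - 1.0) * 100.0

clock_gain = pct_increase(2475, 1725)  # boost clocks in MHz, 4070 vs 3070
perf_gain = 42.3                       # native-1080p uplift per the post

# Performance tracks clocks almost 1:1, i.e. little per-clock uplift.
print(f"clock: +{clock_gain:.1f}%, perf: +{perf_gain:.1f}%")
```

With the same SM count, a 43.5% clock increase yielding a 42.3% performance increase means essentially zero per-clock architectural gain.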
This is roughly what I expected, though it's interesting to see it borne out. The Ada Lovelace white paper just doesn't suggest there is a lot going on that isn't pure die shrink. No mention is made of changes to the SM design, and the main bullet points for RT, Tensor Core, and OFA improvements all parrot the same "over 2x performance" increase, which happens to match the performance increase that Nvidia attributes to the node shrink.

Nvidia claims that in "Cyberpunk 2077 running in RT: Overdrive Mode, we’ve measured overall performance gains of up to 44% from SER." But SER requires a custom API, and I have reason to believe this context-free benchmark is comparing Ada-with-SER to Ada-without-SER, rather than Ada-with-SER to Ampere-without-SER, because that would be less flattering.

The remaining two RT-related architectural improvements are the Opacity Micromap Engine and the Displaced Micro-mesh Engine. Both of them require you to reauthor assets for them to work, and both are wins for things that I don't think would affect Cyberpunk much. The OME is designed to make efficient ray tracing of noisy geometry, like foliage, work better, and the DMME is targeted at small objects with lots of detail.


There are some other caveats like memory bandwidth only being about 10% higher on the 4070, although I'd expect the much larger L2 to be a big benefit there.
The larger L2 cache is Nvidia gesturing vaguely in the direction of Infinity Cache. I've said that while Infinity Cache is a good idea, it's not a performance win, it's a performance trade off, where AMD is sacrificing performance on "touch once" data paths in order to hugely accelerate "touch multiple times" data paths, and the net result is usually a wash. RT workloads are highly divergent and cache antagonistic.

This is where I think SER comes in. SER lets developers give the driver more information about RT Shaders so that they can be scheduled to increase thread/cache locality. I suspect that SER is simply reclaiming performance lost in moving to a "cache over bandwidth" design.


Bringing this back to the topic of the thread, it doesn't look like Nintendo's missing much by using Ampere over Ada. I was assuming that Ada would offer a nice boost to RT performance, but that doesn't seem to be the case.
Ada is much more power efficient than Ampere, but this tracks with what we'd expect from the move to a better manufacturing process, and we should expect the same benefit from shrinking Ampere to 4N.

In fact, with Ada being a much more transistor-heavy architecture, if you're trying to optimise for the best performance within limited die size/transistor count, it seems like Ampere on 4N would actually comfortably outperform Ada, even on RT-heavy workloads.


If you look carefully at Nvidia's white paper on Ada, and keep an eye out for contextless benchmarks, you can see that Nvidia claims a 2x performance increase from moving to 5nm. If you go through and eliminate every time they mention a 2x increase without a clear reason why, you're left with very, very little. The full list of documented changes:
  • Bigger Cache: it's entirely possible that Drake has a larger cache, like Orin did, but this represents a standing-in-place for Ada, where the bandwidth is not substantially increased relative to performance
  • Shader Execution Reordering: a change to the SM scheduler that seems to exist mostly to help counter the performance loss of not scaling bandwidth with performance
  • Opacity Micromap Engine: In rasterization, high density foliage and fire can be approximated by textures with alpha, rather than a lot of geometry. If you wanted your shadows of that foliage to look super detailed, you'd have to write a custom shader for that. The OME lets you supplement the texture with more complex faux-geometry that the RT engine uses to test triangle intersection, which lets your custom shader run less often. This might be a real win, but probably not in Cyberpunk
  • MicroMesh Engine: This is like the opposite. Really detailed small objects are rendered with lots of geometry, which is the worst-case scenario for our Fully Path Traced future. This lets you once again substitute simpler faux-geometry for triangle intersection testing, which is post-processed back to the more detailed version. This looks like a pretty big win for fully path traced games, but I doubt we're going to see any of those, and in the case of CDPR, they are not listed as one of the early partners; instead, Adobe is. I suspect Nvidia needs to get asset authoring tools in place before they can even look at engine support.
  • FP8 Transformer Engine: the only documented update to the tensor cores, and it's a backport from Hopper. It doesn't accelerate FP8 operations, it accelerates FP8 scaling, which is something only used during model training, not during model execution.
  • NVENC updates: Neat. And backported to Drake.
  • Improved Clock Gating: This doesn't show up in the white paper, but does get talked about a bit when Nvidia talks about Ada power efficiency. It's poorly documented, but it appears that Ada supports fine grained control over the memory clock to save power. Considering Ada moves to a more cache centric design, this is probably smart, but it's purely a power win, and the Lapsus$ hack indicates it was backported to Drake already (FLCG).
You'll note the OFA isn't on the list. Nvidia's description of Ada's OFA doesn't document any core improvements over Ampere, just the 2x increase in performance that seems to match the increase in TFLOPS for the GPU overall. This is why I think "does Drake have the OFA from Ada" is an irrelevant question. The OFA is a customized ARM core, and there don't appear to be any upgrades to it that would enable RTX 40 perf without drawing RTX 40 watts. DLSS 3 on Drake will be set by the power budget, and no other magic exists to work around that.

Of these 7 Ada improvements, 2 are backported to Drake (NVENC/FLCG), 1 is a trade-off (bigger cache), 1 is a workaround for said trade-off (SER), and 1 is not usable in video games (FP8 Transformer).

That just leaves the Opacity Micromap and MicroMesh engines. One of these is laying the ground work for a fully path traced future which no 9th gen console is going to be able to reach, and the other accelerates specific edge cases in RT workloads that I suspect are out of REDACTED's reach 99.9% of the time anyway.

We come back to folks saying that REDACTED is "out of date" already. In terms of its design, Nvidia has nothing in its pocket that could deliver a more powerful mobile device than T239. It remains to be seen what Blackwell will look like, whether Nvidia will fundamentally shake up the design it's been maturing since 2006, or ride a node shrink One Last Time, but whichever path they take will not be available to Nintendo till 2026 at the earliest.

At the end of the day It's the Process Node, Stupid, and the driver there is going to be "how much power can Nintendo give folks at a price point they'll accept."
 
The same can be said about UFS 2.1/2.2 and UFS 3.0/3.1.

And Nintendo has shown with the Nintendo Switch that they'll cost-cut on internal flash storage, using eMMC 5.1 instead of UFS 2.0/2.1.
But there was absolutely no reason for the Switch to use something like UFS unless it was cheaper. Games are made of compressed assets, and the way the Switch decompresses those assets is via the CPU. It's not Nintendo forcing the storage to be slow; the CPU is simply the bottleneck in that operation. They improved it when they added a CPU boost mode (that also drops the GPU clock) so loading can be faster for games that utilize it, but it wasn't even close to reaching the 200MB/s+ speed the eMMC is capable of.

This differs from Drake, which has decompression hardware on it, so that has a need for faster storage mediums.
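The bottleneck argument above can be sketched as a min() over pipeline stages; the throughput numbers below are illustrative assumptions, not measurements:

```python
# The bottleneck argument as a one-liner: a streamed load can't go
# faster than its slowest stage. Throughput numbers are illustrative.
def effective_throughput(storage_mb_s: float, decompress_mb_s: float) -> float:
    """Effective load speed = slowest of storage reads and decompression."""
    return min(storage_mb_s, decompress_mb_s)

# Switch today: eMMC is fast enough, CPU-side decompression is the cap.
print(effective_throughput(250, 60))    # -> 60
# Drake with a hardware decompression block: storage becomes the cap.
print(effective_throughput(250, 2000))  # -> 250
```

Once decompression is offloaded to dedicated hardware, the storage medium itself is the limit again, which is why Drake has a real need for faster storage where the original Switch didn't.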
 
SNES sold worse because the Sega Genesis was eating into its market share.
GBA sold worse because the DS released 3 years before it could reach its full potential.
Wii U sold worse because it was the fucking Wii U.
3DS sold worse because of the rise of mobile gaming in the early 2010s.
If/when we get the Switch 2, there will be little standing in its way of success, even if it sells a little less than its predecessor.
Switch 2 will fight in a market full of other machines, like the Steam Deck and the Asus ROG.
 
Switch 2 will fight in a market full of other machines, like the Steam Deck and the Asus ROG.
I doubt that they can realistically put even a scratch on the Switch 2. Not to mention they fulfill different niches from the Switch.
 
The idea that Nintendo would return to a Wii-esque storage arrangement is laughable. I don't think a console is going to get away with not letting you run games off of external storage in 2023.

The Xbox Series X|S has external storage so expensive that I don't know if anyone buys it.

Other external storage is just used as a fridge, and you can't run games off of it.
 
Nvidia claims that in "Cyberpunk 2077 running in RT: Overdrive Mode, we’ve measured overall performance gains of up to 44% from SER." But SER requires a custom API, and I have reason to believe this context-free benchmark is comparing Ada-with-SER to Ada-without-SER, rather than Ada-with-SER to Ampere-without-SER, because that would be less flattering.

Based on GN's recent video on the Cyberpunk path tracing mode, it seems that SER should be applicable to ray tracing workloads in general (or at least heavier ones like full path tracing) as it can group rays that hit similar materials together in order to get performance benefits.

(should be timestamped at 4:55)
 
I treat people rudely when I think they are rude, especially those who talk down to others. I don't really care if you don't like it. I thanked someone for a summary because I have ADHD, which makes it fucking hard to go through whatever is in the OP lmao. And it's ironic that you want to talk about someone being easily triggered, but go off.
 
I was very confused by the “staff post” for this thread at first but after these last couple pages, I understand. Not sure why people get so nasty and high horsey about this topic specifically.
 
Based on GN's recent video on the Cyberpunk path tracing mode, it seems that SER should be applicable to ray tracing workloads in general (or at least heavier ones like full path tracing) as it can group rays that hit similar materials together in order to get performance benefits.


I worded this poorly! I think it's a general improvement, but one whose advantages are exaggerated on Ada.

For rasterization workloads, AMD has shown that increasing cache instead of increasing bandwidth can get you roughly the same performance. That's because raster workloads are already heavily optimized for locality, so lots of things stay in cache.

RT workloads aren't like that. So when Nvidia decided to make the cache larger for Ada, instead of increasing bandwidth, I imagine really aggressive RT loads start to fall off a performance cliff. SER fixes that, at the expense of a new, non-standard RT API, and needing additional hardware support.

That's why I'm not surprised to see Ampere and Ada have roughly the same RT performance. Not just because Ada has a performance win and a performance loss, but that they are directly related to each other.

If you did manage to backport SER to Ampere, I'm sure it would be a performance win, but much less than the 44% improvement Nvidia reports, because with SER, it doesn't have as much cache to take advantage of, and without SER, it isn't suffering from a dearth of memory bandwidth.
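A toy model of that cache-vs-bandwidth trade-off (the hit rates and relative costs below are made-up illustrative values, not measurements of any GPU):

```python
# Toy cache-vs-bandwidth model: expected cost per memory access as a mix
# of cache hits and DRAM trips. Hit rates and costs are made-up values.
def avg_fetch_cost(hit_rate: float, cache_cost: float = 1.0,
                   dram_cost: float = 10.0) -> float:
    """Expected per-access cost for a given cache hit rate."""
    return hit_rate * cache_cost + (1.0 - hit_rate) * dram_cost

raster = avg_fetch_cost(0.90)    # locality-friendly raster workload
rt_naive = avg_fetch_cost(0.40)  # divergent rays, poor locality
rt_ser = avg_fetch_cost(0.70)    # SER-style reordering recovers locality
print(raster, rt_naive, rt_ser)
```

The shape of the result is the point: a cache-heavy design is great when hit rates are high (raster), falls off a cliff when they're low (naive RT), and reordering work to raise the hit rate (SER) claws back much, but not all, of the gap.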
 
But there was absolutely no reason for the Switch to use something like UFS unless it was cheaper.
I agree with respect to UFS 4.0 (and maybe with respect to UFS 3.0/3.1).

But with respect to UFS 2.1, considering there are entry-level Android smartphones using UFS 2.1 (e.g. OnePlus Nord N200 5G, Samsung Galaxy A42 5G, etc.), I think UFS 2.1's probably as cheap as, if not cheaper than, eMMC 5.1.
 
I was very confused by the “staff post” for this thread at first but after these last couple pages, I understand. Not sure why people get so nasty and high horsey about this topic specifically.
I get what you're saying but also, nobody likes a semantics debate.

A post that is literally trying to police people saying "we know" in a hardware speculation thread serves no purpose. We technically know nothing, in the sense that tomorrow an asteroid could hit Earth and leave the planet as nothing more than ashes. Technically we don't know, but like, does that sort of distinction serve any value?

We could say "a reasonable person would conclude A through Z based on the available information", but is that actually necessary? I understand why regulars get annoyed. If the chip magically isn't the T239, then all that means is that the massive data breach, which exposed more information about future hardware than has ever leaked before a console's release, happened to be incorrect. I feel everyone would be comfortable accepting they were wrong if that were the case, but until something more concrete pops up that suggests otherwise, posters saying "we know" is simply not worth the semantics policing.

I agree some posts can have an air of condescension at times. But man, sometimes people are just asking for it.
 
If you did manage to backport SER to Ampere, I'm sure it would be a performance win, but much less than the 44% improvement Nvidia reports, because with SER, it doesn't have as much cache to take advantage of, and without SER, it isn't suffering from a dearth of memory bandwidth.
If I understand correctly, part of what SER is doing here goes beyond caching: it ensures that nearby shader threads are running the same code, which matters for GPU performance as I understand it. GPUs are supposed to perform better the less divergence there is in what code is running (as opposed to the input data).
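A minimal cartoon of that divergence point: a warp pays roughly one serialized pass per distinct shader path its threads take, so sorting rays by material (SER-style) cuts the pass count. This is a toy model of the mechanism, not how the hardware actually schedules:

```python
# Cartoon of warp divergence: a warp serializes over each distinct code
# path its threads take, so grouping rays by material reduces passes.
def warp_passes(materials: list[str]) -> int:
    """One serialized execution pass per distinct shader/material."""
    return len(set(materials))

unsorted_warp = ["glass", "metal", "skin", "glass", "metal", "foliage"]
sorted_warps = [["glass", "glass"], ["metal", "metal"], ["skin"], ["foliage"]]

print(warp_passes(unsorted_warp))                 # 4 serialized paths
print(max(warp_passes(w) for w in sorted_warps))  # 1 path per warp
```

Fewer distinct paths per warp also means the same code and data get touched repeatedly, which is exactly the locality a big L2 needs to pay off.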
 
The remaining two RT-related architectural improvements are the Opacity Micromap Engine and the Displaced Micro-mesh Engine. Both of them require you to reauthor assets for them to work, and both are wins for things that I don't think would affect Cyberpunk much. The OME is designed to make efficient ray tracing of noisy geometry, like foliage, work better, and the DMME is targeted at small objects with lots of detail.
The DMME seems like something that pairs well with something like Nanite, which will become a big boon very soon.

A device like the Switch being better/more efficient at processing that geometry while keeping a lower power profile would, in theory, help it in the long run if it had the DMME, much like Mesh Shaders would if enabled and built with in mind.
 
I treat people rudely who I think are rude, especially those who talk down to others. I don't really care if you don't like it. I thanked someone for a summary because I have adhd which makes it fucking hard to go through whatever is in the OP lmao. And it's ironic that you want to talk about someone being easily triggered but go off.
That’s nice, dear, but that doesn’t excuse being rude to others first.

Really, arguing the semantics of “we know” is pointless in this context.

Like, I don’t think we have to elaborate on what “we know” means; it’s jumping through hoops to get across what it means here.

We can’t use Nintendo as a source because Nintendo will not discuss specifications of their product like that. They have the Tegra X1 listed as “Custom Tegra Processor” on their website.

There are multiple Tegra-based products under Nvidia.

Nintendo is very likely to do the same thing again.

They didn’t detail the PICA GPU in the 3DS, and they didn’t do a deep dive into the TeraScale-based GPU of the Wii U; it was simply AMD or ATI.


Nintendo is the primary source, and one we don’t have and won’t have. The next best thing we have is literally Nvidia.
 
I think the bigger issue is that the Odyssey team should be ready to reveal their next big Mario game. Miyamoto has already stated that we will see Mario in an upcoming Direct, and Drake not getting the BotW sequel means that Mario is almost certainly the launch title. It's also likely that the TotK patch will come via DLC for Drake, probably within the first 12 months of TotK's release. Then the only insider to leak real information tying into Switch 2 is the Pokémon dev who said the Drake patch for Pokémon is coming this winter. I'd place the end of winter for Nintendo at March 1st 2024, which is a Friday, but the Pokémon patch can come AFTER Switch 2's launch and doesn't have to be that far into "winter". It's also known that Drake is physically complete and is publicly still being worked on via Linux patch notes, so we know it was never canceled.

Thraktor's discovery of expensive circuitry in the Switch OLED model, only there to allow 4K output, along with Mariko's known higher clocks (650 GFLOPs, near double the CPU speed), was most likely the "Pro" that was canceled, as you and I both heard about it way back in 2019(?), and we knew something had been in the works for years at that point. We know from the Nvidia hack that Drake's origins are late 2019/2020, which doesn't fit with the original Pro information we had both heard about, so tying the canceled project to Drake is a fool's errand IMO.
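As a sanity check on the ~650 GFLOPs figure: theoretical FP32 throughput is just CUDA cores × clock × 2 (one fused multiply-add per core per cycle). A minimal sketch, assuming the TX1/Mariko's 256 Maxwell CUDA cores; the ~1.267 GHz clock here is back-derived from the quoted figure, not a confirmed spec:

```python
def fp32_gflops(cuda_cores: int, clock_ghz: float) -> float:
    # One fused multiply-add = 2 floating-point operations per core per cycle
    return cuda_cores * clock_ghz * 2

# 256 Maxwell CUDA cores at an assumed ~1.267 GHz -> ~648.7 GFLOPS,
# which lines up with the ~650 GFLOPs figure quoted above
print(fp32_gflops(256, 1.267))
```

For comparison, the same formula at the stock Switch's 768 MHz docked GPU clock gives about 393 GFLOPS.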

Drake, as far as we know, is ready for mass production and almost certainly launching with a Mario game, and now you have Metroid Prime 4 estimated out by the end of this FY, though possibly late by a few months. We've also clearly seen that the Switch has peaked and been in decline for three years straight now. There is no reason we're aware of for Nintendo to delay Drake, and as many of us have brought up, Nintendo has nothing scheduled publicly after July and only one known title after that, with a Direct in June; I don't see how they avoid announcing a bunch of titles. They are also very clearly avoiding the second half of this year, releasing games with built-in million-seller audiences with no fanfare or marketing push, like Metroid Prime being a shadow drop or the XBC3 DLC having a week between announcement and release. Why anyone without inside knowledge of Nintendo's reasoning/release schedule for Drake would believe it is coming any later than this FY is far beyond me. It's also just not surprising that it would come in 2023, three years after Ampere and A78 hit the market; that is the same time frame as Maxwell and the Switch.

Anyways, you'll likely be pleasantly surprised. Hope you enjoy the summer Direct, as I'm confident (via speculation, and some insider chatter saying they have heard it is this year) that it will show off Mario for the new-generation hardware, and we will see it launch this FY, likely this calendar year, to fully take advantage of the Mario movie and the cleared calendar Nintendo has opened up for games that haven't been announced for this holiday.
 
On an architectural level, the Nintendo Switch [REDACTED] is several generations ahead of the Xbox Series X|S. On the other hand, it HAS to be to achieve the efficiency required to run Series X|S-tier games as a handheld.
That does seem a bit of a stretch. The PS5/Series seem to be part of AMD's RX 6000 line, which was basically concurrent with the NVIDIA RTX 3000 line the T239 seems to be based on. So the difference comes down to AMD vs. NVIDIA rather than "several generations".
Not to mention it’s been a couple decades since Nintendo has been concerned about releasing hardware that might be outdated and below current standards. Part of their approach for the last few generations has been manufacturing hardware they can sell at a point that’s considered cheap compared to the competition, while also avoiding selling at a loss.
I think there's a difference between "let's design simple hardware even if it seems outdated" and "let's design fancy hardware and sit on it until it's outdated".
PLEASE!!!!! Gen 5 was the pinnacle of sprite art for this franchise, to see it in that art style would be a dream come true.
I was thinking about this last week and I wouldn't be mad if the eventual Gen5 remakes were in this style.
Doesn't seem like you guys are asking a lot. Render Gen 5 in HD and throw too many lighting effects at it and bam, HD-2D.
You could just have the SD cards as backup storage instead of storage you can play from.
Bringing back the fridge? The FRIDGE!?
 
I agree with respect to UFS 4.0 (and maybe with respect to UFS 3.0/3.1).

But with respect to UFS 2.1, considering there are entry-level Android smartphones using UFS 2.1 (e.g. OnePlus Nord N200 5G, Samsung Galaxy A42 5G, etc.), I think UFS 2.1's probably as cheap as, if not cheaper than, eMMC 5.1.
I feel like I'm missing something here. If UFS 2.1 were as cheap as or cheaper than eMMC 5.1, then why use the latter if Nintendo is all about cost-cutting?
 
That does seem a bit of a stretch. The PS5/Series seem to be part of AMD's RX 6000 line, which was basically concurrent with the NVIDIA RTX 3000 line the T239 seems to be based on. So the difference comes down to AMD vs. NVIDIA rather than "several generations".

I think there's a difference between "let's design simple hardware even if it seems outdated" and "let's design fancy hardware and sit on it until it's outdated".


Doesn't seem like you guys are asking a lot. Render Gen 5 in HD and throw too many lighting effects at it and bam, HD-2D.

Bringing back the fridge? The FRIDGE!?
In that case, give it to me!!!!!!!
 
I doubt they can realistically put even a scratch on the Switch 2. Not to mention they fill different niches from the Switch.
The Switch 2 would outsell all its competitors combined through pre-orders alone, likely multiple times over.
 
* Hidden text: cannot be quoted. *
I appreciate the summary as always. However, is it not possible that in 2022 he tweeted fake insider stuff in response to the Bloomberg reports, that it got him a lot of engagement, and so in June 2022 he formulated his own stream of info to keep that engagement going?
 
I feel like I'm missing something here. If UFS 2.1 were as cheap as or cheaper than eMMC 5.1, then why use the latter if Nintendo is all about cost-cutting?
I was responding to ItWasMeantToBe19's post about being disappointed if Nintendo didn't use UFS 4.0. So the cost-cutting point was with respect to UFS 4.0 (and maybe UFS 3.0/3.1).
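Cost aside, the speed gap is the other half of the tradeoff between these storage types. A back-of-the-envelope load-time comparison, using ballpark sequential-read figures (round-number assumptions, not spec-sheet or measured values for any particular part):

```python
# Ballpark sequential-read speeds in MB/s (assumed round numbers)
SPEEDS_MB_S = {
    "eMMC 5.1": 250,
    "UFS 2.1": 800,
    "UFS 3.1": 1800,
}

def seconds_to_read(size_gb: float, speed_mb_s: float) -> float:
    # Treat 1 GB as 1000 MB for back-of-the-envelope purposes
    return size_gb * 1000 / speed_mb_s

for name, speed in SPEEDS_MB_S.items():
    print(f"{name}: {seconds_to_read(10, speed):.1f} s to read a 10 GB game")
```

Even with these rough numbers, the gap between a full read at eMMC 5.1 speeds and at UFS speeds is several-fold, which is the crux of the internal-vs-card speed-delta discussion.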
 
I appreciate the summary as always. However, is it not possible that in 2022 he tweeted fake insider stuff in response to the Bloomberg reports, that it got him a lot of engagement, and so in June 2022 he formulated his own stream of info to keep that engagement going?
If he's a buyer, what exactly would he gain by doing that?
 
I can't believe in a Pro until I see it announced. TotK was the last opportunity to get a huge chunk of Switch owners to buy an upgraded model.
If they were planning to launch a Pro this year, they would definitely have delayed TotK.

I don't see people wanting a Pro to play the Odyssey sequel (Odyssey already looks good enough and runs at 60 fps), and even if MP4 comes out this gen, I don't see the franchise being relevant enough for a Pro launch (even though Metroid fans will want an upgrade to play it).
 
I can't believe in a Pro until I see it announced. TotK was the last opportunity to get a huge chunk of Switch owners to buy an upgraded model.
If they were planning to launch a Pro this year, they would definitely have delayed TotK.

I don't see people wanting a Pro to play the Odyssey sequel (Odyssey already looks good enough and runs at 60 fps), and even if MP4 comes out this gen, I don't see the franchise being relevant enough for a Pro launch (even though Metroid fans will want an upgrade to play it).
They could always go the Last of Us - Last of Us Remastered route and just release a better running/looking version of TotK a year or so after the original release to launch with the console.
 
There isn't any; I made a mistake and retract my message. Sorry, y'all. 🫡

I read a mistranslation that called it an "update" when the intent was more like "news": informing people that it exists, as while it's been present since launch, it has never been mentioned outside of the game for spoiler reasons.

That said, it does mean that Nintendo has shoved Bayonetta Origins marketing into the same week as 1+2 Rebootcamp, the XBC3 DLC, etc. So while not as meaningful as I thought, I also don't think it's null.
Ah, no problem. Honestly, that new trailer threw me off when it popped up, too, as I thought it was maybe announcing something. It's quite a nice trailer, so it coming out several weeks after the game released without any kind of news attached to it felt weird. But maybe it's not weird to market a game after launch and I'm the one with weird expectations.
 
They could always go the Last of Us - Last of Us Remastered route and just release a better running/looking version of TotK a year or so after the original release to launch with the console.
Not aware of what happened with The Last of Us. But people already know they will get a better-running version of TotK with the Switch 2, and a Switch successor needs to launch by the end of 2024, maybe early 2025.

Delaying Zelda would have made a Switch Pro desirable for the 20+ million people who will probably buy TotK.
Launching TotK now and a Pro later makes it desirable only to the people who want to replay it with better specs. They wouldn't even profit from double dips, because people would replay with the copy they already own.

And right now I believe Zelda is the only Nintendo franchise where there's an overlap between popularity and people who care about graphics and specs.
Of course bad decisions are always possible, or maybe they will show us some unexpected software that will make everybody want to buy a Pro.
 
* Hidden text: cannot be quoted. *


I can't believe in a Pro until I see it announced. TotK was the last opportunity to get a huge chunk of Switch owners to buy an upgraded model.
If they were planning to launch a Pro this year, they would definitely have delayed TotK.

I don't see people wanting a Pro to play the Odyssey sequel (Odyssey already looks good enough and runs at 60 fps), and even if MP4 comes out this gen, I don't see the franchise being relevant enough for a Pro launch (even though Metroid fans will want an upgrade to play it).
If you’re referring to the retail uncle leak, the theory is that he actually refers to a successor and not a Pro revision

Nintendo has been working on upscaling technology. I don’t think we’ll be seeing re-releases of Switch games. We’ll either get the Switch 2 natively upresing games or releasing patches
 
I don't see people wanting a Pro to play the Odyssey sequel (Odyssey already looks good enough and runs at 60 fps), and even if MP4 comes out this gen, I don't see the franchise being relevant enough for a Pro launch (even though Metroid fans will want an upgrade to play it).
3D Mario would be exclusive to Drake anyway, and Mario is a system seller as is. MP4's audience would definitely prefer the game on Drake over the Switch, as they're a more niche and hardcore audience for whom the benefits are more worthwhile.
 
[hidden]Especially since all Japanese Fiscal Years are the same: these hot new products would enter into the current Fiscal Year[/hidden]


If you’re referring to the retail uncle leak, the theory is that he actually refers to a successor and not a Pro revision

Nintendo has been working on upscaling technology. I don’t think we’ll be seeing re-releases of Switch games. We’ll either get the Switch 2 natively upresing games or releasing patches
 
3D Mario would be exclusive to Drake anyway, and Mario is a system seller as is. MP4's audience would definitely prefer the game on Drake over the Switch, as they're a more niche and hardcore audience for whom the benefits are more worthwhile.

Agree with you if Drake is Switch 2.

[hidden]Especially since all Japanese Fiscal Years are the same: these hot new products would enter into the current Fiscal Year[/hidden]


If you’re referring to the retail uncle leak, the theory is that he actually refers to a successor and not a Pro revision

Nintendo has been working on upscaling technology. I don’t think we’ll be seeing re-releases of Switch games. We’ll either get the Switch 2 natively upresing games or releasing patches
I thought the uncle leak had resurrected the Pro discussion again.
 