• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

I’m thinking a summer release.
May or June!!!

 

This message is not directed at Serif. Just wanted to share some comments on this RGT “leak”. Allow me to quote our own Look over there:
- “…another claiming 4267 Mhz, though they didn’t specify if that’s docked or not.”
What the hell, dude. If it's 4267 MHz, then it's ~8533 MT/s, which would clearly be docked (8533 MT/s would also mean LPDDR5X). And if he meant to say 4267 MT/s, then that's obviously portable mode.
This guy does not understand the words coming out of his own mouth.
- “…512 GB eMMC”
I cannot find evidence of 512 GB eMMC existing.
This guy either does not verify input, or he doesn't double-check the feasibility of his bullshit.

Also, let’s not overlook that on Sep. 8th RGT claimed that the Switch NG will use a MediaTek SoC with 2 X4, 2 A720, and 4 A520 cores, but merely a week later (Sep. 15th) he changed his tune to Nvidia SoC with 8 A78 cores. I, for one, do not forget that easily.
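For anyone who wants to sanity-check that MHz vs. MT/s arithmetic, here's a minimal sketch (the doubling rule is standard for DDR-type memory; the 128-bit bus width is my illustrative assumption, not a confirmed spec):

# DDR-type RAM transfers data on both clock edges, so MT/s = 2 x clock (MHz).
def mts_from_mhz(clock_mhz: float) -> float:
    return 2 * clock_mhz

def bandwidth_gbs(mts: float, bus_width_bits: int = 128) -> float:
    # Peak bandwidth in GB/s = transfers per second x bytes per transfer.
    return mts * 1e6 * (bus_width_bits / 8) / 1e9

print(mts_from_mhz(4267))    # 8534 MT/s: LPDDR5X territory, i.e. docked
print(bandwidth_gbs(8533))   # ~136.5 GB/s on an assumed 128-bit bus
print(bandwidth_gbs(4267))   # ~68.3 GB/s if he actually meant 4267 MT/s (portable)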
 
But how does the word "within" work then? I have always understood "within" to mean something like "inside": not a subtraction between two numbers, but a part of a total.
I mean, "within 15% of Series S" I read as 15% of the Series S's power range.
You are correct that the word “within” can mean “inside” of some subject. Example: the item resides within this container.

Or in a more cliché form, “the power resides within you”

In both examples, these are inside of something.


However, when it is used in a conversation comparing subject A to subject B in terms of distance or approximation, it is a shortened form of "within the reach of".


Example: "the rope was within reach of Onizuka". Here Onizuka and the rope are the subjects, and "within reach of" is what connects the two.


So, if someone says "10-20% within the Series S", typically you can use context clues to figure out what they are saying. The video is about T239, and it brings in another subject for that bullet point (the Series S, subject B). So he is comparing T239 to the Series S, and shortens the whole phrase "within the reach of" to just "within".


Keep in mind my English isn't the best either, despite my being a native speaker; that comes from being bilingual. I would translate things for my parents or relatives as best I could, and that jumbled the sentence structure of even my own English. But something I've told my parents whenever they see something in English that confuses them: break it down and look for the clues that give an objective overview of what's going on. And if they need help, I always help them as best I can.

Eagle-eyed members will notice many grammatical errors and mistakes :p.


I hope this helps!
 
The part that's "comparable to PS4" is just the GPU (not counting DLSS/RT). The CPU will be significantly faster, the SoC apparently has dedicated decompression hardware, and the system likely has much faster storage than the PS4's hard drive.
Switch's storage drive is faster than PS4 😅
You're exactly right, but not all "architectural upgrades" make FLOPS more powerful.

The PS4 Pro is a 4 TFLOPS machine, and the architecture means all those flops can be effectively used all the time.

NG is a (possibly) 3 TFLOPS machine, but only half of them can be used effectively all the time. The other half are only available under certain conditions.

You can't just compare FLOPS and know roughly how two radically different architectures will perform against each other.
PS4 Pro's Polaris mixed precision is similar to Switch's Maxwell, right? Meaning it's normally 4 TFLOPS, but in games and engines that use FP16, it could theoretically reach performance roughly 50% above that with FP16 and FP32 used at the same time.

I'm aware that the Ampere GPU architecture isn't as efficient as Turing (and maybe Maxwell too?). I just wasn't sure how it worked. Wouldn't 3 TFLOPS be the FP32 number for the NG Switch, and not go lower? Are you saying that FP16 can be used in conjunction with FP32 only 50% of the time?

Edit: I read the later posts after the quote. Lic's especially made a lot of sense. Thanks.
 
Wait, what... PS4 Pro's Polaris mixed precision is similar to Switch's Maxwell, right? Meaning it's normally 4 TFLOPS, but in games and engines that use FP16, it could theoretically reach performance roughly 50% above that with FP16 and FP32 used at the same time.
 
is it worth catching up today or is everything about salsa
Some really good technical posts explaining the differences between architectures, if you're into that.



Otherwise salsa.
 
Yep, I know the original PS4, the Xbone, and the Xbone-based Xbone X don't have mixed precision. Not sure why you hid that.

I know that the PS4 Pro was only made to upres PS4 games and/or offer better framerates as well.
Also, mixed precision on the Pro was often used just for checkerboard rendering, and that's it.
 
Actually, how many triangles can the Series consoles and PS5 output? How many can the PS4 and the Xbox One consoles output?

Anyone know?
 
Haven't read the last 3 pages (thread is still moving fast) but I'm wondering about what kind of performance we can expect in some PS4 titles that could be ported to Switch NG.

Let's take FFVII Remake as it's a game I love and it's been rumored as an upcoming port.
Is it reasonable to expect a 60fps port of this game?

It's always been capped at 30fps even on PS4 Pro, but considering NG's much better/more modern CPU and DLSS to save performance, I feel like it's something that could be done without many compromises on visual fidelity and/or final resolution.
It would make it comparable to the PS5 version at first glance, yes.

Of course it's hypothetical as we don't know if the game will be ported, and there may be better examples out there, but in my opinion it's a great way to gauge what we can reasonably expect from this console: concrete examples of PS4 games that could run with the same or even better performance.
I could ask the same thing about Death Stranding, for instance, though I'd be more cautious with that one as it's an open-world game, so I'm not expecting more than 30fps.
 
Is this being taken into account for the calculations done on this thread though?

The Tegra X1 GPU does 2 FP32 operations per core per clock, and so we calculate the FLOPS by:
2 (ops) x number of cores x clock (e.g. 2 x 256 x 768 MHz = 393 GFLOPS docked)

I always assumed we were just counting the 2 guaranteed FP32 operations rather than counting the "fake flops" (and have to put huge disclaimers for those unaware), and therefore 2 ops per clock x 1536 cores x 1.1 GHz = 3.38 TF.

Is that not right? Or are you just using the RTX 30 as the prime example of why FLOP comparisons between different architectures are so flawed?
The calculation error is in the number of cores.
The architecture change described above led to a theoretical doubling of the number of FP cores, but as he already described, in practice it doesn't work out like that, because rendering graphics requires integer cores as well.

When calculating FLOPS, if you want to be objective and not just deceive yourself, you need to always use 64 per SM/CU, not the theoretical 128 as introduced by the new Int/FP unit. Hence only 768 Cores for 12 SMs.
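As a quick sketch of that counting rule (my own illustration of the post's convention, with example clocks; not an official formula):

# FLOPS = 2 ops per core per clock (one FMA) x cores x clock.
def tflops(num_sm: int, clock_ghz: float, cores_per_sm: int) -> float:
    return 2 * num_sm * cores_per_sm * clock_ghz / 1000

print(tflops(12, 1.1, 128))  # ~3.38 TFLOPS: the "paper" Ampere count
print(tflops(12, 1.1, 64))   # ~1.69 TFLOPS: the always-available count argued above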


Also, while I'm here:
Core for core and clock for clock, with the same number of CUs clocked at the same fixed MHz, meaning at the exact same theoretical TFLOPS, RDNA2 delivers almost exactly 50% more FPS than GCN. Meaning a 4 TFLOPS GCN GPU will be matched in real-world performance by a 2.7 TFLOPS RDNA2 GPU.

Also: SM for SM and clock for clock, Ada Lovelace delivers the same performance in raster graphics workloads (i.e. regular graphics: no RT, DLSS, or Frame Generation, which are where it has been enhanced) as Ampere. The difference is that the much better 4N process removes the shackles the 8N process put on Ampere's architecture, allowing it to clock much higher and pack more cores for the same energy use. Hence: Ampere at 4 nm equals Ada Lovelace as far as regular graphics performance goes, just without improvements like frame generation and whatever else was enhanced.

Also:
CU to SM and clocked roughly the same, RDNA2 delivers on average 72% of the graphics fps performance of Ada Lovelace with the same number of CUs to its SMs.

Also: Clock for Clock and CU for CU, RDNA3 is only 5% faster than RDNA2.

Also:
Ada Lovelace (or Ampere at 4 nm) with 24 SMs achieves the same fps at the same raw graphics output (no RT, no DLSS) as RDNA3 does with 32 CUs. The same is true at 46 SMs vs. 60 CUs and at 76 SMs vs. 96 CUs. They all clock around the same, giving a performance equivalent of roughly 75%, 77% and 79%, respectively.
Hence: 12 SM Ampere@4nm is equivalent to 16 CU RDNA3 and 17 CU RDNA2 under similar clock or power constraints. Or, with the same number of CUs, RDNA3 would have to clock roughly 33% higher, and RDNA2 39% higher.
For example, to match a 12 SM GPU @ 1 GHz, a 12 CU RDNA2 chip would need to be clocked @ 1.39 GHz.

So to recap: RDNA2 has 50% more graphics performance (fps output at identical settings at identical TFLOPS) than GCN. RDNA3 has 5% more than RDNA2. Ada Lovelace or Ampere@4nm has 33% more than RDNA3, 39% more than RDNA2, and roughly 105-110% more than GCN; put another way, it's about 2.05 times as fast.

All of this is without any DLSS.

Knowing all of this, we can calculate an honest comparison. Remember, for the number of cores always use: number of SMs/CUs x 64.

For example, a hypothetical 1.84 TFLOPS GCN GPU's graphics performance is matched by an RDNA2 GPU with a theoretical 1.27 TFLOPS, and by an Ampere@4nm GPU with 883-898 GFLOPS (no, this is not a typo).
So a hypothetical 12 SM Ampere GPU would need to be clocked at 575-585 MHz to achieve the same graphics performance as a notional 1.84 TFLOPS GCN GPU.

Let's do another hypothetical: that same notional 12 SM Ampere@N4 GPU would need to be clocked at 1333 MHz to match a 4.2 TFLOPS GCN GPU.

To match a 4 TFLOPS RDNA2 GPU in gaming performance, it would need to be clocked at 1878 MHz.
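To make those equivalences easy to replay, here's a minimal sketch built on the scaling factors estimated above (the factors are this post's estimates, not measured constants, and the function names are mine; small differences from the quoted clocks come from rounding):

# Relative fps per theoretical TFLOP, normalized to GCN, per the estimates above.
PERF_PER_TFLOP = {"GCN": 1.0, "RDNA2": 1.5, "RDNA3": 1.5 * 1.05, "Ampere@4N": 2.05}

def equivalent_tflops(src_arch: str, src_tflops: float, dst_arch: str) -> float:
    # TFLOPS the target architecture needs to match the source's fps.
    return src_tflops * PERF_PER_TFLOP[src_arch] / PERF_PER_TFLOP[dst_arch]

def required_clock_mhz(target_tflops: float, num_sm: int) -> float:
    # 64 cores per SM, 2 ops per core per clock, as argued above.
    return target_tflops * 1e6 / (2 * num_sm * 64)

t = equivalent_tflops("GCN", 1.84, "Ampere@4N")   # ~0.90 TFLOPS
print(required_clock_mhz(t, 12))                  # ~584 MHz, matching the 575-585 range
t = equivalent_tflops("RDNA2", 4.0, "Ampere@4N")  # ~2.93 TFLOPS
print(required_clock_mhz(t, 12))                  # ~1905 MHz, near the 1878 MHz above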

Also, when calculating for DLSS or resolutions in general, many here make the mistake of thinking half the number of pixels means twice the fps. It's not that linear. DLSS has been tested a thousand times, and if I remember correctly, it provides roughly a 25% fps gain at Quality (66% of the target resolution per axis) and 50% at Performance (50% of the target resolution per axis, i.e. 1/4 the total pixels). But even that line of thinking is wrong: it doesn't provide extra fps, it provides whatever fps it gets at the real rendering resolution, and then upscales that image to a higher output resolution to give a sharper picture.
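For reference, a small sketch of the standard DLSS per-axis scale factors (these only give render resolutions; the fps gained at that lower resolution is workload-dependent, exactly as described above):

# DLSS renders at a fraction of the output resolution per axis, then upscales.
DLSS_SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_res(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

print(render_res(3840, 2160, "Performance"))  # (1920, 1080): 1/4 of 4K's pixels
print(render_res(1920, 1080, "Quality"))      # (1280, 720)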

Anyway, there are two more misguided notions I've been seeing in this thread for years now (going back to the NX days, in fact), resulting in a blind wish for more power.

The first is the thinking that the next Nintendo console needs to be more powerful, and that Nintendo thinks the same way and feels it needs to deliver that. This probably stems from a desire to have both a PS/everything-current-gen machine and a Nintendo in one, or a desire for Nintendo games at higher graphics quality. A valid desire, but unrealistic thinking.

If anything the Switch, just like the Wii before it, has proven and reassured Nintendo that it doesn't need to. It doesn't need to match the graphical fidelity and quality of the strongest competitor, and it doesn't need to make its own games look hyperrealistic. They sold loads and made more money than the other two powerhouse consoles combined; by that measure they did everything perfectly, so why would they do anything differently? A few fps drops and lower resolutions didn't seem to stop people from buying Switches or follow-up games. Satisfaction didn't decrease and kill the console after a year or two. They can feel confident that all they have to do is stay the course and improve on what they have.

The Switch 2 will take care of that. Thanks to DLSS, all it really needs to do is hit 768p in handheld and 1080p docked, and they will have fixed the most perceptible problem, the mushy resolution that everyone notices. (It's probably a 4:3 screen, same size and manufacturer as the old iPad mini Retina. No way are they making a surfboard of a device with an 8-inch 16:9 screen, where the action in the center is half the size, just to accommodate dead space left and right that no one looks at but that still needs to be rendered. The iPad mini is tiny, and this would leave room for a smaller Lite version down the line, with a screen the same height as the current 7-inch OLED.) Worst case, it could do 720p for "impossible" 3rd-party games and still look decent DLSS'd to FHD or 4K. The GPU is good enough to enable ports with good graphics, unlike some of the Switch ports.

Which brings me to the second misconception: that Nintendo consoles need to match the other consoles because they are missing ports due to insufficient graphical power. In reality, what has always prevented ports were the non-standard storage solutions used by Nintendo, which made ports either expensive (high risk) or outright impossible. Cartridges and discs had too little space to fit the games of the time, and a high investment in expensive cartridges means a large sunk cost. Bypassing physical and going digital-only was also made impossible by too little onboard storage to fit the games of the time with confidence that every console owner could buy and install them, not to mention an online store that was behind the times. All of that made for a situation in which 3rd-party ports were outright impossible or a huge risk with almost guaranteed losses. That is the reason for the historical lack of ports, not graphics power.

Sorry for the long post, I meant to keep it brief.

PS
Thanks to the aspect ratio, 768x1024 has fewer pixels than 720p: roughly 786k vs 922k. It's exactly half the potential screen resolution per axis (a quarter of the pixels), which is DLSS Performance mode, and could serve for demanding games.
864p at 4:3 roughly matches 720p in pixels and is close to DLSS Balanced; 1050x1400 is halfway between 720p and 1080p and could be an option for less demanding games and higher image quality with DLSS Quality mode.
But this is just my personal speculation. Maybe they go with weird resolutions for DLSS based on percentages. Maybe I'm wrong about 4:3 and they go with 16:10. Maybe 21:9, for that immersion. Maybe it's a triple-screen setup.
Joking, obviously.
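A quick check of the pixel arithmetic above (the 4:3 resolutions are this post's speculation, not leaked specs):

# Megapixel counts for the candidate handheld resolutions discussed above.
candidates = {
    "1024x768 (half of 1536x2048 per axis)": (1024, 768),
    "1152x864 ('864p' at 4:3)": (1152, 864),
    "1400x1050": (1400, 1050),
    "1280x720 (720p, reference)": (1280, 720),
    "1920x1080 (1080p, reference)": (1920, 1080),
}
for name, (w, h) in candidates.items():
    print(f"{name}: {w * h / 1e6:.2f} MP")
# 1024x768 = 0.79 MP vs 720p's 0.92 MP; 1400x1050 = 1.47 MP,
# roughly midway between 720p (0.92 MP) and 1080p (2.07 MP).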
 
Coming out of my self-imposed ban to do a quick post. If any of the below has already been shared (I only check this thread sporadically), sorry for the duplication.


Thank you. Finally someone else noticed. This Mochizuki tweet is the one (1) single source of all subsequent re-reports of a Capcom "unannounced game, which would sell millions, by March next year". It was never corroborated by the primary source or any other major news outlet (content aggregators are hardly news organizations).

This joins a long list of other gaming facts for which Mochizuki was the one (1) single source: Sharp was making an LCD panel for Switch 2, Nintendo's FY03/24 forecast did not include any new hardware, PS5 broke even in 2021, Furukawa said Switch was in mid-lifecycle in 2022, Nikkei predicted Nintendo wouldn't release a new model in FY02/23 (which came true, but Nikkei did not predict that), Furukawa "declined to comment" on the next-gen in 2022, and Hosiden "withdrew" its FY forecast due to Switch production difficulties, just off the top of my head.

Speaking of the conspicuous missing Capcom game, in today’s Nikkei article regarding the Tokyo Game Show:
Yeah, the first thing I did when I heard this was supposedly information from Capcom's IR presentation was read the actual presentation and transcripts, only to find, "shocked", that yes, the quote no one could exactly point to seemed to not actually exist. Which definitely made me quite suspicious of the entire thing.

As for the games missing from TGS: there's definitely more at TGS than there was last year. Level-5 certainly is helping, but the same goes for Square Enix, Konami and the like. It could be that the games this person knows are in the pipeline are what's missing, though.
 
If mass production and an announcement are imminent come November, the Funcles are gonna have an idea of this before anyone else, I'd imagine...
 
The Switch 2 will take care of that. Thanks to DLSS, all it really needs to do is hit 768p in handheld and 1080p docked, and they will have fixed the most perceptible problem, the mushy resolution that everyone notices. (It's probably a 4:3 screen, same size and manufacturer as the old iPad mini Retina. No way are they making a surfboard of a device with an 8-inch 16:9 screen, where the action in the center is half the size, just to accommodate dead space left and right that no one looks at but that still needs to be rendered. The iPad mini is tiny, and this would leave room for a smaller Lite version down the line, with a screen the same height as the current 7-inch OLED.) Worst case, it could do 720p for "impossible" 3rd-party games and still look decent DLSS'd to FHD or 4K. The GPU is good enough to enable ports with good graphics, unlike some of the Switch ports.
Why would they do this
 
Haven't read the last 3 pages (thread is still moving fast) but I'm wondering about what kind of performance we can expect in some PS4 titles that could be ported to Switch NG.

Let's take FFVII Remake as it's a game I love and it's been rumored as an upcoming port.
Is it reasonable to expect a 60fps port of this game?

It's always been capped at 30fps even on PS4 Pro, but considering NG's much better/more modern CPU and DLSS to save performance, I feel like it's something that could be done without many compromises on visual fidelity and/or final resolution.
It would make it comparable to the PS5 version at first glance, yes.

Of course it's hypothetical as we don't know if the game will be ported, and there may be better examples out there, but in my opinion it's a great way to gauge what we can reasonably expect from this console: concrete examples of PS4 games that could run with the same or even better performance.
I could ask the same thing about Death Stranding, for instance, though I'd be more cautious with that one as it's an open-world game, so I'm not expecting more than 30fps.

Steam Deck can run FF7 Remake at 60fps, and in raw power the Switch 2 will be comparable to the Steam Deck.
 
But is this the original ass scratching/wall staring experience, or the NG+ version?

I can't do a full wall of text yet, but I've got a line to throw out there!
"I expect it to be made on either a node of the last normal Gate-All-Around FET generation or the first forksheet FET generation"

Doesn't that sound rather bold and/or fancy at first glance? But nah, it really isn't. Time to demystify my own statement!

So the first underlying assumption behind that statement is that normal GAAFET lasts only two generations/nodes before we move to forksheets. Ergo, I'm saying (first GAAFET node) +1 or +2. The second assumption is a Switch 3 in 2031/2032.
For TSMC, that's N2, which is currently scheduled to start volume production in 2H25. For Intel, that's 18A, which is supposed to ship product by the end of 2025. So let's generalize as 'volume production in 2025'.
If we assume a cadence of 2 years, then +1 would be ~2027 and +2 would be ~2029. Either the ~2027 or ~2029 nodes are plausible for a 2031/2032 device.
If we assume a cadence of 3 years, then +1 would be ~2028 and +2 would be ~2031. Then the +1's the likely generation.
Either way, I don't see a ~2025 node being used by the 2031/2032 device, unless progression has halted, which is a Bad Thing for cutting edge tech land in general.
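The cadence arithmetic, spelled out (the ~2025 start and the 2- or 3-year cadences are the assumptions stated above, nothing more):

# First GAAFET-class node in volume ~2025; project +1 and +2 generations.
BASE_YEAR = 2025
for cadence in (2, 3):
    plus_one, plus_two = BASE_YEAR + cadence, BASE_YEAR + 2 * cadence
    print(f"{cadence}-year cadence: +1 ~{plus_one}, +2 ~{plus_two}")
# 2-year cadence: ~2027 and ~2029, either plausible for a 2031/2032 device.
# 3-year cadence: ~2028 and ~2031, making the +1 node the realistic pick.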

---

Ok, leaving out the quotes to avoid bloating this post up too much.
But, regarding the hypothetical of Microsoft going with ARM for their 2020 Series consoles...
I'm actually not sure how much further the N1s can be pushed above 3.5 GHz.

From here:
[Slide from Arm's Neoverse N1 Tech Day 2019: frequency vs. power curve]

Yea, that's only two points, but 2.6 GHz @ 1 W and 3.1 GHz @ 1.8 W suggest that it's already noticeably past the juicy part of the curve. I'm not confident about how much higher fmax is.
(long time readers observing the data points related to area efficiency may be able to guess what just got added to my wishlist :p)
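Out of curiosity, a crude power-law fit through those two points (assuming the quoted 2.6 GHz at 1 W and 3.1 GHz at 1.8 W, and that per-core power scales as P = c * f^k locally; an extrapolation for intuition, not a real model):

import math
f1, p1 = 2.6, 1.0  # GHz, W: first quoted point
f2, p2 = 3.1, 1.8  # GHz, W: second quoted point
k = math.log(p2 / p1) / math.log(f2 / f1)  # exponent of the local power law
c = p1 / f1 ** k
print(f"k ~ {k:.1f}")                       # ~3.3: well past the efficient region
print(f"3.5 GHz -> ~{c * 3.5 ** k:.1f} W")  # ~2.7 W per core, extrapolated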

...of course, once the possibility of customization comes in, then uh... I'm not sure the question can be adequately answered anymore.
So, thanks to the c variants of Zen 4 and later cores, we know that you can sacrifice fmax to be able to shrink the core to an extent (...while also making interesting adjustments to your freq/power curve at the same time, apparently). So what about the opposite? Expand the core to push fmax up? It seems plausible enough, but I don't know how far you can take that.
(there is plenty of room to work with, though; the WikiChip entry for Zen 2 claims that a Zen 2 core + L2 cache takes up an estimated 3.64 mm^2, triple that of an N1 with 512 KB of L2)
So right now, customization makes everything a bit of a mystery, but there isn't some immediate console-ready ARM CPU that Microsoft could have used that would have given them much of a benefit from switching to ARM. However, it's basically impossible at this point to know what will happen in 2028. It's interesting to see how much room Zen has to be optimized, though; I do wonder if Zen 6 will just end up being good enough that Microsoft won't feel the need to deal with all the possible roadblocks of switching to ARM just for a smaller performance boost, quite yet at least.

I continued thinking about that slide, and the lack of context given for listing BOTH Zen 6 and the possible customized ARM CPU. As in, we don't know which is the main candidate and why it hasn't been committed to yet. But as others have brought up, the idea of Microsoft of all companies being the one to take the leap still feels rather odd. The rest of Microsoft has been rather slow and unprepared for the transition to RISC, whereas Sony has no real stake in x86 keeping them away from ARM in the future.

It honestly feels to me like the divisions of Microsoft just aren't communicating. The core of Xbox's strategy is that whatever you play on Game Pass, you can play on console AND PC. So, where is PC? The idea that we will have PC gaming on ARM by 2028 seems laughable. The idea that developers, including Microsoft's own, won't have to worry about developing for x86 on PC from 2028 until the next, next generation is downright impossible to me. The entire point of switching to x86 was to make console development easier by minimizing the differences from PC gaming. Console architecture has always differed from time to time, but it has never led PC to differ as well; PC has been happily x86 since the early 80s, and it doesn't follow the whims of consoles. So I can't help but feel we are looking at a Microsoft that doesn't care if its own devs have to make games for both ARM and x86, has secretly been working to make Windows on ARM the near future without telling a soul (highly doubt it), or an Xbox team that is just hoping Windows on ARM will take off without talking to the Windows team.

But regardless, the more I think about Xbox on ARM, the weirder it all seems, and the more I understand my initial reaction of pure shock and confusion. If PC remains x86, devs are going to have to get those games running on x86, and there will still be a lot of x86 CPUs of differing power levels to target. And in that world, none of this hypothetical matters, and Nintendo can happily move along with their portable ARM CPU punching above its weight compared to the x86 CPUs devs are writing their games to run on.
 
Why would they do this
A much bigger picture of the relevant part of the screen, i.e. the hero and objects in the center. It's equivalent to a 9.7-inch 16:9 screen with a little missing left and right, and no one looks there anyway. It would make all the copycat devices look tiny and provide a clear upgrade for Switch owners. The aspect ratio allows for a higher resolution while still keeping GPU strain in check. Or something else.
I don't know, ask them. When the screen size and manufacturer leaked, someone posted a link to a database, and the only 7.91-inch screen by that company I could find at a quick glance was a 1536x2048 4:3 panel. It makes a lot of sense to me and wouldn't surprise me.
 
About the start of mass production: that's usually the moment when everything gets as leaky as Ubisoft.

So we should know "soon" enough. ^^

As for the release date, I'm still on the same team as I was long ago.

#TeamEarliestRealisticReleaseDateAccordingToFamiboards

So March 2024… at the moment

I'm Team ASAP. Looks like our teams do the same thing, wanna merge them?

Techies analyzing the SoC after the release of the Nintendo Jonathan:

God I wish my suit game were as tight as Keanu Reeves'.

God I wish I looked like him.

Salsa this, salsa that, but where's the lamb sauce?

How about we agree on a nice Aioli?
 
How accurate did that end up being?
I don't know. I can't find the video where he mentions this. However, I remember that it involved Wolfenstein comparisons. I looked through all the content Rich made for DF involving that title and Switch overclocking and couldn't find the mention again. Sorry.
The calculation error is in the number of cores.
The architecture change described above led to a theoretical doubling of the number of FP cores, but as he already described, in practice it doesn't work out like that, because rendering graphics requires integer cores as well.

When calculating FLOPS, if you want to be objective and not just deceive yourself, you need to always use 64 per SM/CU, not the theoretical 128 as introduced by the new Int/FP unit. Hence only 768 Cores for 12 SMs.


Also, while I'm here:
Core for core and clock for clock, with the same number of CUs clocked at the same fixed MHz, meaning at the exact same theoretical TFLOPS, RDNA2 delivers almost exactly 50% more FPS than GCN. Meaning a 4 TFLOPS GCN GPU will be matched in real-world performance by a 2.7 TFLOPS RDNA2 GPU.

Also: SM for SM and clock for clock, Ada Lovelace delivers the same performance in raster graphics workloads (i.e. regular graphics: no RT, DLSS, or Frame Generation, which are where it has been enhanced) as Ampere. The difference is that the much better 4N process removes the shackles the 8N process put on Ampere's architecture, allowing it to clock much higher and pack more cores for the same energy use. Hence: Ampere at 4 nm equals Ada Lovelace as far as regular graphics performance goes, just without improvements like frame generation and whatever else was enhanced.

Also:
CU to SM and clocked roughly the same, RDNA2 delivers on average 72% of the graphics fps performance of Ada Lovelace with the same number of CUs to its SMs.

Also: Clock for Clock and CU for CU, RDNA3 is only 5% faster than RDNA2.

Also:
Ada Lovelace (or Ampere at 4 nm) with 24 SMs achieves the same fps at the same raw graphics output (no RT, no DLSS) as RDNA3 does with 32 CUs. The same is true at 46 SMs vs. 60 CUs and at 76 SMs vs. 96 CUs. They all clock around the same, giving a performance equivalent of roughly 75%, 77% and 79%, respectively.
Hence: 12 SM Ampere@4nm is equivalent to 16 CU RDNA3 and 17 CU RDNA2 under similar clock or power constraints. Or, with the same number of CUs, RDNA3 would have to clock roughly 33% higher, and RDNA2 39% higher.
For example, to match a 12 SM GPU @ 1 GHz, a 12 CU RDNA2 chip would need to be clocked @ 1.39 GHz.

So to recap: RDNA2 has 50% more graphics performance (fps output at identical settings at identical TFLOPS) than GCN. RDNA3 has 5% more than RDNA2. Ada Lovelace or Ampere@4nm has 33% more than RDNA3, 39% more than RDNA2, and roughly 105-110% more than GCN; put another way, it's about 2.05 times as fast.

All of this is without any DLSS.

Knowing all of this, we can calculate an honest comparison. Remember, for the number of cores always use: number of SMs/CUs x 64.

For example, a hypothetical 1.84 TFLOPS GCN GPU's graphics performance is matched by an RDNA2 GPU with a theoretical 1.27 TFLOPS, and by an Ampere@4nm GPU with 883-898 GFLOPS (no, this is not a typo).
So a hypothetical 12 SM Ampere GPU would need to be clocked at 575-585 MHz to achieve the same graphics performance as a notional 1.84 TFLOPS GCN GPU.
Thank you so much for putting these comparisons across architectures, generations and nodes on mathematically more solid ground. I don't see any error in your calculations at first glance (I mean the pure math; I can't speak to the validity of the premises).

One caveat I see is that these seemingly give a lower bound for the Ampere architecture's performance. According to another poster quoting Digital Foundry, games would run faster thanks to the new architecture (not twice as fast, mind you, but by a certain percentage; I think 30% was the number they threw out). So in theory, you would need (on average) to fulfill 75% of the requirements you listed (either in terms of cores or clocks) to match the equivalent performance on GCN.

edit: here is the reference for the 30% advantage claim

edit 2: never mind. Thraktor answered that question; both GCN and Ampere have the same architecture in that regard.

Furthermore, the memory concerns must still be addressed: the PS4 features a memory bandwidth of 176 GB/s, while Drake is projected to have 102 GB/s at its disposal. How does that bottleneck rendering, with and without DLSS at play?
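For context on where those two figures come from, a minimal sketch (the PS4 numbers are its known GDDR5 configuration; the Drake bus configuration is the commonly speculated one, not confirmed):

# Peak bandwidth = transfer rate x bus width in bytes.
def peak_gbs(mts: float, bus_bits: int) -> float:
    return mts * 1e6 * bus_bits / 8 / 1e9

print(peak_gbs(5500, 256))  # PS4: GDDR5 at 5500 MT/s, 256-bit -> 176 GB/s
print(peak_gbs(6400, 128))  # Drake (speculated): LPDDR5-6400, 128-bit -> 102.4 GB/s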
Why would they do this
I think your answer is a bit dismissive of the effort our friend put into his post. But to be frank, I also wonder why they went with a 4:3 ratio in their argument.
 
Except Video Games Chronicle never said that.

What Video Games Chronicle said is that The Matrix Awakens: An Unreal Engine 5 Experience showcased by Nintendo and Epic had advanced ray tracing enabled, and the visuals were comparable to PlayStation 5 and the Xbox Series X|S, with respect to The Matrix Awakens: An Unreal Engine 5 Experience.

(And no, comparable is not the same as superior.)
I'm way late, catching up, but I don't know if someone pointed this out: it was Nate who said he heard specifically that the RT capabilities were as good if not better than PS5/X/S. Everyone else said comparable visuals; Nate said as good if not better RT specifically.
 
Actually, how many triangles can the Series consoles and PS5 output? How many can the PS4 and the Xbox One consoles output?

Anyone know?
https://www.anandtech.com/show/1599...ft-xbox-series-x-system-architecture-600pm-pt
https://images.anandtech.com/doci/15994/202008180221421_575px.jpg
• Xbox One: 1.6 Gtris/sec
  • Xbox One X: 4.4 Gtris/sec
  • Xbox Series X: 7.3 Gtris/sec (4 tris per clock, 1825 MHz)

- PS5: I guess 8.9 Gtris/sec (4 tris per clock, 2233 MHz)

Tegra X1 has 2 PolyMorph Engines (1 PolyMorph Engine per SM).

https://www.nvidia.com/content/pdf/product-specifications/geforce_gtx_680_whitepaper_final.pdf
Note on page 7: 2 PolyMorph Engines generate 1 triangle per clock.
- Nintendo Switch: 0.384 Gtris/sec (384 MHz), 0.768 Gtris/sec (768 MHz)

1 Ampere TPC (2 SMs) has 1 PolyMorph Engine, so 12 SMs have 6 PolyMorph Engines.

- 12 SM T239: I guess 3 triangles per clock, so 1.5 Gtris/sec (500 MHz) or 3 Gtris/sec (1000 MHz)
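The arithmetic behind those estimates, as a sketch (the tris-per-clock figures are the ones derived above, including my guess for T239):

# Peak triangle throughput = triangles per clock x clock.
def gtris_per_sec(tris_per_clock: float, clock_mhz: float) -> float:
    return tris_per_clock * clock_mhz / 1000

print(gtris_per_sec(4, 1825))  # Xbox Series X: ~7.3 Gtris/sec
print(gtris_per_sec(4, 2233))  # PS5: ~8.9 Gtris/sec
print(gtris_per_sec(1, 768))   # Switch docked, 2 PolyMorph -> 1 tri/clock: ~0.77
print(gtris_per_sec(3, 1000))  # 12 SM T239, 6 PolyMorph -> 3 tris/clock: 3.0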
 
Yes, but not remotely close to the Joy-Con drift case.


And you know exactly why Nintendo fixed them for free: because of the massive Joy-Con drift backlash and lawsuit in the US. Otherwise they'd have treated you just the same as Sony, which is in fact what happened to me because I don't live in the US. They didn't even bother acknowledging the drift after I sent it to them, and they made me pay for shipping and other costs. Asshats.

In the end I fixed it myself using the cardboard fix. But even this didn't last long.
I've had more drifting issues on my PlayStation controllers than my Joy-Cons. Both of the PS4 controllers I had started drifting within months of when I got my PS4 Pro in 2018. In fact, the Joy-Cons from my launch 2017 Switch still don't drift. My Pro Controllers have started to, at least one of them. And I played that thing almost entirely in handheld, too.
 
Steam Deck can run FF7 Remake at 60fps, and in raw power the Switch 2 will be comparable to the Steam Deck.
Oh. I should've checked that first lol, that means it's clearly something they can "easily" achieve then.
Having experiences at least on par with PS4 even in portable mode sounds very exciting already, but the prospect of better performance in many games (not expecting all of them, obviously) sounds delightful.

Bring the salsa, bring the birds, bring the pistachio ice cream, bring them all.

I'm hungry too now
 
The problem with doing that is that not all devs will use the machine as efficiently as it can be used. Switch 2 might be close to PS4 in raw power, but if used correctly will likely outclass it by a fair margin.

But that will always be on a game-by-game basis, because not everyone will bother using more than just the raw power.
This!☝️
Some games will port and scale nicely and others won't. We already saw this with Wii U -> Switch ports: despite Switch being more performant than Wii U, some ports scaled nicely to the new hardware (MK8 DX, BotW) while others didn't (Bayo 1 + 2). Comparisons are hard to make because there are always trade-offs between the machines.
In that case, would going for a fairer comparison for the NG Switch be a good idea?

As in, can I use something like the Steam Deck to see if a 3rd-party game is workable or not on NG, and how it'd look and run there if so?
 
In that case, would going for a fairer comparison for the NG Switch be a good idea?

As in, can I use something like the Steam Deck to see if a 3rd-party game is workable or not on NG, and how it'd look and run there if so?
PS4 or Steam Deck are fine. If a game runs on them, it will run more than fine on Switch NG. Although with the caveat that a game that runs on PS5/XSX can also be ported to Switch NG, just with some cuts here and there. It's all up to developer effort.

As I said, I like to think of Switch NG as a modern PS4+ with a better CPU, more RAM, faster IO, and access to a cheat code that lets it target/output resolutions way higher than what the PS4 or Steam Deck can achieve.
 
Lemme play the hair in the soup.

Technically it could also mean the game releases before ReDraketed.
Technically.

But honestly, I doubt that. I just wanted to get it out of the way.

Now, how Dangen defines "near release", aka how they view the launch window, is the interesting part.
Again, I'm behind, catching up. But when I read "near release", I think it could also mean the game may not come day-and-date with Switch 2, but still near the release of the other versions.
 
Please read this new, consolidated staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.

