
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

That's what I was asking: I suppose it lies more with Nvidia making the SoC Nanite/Lumen/Niagara, etc. compliant, but we really don't have anything to go by in terms of any fancy graphical feature set apart from suspecting DLSS and that it might feature RT cores in some capacity.
Well, Lumen is not supported on PS4 or Xbox One, so we’re not totally blind on what is required.

And if Nintendo wants to continue enjoying ports of 3rd-party titles, they and Nvidia will need to make sure what they're taping out is capable of playing nice with these UE5 features, because devs are definitely going to want to use them whenever possible.
 
Well, Lumen is not supported on PS4 or Xbox One, for one.

And if Nintendo wants to continue enjoying ports of 3rd-party titles, they and Nvidia will need to make sure what they're taping out is capable of playing nice with these UE5 features, because devs are definitely going to want to use them whenever possible.
I wasn't even arguing about PS4 or Xbox One. I was asking if there were any reports about Dane using more contemporary feature sets and support for stuff like Lumen, etc.

So far the only thing I've read was Nate hearing about the latest kits having RT cores in some capacity. Dunno if that still holds true, though.
 
I wasn't even arguing about PS4 or Xbox One. I was asking if there were any reports about Dane using more contemporary feature sets and support for stuff like Lumen, etc.

So far the only thing I've read was Nate hearing about the latest kits having RT cores in some capacity. Dunno if that still holds true, though.
It gives us a defined floor to work with, though, which is the important thing. Knowing that Dane being equivalent to the PS4 in capabilities won't cut the mustard for these features is at least a start.
 
It gives us a defined floor to work with, though, which is the important thing. Knowing that Dane being equivalent to the PS4 in capabilities won't cut the mustard for these features is at least a start.
I mean, that's a given. What I've been asking is whether there were any new reports about the successor's touted feature sets, given that it's purportedly a Lovelace architecture.

Then there's the whole situation with the chip shortage too.
 
I mean, that's a given. What I've been asking is whether there were any new reports about the successor's touted feature sets, given that it's purportedly a Lovelace architecture.

Then there's the whole situation with the chip shortage too.
Until we see how Nintendo and Nvidia customize the Dane SoC, that's totally unknowable. We can make guesses based on what current Orin chips in the right TDP range are capable of, but even that is a really rough guess, because part of the customization we expect Nintendo to ask for will be taking things OUT of the Orin SoCs we know about that aren't relevant to video games (like the suspected swap of A78AE CPU cores for A78C cores), and who knows how much those reductions make room for other things that would make this possible.

All we can determine is what it would need to make it happen.

With that said, here’s the documentation from Epic on Lumen requirements:
Lumen is developed for next-generation consoles (PlayStation 5 and Xbox Series S / X) and high-end PCs. Lumen has two ray tracing modes, each with different requirements:
  • Software Ray Tracing:
    • Video cards using DirectX 11 with support for Shader Model 5.
    • (Currently requires an NVIDIA GeForce GTX-1070 or greater card for performance in Early Access. Lumen has many options for scaling down, but these require further development before they are recommended for use.)
  • Hardware Ray Tracing:
    • Windows 10 with DirectX 12 support.
    • (Video cards must be NVIDIA RTX-2000 series and higher, or AMD RX-6000 series and higher.)
People who are smarter than me with this stuff can pick this apart to give an idea of precisely what Dane would need to have to support Lumen, which is likely the most demanding of UE5’s fancy features. From there, we can determine if what we already heavily expect will be sufficient or not.
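To make that checklist concrete, here's a minimal Python sketch (my own illustration, not Epic's tooling) that encodes the two Lumen paths above and tests a completely hypothetical Dane-like spec against them; every number in the `dane` dict is a placeholder, not a leak:

```python
# A minimal sketch (not from Epic) encoding the two Lumen paths above as a
# checklist, so a rumored spec can be sanity-checked against them.

def lumen_paths(spec: dict) -> list:
    """Return which Lumen modes a (hypothetical) spec could plausibly satisfy."""
    paths = []
    # Software RT: DX11-class API with Shader Model 5+, and roughly
    # GTX 1070-level raster performance per Epic's Early Access note.
    if spec["shader_model"] >= 5 and spec["raster_perf_vs_gtx1070"] >= 1.0:
        paths.append("software RT")
    # Hardware RT: a DX12-class API plus dedicated RT acceleration
    # (RTX 2000+ / RX 6000+ class on PC).
    if spec["dx12_class_api"] and spec["rt_cores"] > 0:
        paths.append("hardware RT")
    return paths

# Entirely made-up numbers for illustration:
dane = {"shader_model": 6, "raster_perf_vs_gtx1070": 0.5,
        "dx12_class_api": True, "rt_cores": 4}
print(lumen_paths(dane))  # -> ['hardware RT'] under these assumptions
```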
 
Given UE5 supports Switch (granted, not all of its features), it would have to be a huge miss for the Switch successor not to support UE5.

Nintendo is not going to drop UE5 support; why would they do that? It's safe to say that the Dane uArch is probably not powerful enough to drive that more advanced stuff, even if the Dane uArch has a feature set that is several generations ahead of the uArch in the PS4/Xbox One.
 
Nintendo is not going to drop UE5 support; why would they do that? It's safe to say that the Dane uArch is probably not powerful enough to drive that more advanced stuff, even if the Dane uArch has a feature set that is several generations ahead of the uArch in the PS4/Xbox One.
I agree. It's quite possible Switch 2 UE5 ports will have some features turned off, and/or Nintendo will pay Epic to make a Switch 2 specific UE5 fork like they did with Switch, with a bunch of stuff optimized around what the SoC handles best.

Given UE5 is already very scalable, supporting current-gen, mobile, and next-gen, it would have to be a huge miss on Nintendo's part to have a device that doesn't support UE5. So yeah, I was/am confused by the discussion. The bigger question is probably how good support will be, and whether there will be Switch 2 specific UE5 settings.
 
I think it will all come down to Nvidia figuring out how to nullify the performance penalties incurred by RT without needing DLSS to compensate. With that said, Tensor cores have been used primarily as denoisers, but it seems rather inefficient that the ray-tracing implementation has to go through two stages in the rendering pipeline before being output.

If mobile RT is a thing, it would have to consolidate the steps into one process while limiting power draw.
"without dlss" is a folly. if you can use DLSS in your game, you're better off using it than not. doesn't matter how light the game it. one of the benefits of Ampere's tensor cores is concurrency, so while an image is being draw, it can denoise another image. this helps with efficiency greatly.

Nintendo is not going to drop UE5 support; why would they do that? It's safe to say that the Dane uArch is probably not powerful enough to drive that more advanced stuff, even if the Dane uArch has a feature set that is several generations ahead of the uArch in the PS4/Xbox One.
No, it's not safe to assume that, given we know nothing of Ampere's low-end performance across CUDA cores, Tensor cores, and RT cores.
 
Well, Lumen is not supported on PS4 or Xbox One, so we’re not totally blind on what is required.

And if Nintendo wants to continue enjoying ports of 3rd-party titles, they and Nvidia will need to make sure what they're taping out is capable of playing nice with these UE5 features, because devs are definitely going to want to use them whenever possible.

Lumen not being supported on PS4 and Xbox One probably has more to do with architecture features than just raw horsepower; the Maxwell architecture used in the TX1 is still far newer than the GCN tech used in the PS4/Xbox One. I fully expect full UE5 features to be available in many cell phone games down the road (not saying that those games will be top-tier demanding), but as long as the features are supported by the architecture, the bigger question becomes what minimum level of SSD speed is needed for Nanite and VRS to be fully utilized on a mobile device...
 
I agree. It's quite possible Switch 2 UE5 ports will have some features turned off, and/or Nintendo will pay Epic to make a Switch 2 specific UE5 fork like they did with Switch, with a bunch of stuff optimized around what the SoC handles best.

Given UE5 is already very scalable, supporting current-gen, mobile, and next-gen, it would have to be a huge miss on Nintendo's part to have a device that doesn't support UE5. So yeah, I was/am confused by the discussion. The bigger question is probably how good support will be, and whether there will be Switch 2 specific UE5 settings.

Funnily enough, we have clearly seen that RDNA2 is in its infancy when it comes to RT performance, so PS5 and Series X definitely struggle to showcase any meaningfully impressive RT over previous-generation desktop cards from Nvidia.

Going forward, this will constantly be a balancing act of how much hardware gets dedicated to specific features, based on how important they are viewed as being to the future of games development and what will streamline the architecture.
 
Funnily enough, we have clearly seen that RDNA2 is in its infancy when it comes to RT performance, so PS5 and Series X definitely struggle to showcase any meaningfully impressive RT over previous-generation desktop cards from Nvidia.

Going forward, this will constantly be a balancing act of how much hardware gets dedicated to specific features, based on how important they are viewed as being to the future of games development and what will streamline the architecture.
Yeah, it's really hard to say how RT would perform on such a low-power device, but Ampere generally blows RDNA2 out of the water in fairly like-for-like comparisons of RT performance on PC. How much the hardware RT acceleration helps with Lumen performance will probably be one of the bigger factors in whether or not the feature comes to Dane.
 
Lumen not being supported on PS4 and Xbox One probably has more to do with architecture features than just raw horsepower; the Maxwell architecture used in the TX1 is still far newer than the GCN tech used in the PS4/Xbox One. I fully expect full UE5 features to be available in many cell phone games down the road (not saying that those games will be top-tier demanding), but as long as the features are supported by the architecture, the bigger question becomes what minimum level of SSD speed is needed for Nanite and VRS to be fully utilized on a mobile device...
I agree, but you still need to know what architecture features need to be there and how capable said features need to be, and that's still a big shrug without the specifics.

I'll even say that having a more architecturally robust SoC may be how Nintendo tries to keep pace with PS and Xbox going forward without sacrificing thermal or battery performance. It worked with the TX1, and I don't see why it wouldn't work with Dane, especially considering how many new bells and whistles it's likely to feature.
 
The tech demo is cool and all, but I don't need my games to have hyper-realistic graphics. Give me good gameplay over graphics anyway; I don't want the future of gaming to be cinematic simulators.
 
The tech demo is cool and all, but I don't need my games to have hyper-realistic graphics. Give me good gameplay over graphics anyway; I don't want the future of gaming to be cinematic simulators.
My take has always been: if this is what could be done to achieve realism, that tells me that so much more can be done with non-realism.
 
The tech demo is cool and all, but I don't need my games to have hyper-realistic graphics. Give me good gameplay over graphics anyway; I don't want the future of gaming to be cinematic simulators.
Don't think what you saw in the demo doesn't apply to cartoony games. I've seen way too many people make that mistake.
 
Is the Matrix demo released on Series S? That's the baseline that Switch 2 will have to meet: a 4 TFLOPs RDNA2 GPU and its 8-core CPU.
Hooold ya horses there.

While I will say the 4 RT cores of Orin may or may not beat the 8 Ray Accelerators in the Series S's GPU (they will more or less match them if they're Ampere RT cores rather than double-performance Lovelace RT cores or something),

GPU perf outside of that doesn't need to be nearly as high.

For reference, 1 TFLOP of Ampere roughly converts to 4.1 TFLOPs of Polaris.

The PS4 Pro was 4.2 TFLOPs.

If they are targeting 1 TFLOP of Ampere for docked Dane performance, that would put it GPU-wise around the PS4 Pro before DLSS.

So with DLSS, that should give it a fair bit of margin to more or less match or surpass the Series S GPU-wise, with the limiting factors becoming memory bandwidth and CPU if devs are somehow already pushing those to the limits (I have my doubts about that ever happening outside of things like 120fps modes until late in the generation).

And that's just pure FP32; mixed precision should bring its effective TFLOP value higher, as Orin has double-rate FP16, and removing that is the least likely uArch change for Dane.
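To put rough numbers on that conversion, here's a quick back-of-the-envelope sketch in Python; the 4.1x factor and the 1 TFLOP docked target are the post's own rough figures, not measured data:

```python
# Back-of-the-envelope math for the post above: converting a docked Ampere
# TFLOP target into "Polaris-equivalent" TFLOPs, plus the FP16 double-rate
# note. These are the post's rough numbers, not benchmarks.

AMPERE_TO_POLARIS = 4.1   # claimed per-FLOP efficiency ratio
PS4_PRO_TFLOPS = 4.2      # Polaris, FP32

docked_ampere = 1.0       # hypothetical docked Dane target
polaris_equiv = docked_ampere * AMPERE_TO_POLARIS
print(f"{docked_ampere} Ampere TFLOP ~ {polaris_equiv:.1f} Polaris TFLOPs "
      f"(PS4 Pro = {PS4_PRO_TFLOPS})")

# Orin runs FP16 at twice the FP32 rate, so mixed-precision workloads see a
# higher effective throughput:
print(f"FP16 throughput: {docked_ampere * 2:.1f} TFLOPs")
```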
 
Don't think what you saw in the demo doesn't apply to cartoony games. I've seen way too many people make that mistake
The issue is: "and how does this apply directly to future Nintendo hardware?"

We're falling into the same pitfall as WUST all over again in thinking Nintendo's hardware must be in the ballpark of
"That's the baseline that Switch 2 will have to meet: a 4 TFLOPs RDNA2 GPU and its 8-core CPU..."
...or bust, because that's clearly beyond the expectations of what can be done with ARM at the moment.

Let's not forget to take into account power draw, not to mention a new variable that recently introduced itself: production capacity. The global chip shortage has definitely put a noose around what could be cheap and viable. I'm not even sure if Nintendo/Nvidia will switch to another process node beyond the supposed 8nm (and thus the potential to hit said higher clocks) if it means getting fewer chips for the holiday season. Heck, I'm even willing to bet they might stick with 12nm if it means cheaper and more readily available chips.

This baseline sets people's expectations too high. It's the same flawed mentality of "Nintendo must compete with the PS5/XBSX power-wise if it wants to remain relevant" that we have to ditch in these discussions.
 
I have no idea how you're getting any of that out of my comments. I have never said anything about Dane running that demo; my expectations have been fairly rock solid at GTX 1030-1050 levels of performance before DLSS.

My statements have been about how Dane could utilize the individual elements of that demo, such as emissive area lights. The comment you quoted has nothing to do with the rest of your statement, so don't be putting words in my mouth.
 
I'm not even sure if Nintendo/Nvidia will switch to another process node beyond the supposed 8nm (and thus the potential to hit said higher clocks) if it means getting fewer chips for the holiday season. Heck, I'm even willing to bet they might stick with 12nm if it means cheaper and more readily available chips.
I don't think that's necessarily going to be cheap, considering that Nintendo and Nvidia would need to redesign Dane with TSMC's IP in mind, especially since Samsung is the only semiconductor foundry that offers an 8 nm** process node, which is based on Samsung's 10 nm** process node. (I know that GlobalFoundries also has a 12 nm** process node, which is based on Samsung's 14 nm** process node, but I don't think Nvidia has been a customer of GlobalFoundries.) And the cost of a transistor gate on a 12 nm** process node is only ~6.29% cheaper than a transistor gate on a 7 nm** process node, and ~1.4% cheaper than one on a 10 nm** process node. So the cost savings from using a 12 nm** process node are not very high.
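For a sense of what those percentages work out to, here's the arithmetic with the 7 nm** gate cost normalized to 1.0 (just the quoted figures rearranged, nothing more):

```python
# Working through the gate-cost percentages quoted above, with the 7 nm**
# cost normalized to 1.0. These are just the post's figures rearranged.

cost_7nm = 1.0
cost_12nm = cost_7nm * (1 - 0.0629)   # "~6.29% cheaper than 7 nm**"
cost_10nm = cost_12nm / (1 - 0.014)   # 12 nm** is "~1.4% cheaper than 10 nm**"

print(f"12 nm gate cost relative to 7 nm: {cost_12nm:.3f}")
print(f"10 nm gate cost relative to 7 nm: {cost_10nm:.3f}")
# Every node lands within a few percent of 7 nm per gate, which is why
# staying on an older node saves so little.
```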
 
For reference, 1 TFLOP of Ampere roughly converts to 4.1 TFLOPs of Polaris.

The PS4 Pro was 4.2 TFLOPs.

If they are targeting 1 TFLOP of Ampere for docked Dane performance, that would put it GPU-wise around the PS4 Pro before DLSS.
Okay, I'm normally silent in this thread and just try to follow along, but if you don't mind my asking: why such a big difference? I'll admit I'm really undereducated on this stuff, but as far as I've read, TFLOPs are a measurement of graphics performance, is that right? Why would the measurement vary so drastically, to where 1 TFLOP on one processor is 4 TFLOPs on another?

Sorry for the n00b question.
 
Okay, I'm normally silent in this thread and just try to follow along, but if you don't mind my asking: why such a big difference? I'll admit I'm really undereducated on this stuff, but as far as I've read, TFLOPs are a measurement of graphics performance, is that right? Why would the measurement vary so drastically, to where 1 TFLOP on one processor is 4 TFLOPs on another?

Sorry for the n00b question.
TFLOPs are sort of affected by efficiency changes between uArchs, among a number of other factors.

Generally, newer uArchs can do the same thing older ones did with a lower TFLOP number, which is a sign the uArch overall is more efficient per-FLOP.

So in this example, Ampere is around 4x as efficient as Polaris per-FLOP.

Which we have to remember: Polaris was derived from GCN, which was in the OG PS4 back in 2013.

RDNA is an effective clean break from that, so while RDNA1 has a more modest jump over later GCN generations like Polaris and Vega, RDNA2 with its Infinity Cache got a giant leap.


Although, oddly enough, NVIDIA sort of went backwards with their efficiency per-FLOP.

Ampere has way more TFLOPs than Turing, but Turing cards still compete with or even beat Ampere, same with some Pascal cards.

EX: The 2080 Ti has 13.45 TFLOPs; the RTX 3070, which NVIDIA themselves said is equivalent to the 2080 Ti, has 21.7 TFLOPs.

TFLOPs aren't a directly comparable performance metric.

The 1.84 GCN 1.1 TFLOPs of the PS4 aren't directly comparable to the 4.2 TFLOPs of Polaris in the PS4 Pro, which aren't directly comparable to the 4 TFLOPs of Infinity-Cache-less RDNA2 in the Series S, which aren't directly comparable to 4 TFLOPs of true RDNA2 on desktop.
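As a worked example of that last point, the 2080 Ti / RTX 3070 equivalence above implies a per-FLOP efficiency ratio we can use to normalize raw figures; all numbers here are the post's, not benchmarks:

```python
# Illustrating why raw TFLOPs mislead across architectures, using the post's
# own 2080 Ti ~= RTX 3070 equivalence. If two cards perform about the same,
# the ratio of their TFLOP figures is the per-FLOP efficiency gap between
# their uArchs.

turing_2080ti = 13.45   # TFLOPs
ampere_3070 = 21.7      # TFLOPs, roughly equal real-world performance

per_flop_gap = ampere_3070 / turing_2080ti
print(f"Turing does ~{per_flop_gap:.2f}x the work per FLOP of Ampere here")

# Same idea applied to the earlier Polaris claim (Ampere ~4.1x Polaris
# per-FLOP): the PS4 Pro's 4.2 Polaris TFLOPs shrink to roughly 1 TFLOP
# once expressed on an Ampere scale.
print(f"PS4 Pro ~ {4.2 / 4.1:.2f} Ampere-equivalent TFLOPs")
```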
 
Okay, I'm normally silent in this thread and just try to follow along, but if you don't mind my asking: why such a big difference? I'll admit I'm really undereducated on this stuff, but as far as I've read, TFLOPs are a measurement of graphics performance, is that right? Why would the measurement vary so drastically, to where 1 TFLOP on one processor is 4 TFLOPs on another?

Sorry for the n00b question.
Teraflops are, at best, a proxy metric that is only really directly comparable between GPUs of the same architecture. There's a lot more to how a GPU performs than its theoretical peak compute.
 
TFLOPs are sort of affected by efficiency changes between uArchs, among a number of other factors.

Generally, newer uArchs can do the same thing older ones did with a lower TFLOP number, which is a sign the uArch overall is more efficient per-FLOP.

So in this example, Ampere is around 4x as efficient as Polaris per-FLOP.

Which we have to remember: Polaris was derived from GCN, which was in the OG PS4 back in 2013.

RDNA is an effective clean break from that, so while RDNA1 has a more modest jump over later GCN generations like Polaris and Vega, RDNA2 with its Infinity Cache got a giant leap.


Although, oddly enough, NVIDIA sort of went backwards with their efficiency per-FLOP.

Ampere has way more TFLOPs than Turing, but Turing cards still compete with or even beat Ampere, same with some Pascal cards.

EX: The 2080 Ti has 13.45 TFLOPs; the RTX 3070, which NVIDIA themselves said is equivalent to the 2080 Ti, has 21.7 TFLOPs.

TFLOPs aren't a directly comparable performance metric.

The 1.84 GCN 1.1 TFLOPs of the PS4 aren't directly comparable to the 4.2 TFLOPs of Polaris in the PS4 Pro, which aren't directly comparable to the 4 TFLOPs of Infinity-Cache-less RDNA2 in the Series S, which aren't directly comparable to 4 TFLOPs of true RDNA2 on desktop.

Teraflops are, at best, a proxy metric that is only really directly comparable between GPUs of the same architecture. There's a lot more to how a GPU performs than its theoretical peak compute.

GOTCHA. I swear, thanks to this thread I've learned more about computing this year than I had in the previous 25 years of being a "computer nerd." Thanks so much!
 
Right now, the theory is that there's a lack of power on mobile devices. I'm almost certain the PS4 and Xbox One can support Nanite, but they were passed over due to age. Hell, mobile probably can too.

Lumen is the big one, however, and I can see why that's cut from mobile. Now, whether it can run on last-gen machines? I think it can, but you'd be pretty limited in what you can do after that. Dane being in that ballpark of game performance, but with RT acceleration to assist, should help it. But we just don't completely know yet.
It's probably just a timing thing more than anything. They put focus on the devices that would likely use it first, and then the others. UE is popular in content creation, but even more popular in gaming.


Mobile will probably get a version of Lumen later. And Nanite maybe?

Dane would just be the only mobile SoC making use of all the GPU hardware features from the word go, I think, for a while, and at a higher fidelity than its mobile contemporaries.
 
Until we see how Nintendo and Nvidia customize the Dane SoC, that's totally unknowable. We can make guesses based on what current Orin chips in the right TDP range are capable of, but even that is a really rough guess, because part of the customization we expect Nintendo to ask for will be taking things OUT of the Orin SoCs we know about that aren't relevant to video games (like the suspected swap of A78AE CPU cores for A78C cores), and who knows how much those reductions make room for other things that would make this possible.

All we can determine is what it would need to make it happen.

With that said, here’s the documentation from Epic on Lumen requirements:

People who are smarter than me with this stuff can pick this apart to give an idea of precisely what Dane would need to have to support Lumen, which is likely the most demanding of UE5’s fancy features. From there, we can determine if what we already heavily expect will be sufficient or not.
Hm, that's interesting. For the software-based RT, it says Shader Model 5 or greater.


Switch has Shader Model 6.4, I think 😜

You know what that means! ;)
 
Hm, that's interesting. For the software-based RT, it says Shader Model 5 or greater.


Switch has Shader Model 6.4, I think 😜

You know what that means! ;)
It means you missed the part underneath that where it says the minimum hardware spec for the software ray tracing option is a GTX 1070, which is a card using the Pascal architecture. ;)
 
Err, how exactly does this relate to future Nintendo technology?

Do we have confirmation that Dane can run UE5/Lumen/Nanite as its baseline? I could see Nvidia touting it as "the first SoC to be fully compatible with Unreal Engine 5, with ray tracing and global illumination features enabled".
UE5 discussion is probably more relevant with regard to Nintendo and Nvidia than the billion posts about Qualcomm/Samsung/Apple/etc. and smaller process nodes.
 
I think you're setting your expectations waaaay too high.
I think you guys are misinterpreting what I'm saying. The Series S is the weakest next-gen home console, and developers have to support it as the lowest common denominator. This gives some leeway for Switch 2. With DLSS enabled, the GPU won't be an issue, but the CPU might.

It will be interesting to see how the base PS4 performs, also.
 
If the Series S averaged 8xxp in the Matrix demo (and this is with reduced fidelity), I can see a further reduction in fidelity allowing for 360p/540p on Dane (theoretically). The biggest stress point is Lumen, and I haven't seen just how much it can scale.
 
TFLOPs are sort of affected by efficiency changes between uArchs, among a number of other factors.

Generally, newer uArchs can do the same thing older ones did with a lower TFLOP number, which is a sign the uArch overall is more efficient per-FLOP.

So in this example, Ampere is around 4x as efficient as Polaris per-FLOP.

Which we have to remember: Polaris was derived from GCN, which was in the OG PS4 back in 2013.

RDNA is an effective clean break from that, so while RDNA1 has a more modest jump over later GCN generations like Polaris and Vega, RDNA2 with its Infinity Cache got a giant leap.


Although, oddly enough, NVIDIA sort of went backwards with their efficiency per-FLOP.

Ampere has way more TFLOPs than Turing, but Turing cards still compete with or even beat Ampere, same with some Pascal cards.

EX: The 2080 Ti has 13.45 TFLOPs; the RTX 3070, which NVIDIA themselves said is equivalent to the 2080 Ti, has 21.7 TFLOPs.

TFLOPs aren't a directly comparable performance metric.

The 1.84 GCN 1.1 TFLOPs of the PS4 aren't directly comparable to the 4.2 TFLOPs of Polaris in the PS4 Pro, which aren't directly comparable to the 4 TFLOPs of Infinity-Cache-less RDNA2 in the Series S, which aren't directly comparable to 4 TFLOPs of true RDNA2 on desktop.

I think Nvidia really made a mistake with the memory allocations in the Ampere cards, so yes, the 3070 in raw specs looks like it's not a big enough leap over the 2080 Ti (with its closer theoretical FLOPs, the 3060 Ti is much closer to the 2080 Ti in actual performance).

How much of the 3070 not performing up to where it should be is related to memory bandwidth issues? Nvidia releasing newer versions of their Ampere cards, some with double the RAM, says just about all that needs to be said, really.
 

The Matrix Awakens is effectively three different demos in one: an incredible character rendering showcase, a high-octane set-piece and an ambitious achievement in open-world simulation and rendering. However, perhaps the biggest surprise is that all of these systems co-exist within the one engine and are intrinsically linked - but with that said, it is a demo and not a shipping game. There are issues and performance challenges are evident. Possibly the most noticeable problem is actually a creative decision. Cutscenes render at 24fps - a refresh rate that does not divide equally into 60Hz. Inconsistent frame-pacing means that even at 120Hz, there's still judder. Jumps in camera cuts can see frame-time spikes of up to 100ms. What's actually happening here is that the action is physically relocating around the open world, introducing significant streaming challenges - a micro-level 'fast travel', if you like. Meanwhile, as stated earlier, speedy traversal and car crashes see frame-rate dip. Combine them and you're down to 20fps. It's at this point you need to accept that we're still in the proof of concept stage.

The performance challenges also impact resolution too. However, thanks to Unreal Engine 5's temporal super resolution (TSR) technology the 1404p-1620p rendering on Xbox Series X and PS5 looks suitably 4K in nature. However, in the most intense action, 1080p or perhaps lower seems to be in effect, which really pushes the TSR system hard. At the entry level, it's incredible to see Xbox Series S deliver this at all but it does so fairly effectively, albeit with some very chunky artefacts. Here, the reconstruction target is 1080p, but 875p looks to be the max native rendering resolution with pixel counts significantly below 720p too. It should be stressed that TSR can be transformative though, adding significantly to overall image quality, to the point where Epic allows you to toggle it on and off in the engine showcase section of the demo. Series S does appear to be feature complete, but in addition to resolution cuts, feature reduction in detail and RT does seem to be in play.

But from our perspective, The Matrix Awakens is a truly remarkable demonstration of what the future of games looks like - and to be honest, we need it. While we've seen a smattering of titles that do explore the new capabilities of PlayStation 5 and Xbox Series hardware, this has definitely been the year of cross-gen. Resolutions are higher, detail is richer, frame-rates are better and loading times are much reduced - but what we're seeing in The Matrix Awakens is a genuine vision for the future of real-time rendering and crucially, more streamlined production. The Matrix Awakens is free to download from the Xbox and PlayStation stores and it goes without saying that experiencing it is absolutely essential.
Assuming Epic Games plans on having The Matrix Awakens available to run on the DLSS model*, based on how the Xbox Series S version is described, I suspect the DLSS model* version would probably have a max native rendering resolution of 720p, with pixel counts probably between around 360p and around 540p, without DLSS enabled. And the assets for the DLSS model* version are probably not going to be as good as those of the PlayStation 5 and Xbox Series X versions, and perhaps the Xbox Series S version as well.
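For a sense of how big a jump DLSS would need to make from those guessed native resolutions, here's the pixel-count arithmetic (assuming 16:9 frames; the native figures are the post's speculation, not confirmed):

```python
# Pixel-count arithmetic behind the resolution guesses above: how big a
# jump DLSS would need from a 360p-540p internal render to common output
# targets. Assumes 16:9 frames throughout.

def pixels(height: int) -> int:
    """Total pixels of a 16:9 frame at the given height."""
    return height * (height * 16 // 9)

for native in (360, 540, 720):
    for target in (1080, 1440):
        print(f"{native}p -> {target}p: {pixels(target) / pixels(native):.2f}x pixels")
```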
 


Assuming Epic Games plans on having The Matrix Awakens available to run on the DLSS model*, based on how the Xbox Series S version is described, I suspect the DLSS model* version would probably have a max native rendering resolution of 720p, with pixel counts probably between around 360p and around 540p, without DLSS enabled. And the assets for the DLSS model* version are probably not going to be as good as those of the PlayStation 5 and Xbox Series X versions, and perhaps the Xbox Series S version as well.


If the DLSS model* can get within spitting distance of the Series S version, I'd be damn thrilled. It's so unbelievably far removed from anything the current Switch is doing that I just cannot fathom it.


Anybody care to explain what this might mean? Could this just be business as usual, i.e. they're constantly changing and 'validating' their APIs irrespective of the new model?
 
Anybody care to explain what this might mean? Could this just be business as usual, i.e. they're constantly changing and 'validating' their APIs irrespective of the new model?
It means they are hiring for that position. So yes, business as usual. Could be more, could be less. We'll never know.
 
It means they are hiring for that position. So yes, business as usual. Could be more, could be less. We'll never know.

It's also interesting to see everything slowly trickle out that seems to point to Nintendo ramping up for what's next.
Like the news just the other day about them expanding in Japan; everything just seems to point to around summer 2022.

Nintendo is expanding its game development base.

The company will lease approximately 8,500 square meters in a Kyoto City building under construction next to its headquarters and will move into the building in May 2022.
In addition, a new building for game production will be built on the site of Nintendo Kyoto Research Center (Kyoto City) on the site of the former headquarters as early as 2022.
In the past, the company has often outsourced game development, but it will increase the number of its employees.
 
TFLOPs are sort of affected by efficiency changes between uArchs, among a number of other factors.

Generally, newer uArchs can do the same thing older ones did with a lower TFLOP number, which is a sign the uArch overall is more efficient per-FLOP.

So in this example, Ampere is around 4x as efficient as Polaris per-FLOP.

Which we have to remember: Polaris was derived from GCN, which was in the OG PS4 back in 2013.

RDNA is an effective clean break from that, so while RDNA1 has a more modest jump over later GCN generations like Polaris and Vega, RDNA2 with its Infinity Cache got a giant leap.


Although, oddly enough, NVIDIA sort of went backwards with their efficiency per-FLOP.

Ampere has way more TFLOPs than Turing, but Turing cards still compete with or even beat Ampere, same with some Pascal cards.

EX: The 2080 Ti has 13.45 TFLOPs; the RTX 3070, which NVIDIA themselves said is equivalent to the 2080 Ti, has 21.7 TFLOPs.

TFLOPs aren't a directly comparable performance metric.

The 1.84 GCN 1.1 TFLOPs of the PS4 aren't directly comparable to the 4.2 TFLOPs of Polaris in the PS4 Pro, which aren't directly comparable to the 4 TFLOPs of Infinity-Cache-less RDNA2 in the Series S, which aren't directly comparable to 4 TFLOPs of true RDNA2 on desktop.
How much more efficient was Polaris vs. GCN?
 
We'll soon see how low Ampere can go in a consumer product. But the 2GB will hurt it.

 
We'll soon see how low Ampere can go in a consumer product. But the 2GB will hurt it.

@Aisaka_MKII mentioned finding GA10B more than a year ago, which I suspect is the GPU name for Orin.
 
Polaris is GCN. Vega supplanted Polaris, which supplanted (Southern) Islands. All of these were various flavors of GCN.
Right, but I got a little confused when he said the PS4's GCN isn't comparable to the PS4 Pro's Polaris. I was wondering about the efficiency rate and any new tools it has over the regular PS4. The only thing I know is that Polaris on the Pro has a mixed precision mode, which the OG PS4 doesn't have.
 
Hi everyone, this is my first post on Famiboards, and I've been reading you all on ResetEra since the first lockdown in spring 2020 (sorry for any possible mistakes, I'm French). I like this kind of topic but I don't know much about it, so could someone tell me in simple words how powerful the Switch 2 could be if it were released around March 2024? I mean compared to an Xbox One or a PS4, and considering the president said last year that they want to invest in edge technology.
 
Right, but I got a little confused when he said the PS4's GCN isn't comparable to the PS4 Pro's Polaris. I was wondering about the efficiency rate and any new tools it has over the regular PS4. The only thing I know is that Polaris on the Pro has a mixed precision mode, which the OG PS4 doesn't have.
Looking around, it seems you'd be comparing a 7870 and an RX 470/480. Of course, I can't really find any sort of proper comparison. Or at least, not one I'm willing to put that much time into.

Hi everyone, this is my first post on Famiboards, and I've been reading you all on ResetEra since the first lockdown in spring 2020 (sorry for any possible mistakes, I'm French). I like this kind of topic but I don't know much about it, so could someone tell me in simple words how powerful the Switch 2 could be if it were released around March 2024? I mean compared to an Xbox One or a PS4, and considering the president said last year that they want to invest in edge technology.
Still around Xbox One level, which is the same as if it were released next year. Maybe closer to PS4 if they got a node shrink.
 
Hi everyone, this is my first post on Famiboards, and I've been reading you all on ResetEra since the first lockdown in spring 2020 (sorry for any possible mistakes, I'm French). I like this kind of topic but I don't know much about it, so could someone tell me in simple words how powerful the Switch 2 could be if it were released around March 2024? I mean compared to an Xbox One or a PS4, and considering the president said last year that they want to invest in edge technology.
I think in the absolute worst-case scenario, the DLSS model* in TV mode would be roughly on par with the Xbox One S without DLSS enabled; and in the absolute best-case scenario, the DLSS model* in TV mode would be relatively close to the Xbox Series S with DLSS enabled.
 

