
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

I wonder what GPU Nintendo studios use to train their in-house neural networks and use that ML capability for game development and idea crafting.

Would it be Nvidia GPUs? 🤭

Or AMD GPUs? Or older ones? 🤔

Is it likely that they slowly switched over if they were using AMD in-house? And if the AMD GPUs work fine, would they even bother switching over?


I’d assume Sony and Microsoft use AMD GPUs themselves; Sony used ML for Spider-Man, IIRC.
Fair enough, but considering the new tech in Orin and the timing of Orin's release relative to the speculated release for Drake/Switch 2, I do wonder if Nintendo will break even at $400, especially if it's on a 5nm node.

The TX1 was at least two years old when the Switch released.
I don’t think that’s much of a concern; the A15 Bionic is in a $429 phone that uses more premium materials and has a camera and a 5G modem.
 
Last edited:
I wonder what GPU Nintendo studios use to train their in-house neural networks and use that ML capability for game development and idea crafting.

Would it be Nvidia GPUs? 🤭

Or AMD GPUs? Or older ones? 🤔

Is it likely that they slowly switched over if they were using AMD in-house? And if the AMD GPUs work fine, would they even bother switching over?


I’d assume Sony and Microsoft use AMD GPUs themselves; Sony used ML for Spider-Man, IIRC.
I'm willing to bet it was Nvidia GPUs, simply because they're the de facto leader unless you start looking at other HPC options.
 
You either believe that they know 11 different developers have dev kits, or you think they are wrong/making it up.
Uh, no. That's not true at all.

I don't know why you're just refusing to understand what I'm saying.
 
I know it’s been discussed briefly a few times by several people here, but what do we think a Switch 3 would be built on? Let’s give it another 6 or so years after Drake- late 2028/ early 2029. How would it compare to PS5 & XBS?
Whatever the latest Arm Cortex and Nvidia uarch are in 2028.
 
I know it’s been discussed briefly a few times by several people here, but what do we think a Switch 3 would be built on? Let’s give it another 6 or so years after Drake- late 2028/ early 2029. How would it compare to PS5 & XBS?
That’s really hard to say right now.


Technology is going to look pretty different by then compared to now. We're at the end of one type of era and entering another, one where advancements come from approaches that go beyond just node shrinks.
 
That’s really hard to say right now.


Technology is going to look pretty different by then compared to now. We're at the end of one type of era and entering another, one where advancements come from approaches that go beyond just node shrinks.
What do you mean by “beyond just node shrinks”? Is that referring to changing the chip material? I think I read something about graphene?
 
I know it’s been discussed briefly a few times by several people here, but what do we think a Switch 3 would be built on? Let’s give it another 6 or so years after Drake- late 2028/ early 2029. How would it compare to PS5 & XBS?
Mobile gaming is at least a generation (6-8 years) behind home consoles, so... maybe a 10-12 TFLOP console for Switch 3 on 1.5nm if the cards are played right? I suspect docked mode would be 10-12 TFLOPs while handheld is 4-6 TFLOPs. But perhaps we'll get a mobile gaming device from another company that reaches that in its handheld mode at 1080p or 2K.

Nothing is guaranteed or that simple, of course. This is my ass talking. Just ballpark guesses based on current trajectory, assuming 1.5nm will be out by then.
 
Last edited:
I know it’s been discussed briefly a few times by several people here, but what do we think a Switch 3 would be built on? Let’s give it another 6 or so years after Drake- late 2028/ early 2029. How would it compare to PS5 & XBS?
There are basically two scenarios for Switch 3. Either they do a shorter upgrade cycle and build something based on Atlan, or they skip over that one and build something that's too early to reliably predict.

It can be really hard to predict how this sort of tech is going to develop more than a few years out. Something worth keeping in mind is that Nvidia hadn't even announced RT acceleration or released any cards featuring tensor cores (which, at the time, were a datacenter only feature) yet when the current Switch launched, and now we expect both of those to feature in Switch 2.
 
Last edited:
What do you mean by “beyond just node shrinks”? Is that referring to changing the chip material? I think I read something about graphene?
I don't think anybody knows what form the next breakthrough in microchip technology will take. There are a lot of experimental ideas, but it's all a bit "quantum computer"-y to me at the moment -- stuff that sounds great on paper, but with no apparently viable path to enter the mainstream, let alone take over the unfathomable global workload currently resting on existing technologies.
 
I wonder what GPU Nintendo studios use to train their in-house neural networks and use that ML capability for game development and idea crafting.

Would it be Nvidia GPUs? 🤭

Or AMD GPUs? Or older ones? 🤔

Is it likely that they slowly switched over if they were using AMD in-house? And if the AMD GPUs work fine, would they even bother switching over?


I’d assume Sony and Microsoft use AMD GPUs themselves; Sony used ML for Spider-Man, IIRC.

I don’t think that’s much of a concern; the A15 Bionic is in a $429 phone that uses more premium materials and has a camera and a 5G modem.
They may not be using any local hardware.

In the industry I work in, we use cloud compute to train machine learning models due to its scalability, scheduling options, etc. Something like Microsoft Azure or Amazon Web Services. Then it's down to how the cluster is configured as to whether training runs on a CPU cluster or a GPU cluster.

If you are doing simpler machine learning tasks, a CPU cluster may be fine. Neural networks, however, benefit greatly from having a dedicated GPU, or two, or however many you want in the cluster. As Nvidia GPUs are mostly plug and play with the most widely used neural network libraries such as TensorFlow, I'd imagine the clusters contain Nvidia GPUs. I think our GPU clusters use Nvidia Tesla GPUs on Azure.
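For anyone curious what that looks like in practice, here's a minimal sketch, assuming TensorFlow is installed and an Nvidia GPU is visible via CUDA; the data and model are throwaway placeholders, just to show that the same training code runs on CPU or GPU and that the framework picks up Nvidia cards automatically:

```python
# Minimal sketch: assumes TensorFlow with CUDA/cuDNN set up; data/model are placeholders.
import numpy as np
import tensorflow as tf

# On an Nvidia-equipped cluster node (e.g. a Tesla-class GPU on Azure),
# this list is what tells you training will actually run on the GPU.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Toy data: 10k samples, 32 features, binary labels.
x = np.random.rand(10_000, 32).astype("float32")
y = np.random.randint(0, 2, size=(10_000, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The exact same call runs on a CPU cluster too; the GPU just makes it faster.
model.fit(x, y, epochs=3, batch_size=256)
```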
 
Yup, outside of things like DLSS, where the model is designed to run on specific hardware, it doesn't matter what it's trained on.

You can train deep learning models on your own machines as long as you have enough memory and time. Having more powerful hardware for training mostly just reduces the time it takes to train a model, which becomes very important when it comes to parameter tuning. That step can take forever on weaker hardware.
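To make the parameter-tuning point concrete, here's a rough sketch; the per-run times are made-up assumptions, not measurements, but they show how a grid search multiplies whatever your single-run training time is:

```python
# Rough illustration: tuning means one full training run per configuration,
# so total time = (number of configurations) x (time per run).
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [64, 128, 256]
depths = [2, 4, 8]

configs = list(product(learning_rates, batch_sizes, depths))

minutes_per_run_gpu = 10    # assumed: one training run on a decent GPU
minutes_per_run_cpu = 120   # assumed: the same run on CPU only

print(f"{len(configs)} configurations to try")
print(f"GPU grid search: ~{len(configs) * minutes_per_run_gpu / 60:.1f} hours")
print(f"CPU grid search: ~{len(configs) * minutes_per_run_cpu / 60:.1f} hours")
```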
 
I know it’s been discussed briefly a few times by several people here, but what do we think a Switch 3 would be built on? Let’s give it another 6 or so years after Drake- late 2028/ early 2029. How would it compare to PS5 & XBS?
One of the challenges with that question is that we don't really know what kind of device Nintendo would release in 2028/29. I think pretty much everyone here is confident that Nintendo's next device will be a very similar form-factor to the Switch, which makes it easier to gauge what kind of performance would be feasible, but there's no guarantee the same thing would be true at the end of the decade. Maybe sales of the next model trail off, and Nintendo want to try something new. Or maybe there's just some new technology which Nintendo will build the device around, changing the form-factor from what we've got now (and therefore potentially changing the size and power consumption, which determines what kind of performance is feasible).

That being said, what I'd really like to see at the end of this decade is hardware designed around ray tracing as the primary rendering method, not just the rasterised games with some RT bells and whistles that we're getting now. We've already got a couple of examples of this in Quake II and Minecraft, both of which have very simple geometry and support fully path traced graphics today if you've got capable enough hardware. There's obviously a big jump in performance needed to get games with modern geometric complexity running in real time on a fully ray traced renderer, but we're also pretty early in the life of dedicated RT hardware, so there may be some technological jumps in the meantime which make it more feasible (my guess is a shift from path tracing to hardware-accelerated Metropolis light transport).
 
I
Want
News/Leaks/Nate Drake's podcast/pizza/Chantilly cream
.

All of that please. Or any of those, and preferably one of the first three.
 
I don't think anybody knows what form the next breakthrough in microchip technology will take. There are a lot of experimental ideas, but it's all a bit "quantum computer"-y to me at the moment -- stuff that sounds great on paper, but with no apparently viable path to enter the mainstream, let alone take over the unfathomable global workload currently resting on existing technologies.

There's a viable path for some new tech, some of which is currently under active development: mostly a combination of specialized computing units, a unification of compute and memory beyond the von Neumann architecture (memristors, artificial synapses, etc.), and possibly the use of new materials with exotic properties beyond silicon, such as 2D dichalcogenides, though the latter is probably for the longer term.
Quantum computing is great but not really viable nor needed for most applications.
 
What do you mean by “beyond just node shrinks”? Is that referring to changing the chip material? I think I read something about graphene?
A node shrink can only do so much, so now they're looking into other ways of increasing performance; ways that still benefit from a node shrink but don't rely on it alone.

AMD is (supposedly) doing an MCM design for RDNA3, as an example.

Large amounts of cache are going to become more normal, I think.

Things that go beyond just having a better node.

It's not really easy to know what they would have later on, and technology is moving in a different direction.

The end of one era and the start of a new one.

Wonder if stacking could be viable for a console though…🤔🤔🤔

Sidenote: with respect to nodes, going from 8nm to 7nm to 5nm the R&D gets progressively more expensive, but that's an upfront cost; the wafer itself may not be so expensive. The number of usable dies you get per wafer will vary, with 5nm yielding the most and 8nm the least due to space constraints. You would need to buy more wafers on 7nm, and even more on 8nm, to make the same batch as on 5nm, even assuming a 90%+ yield rate. And each chip gets progressively bigger on the older nodes, so you could end up spending more per unit. 7nm (or 6nm) and 5nm aren't off the table, and I wouldn't 100% rule out 8nm either; just take the three nodes as more of a "it could happen anywhere here" type of thing.

Granted, this is the extent of my knowledge of the business and logistics side of things, based on piecing together publicly available information, so please don't take it as 100% gospel. More of a... food for thought.
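If anyone wants to poke at the wafer math, here's a back-of-envelope sketch. The die sizes, wafer prices, and yield below are made-up placeholders (not real figures), and the dies-per-wafer formula is just the common rough approximation, so the output only illustrates the trade-off, not actual costs:

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300mm wafer

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    # Common approximation: wafer area / die area, minus an edge-loss term.
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical: the same chip ported to each node, so the die shrinks on
# newer nodes while the wafer gets pricier. All numbers are placeholders.
nodes = {
    "8nm": {"die_mm2": 200, "wafer_cost": 5000},
    "7nm": {"die_mm2": 150, "wafer_cost": 6500},
    "5nm": {"die_mm2": 110, "wafer_cost": 9500},
}
YIELD = 0.90  # assumed yield rate, per the post above

for name, n in nodes.items():
    good = gross_dies_per_wafer(n["die_mm2"]) * YIELD
    print(f"{name}: ~{good:.0f} good dies/wafer, ~${n['wafer_cost'] / good:.0f} per good die")
```

With placeholder numbers like these, the per-die cost lands in the same ballpark on all three nodes, which is really the point: it could swing either way depending on actual wafer pricing and yields.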
 
Last edited:
One of the challenges with that question is that we don't really know what kind of device Nintendo would release in 2028/29. I think pretty much everyone here is confident that Nintendo's next device will be a very similar form-factor to the Switch, which makes it easier to gauge what kind of performance would be feasible, but there's no guarantee the same thing would be true at the end of the decade. Maybe sales of the next model trail off, and Nintendo want to try something new. Or maybe there's just some new technology which Nintendo will build the device around, changing the form-factor from what we've got now (and therefore potentially changing the size and power consumption, which determines what kind of performance is feasible).

That being said, what I'd really like to see at the end of this decade is hardware designed around ray tracing as the primary rendering method, not just the rasterised games with some RT bells and whistles that we're getting now. We've already got a couple of examples of this in Quake II and Minecraft, both of which have very simple geometry and support fully path traced graphics today if you've got capable enough hardware. There's obviously a big jump in performance needed to get games with modern geometric complexity running in real time on a fully ray traced renderer, but we're also pretty early in the life of dedicated RT hardware, so there may be some technological jumps in the meantime which make it more feasible (my guess is a shift from path tracing to hardware-accelerated Metropolis light transport).
We'll essentially get a portable RTX 3060 at 20 watts on a 1.5nm node, along with 8 A99 CPUs at 2GHz per core, LPDDR6 RAM, and 256GB of storage space. Believe it.
🧐
 
a comparison of Lumen vs RTXGI


It's really good. You can achieve results that at first glance look close to the Epic Lumen GI settings while performing much faster (from 52 fps to 72 fps in my project). You can adjust the intensity of the RTXGI lighting so that it looks close to the Epic Lumen settings without losing performance.
Still, it's pretty clear Lumen has higher quality; it covers more detail and shades even tiny stuff. But the average user will likely not notice, and RTXGI is a great alternative when you want to build high-performance games.

When you compare RTXGI to Lumen's High settings, though, the performance difference starts to shrink. Lumen still has higher precision, I'd say, but the overall scene gets darker than both RTXGI at my settings and Epic Lumen.

Keep in mind this scene is just a small room; I have no idea how RTXGI handles open worlds with massive draw distances.

But there's a huge caveat with RTXGI: it doesn't handle reflections. You can see in my project that I'm using SSR with RTXGI. With Lumen, however, even on High, you get off-screen reflections without any hit to performance compared to SSR. So that's a huge bonus for Lumen.
If you enable RTXGI + RT reflections, it will run much slower than Lumen. So Lumen at High settings probably offers the best performance/visual ratio. For most gamers, though, SSR is fine, and with RTXGI you can achieve high-quality lighting with great performance.
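For context on those numbers, converting fps to frame time shows what the gap actually buys per frame:

```python
# 52 fps vs 72 fps, expressed as frame time (ms per frame).
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

lumen_ms = frame_time_ms(52)   # ~19.2 ms with Epic Lumen in that project
rtxgi_ms = frame_time_ms(72)   # ~13.9 ms with RTXGI at comparable-looking settings
print(f"Lumen: {lumen_ms:.1f} ms, RTXGI: {rtxgi_ms:.1f} ms, saved: {lumen_ms - rtxgi_ms:.1f} ms/frame")
```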
 


I don't know exactly how my job here became "post DF videos and defend DLSS over FSR" but here we are :LOL:

Interesting nuggets that might be relevant here
  • No expectation of 9.5 gen consoles from MS/Sony. That has gotten a lot of chatter around here, but DF sides with the naysayers
    • Last gen's "pro" models driven by the rise of 4k displays. 8k displays haven't seen the same adoption rates
    • There isn't room for a die shrink or consolidation in the current machines that would make a redesign/refresh cost efficient
    • Sales of the current machines are limited by supply, not by market desire.
    • We're still in the cross-gen period and will be for the foreseeable future
    • The true Next Gen feature on the horizon? Machine Learning hardware, and AMD doesn't seem positioned in the same way to ship it
  • More Upscaling Chatter
    • Repeat of what we've heard before
    • DLSS is superior visually, runs faster, uses less electricity, and less silicon
    • ML acceleration is a much more compelling hardware feature than more shader cores
    • Tensor cores have huge potential for gaming beyond graphics that isn't yet tapped
      • (see above)
    • ML hardware is coming, period. Too useful to enterprise clients and eventually desktop.
    • If it's there, games will figure out how to use it and DLSS is proof that they can
Worthwhile comparing all this analysis to Nintendo.
  • Nintendo is affected by supply chain problems but is also facing market saturation - they want to sell new switches to existing owners
  • 4K displays are untapped potential for Nintendo
  • NVidia is well positioned to deliver on ML hardware
  • Mobile hardware continues to evolve and represents cost savings and feature jumps for Nintendo
    • Nintendo has stated they’re looking into redesigns of internals to lower costs and bypass chip shortages already
  • The Switch’s cross-gen period amounted to a single launch title - the Switch hardware is being genuinely taxed and devs aren’t being limited by needing to support last gen.
 
a comparison of Lumen vs RTXGI


This is a good write-up and consistent with my findings but I disagree with the conclusion. Most people will notice SSR artifacts when they're obvious, but accurate looking global illumination is not going to be distinguishable from slightly less accurate looking global illumination to the average layperson. Most people do not understand the physics behind light transport or how color is even perceived (they're more likely to think objects have color instead of reflecting color) so they certainly are not going to understand how many bounces are in a scene and if they're reflecting correctly. Just look at how poorly people do with blind tests with phone camera picture comparisons. They have no idea what's "accurate". That being said, everyone can tell when specular reflections don't mirror the environment properly, so that one is more important to get right.

There's also the issue of visual latency. Motion perception is pretty good for most people, and the latency with Lumen is noticeable and not something that can be ignored when talking about accuracy. There is nothing accurate about a perceptible delay in a single light source slowly illuminating a scene over time. Reality doesn't do that and everyone knows it. We should be comparing the light in motion, not as if it's stationary.

In my opinion, the most practical option is RTXGI (lower-quality) + RTX ray-tracing for reflections + DLSS. You get a huge boost in performance due to using lower quality GI that's good enough for most people and you pay for the performance where it's needed (in making sure the reflections that people are more visually sensitive to actually look correct).
 
Last edited:
In my opinion, the most practical option is RTXGI (lower-quality) + RTX ray-tracing for reflections + DLSS. You get a huge boost in performance due to using lower quality GI that's good enough for most people and you pay for the performance where it's needed (in making sure the reflections that people are more visually sensitive to actually look correct).
Not something I’ve personally investigated (no hardware for it), but I understand DLSS doesn’t (yet) handle ray-traced reflections well. Or am I misspeaking?
 
Not something I’ve personally investigated (no hardware for it), but I understand DLSS doesn’t (yet) handle ray-traced reflections well. Or am I misspeaking?

This is not necessarily true, in my experience. You have a lot of developers who are expecting NVIDIA to do all of the work for them when it comes to DLSS, so depending on the implementation, the engine's rendering pipeline, and developer competency, your RT reflections may or may not look decent.

Something I've noticed is that some devs don't properly enable the negative mip-mapping bias to the textures, which NVIDIA now advises that you do to improve image quality of DLSS and subsequently reflections in a DLSS'd image (they used to recommend to keep it at native, but that has now changed). More importantly, there are no one-size-fits-all parameter values for the reflection denoiser and temporal jitter. There are cases where you will need to reduce the value of the reflection parameter for temporal AA to mitigate blurred reflections, and reduce the value of the denoised reflection temporal accumulation parameter in certain cases where specular reflection information is being destroyed because of poorly optimized jittered samples.

Basically, you can have good-looking RT reflections with DLSS, you just need to know what you're doing.

EDIT:

Something else I should mention. The input resolution for the ray-tracing denoiser shouldn't necessarily be the same as what's typically good enough for the raster without RT reflections. The denoiser may need a higher quality source than the raster due to its sensitivity. I would definitely recommend increasing the input resolution for the denoiser if the quality doesn't seem up to par.
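To illustrate the mip bias point, the rule of thumb that usually gets cited is to bias texture LOD by log2 of the render-to-output resolution ratio (negative whenever you render below output resolution); treat this as a sketch and check NVIDIA's current DLSS programming guide for their exact recommendation, since it has changed over time:

```python
import math

def dlss_mip_bias(render_width: int, output_width: int, extra_bias: float = 0.0) -> float:
    # Negative bias sharpens texture sampling to match the upscaled output;
    # 'extra_bias' is an optional tweak some titles add on top.
    return math.log2(render_width / output_width) + extra_bias

print(dlss_mip_bias(2560, 3840))  # DLSS Quality at 4K (1440p internal): ~-0.58
print(dlss_mip_bias(1920, 3840))  # DLSS Performance at 4K (1080p internal): -1.0
```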
 
Last edited:


I don't know exactly how my job here became "post DF videos and defend DLSS over FSR" but here we are :LOL:

Interesting nuggets that might be relevant here
  • No expectation of 9.5 gen consoles from MS/Sony. That has gotten a lot of chatter around here, but DF sides with the naysayers
    • Last gen's "pro" models driven by the rise of 4k displays. 8k displays haven't seen the same adoption rates
    • There isn't room for a die shrink or consolidation in the current machines that would make a redesign/refresh cost efficient
    • Sales of the current machines are limited by supply, not by market desire.
    • We're still in the cross-gen period and will be for the foreseeable future
    • The true Next Gen feature on the horizon? Machine Learning hardware, and AMD doesn't seem positioned in the same way to ship it
  • More Upscaling Chatter
    • Repeat of what we've heard before
    • DLSS is superior visually, runs faster, uses less electricity, and less silicon
    • ML acceleration is a much more compelling hardware feature than more shader cores
    • Tensor cores have huge potential for gaming beyond graphics that isn't yet tapped
      • (see above)
    • ML hardware is coming, period. Too useful to enterprise clients and eventually desktop.
    • If it's there, games will figure out how to use it and DLSS is proof that they can
Worthwhile comparing all this analysis to Nintendo.
  • Nintendo is affected by supply chain problems but is also facing market saturation - they want to sell new switches to existing owners
  • 4K displays are untapped potential for Nintendo
  • NVidia is well positioned to deliver on ML hardware
  • Mobile hardware continues to evolve and represents cost savings and feature jumps for Nintendo
    • Nintendo has stated they’re looking into redesigns of internals to lower costs and bypass chip shortages already
  • The Switch’s cross-gen period amounted to a single launch title - the Switch hardware is being genuinely taxed and devs aren’t being limited by needing to support last gen.

Are the Nintendo Switch bullet points your thoughts, or did they actually bring up the Switch in that video as well?
 
@oldpuck @ILikeFeet

I just finished watching that DF Podcast.

It seems as if Alex is suggesting that ray-traced reflections aren't getting upsampled because they look the same as they do at the base resolution without DLSS. If that is what he's suggesting then I have to say, that is not correct. If the reflections don't look any better it's likely because of one of the reasons I've already mentioned. Ray-traced reflections are included in the input. Now, there are some things that may not be or aren't included, like billboards, some particle effects, Scene Capture reflections (Unreal), and basically lots of other things that don't have motion vectors. But ray-traced reflections are definitely included in the input unless the objects being reflected are one of the examples I just mentioned.
 
Are the Nintendo Switch bullet points your thoughts, or did they actually bring up the Switch in that video as well?
Sorry, should have been clearer - that's my analysis, and why I thought it was worth bringing up.

@oldpuck @ILikeFeet

I just finished watching that DF Podcast.

It seems as if Alex is suggesting that ray-traced reflections aren't getting upsampled because they look the same as they do at the base resolution without DLSS. If that is what he's suggesting then I have to say, that is not correct. If the reflections don't look any better it's likely because of one of the reasons I've already mentioned. Ray-traced reflections are included in the input. Now, there are some things that may not be or aren't included, like billboards, some particle effects, Scene Capture reflections (Unreal), and basically lots of other things that don't have motion vectors. But ray-traced reflections are definitely included in the input unless the objects being reflected are one of the examples I just mentioned.
I've heard this claim that DLSS handles RT poorly in the past, but I'm having trouble locating any clear statements on it. My understanding is that in 2.0 the RT input was purely future proofing, and that the data was discarded. Scanning through the 2.4 docs implies that is no longer the case. But this is all outside my area of expertise.
 
I've heard this claim that DLSS handles RT poorly in the past, but I'm having trouble locating any clear statements on it. My understanding is that in 2.0 the RT input was purely future proofing, and that the data was discarded. Scanning through the 2.4 docs implies that is no longer the case. But this is all outside my area of expertise.

There is some experimental work being done to discretely handle motion vector data regarding the source of the reflections (off-screen) in the future, but rasterized objects included in the main render target are upsampled and included in the input (including pixels that represent reflections). DLSS does not view secondary buffers like shadow and reflection buffers independent of the primary buffer as of now, but that doesn't mean that you aren't getting more pixel detail on reflections for upsampled images.
 
Speaking of RT reflections, I saw the Steam Deck video DF made a month ago with some games that were shown to be compatible with RT reflections on it.. The RT games had to be lowered to 540p at 30fps unlocked (Control), but it looks quite impressive regardless. Skip to 7:16 in the DF video


I didn't think much about RT (it was a low priority for me) on Switch 2 and just mobile gaming in general, but seeing Steam Deck using it in action has me kinda hyped. What's good about it is that Nvidia does RT better than AMD and has DLSS to boot.. It's gonna be exciting to see Nintendo use RT for the first time... Botw 2 and a Mario game will look gorgeous on it. And even third party games like Control (720p 30fps should be doable with DLSS).
 
Last edited:
Speaking of RT reflections, I saw the Steam Deck video Dad made a month ago with some games that were shown to be compatible with RT reflections on it.. The RT games had to be lowered to 540p at 30fps unlocked (Control), but it looks quite impressive regardless. Skip to 7:16 in the DF video


I didn't think much about RT (it was a low priority for me) on Switch 2 and just mobile gaming in general, but seeing Steam Deck using it in action has me kinda hyped. What's good about it is that Nvidia does RT better than AMD and has DLSS to boot.. It's gonna be exciting to see Nintendo use RT for the first time... Botw 2 and a Mario game will look gorgeous on it. And even third party games like Control (720p 30fps should be doable with DLSS).

Dad…?
 
Speaking of RT reflections, I saw the Steam Deck video DF made a month ago with some games that were shown to be compatible with RT reflections on it.. The RT games had to be lowered to 540p at 30fps unlocked (Control), but it looks quite impressive regardless. Skip to 7:16 in the DF video


I didn't think much about RT (it was a low priority for me) on Switch 2 and just mobile gaming in general, but seeing Steam Deck using it in action has me kinda hyped. What's good about it is that Nvidia does RT better than AMD and has DLSS to boot.. It's gonna be exciting to see Nintendo use RT for the first time... Botw 2 and a Mario game will look gorgeous on it. And even third party games like Control (720p 30fps should be doable with DLSS).

though that's probably using a power profile that'd be more similar to docked Drake
 
Yeah, sure, Rich Leadbetter Jr.
Aijght, if you are gonna out me, I'm gonna out @brainchild as Neil deGrasse Tyson's son, but that should have been obvious from the start 😅

That was really a typo.

though that's probably using a power profile that'd be more similar to docked Drake
Not sure if they took full advantage of the 1.6 TFLOPs on the Steam Deck for that game. I hear 1 TFLOP mentioned a lot (not in the video). Does anyone here with a Steam Deck know?

If Drake does get those 12 SMs on 6 or 5nm TSMC, then we really could get close to 3 TFLOPs... and have handheld mode at around 1 TFLOP, which would be close enough on the GPU side. But still... possibly better RT and DLSS should at least give similar GPU performance 🤔
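For reference, here's the arithmetic behind the ~3 TFLOPs figure; it assumes Drake keeps Ampere's 128 FP32 lanes per SM, and the clocks below are just placeholder guesses, not known values:

```python
# Ampere: 128 FP32 lanes per SM, and an FMA counts as 2 FLOPs.
def ampere_tflops(sms: int, clock_ghz: float) -> float:
    return sms * 128 * 2 * clock_ghz / 1000

print(ampere_tflops(12, 1.00))  # ~3.07 TFLOPS docked (assumed ~1 GHz)
print(ampere_tflops(12, 0.33))  # ~1.01 TFLOPS handheld (assumed ~330 MHz)
```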
 
Last edited:
@oldpuck

Since I was just playing this game I figured I might as well take screenshots and provide some comparison images of the difference between RT reflections before and after DLSS. And well, the results speak for themselves...

Hitman 3 RT Reflections: 1080p (no DLSS) vs 4K DLSS Quality (1080p input resolution)

Hitman 3 RT Reflections: 4k (no DLSS) vs 4k DLSS Quality (1080p input resolution)

Source images:

[Image: 1080p (no DLSS)]

[Image: 4K (no DLSS)]

[Image: 4K DLSS Quality (1080p input resolution)]

Directly comparing them, you can see a significant difference between the 1080p reflections before and after DLSS. Of course, as I mentioned before, the source reflections need to be of sufficient quality in the first place, and a lot of games render their RT reflections at a quarter of the display resolution, so they're not going to look very good compared to higher-resolution reflections.

Aijght, if you are gonna out me, I'm gonna out @brainchild as Neil deGrasse Tyson's son, but that should have been obvious from the start 😅

😂
 
@oldpuck

Directly comparing them, you can see a significant difference between the 1080p reflections before and after DLSS. Of course, as I mentioned before, the source reflections need to be of sufficient quality in the first place, and a lot of games render their RT reflections at a quarter of the display resolution, so they're not going to look very good compared to higher-resolution reflections.



😂
This game also has issues with RT shadows and DLSS. The contour of 47 would cut into the shadow and just show the color of the wall behind.
 


I don't know exactly how my job here became "post DF videos and defend DLSS over FSR" but here we are :LOL:

Interesting nuggets that might be relevant here
  • No expectation of 9.5 gen consoles from MS/Sony. That has gotten a lot of chatter around here, but DF sides with the naysayers
    • Last gen's "pro" models driven by the rise of 4k displays. 8k displays haven't seen the same adoption rates
    • There isn't room for a die shrink or consolidation in the current machines that would make a redesign/refresh cost efficient
    • Sales of the current machines are limited by supply, not by market desire.
    • We're still in the cross-gen period and will be for the foreseeable future
    • The true Next Gen feature on the horizon? Machine Learning hardware, and AMD doesn't seem positioned in the same way to ship it
  • More Upscaling Chatter
    • Repeat of what we've heard before
    • DLSS is superior visually, runs faster, uses less electricity, and less silicon
    • ML acceleration is a much more compelling hardware feature than more shader cores
    • Tensor cores have huge potential for gaming beyond graphics that isn't yet tapped
      • (see above)
    • ML hardware is coming, period. Too useful to enterprise clients and eventually desktop.
    • If it's there, games will figure out how to use it and DLSS is proof that they can

I mean, that's about as milquetoast as you can get with an analysis.
It's almost more a statement of facts if you know how the tech space works and where we currently are.

(BTW, I think your Switch points are similar, so a +1 for them)
 
Aijght, if you are gonna out me, I'm gonna out @brainchild as Neil deGrasse Tyson's son, but that should have been obvious from the start 😅

That was really a typo.


Not sure if they took full advantage of the 1.6 TFLOPs on the Steam Deck for that game. I hear 1 TFLOP mentioned a lot (not in the video). Does anyone here with a Steam Deck know?

If Drake does get those 12 SMs on 6 or 5nm TSMC, then we really could get close to 3 TFLOPs... and have handheld mode at around 1 TFLOP, which would be close enough on the GPU side. But still... possibly better RT and DLSS should at least give similar GPU performance 🤔
The problem with the Steam Deck is that it dynamically allocates power, so it's probably not hitting peak GPU clocks as more power gets diverted to the CPU at times.
 
Not sure if they took full advantage of the 1.6 TFLOPs on the Steam Deck for that game. I hear 1 TFLOP mentioned a lot (not in the video). Does anyone here with a Steam Deck know?

If Drake does get those 12 SMs on 6 or 5nm TSMC, then we really could get close to 3 TFLOPs... and have handheld mode at around 1 TFLOP, which would be close enough on the GPU side. But still... possibly better RT and DLSS should at least give similar GPU performance 🤔

I have a Steam Deck. The clocks vary depending on usage, but when I've been monitoring it in graphically intensive games, the GPU is usually somewhere between 1.0GHz and 1.3GHz (which would be between 1.0 and 1.3 TFLOPs). I haven't seen it hit 1.6GHz, but I'm not using the overlay that shows clocks that often. I believe it's also possible to lock the GPU clock at 1.6GHz, but that would likely result in throttling of the CPU clocks, or reduced battery life, or both. I'm quite happy with the performance of it, so I don't really feel the need to tinker with it to squeeze out any extra performance.
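For anyone following the conversion there: the Deck's GPU has 8 RDNA 2 CUs with 64 FP32 lanes each, and an FMA counts as 2 FLOPs, so TFLOPS scales linearly with clock:

```python
def deck_tflops(clock_ghz: float, cus: int = 8) -> float:
    # 8 CUs x 64 lanes x 2 FLOPs per clock, scaled by clock speed.
    return cus * 64 * 2 * clock_ghz / 1000

for clk in (1.0, 1.3, 1.6):
    print(f"{clk:.1f} GHz -> {deck_tflops(clk):.2f} TFLOPS")
# 1.0 GHz -> 1.02, 1.3 GHz -> 1.33, 1.6 GHz -> 1.64
```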
 
@oldpuck

Since I was just playing this game I figured I might as well take screenshots and provide some comparison images of the difference between RT reflections before and after DLSS. And well, the results speak for themselves...

Hitman 3 RT Reflections: 1080p (no DLSS) vs 4K DLSS Quality (1080p input resolution)

Hitman 3 RT Reflections: 4k (no DLSS) vs 4k DLSS Quality (1080p input resolution)

Source images:

[Image: 1080p (no DLSS)]

[Image: 4K (no DLSS)]

[Image: 4K DLSS Quality (1080p input resolution)]

Directly comparing them, you can see a significant difference between the 1080p reflections before and after DLSS. Of course, as I mentioned before, the source reflections need to be of sufficient quality in the first place, and a lot of games render their RT reflections at a quarter of the display resolution, so they're not going to look very good compared to higher-resolution reflections.



😂
Doesn’t Hitman use a render-to-texture system for reflections?
 
We're about to enter June and still no leaks or talk from Nintendo about new hardware.
If it's coming March 2023 then we're about 9 months away. Seems like we should be hearing about it by now.
If it does come out with BotW 2, it will be by the end of spring, so it's not even guaranteed to be in March.

And the original Switch had its first reveal in October 2016, about five months before its release.
 
We're about to enter June and still no leaks or talk from Nintendo about new hardware.
If it's coming March 2023 then we're about 9 months away. Seems like we should be hearing about it by now.
Not really?????
Like, I feel people thinking "it must leak before it's announced" are getting a bit ahead of themselves, in the face of the more logical idea that Nintendo may just have plugged most of its leaks to keep investors happy.

Also, the OG Switch was October 2016 to March 2017, so 5 months.
 