
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (New Staff Post, Please read)

It looks bad on a screen smaller than the Switch’s to me.

I’d rather we just have clean, stable looking games.
You're not going to be getting PS5/XBS raw performance numbers in any case, and I'd rather not have every launch title on the "Drake" looking and performing as badly as the aforementioned XC2 worst-case scenario at launch.

I'd rather have DLSS utilized than none at all.
 
To be fair, they were both big Polaris believers back in the day. I don't remember who felt more adamant about it, but 10k was the Polaris Guy to me
So was OJ; both 10k and OJ realized they were wrong. The thing with supermetaldave is, he had a Nintendo employee contact who gave him a copy of the SDK manual from AMD that I guess Nintendo had been working on; he showed it off on his YouTube channel and got his contact fired, iirc... SMDave also "knew for a fact" that the Switch wasn't the only device and that AMD was making a console for Nintendo. He would also misunderstand tech: he found out that Nintendo was working on PCIe drivers and used it as evidence for his AMD console, but didn't understand that USB-C actually uses PCIe lanes for its communication and that the Tegra X1 has PCIe lanes. For years after the Switch's reveal, SMDave kept pushing a narrative of paranoia; his videos at least used to start with "Think for yourself"... so yeah, 10k was wrong once, SMDave was wrong for years after the Switch was revealed.

OJ and SMDave actually watched the Switch reveal together; it's pretty good, you could tell instantly that OJ was ready to do the right thing there and just accept defeat. Not SMDave though, lol.
 
You're not going to be getting PS5/XBS raw performance numbers in any case, and I'd rather not have every launch title on the "Drake" looking and performing as badly as the aforementioned XC2 worst-case scenario at launch.

I'd rather have DLSS utilized than none at all.
I never even brought up PS5 and XBS… I just want a clear step up over regular Switch in terms of image quality in handheld and 240p DLSS doesn’t achieve that in my opinion.
 
So was OJ; both 10k and OJ realized they were wrong. The thing with supermetaldave is, he had a Nintendo employee contact who gave him a copy of the SDK manual from AMD that I guess Nintendo had been working on; he showed it off on his YouTube channel and got his contact fired, iirc... SMDave also "knew for a fact" that the Switch wasn't the only device and that AMD was making a console for Nintendo. He would also misunderstand tech: he found out that Nintendo was working on PCIe drivers and used it as evidence for his AMD console, but didn't understand that USB-C actually uses PCIe lanes for its communication and that the Tegra X1 has PCIe lanes. For years after the Switch's reveal, SMDave kept pushing a narrative of paranoia; his videos at least used to start with "Think for yourself"... so yeah, 10k was wrong once, SMDave was wrong for years after the Switch was revealed.

OJ and SMDave actually watched the Switch reveal together; it's pretty good, you could tell instantly that OJ was ready to do the right thing there and just accept defeat. Not SMDave though, lol.
Dave is in fact heavily suggesting in his latest videos that the next Switch will utilize an AMD chip. Which.. you know... not happening.
 
I've observed an interesting trend regarding 1st party Nintendo titles that makes me wonder how they could implement FSR2.0 and DLSS2.x and have those solutions swapped out on the fly depending on platform.

For starters, most Nintendo 1st party titles that were developed by EPD using their in-house engine don't seem to utilize any form of anti-aliasing solution. There are a few exceptions, such as Super Mario 3D World, which employed some form of AA, but other titles forgo this in favor of a very raw but clean image.

Recently, Nintendo Switch Sports seems to have confirmed the usage of FSR1.0 in some capacity, but unlike FSR2.0 and DLSS, FSR1.0 requires some form of anti-aliasing to be applied to the image before upscaling. I haven't been able to try Nintendo Switch Sports to tell if the game utilizes anti-aliasing of any form, but I would be surprised if they still omitted it in favor of a sharper image.

Enter FSR2.0, with input requirements similar to DLSS2.x's (depth buffer, motion vectors, and temporal data), and I could see Nintendo changing the framework of their engines to easily accommodate an "FSR2.0 box" which can then be substituted with a "DLSS2.x black box" when played on the successor. Likewise, this also raises the question of any future ports of the game: as DLSS is Nvidia proprietary tech, it could in that case easily be swapped out again for FSR2.x or whatever future upscaling solution presents itself to Nintendo.
I never even brought up PS5 and XBS… I just want a clear step up over regular Switch in terms of image quality in handheld and 240p DLSS doesn’t achieve that in my opinion.
That's why I said 480p DLSS. 240p is what Xenoblade Chronicles 2 runs at in its worst case, without any smart upscaling of any sort. 240p with DLSS would look considerably better, but with the given boost in performance they could likely reach a consistent 720p image, with maybe drops to 600p, which would appear unnoticeable given it's 600p -> 720p "Quality" DLSS (the video shown was "Ultra Performance"). Which brings me to my second point:

FSR2.0 and DLSS2.x are both great upscaling solutions that also happen to be good replacements for anti-aliasing: both can literally kill three birds with one stone, improving performance and image quality while removing the need to implement any separate anti-aliasing solution (such as the awful TAA XC2 had), because the upscaler essentially counts as one.
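To make the "FSR2.0 box" / "DLSS2.x black box" idea from above a bit more concrete, here's a very rough sketch of what that kind of swappable abstraction could look like. Everything here is hypothetical: the type and function names are made up, and a real integration would wrap the actual FidelityFX and DLSS libraries behind something like this interface rather than these print-only stand-ins.

```cpp
// Minimal sketch of a swappable upscaler "box". Fsr2Upscaler / DlssUpscaler are
// stand-ins; a real integration would wrap the FidelityFX and DLSS libraries
// behind this same interface and pick one per platform at boot.
#include <cstdio>
#include <memory>

struct UpscaleInputs {          // the shared inputs both FSR2.0 and DLSS2.x expect
    const void* color;          // current-frame color buffer
    const void* depth;          // depth buffer
    const void* motionVectors;  // per-pixel motion vectors
    float jitterX, jitterY;     // sub-pixel camera jitter for temporal accumulation
};

class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void evaluate(const UpscaleInputs& in, int outW, int outH) = 0;
};

class Fsr2Upscaler : public IUpscaler {
public:
    void evaluate(const UpscaleInputs&, int outW, int outH) override {
        std::printf("FSR2.0 pass -> %dx%d\n", outW, outH);   // would dispatch the FSR2 context here
    }
};

class DlssUpscaler : public IUpscaler {
public:
    void evaluate(const UpscaleInputs&, int outW, int outH) override {
        std::printf("DLSS2.x pass -> %dx%d\n", outW, outH);  // would evaluate the DLSS feature here
    }
};

// The rest of the renderer only ever sees IUpscaler.
std::unique_ptr<IUpscaler> makeUpscaler(bool hasTensorCores) {
    if (hasTensorCores) return std::make_unique<DlssUpscaler>();
    return std::make_unique<Fsr2Upscaler>();
}

int main() {
    auto upscaler = makeUpscaler(/*hasTensorCores=*/true);
    UpscaleInputs inputs{};                 // real buffers omitted in this sketch
    upscaler->evaluate(inputs, 1280, 720);  // e.g. a 540p internal image reconstructed to 720p
}
```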
 
I've observed an interesting trend regarding 1st party Nintendo titles that makes me wonder how they could implement FSR2.0 and DLSS2.x and have those solutions swapped out on the fly depending on platform.

For starters, most Nintendo 1st party titles that were developed by EPD using their in-house engine don't seem to utilize any form of anti-aliasing solution. There are a few exceptions, such as Super Mario 3D World, which employed some form of AA, but other titles forgo this in favor of a very raw but clean image.

Recently, Nintendo Switch Sports seems to have confirmed the usage of FSR1.0 in some capacity, but unlike FSR2.0 and DLSS, FSR1.0 requires some form of anti-aliasing to be applied to the image before upscaling. I haven't been able to try Nintendo Switch Sports to tell if the game utilizes anti-aliasing of any form, but I would be surprised if they still omitted it in favor of a sharper image.

Enter FSR2.0, with input requirements similar to DLSS2.x's (depth buffer, motion vectors, and temporal data), and I could see Nintendo changing the framework of their engines to easily accommodate an "FSR2.0 box" which can then be substituted with a "DLSS2.x black box" when played on the successor. Likewise, this also raises the question of any future ports of the game: as DLSS is Nvidia proprietary tech, it could in that case easily be swapped out again for FSR2.x or whatever future upscaling solution presents itself to Nintendo.

That's why I said 480p DLSS. 240p is what Xenoblade Chronicles 2 runs at in its worst case, without any smart upscaling of any sort. 240p with DLSS would look considerably better, but with the given boost in performance they could likely reach a consistent 720p image, with maybe drops to 600p, which would appear unnoticeable given it's 600p -> 720p "Quality" DLSS. Which brings me to my second point:

FSR2.0 and DLSS2.x are both great upscaling solutions that also happen to be good replacements for anti-aliasing: both can literally kill three birds with one stone, improving performance and image quality while removing the need to implement a separate anti-aliasing solution (such as the awful TAA the XC2 had), because it's baked in.
This literally started from me saying I hope games use a better base resolution than 240p, lmao. 480p is a good base resolution for 720p.
I’d argue that 240p DLSS upscaled to 720p looks better in some ways than XB2, but worse in others, like image stability.
360p isn’t ideal, but it will be acceptable for “impossible ports”.
 
This literally started from me saying I hope games use a better base resolution than 240p, lmao. 480p is a good base resolution for 720p.
I’d argue that 240p DLSS upscaled to 720p looks better in some ways than XB2, but worse in others, like image stability.
360p isn’t ideal, but it will be acceptable for “impossible ports”.
Should the leaked specs indeed be indicative of the final successor, then given the boost in performance and feature set, I'd expect developers to have even more leeway when it comes to making more "impossible ports" happen.

Who knows, there might be some crazy ways where developers will be using the Tensor cores as part of a creative solution.
 

Dakhil, Alovon, 10k and myself go over Drake and the current known information. I think speculation can sometimes move us away from what we actually know, so hopefully this discussion helps frame what this successor is for everyone who watches it.

My first podcast appearance ever. Hope people enjoy it.
 
My first podcast appearance ever. Hope people enjoy it.
more-kylo-ren.gif
 
Dave is in fact heavily suggesting in his latest videos that the next Switch will utilize an AMD chip. Which.. you know... not happening.
He's still doing that after all these years? I thought surely he would have moved on by now.
I've observed an interesting trend regarding 1st party Nintendo titles that makes me wonder how they could implement FSR2.0 and DLSS2.x and have those solutions swapped out on the fly depending on platform.

For starters, most Nintendo 1st party titles that were developed by EPD using their in-house engine don't seem to utilize any form of anti-aliasing solution. There are a few exceptions, such as Super Mario 3D World, which employed some form of AA, but other titles forgo this in favor of a very raw but clean image.

Recently, Nintendo Switch Sports seems to have confirmed the usage of FSR1.0 in some capacity, but unlike FSR2.0 and DLSS, FSR1.0 requires some form of anti-aliasing to be applied to the image before upscaling. I haven't been able to try Nintendo Switch Sports to tell if the game utilizes anti-aliasing of any form, but I would be surprised if they still omitted it in favor of a sharper image.

Enter FSR2.0, with input requirements similar to DLSS2.x's (depth buffer, motion vectors, and temporal data), and I could see Nintendo changing the framework of their engines to easily accommodate an "FSR2.0 box" which can then be substituted with a "DLSS2.x black box" when played on the successor. Likewise, this also raises the question of any future ports of the game: as DLSS is Nvidia proprietary tech, it could in that case easily be swapped out again for FSR2.x or whatever future upscaling solution presents itself to Nintendo.
Nvidia themselves actually just launched an open-source toolkit that makes it easier to integrate several of these advanced upscaling algorithms:

 
So was OJ; both 10k and OJ realized they were wrong. The thing with supermetaldave is, he had a Nintendo employee contact who gave him a copy of the SDK manual from AMD that I guess Nintendo had been working on; he showed it off on his YouTube channel and got his contact fired, iirc... SMDave also "knew for a fact" that the Switch wasn't the only device and that AMD was making a console for Nintendo. He would also misunderstand tech: he found out that Nintendo was working on PCIe drivers and used it as evidence for his AMD console, but didn't understand that USB-C actually uses PCIe lanes for its communication and that the Tegra X1 has PCIe lanes. For years after the Switch's reveal, SMDave kept pushing a narrative of paranoia; his videos at least used to start with "Think for yourself"... so yeah, 10k was wrong once, SMDave was wrong for years after the Switch was revealed.

OJ and SMDave actually watched the Switch reveal together; it's pretty good, you could tell instantly that OJ was ready to do the right thing there and just accept defeat. Not SMDave though, lol.
Oh, I'm not jabbing 10k. People believe rumors, sometimes too invested and overconfident. We've all been wrong. I do get that feeling with SMD too and him taking it too far.
Out of all the Nintendo youtubers on hardware discussion, I like spawnwave the most. Dude is awesome and doesn't talk out of his ass with conspiracy vibes. He is factual.
 
Oh, I'm not jabbing 10k. People believe rumors, sometimes too invested and overconfident. We've all been wrong. I do get that feeling with SMD too and him taking it too far.
Out of all the Nintendo youtubers on hardware discussion, I like spawnwave the most. Dude is awesome and doesn't talk out of his ass with conspiracy vibes. He is factual.
Yeah spawnwave is definitely great
 
Jeez, either he's got a thing for AMD or someone over at Nvidia must have run over his dog.
I still follow Dave's videos because from time to time he covers some information that isn't found anywhere else, but he always seems to spin it into this pro-AMD narrative; it's a bit strange.

Like others have said, in his last video he used the existence of FSR in Switch Sports as support for his opinion that the next Nintendo console will have an AMD chip, despite the fact that one of the key benefits of FSR is that it's device agnostic.

Allowing your own bias to be on display like that just loses you credibility. It's a shame, because he is actually good at finding information a lot of the time, especially when it comes to reporting on Metroid development.
 
I still follow Dave's videos because from time to time he covers some information that isn't found anywhere else, but he always seems to spin it into this pro-AMD narrative; it's a bit strange.

Like others have said, in his last video he used the existence of FSR in Switch Sports as support for his opinion that the next Nintendo console will have an AMD chip, despite the fact that one of the key benefits of FSR is that it's device agnostic.

Allowing your own bias to be on display like that just loses you credibility. It's a shame, because he is actually good at finding information a lot of the time, especially when it comes to reporting on Metroid development.
His stubbornness and need to double down despite being proven wrong are what make me dislike him. Being able to admit defeat and apologize should go without saying. He did his "NX will be powered by AMD" spiel up until the Nvidia partnership was officially announced, and even then he tried to find proof that we had all been bamboozled and AMD was the true partner, lol. Or the time he tried to convince everyone that Sony was going to announce a Vita successor at Tokyo Game Show 2018 or 2019, can't remember anymore. Speculating and theorizing is fine, but at some point you gotta take that L if it doesn't pan out.
 
SuperMetalDave is certainly a character 🤣


I find him entertaining as an online persona even though it’s all wrong.

Then again, I don’t really watch him :p.

But that willingness to remain on that train of thought, and the refusal to move on five years after the fact, is sometimes comedic.

I respect the stubbornness, when you lose nothing of value.
 
This might be interesting: a large set of in-game fps tests for an iGPU called the Radeon 680M, which has 3.69 TF of raw compute power but is a good bit less performant than the presumed Ampere GPU, possibly even if you go down to 3 TF for docked mode. This should give a good baseline for what the chip can be expected to provide in docked mode (without DLSS).
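For anyone wondering where figures like "3.69 TF" come from, it's just the usual paper-spec formula: shaders × 2 FLOPs (one FMA) × clock. A quick sketch below, assuming the commonly quoted 680M configuration of 768 shaders at ~2.4 GHz, plus a purely hypothetical 12-SM Ampere part at 1.0 GHz as a comparison point; neither set of numbers is confirmed hardware.

```cpp
// Where paper-spec figures like "3.69 TF" come from: shaders x 2 FLOPs (FMA) x clock.
#include <cstdio>

int main() {
    // Commonly quoted Radeon 680M config (12 RDNA2 CUs x 64 = 768 shaders, ~2.4 GHz boost).
    double tflops680m = 768 * 2 * 2.4 / 1000.0;
    std::printf("Radeon 680M: ~%.2f TFLOPS FP32\n", tflops680m);                           // ~3.69

    // Hypothetical 12-SM Ampere part (12 x 128 = 1536 CUDA cores) at 1.0 GHz, for comparison only.
    std::printf("12-SM Ampere @ 1.0 GHz: ~%.2f TFLOPS FP32\n", 1536 * 2 * 1.0 / 1000.0);   // ~3.07
}
```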
 
This might be interesting: a large set of in-game fps tests for an iGPU called the Radeon 680M, which has 3.69 TF of raw compute power but is a good bit less performant than the presumed Ampere GPU, possibly even if you go down to 3 TF for docked mode. This should give a good baseline for what the chip can be expected to provide in docked mode (without DLSS).
Most of the resolutions are at 1080p, but I'm assuming the visual settings are PC high (a notch above PS4?).
 
Most of the resolutions are at 1080p, but I'm assuming the visual settings are PC high (a notch above PS4?).
Yeah, that's what it looks like. Most settings are towards the high side of the spectrum. I remember reading that Witcher 3 on PS4 was an equivalent of medium settings. So this is likely better visual settings than PS4 and an almost perfect 60 fps frame rate on this GPU. Edit: Oops, the perfect 60 fps was at 900p, 1080p runs closer to 50 fps.
 
Dave is in fact heavily suggesting in his latest videos that the next Switch will utilize an AMD chip. Which.. you know... not happening.

And that Nintendo and AMD are working together to implement FSR1.0 on the next console instead of DLSS because it’s easier to implement and gives a boost in performance 🤷
 
Yeah, that's what it looks like. Most settings are towards the high side of the spectrum. I remember reading that Witcher 3 on PS4 was an equivalent of medium settings. So this is likely better visual settings than PS4 and an almost perfect 60 fps frame rate on this GPU. Edit: Oops, the perfect 60 fps was at 900p, 1080p runs closer to 50 fps.
Thank you for keeping us updated about Rembrandt. It would be fantastic indeed if the next Switch delivered raw performance in this ballpark. Now, if only we had indications about its onboard memory...
 
Dave is in fact heavily suggesting in his latest videos that the next Switch will utilize an AMD chip. Which.. you know... not happening.
He loves to imply without outright saying it, but won't hesitate to claim credit if it actually happens. When wrong, however, he defaults to "speculation" as a shield.
 
You need to stfu... :p
Lol, I'm being serious. I would have made a poll in a new thread, but I don't want people on the forum getting the impression that I'm hinting at anything; I'm genuinely curious.

Currently, I'm considering different scenarios in which I could take advantage of DLSS on portable hardware. I'm not specifically talking about Nintendo hardware.
 
Lol, I'm being serious. I would have made a poll in a new thread, but I don't want people on the forum getting the impression that I'm hinting at anything; I'm genuinely curious.

Currently, I'm considering different scenarios in which I could take advantage of DLSS on portable hardware. I'm not specifically talking about Nintendo hardware.
Because there are so many other portable devices supporting DLSS.

Unless you are talking about laptops.
 
Unless you are talking about laptops.

Of course, though I'm talking about any future devices that would be using NVIDIA hardware as well. Now that Proton supports DLSS, it would be nice to see a new Steam Deck (or other portable steam machine) come with an NVIDIA version as well.
 
Of course, though I'm talking about any future devices that would be using NVIDIA hardware as well. Now that Proton supports DLSS, it would be nice to see a new Steam Deck (or other portable steam machine) come with an NVIDIA version as well.
I would still stay clear of the topic if I were you, just to be sure.

But I’m not you :)
 
How would you all feel about a game AI downsampling from a DLSS output resolution on a portable screen?

Just asking for science...

EDIT:

For clarity, I'm talking about something like DLSS + DLDSR on portable hardware.
Would it make a meaningful difference to image quality? I was under the impression that there wouldn't be much benefit in IQ from running DLSS at higher output resolutions and downsampling, but I could be wrong on that. Generally if the question is "Do you want better image quality?" I'm going to say yes, but I'm not sure how noticeable it would be in this case, particularly on a smaller screen.
 
I would still stay clear of the topic if I were you, just to be sure.

But I’m not you :)
As a developer, I have the right to ask prospective consumers what their expectations and preferences are for potential goods and services so long as I'm not asking about anything that's prohibited by my NDA. I'm fine.
 
Would it make a meaningful difference to image quality? I was under the impression that there wouldn't be much benefit in IQ from running DLSS at higher output resolutions and downsampling, but I could be wrong on that. Generally if the question is "Do you want better image quality?" I'm going to say yes, but I'm not sure how noticeable it would be in this case, particularly on a smaller screen.

It would make a meaningful difference to performance. Essentially, the performance required to render the DLSS result before downsampling would be less than it would be to render at a higher internal resolution before downsampling.

Ideally, the AI downsampling itself should be better at cleaning up image quality than the traditional method but the jury is still out on whether I can actually achieve that. Hopefully, I can.
 
It would make a meaningful difference to performance. Essentially, the performance required to render the DLSS result before downsampling would be less than it would be to render at a higher internal resolution before downsampling.

Ideally, the AI downsampling itself should be better at cleaning up image quality than the traditional method but the jury is still out on whether I can actually achieve that. Hopefully, I can.

I mean image quality relative to just using DLSS to whatever the screen resolution is, rather than using DLSS to a higher resolution and then scaling it down. That is, if you've got a 1080p screen, would using DLSS with an internal resolution of 720p and an output resolution of 1440p, then downsampling to 1080p, look noticeably better than just using DLSS with an internal resolution of 720p and an output resolution of 1080p?
 
I mean image quality relative to just using DLSS to whatever the screen resolution is, rather than using DLSS to a higher resolution and then scaling it down. That is, if you've got a 1080p screen, would using DLSS with an internal resolution of 720p and an output resolution of 1440p, then downsampling to 1080p, look noticeably better than just using DLSS with an internal resolution of 720p and an output resolution of 1080p?

I would gut-guess that the pixel density of most modern portable screens, including the current Switch, even the OLED, would put this on the high-cost, low-user-reaction end of diminishing returns.

But I can't really say for sure with confidence unless I see a side by side.
 
I mean image quality relative to just using DLSS to whatever the screen resolution is, rather than using DLSS to a higher resolution and then scaling it down. That is, if you've got a 1080p screen, would using DLSS with an internal resolution of 720p and an output resolution of 1440p, then downsampling to 1080p, look noticeably better than just using DLSS with an internal resolution of 720p and an output resolution of 1080p?

In that case, probably not, due to the increased likelihood of scaling artifacts. That isn't what I'm proposing though.

I'm proposing, for example, you have a 1080p portable device with an NVIDIA GPU (laptop, tablet, handheld, doesn't matter). You enable your device with the ability to use the equivalent to Deep Learning Dynamic Super Resolution. This will unlock the ability to use your native resolution as the base internal resolution for DLSS to upsample to up to 4 times its resolution (normally, you cannot do this if your device isn't capable of the equivalent to DSR or DLDSR). This is very different from rendering from a resolution lower than your native resolution and upsampling; effectively the method I'm suggesting uses AI-enhanced Ordered Grid Super Sampling Anti-Aliasing. It's very expensive when done traditionally but can be cost-effective when done intelligently.

The reason I'm asking about this is that rendering options on portable devices typically choose performance over quality. In this case, it would be more expensive than native rendering, but only by a little bit because of the overhead. I'm just wondering whether you all would be interested in super high image quality on portable devices, with a very slight hit to performance, as a DLSS alternative to pure performance improvements.
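To put rough numbers on the cost side of what I'm describing: the sketch below compares shaded pixel counts only, which is just a proxy for cost (the fixed cost of the DLSS and downsample passes themselves isn't modelled), and the 1.5x-per-axis intermediate target is an assumed DLDSR-style factor for illustration, not anything from real hardware.

```cpp
// Pixel-count comparison for "render at native, DLSS up, then downsample back"
// versus brute-force supersampling. Counts are only a proxy for cost; the fixed
// cost of the DLSS and downsample passes themselves is not modelled.
#include <cstdio>

int main() {
    const double nativeW = 1920, nativeH = 1080;  // 1080p portable screen
    const double scalePerAxis = 1.5;              // assumed DLDSR-style 1620p intermediate target

    double nativePixels = nativeW * nativeH;                                   // what actually gets shaded
    double ssaaPixels   = (nativeW * scalePerAxis) * (nativeH * scalePerAxis); // brute-force OGSSAA shading

    std::printf("Shaded at native:        %.1f MPix/frame\n", nativePixels / 1e6); // ~2.1
    std::printf("Shaded for 1620p OGSSAA: %.1f MPix/frame\n", ssaaPixels / 1e6);   // ~4.7
}
```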
 
I mean image quality relative to just using DLSS to whatever the screen resolution is, rather than using DLSS to a higher resolution and then scaling it down. That is, if you've got a 1080p screen, would using DLSS with an internal resolution of 720p and an output resolution of 1440p, then downsampling to 1080p, look noticeably better than just using DLSS with an internal resolution of 720p and an output resolution of 1080p?

I'm basically asking if you all would be interested in a feature like this




on a portable device? Or would the slight hit to performance not be worth it?
 
How would you all feel about a game AI downsampling from a DLSS output resolution on a portable screen?

Just asking for science...

EDIT:

For clarity, I'm talking about something like DLSS + DLDSR on portable hardware.
Yes!

Especially if a system can max out the screen res with a ton of excess performance natively.

Even in cases where a system needs to drop to say 720p native versus 1080p screen, DLSSing to 1440p or 4K then DLDSRing would be far better IQ than DLSS 720p to 1080p alone
 
I'm basically asking if you all would be interested in a feature like this




on a portable device? Or would the slight hit to performance not be worth it?


I would need to see it on and off on the portable devices; my initial gut check says it would be a boon on devices like laptops, but increasingly wasted the smaller the screen / the higher the PPI goes.
 
Especially if a system can max out the screen res with a ton of excess performance natively.

This would be the ideal case. I would not recommend this feature for games that are struggling to run at native res.

I would need to see it on and off on the portable devices; my initial gut check says it would be a boon on devices like laptops, but increasingly wasted the smaller the screen / the higher the PPI goes.

Yeah, it's not going to be worth it for everyone. I guess we'll find out eventually. I'm just trying to future proof my pipeline right now.
 
This would be the ideal case. I would not recommend this feature for games that are struggling to run at native res.


Yeah, it's not going to be worth it for everyone. I guess we'll find out eventually. I'm just trying to future proof my pipeline right now.
Well the cool thing about DLSS+DLDSR is that you can sort of have your cake and eat it too.

Game can't run at full 1080p? DLSS from 540p up to 1440p/1620p with Ultra Performance mode, then DLDSR back to 1080p.

Better IQ than 540p to 1080p DLSS Performance, for marginal cost
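Rough numbers for that trade, under the assumption that the shaded internal pixels dominate cost and that the upscale/downsample passes scale roughly with their output size (real costs will vary per game and GPU):

```cpp
// Both routes shade the same 960x540 internally; the "detour" only adds a larger
// DLSS output target plus a downsample pass. Output pixel counts are a loose proxy
// for that extra cost.
#include <cstdio>

int main() {
    const double internal  = 960.0 * 540.0;    // 540p shading cost, identical either way
    const double directOut = 1920.0 * 1080.0;  // 540p -> 1080p (DLSS Performance)
    const double detourOut = 2880.0 * 1620.0;  // 540p -> 1620p (DLSS Ultra Performance), then DLDSR to 1080p

    std::printf("Shaded internally:    %.2f MPix\n", internal  / 1e6);  // 0.52
    std::printf("DLSS output, direct:  %.2f MPix\n", directOut / 1e6);  // 2.07
    std::printf("DLSS output, detour:  %.2f MPix\n", detourOut / 1e6);  // 4.67
}
```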
 
Well the cool thing about DLSS+DLDSR is that you can sort of have your cake and eat it too.

Game can't run at full 1080p? DLSS from 540p up to 1440p/1620p with Ultra Performance mode, then DLDSR back to 1080p.

Better IQ than 540p to 1080p DLSS Performance, for marginal cost

I haven't done enough tests, but in my experience, ultra performance is not always a free lunch and artifacts will depend on the nature of the rendering inputs (which vary from game to game). Ideally, you want a really good quality source to downsample, otherwise you'd just be increasing the amount of artifacting already present in the original image (if you're using a deep learning version of DSR).
 
I haven't done enough tests, but in my experience, ultra performance is not always a free lunch and artifacts will depend on the nature of the rendering inputs (which vary from game to game). Ideally, you want a really good quality source to downsample, otherwise you'd just be increasing the amount of artifacting already present in the original image (if you're using a deep learning version of DSR).
Well, in my testing 480p to 1440p is pretty close to 1440p with TAA in Control in regards to IQ, so a 540p internal resolution is likely good enough for Ultra Performance DLSS
 
Well, in my testing 480p to 1440p is pretty close to 1440p with TAA in Control in regards to IQ, so a 540p internal resolution is likely good enough for Ultra Performance DLSS

The problem is that even the slightest deviation from the pixel integrity of the original image will only be exacerbated by something like DLDSR. DLSS ultra performance is good enough in a lot of cases, but just teetering on that line. DLDSR is trained based on raw data, not upsampled data, so when it intelligently "smoothens" the image during the downsample, it is bound to make mistakes since there will already be pixels that it misreads as raw pixels. This isn't so much an issue with the normal DSR since no AI is involved in that version, but DLDSR is essentially making its best guess based on a best guess, which is less reliable when the algorithms are working with less information.

That being said, if users want the option to upsample from sub-native resolutions before downsampling to native, I'll definitely keep that in mind as well.
 
The problem is that even the slightest deviation from the pixel integrity of the original image will only be exacerbated by something like DLDSR. DLSS ultra performance is good enough in a lot of cases, but just teetering on that line. DLDSR is trained based on raw data, not upsampled data, so when it intelligently "smoothens" the image during the downsample, it is bound to make mistakes since there will already be pixels that it misreads as raw pixels. This isn't so much an issue with the normal DSR since no AI is involved in that version, but DLDSR is essentially making its best guess based on a best guess, which is less reliable when the algorithms are working with less information.

That being said, if users want the option to upsample from sub-native resolutions before downsampling to native, I'll definitely keep that in mind as well.
Yeah my main thing is options

"Prefer IQ": Push the Resolution as high as it can go with DLDSR or DLSS+DLDSR.etc

"Prefer Graphics": Use DLSS to increase graphical details at native output.

"Prefer Performance": Use DLSS to try to push to 60fps if the game runs at 30fps in the other modes

(Substitute DLSS with UE-TSR or FSR2.0 for Xbox Series or PS5)
Stuff like that
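As a sketch of how those three presets could map onto concrete render targets: every number below is a placeholder picked for illustration, not from any real title or SDK.

```cpp
// Three user-facing presets mapped to concrete render targets. All numbers are
// placeholders picked for illustration, not from any real title or SDK.
#include <cstdio>

enum class Preset { PreferIQ, PreferGraphics, PreferPerformance };

struct RenderConfig {
    int internalW, internalH;  // resolution actually shaded
    int upscaleW,  upscaleH;   // DLSS / TSR / FSR2.0 output target
    int presentW,  presentH;   // what is sent to the display (after any downsample)
    int targetFps;
};

RenderConfig configure(Preset p) {
    switch (p) {
        case Preset::PreferIQ:           // DLSS past native, then downsample back (DLDSR-style)
            return {1920, 1080, 2880, 1620, 1920, 1080, 30};
        case Preset::PreferGraphics:     // spend the upscaling win on richer settings at native output
            return {1280,  720, 1920, 1080, 1920, 1080, 30};
        case Preset::PreferPerformance:  // spend it on frame rate instead
            return {1280,  720, 1920, 1080, 1920, 1080, 60};
    }
    return {};
}

int main() {
    RenderConfig c = configure(Preset::PreferIQ);
    std::printf("Prefer IQ: shade %dx%d, upscale to %dx%d, present %dx%d @ %d fps\n",
                c.internalW, c.internalH, c.upscaleW, c.upscaleH,
                c.presentW, c.presentH, c.targetFps);
}
```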
 
Yeah my main thing is options

"Prefer IQ": Push the Resolution as high as it can go with DLDSR or DLSS+DLDSR.etc

"Prefer Graphics": Use DLSS to increase graphical details at native output.

"Prefer Performance": Use DLSS to try to push to 60fps if the game runs at 30fps in the other modes

(Substitute DLSS with TSR or FSR2.0 for Xbox Series or PS5)
Stuff like that

Good to know your preference. Thank you for the feedback.
 
This is not to dredge up the pricing conversation again, but simply to show what's going on in the macro environment. Sony Japan announced that starting from April 1st (not a joke), they are raising the MSRP of 109 products. The price increase will range from 3% to 31%, spanning many product categories such as soundbars, Blu-ray players, microphones, memory cards, etc. The PS5 is not included, but I wonder if that's just a matter of time. (Going OT for a minute: if you're fortunate enough to have a secure job position, consider asking for a raise.)
 
Good to know your preference. Thank you for the feedback.
I will note, on the idea of upscaling then downsampling and potential artifacts: why not let the user decide?

Design the mode with the headroom to go to the highest post-upscale res, but let the user decide what level they want,

as some artifacts/trade-offs are more acceptable to some users than others (e.g. higher-detail textures at the cost of blur, or no blur at the cost of muddier textures, etc.).

So user A may prefer DLDSR'd DLSS at 1440p on a 1080p screen, while another would prefer 1260p before downsampling.
 