Even in these hype interviews they literally just talk about it as a form of compression rather than a RAM-saving breakthrough, because the textures eventually have to end up in RAM to be visible to the player.
It's possible to massively reduce the size of assets while minimizing quality loss as a way to save RAM, but (again) that would be a production step, and there would be no point whatsoever in doing it in real time.
A natively rendered application would be something like Hey You Pikachu 2 if Nintendo didn't want to use a cloud solution because the local hardware is good enough (which RTX 2050 and above hardware is at this point).
OK, I'm going to be clear with you, and this is probably the only time I'm going to be clear with you in the most direct way possible: this is all speculation. None of this means it's going to happen; the point is to theorycraft and come up with ideas. I simply stated that a feature which does this in real time would be beneficial to a low-power system.
The texture doesn't have to end up in RAM at a higher resolution to be visible to the player, because the GPU is what displays it, and the GPU is what's doing the upscaling in real time. If it goes back into RAM as a higher-quality texture, that is not what I'm talking about, and I beg you not to read this as me describing decompression, because it is not that. This is very clear; you are looking for a way to make it more complicated than it needs to be.
And in any case, I'm not even sure why you're taking this as if I'm saying it's going to happen. The original question I was asked was about how this would be beneficial for a low-power system beyond file-size reduction; someone asked me "would it?" I responded with how it would be beneficial to a low-power/low-spec system… and you somehow have an issue with it for reasons that have nothing to do with what JoshuaJStone inquired about.
Which is, again, whether "this would have more benefits than simply lowering file size."
Why? Because the low-res texture remains low-res in memory; the GPU and the algorithm upscale it in real time so that it appears to be a higher-resolution texture, when it is still a low-res texture. Just like how DLSS Super Resolution is not actually native 4K: the game renders at a lower resolution and is upscaled to 4K, and that does not mean it is 4K. DLSS to 4K ends up taking less VRAM than rendering at native 4K.
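The VRAM-savings point can be sketched with back-of-the-envelope numbers. These are purely illustrative, assuming uncompressed RGBA8 color buffers and a hypothetical 1440p internal resolution for the upscaled case:

```python
# Rough VRAM footprint comparison: a native 4K render target vs a
# DLSS-style upscale from an internal 1440p render.
BYTES_PER_PIXEL = 4  # assuming an RGBA8 (8 bits x 4 channels) buffer

def buffer_bytes(width, height, bpp=BYTES_PER_PIXEL):
    """Uncompressed size of a single render target in bytes."""
    return width * height * bpp

native_4k = buffer_bytes(3840, 2160)       # rendered and stored at 4K
internal_1440p = buffer_bytes(2560, 1440)  # rendered low, upscaled for display

print(f"native 4K buffer: {native_4k / 2**20:.2f} MiB")
print(f"1440p buffer:     {internal_1440p / 2**20:.2f} MiB")
print(f"savings factor:   {native_4k / internal_1440p:.2f}x")
```

A real frame also carries depth, motion-vector, and intermediate buffers, so this understates total usage, but the ratio between internal and native resolution is the core of the savings.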
And I will repeat myself once again, because I feel like you're somehow going to misunderstand what I'm saying and assume I'm claiming this is what will happen: this is all speculative discussion, about something I was asked about. The idea is not decompression; the idea is that the GPU, using its tensor cores, would literally upscale the texture in real time, using an algorithm, so that it appears to be a higher-resolution texture. That does not mean it is a high-res texture. It is still a low-res texture internally; it is displayed to the user as what they perceive to be a higher-res texture. The eye perceives it as a higher-resolution texture.
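The same arithmetic applies to the texture itself: only the low-res version is resident in memory, so what is actually stored costs a fraction of what the perceived resolution would. Again, these are illustrative numbers assuming uncompressed RGBA8 textures:

```python
# Sketch of the memory argument: a texture kept at low resolution in VRAM
# vs storing the same texture at the resolution the player perceives.
def texture_bytes(size, bpp=4):
    """Uncompressed size of a square size x size texture in bytes."""
    return size * size * bpp

low_res  = texture_bytes(1024)  # what actually resides in memory
high_res = texture_bytes(4096)  # what the perceived result would cost if stored

print(f"1K texture resident:  {low_res / 2**20:.0f} MiB")
print(f"4K equivalent stored: {high_res / 2**20:.0f} MiB")  # 16x larger
```

Real games use block-compressed formats and mip chains, which change the absolute numbers but not the ratio: doubling resolution in each dimension quadruples the footprint.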
This means that DLSS IS NOT USED IN THESE SCENARIOS. It means using the tensor cores for a different purpose.