Ah, yeah, I see what you're saying. Yeah, the GPU needs all the data to render the frame, but the CPU doesn't have to wait for the frame to render to begin its tasks for the next frame - unless it receives backpressure from the render queue, which is essentially a signal from the driver saying "woah woah woah, you're getting too far ahead, stop."
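If it helps to picture it, here's a toy sketch of that CPU-side loop in Python - none of these names are a real driver API, and the two-frame limit is just a number I picked - showing the CPU happily working ahead on future frames until the queue of in-flight frames fills up and it has to stall:
Code:
MAX_FRAMES_IN_FLIGHT = 2   # made-up limit on how far ahead the CPU may get

def simulate(num_frames=10):
    in_flight = []                     # frames the CPU submitted that the GPU hasn't finished
    for frame in range(num_frames):
        # Backpressure: if the queue is full, the CPU blocks until the oldest
        # submitted frame completes instead of running further ahead.
        while len(in_flight) >= MAX_FRAMES_IN_FLIGHT:
            finished = in_flight.pop(0)
            print(f"CPU stalls until the GPU finishes frame {finished}")

        print(f"CPU runs game logic and builds draw calls for frame {frame}")
        in_flight.append(frame)        # submit to the driver; the GPU renders it later
        # (a real driver signals completion with a fence; here we just pretend)

simulate()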
I'm sure DLSS can be amortized away on the CPU, in many cases.
DLSS 2? No, it runs after the frame is rendered. Without a rendered frame, there is no image to upscale and no "blanks" to fill in.
Take a 1080p frame and try to upscale it to 4k. 4k has exactly 4 times as many pixels as 1080p, so the fast, simple way to upscale it is to turn every pixel into a 2x2 square of pixels:
Code:
1080p pixel => 4k pixels
*              **
               **
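If you like seeing it as code, that pixel-doubling is about as simple as it sounds. This is just a sketch in plain Python with the image as a list of rows - a real upscaler works on GPU textures, not Python lists:
Code:
def nearest_neighbor_2x(image):
    """Turn every pixel into a 2x2 block: 1920x1080 in, 3840x2160 out."""
    upscaled = []
    for row in image:
        doubled_row = []
        for pixel in row:
            doubled_row += [pixel, pixel]    # copy the pixel sideways
        upscaled.append(doubled_row)         # ...and copy the whole row downward
        upscaled.append(list(doubled_row))
    return upscaled

# A tiny 3x3 "image" with a diagonal line of 1s:
low_res = [[1, 0, 0],
           [0, 1, 0],
           [0, 0, 1]]
for row in nearest_neighbor_2x(low_res):
    print(row)    # the diagonal becomes the 2x2 stair steps from the next diagram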
But that gets you jaggies, and you don't get any extra detail that wasn't in the first image.
Code:
1080p pixels => 4k pixels
*               **
 *              **
  *               **
                  **
                    **
                    **
See how the small angled line on the left becomes a jagged stair step on the right? Anti-aliasing comes around and says "hey, I can guess where edges are, and then smooth them out":
Code:
1080p pixels => 4k pixels
*               *
 *               *
  *                *
                    *
                     *
You see how you get a smoother line, but it's not perfect? There are lots of anti-aliasing techniques, but there are limits. And again, you're getting smoother edges, but you aren't getting new details. That's why some folks complain that anti-aliasing looks blurry - it smudges the edges of things it thinks should be smoothed, but it can get it wrong, removing detail without adding anything new.
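Real anti-aliasing (MSAA, FXAA, TAA, and friends) is a lot smarter than this, but here's a crude neighbor-averaging filter in the same toy Python style that captures the spirit: hard stair-step edges get blended into in-between values, and nothing new gets created - fine one-pixel details just get smeared:
Code:
def smooth(image):
    # Average each pixel with its up/down/left/right neighbors.
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbors = [image[y][x]]
            if y > 0:     neighbors.append(image[y - 1][x])
            if y < h - 1: neighbors.append(image[y + 1][x])
            if x > 0:     neighbors.append(image[y][x - 1])
            if x < w - 1: neighbors.append(image[y][x + 1])
            out[y][x] = sum(neighbors) / len(neighbors)
    return out

# The blocky stair-step edge from the nearest-neighbor upscale (1 = bright, 0 = dark):
jagged = [[1, 1, 0, 0, 0, 0],
          [1, 1, 0, 0, 0, 0],
          [0, 0, 1, 1, 0, 0],
          [0, 0, 1, 1, 0, 0],
          [0, 0, 0, 0, 1, 1],
          [0, 0, 0, 0, 1, 1]]
for row in smooth(jagged):
    print([round(v, 2) for v in row])   # the hard 0/1 edges turn into soft in-between grays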
You ever been playing a video game and seen grass fizzle and pop? That's little tiny details that are smaller than a pixel. As you move around, some of these little subpixel details will pop into and out of existence. At a higher res, where the pixels are actually smaller, that's a detail that might stay on the screen all the time. DLSS watches old frames, and captures all that subpixel detail, and then uses AI to add it back to the current frame - but it needs the current frame's information to know when to put that detail in and where.
Code:
1080p
frame 1   =>   frame 2   =>   frame 3
  *              *              *
  *                             *
  *              *              *
See that pixel in the middle that vanishes and then comes back? Maybe this is a zigzag pattern that would show up in 4K. DLSS tries to catch that. But it takes a couple frames to have enough data to work with. That's why DLSS 2 makes things ugly for like 2 frames right after a camera cut.
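Here's a tiny toy version of that warm-up problem - this is nothing like the real neural network, just a running average in Python to show why it needs a few frames of history and why a camera cut hurts:
Code:
def accumulate(frames, cuts=()):
    # Toy temporal accumulation: keep a running average of every frame seen
    # since the last camera cut. (DLSS uses a trained network plus motion
    # vectors; this only has the flavor of "blend the past into the present".)
    history, count, results = None, 0, []
    for i, frame in enumerate(frames):
        if i in cuts or history is None:
            history, count = list(frame), 1    # cut: throw the history away, start over
        else:
            count += 1
            history = [h + (c - h) / count for h, c in zip(history, frame)]
        results.append([round(v, 2) for v in history])
    return results

# A sub-pixel detail that really covers half the pixel, so at 1080p it flickers 1/0/1/0:
frames = [[1.0], [0.0], [1.0], [0.0], [1.0], [0.0]]
for i, out in enumerate(accumulate(frames, cuts={4})):
    print(f"frame {i}: {out}")
# It settles toward the true 0.5 after a few frames, then the cut at frame 4
# dumps the history and it starts over - the brief ugliness after a camera cut.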
Code:
1080p
frame 1   =>   frame 2   =>   frame 3
  *              *              *
  *                             *
  *              *              *

...becomes 4k upscaled

frame 1   =>   frame 2   =>   frame 3
 **             **               *
 **             **               *
 **                             *
 **                             *
 **             **               *
 **             **               *
That's what's amazing about DLSS 2 (and FSR 2, and XeSS) - that it smartly finds the zigzag the artist created and makes a 4K version of it, despite the fact that the zigzag itself never fully appears in any of the frames it was working from. But you can see that the zigzag is tending to move from right to left in the last upscaled frame. That data came from the last 1080p frame, which just gave us an angled line moving from right to left.
DLSS needs the current frame's data in order to upscale. It doesn't just fill in the blanks with prior frames' data, it tries to learn what the underlying image is supposed to be by combining the current, completely rendered frame with information from the past.
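One last toy example of how detail that never shows up in any single low-res frame can come back: modern temporal upscalers nudge ("jitter") the camera by a tiny sub-pixel offset every frame, so each frame samples slightly different spots. Here's a stripped-down Python cartoon of that idea - the real thing uses motion vectors and a neural network to decide what to trust, and the fine pattern here is just something I made up:
Code:
# The "true" high-res detail the artist made: a fine pattern 8 columns wide.
true_row = [0, 1, 0, 1, 0, 1, 0, 1]

def render_low_res(jitter):
    # Pretend renderer: 4 low-res pixels, each sampling one high-res column.
    # The camera is jittered by a different sub-pixel offset each frame,
    # so each frame only ever sees half of the detail.
    return [true_row[2 * x + jitter] for x in range(4)]

frame_a = render_low_res(jitter=0)   # samples columns 0, 2, 4, 6 -> [0, 0, 0, 0]
frame_b = render_low_res(jitter=1)   # samples columns 1, 3, 5, 7 -> [1, 1, 1, 1]
print("frame A:", frame_a)           # neither frame shows the pattern on its own
print("frame B:", frame_b)

# Combine the frames, slotting each sample back where its jitter says it belongs:
reconstructed = [0] * 8
for x, v in enumerate(frame_a):
    reconstructed[2 * x] = v
for x, v in enumerate(frame_b):
    reconstructed[2 * x + 1] = v
print("reconstructed:", reconstructed)   # the full pattern is back: [0, 1, 0, 1, 0, 1, 0, 1]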