The short answer is "everything uses ROPs". A medium-sized answer involves getting deep into the woods of how rendering works (and pushing my own knowledge to the limit).
Imagine a grassy lawn. Actually, don't bother imagining it; here is a picture for you.
Okay, what resolution is the lawn? Not the resolution of my picture of the lawn, but the resolution of the lawn itself, in real life. Deep question, but roll with it for a second. From our point of view, the resolution is *infinity*. No matter how close you get to a piece of grass, you never see the pixels. In fact, pixels don't exist; the real world isn't made of pixels.
Deep inside your GPU, it's actually the same thing. A 3D scene isn't made of pixels; it's made of 3D objects, which themselves are made of triangles, and just like the real world, triangles have infinite resolution. You can zoom in on a triangle forever and always get smooth lines and sharp corners.
But your screen is made of pixels. We need a way to convert from *infinite* resolution to *finite* resolution. From that perfectly smooth triangle to a pixelated one.
How do we do that? Well, let's go back to our lawn. Imagine that lawn* with a fence in front of it.
Okay, now what? Well, we've broken the infinite-resolution lawn up into a pixel grid. Now we just need to determine the color of each pixel. How? By *sampling* it. Essentially, imagine taking a laser and shooting it right into the middle of each diamond on the grid, and whatever color you hit, that's the color of the whole pixel.
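
To make that one-laser-per-pixel idea concrete, here's a minimal sketch in Python (all the names here are mine, not any real graphics API): test whether each pixel's center lands inside a triangle, and if it does, the whole pixel takes that triangle's color.

```python
# Toy single-sample rasterizer: one "laser" per pixel, fired at the center.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge A->B the point P falls on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(triangle, color, width, height, background=(0, 0, 0)):
    (ax, ay), (bx, by), (cx, cy) = triangle
    image = [[background] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Sample exactly one point: the center of the pixel.
            px, py = x + 0.5, y + 0.5
            w0 = edge(ax, ay, bx, by, px, py)
            w1 = edge(bx, by, cx, cy, px, py)
            w2 = edge(cx, cy, ax, ay, px, py)
            # Center is inside the triangle (all edge tests agree) -> covered.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                image[y][x] = color
    return image

# The infinitely sharp triangle becomes a blocky 8x8 approximation.
pixels = rasterize([(1, 1), (7, 2), (3, 7)], color=(0, 180, 0), width=8, height=8)
```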
You can see right now how much detail you'd lose that way. Just from this picture alone, almost every pixel would be green, except for a few brown ones. All the details of the individual blades of grass would go away, a lot of the patchiness of the lawn would be replaced by a single boring green. And the few brown patches we got would be super blocky.
What if we wanted to smooth out the edges of things - anti-alias them? Well, we could shoot multiple lasers just at the edges of objects we wanted to smooth out - *multisampling*. What if we wanted to do more than just show green? We could *blend* colors together.
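
Here's a rough sketch of both ideas, again with made-up helper names rather than real GPU code. Multisampling fires several "lasers" per pixel and averages them, so edges fade smoothly instead of stair-stepping; blending mixes a new color with whatever is already in the pixel.

```python
# Four sample positions inside one pixel, instead of just the center.
SAMPLE_OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def multisample(x, y, inside, fg, bg):
    """Average the 4 sub-pixel samples: an edge pixel comes out part fg, part bg."""
    covered = sum(inside(x + dx, y + dy) for dx, dy in SAMPLE_OFFSETS)
    alpha = covered / len(SAMPLE_OFFSETS)          # 0.0 .. 1.0 coverage
    return blend(fg, bg, alpha)

def blend(src, dst, alpha):
    """Classic 'over' blend: src on top of dst, weighted by alpha."""
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

# A pixel half-covered by green grass over brown dirt ends up a mix of both.
print(blend((0, 180, 0), (120, 80, 40), alpha=0.5))   # -> (60.0, 130.0, 20.0)
```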
This whole process is called *rasterization*. ROPs stands for Raster OPerators, and this is their job. And when rendering a single frame of a video game, modern game engines don't just rasterize one image, they rasterize *dozens*. An image for shadows, an image for highlights, an image for the background, an image for the foreground, an image for all the metal objects in a scene, an image for all the cloth objects, and so on.
And these images are combined, and the ROPs do that as well, *blending* all these different images (called "buffers") together to make the final frame you see on screen. That's the way that rendering has worked in GPUs since basically the very beginning.
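
In spirit, that compositing step is just the same blend applied over and over, buffer by buffer. A rough sketch (the buffer names are illustrative, not how any particular engine labels its render targets):

```python
def blend(src, dst, alpha):
    """'Over' blend: src on top of dst, weighted by alpha."""
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

def composite(layers, width, height):
    """Blend a stack of (image, alpha) buffers, back to front, into one frame."""
    frame = [[(0, 0, 0)] * width for _ in range(height)]
    for image, alpha in layers:   # shadows, highlights, background, foreground...
        for y in range(height):
            for x in range(width):
                frame[y][x] = blend(image[y][x], frame[y][x], alpha)
    return frame

# e.g. composite([(background_img, 1.0), (shadow_img, 0.4), (ui_img, 1.0)], w, h)
```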
Except... programmers are starting to figure out that they can do this process better than the hardware designers. Or at the very least, they can do better than the one-size-fits-all approach that the ROPs provide, and build a more efficient solution that is customized just for their game. Instead, they're using the *shader cores* to do some or all of these operations, reducing the load on the ROPs.
DLSS is a specialized example of that. Instead of drawing a 4K image using the ROPs, you draw a 1080p image (with or without ROPs), and then upscale it using the tensor cores, totally separate from the old ROP pipeline. This moves huge chunks of work away from the ROPs.
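
DLSS itself is a trained neural network running on the tensor cores, so this toy nearest-neighbor upscale is only a stand-in for the shape of the trick: render a small number of pixels, then manufacture the rest without firing any extra "lasers".

```python
def upscale(image, factor):
    """Blow a small image up by an integer factor (toy stand-in for DLSS)."""
    return [
        [image[y // factor][x // factor] for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# Rasterize at low resolution, then upscale 2x: a quarter of the sampling work.
small = [[(0, 180, 0), (120, 80, 40)], [(0, 180, 0), (0, 180, 0)]]
big = upscale(small, 2)   # now 4x4, built without any extra samples
```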
Does all that make sense?
* Or a very similar one, I'm kinda limited by Google image search here.