So, ultimately, what Nintendo is able to do will be dictated by the physics of "how much heat and electricity can we shove through this thing before it melts and/or has zero battery life." But if you're thinking about what Nintendo would do, I'd like you to consider how DLSS works.
The PS4 was a 1080p machine, and 4K is 4x as many pixels. The PS4 Pro is a 4K machine, but it used a technique called "checkerboarding" to produce an image with 4x the pixels using only about 2x the power.
DLSS needs only around 10% extra power to make a 4K image. So if Nintendo lands at just ever so slightly more power than the PS4, they can give you a PS4 Pro experience.
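To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The ~10% DLSS overhead is the figure this post assumes, not a measured spec:

```python
# Back-of-the-envelope pixel math for the three approaches above.
# The ~10% DLSS overhead is this post's assumption, not an official spec.

P_1080 = 1920 * 1080        # pixels in a 1080p frame
P_4K   = 3840 * 2160        # pixels in a 4K frame (exactly 4x 1080p)

native_4k    = P_4K / P_1080        # brute force: 4.0x the pixel work
checkerboard = (P_4K / 2) / P_1080  # render half the 4K pixels: ~2.0x
dlss         = 1.0 * 1.10           # 1080p internal render + ~10% overhead

print(f"native 4K:          {native_4k:.1f}x a 1080p frame")
print(f"checkerboarded 4K:  {checkerboard:.1f}x a 1080p frame")
print(f"DLSS 4K from 1080p: {dlss:.1f}x a 1080p frame")
```

That 2.0x vs 1.1x gap is the whole argument: checkerboarding halves the cost of 4K, but DLSS nearly erases it.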
The Xbox One was not a 1080p console. It tried, but it almost never delivered. The Xbox One X also used checkerboarding, but it had to be way, way more powerful than the OG Xbox One, because it not only needed the extra power to get from 1080p to 4K, it needed more power just to reach 1080p in the first place.
So, if a 4K Switch hits the Xbox One's level of power, last-gen games will still need some cuts just to get up to 1080p, plus a little extra if you actually want the "4K" part of your 4K Switch. At the PS4's level of power, you just need enough cuts to free up that 10%. And if you manage 110%, you can still be well, well, well behind the PS4 Pro but offer every last-gen game in 4K.
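Putting those three scenarios side by side, using the commonly cited GPU figures of roughly 1.31 TFLOPS for the Xbox One and 1.84 TFLOPS for the PS4 purely as illustration (real performance depends on far more than FLOPS):

```python
# Rough scenario math for the paragraph above. TFLOPS figures are the
# commonly cited public numbers, used only for illustration.

PS4_POWER     = 1.84  # TFLOPS, rough public figure
DLSS_OVERHEAD = 1.10  # this post's ~10% assumption

for name, power in [("Xbox One-class", 1.31),
                    ("PS4-class",      1.84),
                    ("PS4 + 10%",      1.84 * 1.10)]:
    # Fraction of a PS4-quality 1080p frame this budget can afford
    # once the DLSS upscale cost is paid.
    headroom = power / (PS4_POWER * DLSS_OVERHEAD)
    verdict = ("no cuts needed" if headroom >= 1.0
               else f"needs ~{(1 - headroom) * 100:.0f}% cuts")
    print(f"{name}: {headroom:.2f}x of a PS4 1080p frame -> {verdict}")
```

An Xbox One-class machine comes out needing roughly a third of the frame cut, a PS4-class machine needs just that ~10%, and 110% clears the bar entirely.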
Like I said, physics will determine whether Nintendo can cross that gap. But if they can, the benefit is unusually high.
Additionally, and someone please correct me if I'm wrong, RT cores (and the relatively cheap BVH traversal they use to generate a ray-traced frame, which the Tensor cores then de-noise into something pretty) mean you save the GPU processing that would otherwise go toward rendering lights and shadows, while producing an image that would never be possible on a PS4 or PS4 Pro. Those GPU savings can then go to other rendering techniques.
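For what that hybrid pipeline roughly looks like, here's a purely conceptual Python sketch. None of these function names are a real API; they're stand-ins to show which hardware block would own each pass:

```python
# Purely conceptual sketch of a hybrid RT frame. These functions are
# stand-ins, not a real graphics API; the comments mark which hardware
# block would own each pass and where shader-core time gets freed up.

def rasterize(scene):
    # Shader cores: geometry, textures, materials -- the traditional work.
    return f"g-buffer({scene})"

def trace_rays(gbuffer, rays_per_pixel):
    # RT cores: walk the BVH with a very low ray count, replacing the
    # shader-core tricks (shadow maps, screen-space reflections, etc.).
    return f"noisy-light({gbuffer}, {rays_per_pixel}rpp)"

def denoise(noisy):
    # Tensor cores: turn the sparse, noisy RT signal into clean lighting.
    return f"lighting({noisy})"

def dlss_upscale(image, target):
    # Tensor cores again: reconstruct the 1080p composite at 4K.
    return f"{target[0]}x{target[1]}({image})"

def render_frame(scene):
    g = rasterize(scene)
    lighting = denoise(trace_rays(g, rays_per_pixel=1))
    return dlss_upscale(f"composite({g}, {lighting})", target=(3840, 2160))

print(render_frame("demo scene"))
```

The point of the structure: the shader cores only ever touch the raster pass, while the lighting and the resolution bump both ride on dedicated silicon.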
When people say it's comparable to a PS4 or something, I sometimes get the impression that the amount of brute-force work these hardware-accelerated techniques pull off of the CPU and GPU isn't being factored in at all. And to a degree I can understand why: these techniques aren't terribly well understood yet, and we have no basis for judging how they perform in dedicated gaming hardware.
But when one combines DLSS and RT via the RT and Tensor cores, then from what I've read (and please, someone correct me if I'm wrong), if you're rendering a game natively at 1080p60 on Drake, these accelerators (so long as you have enough of them for a given scene) seem like they could push that native render up to 4K with ray tracing, with no framerate dip, at a fraction of the GPU power you'd spend doing it all natively. Not even the PS5 can claim a 4K image with RT and no frame rate cut in several games: Demon's Souls, Ratchet & Clank, Spider-Man and DMC5:SE are four prime examples where pushing the resolution to UHD 4K with RT on brings the frame rate down to 30fps.
Devil May Cry 5: Special Edition in particular stands out on this list. It was a 1080p60 game on PS4, and the re-release on PS5 shows no notable changes to scene/character geometry or visual effects in any comparison I've seen. Yet even with the PS5 being what it is and the game being what it is, Capcom was forced to choose between 4K rendering, ray tracing, and 60fps, because having all three simultaneously was seemingly not possible. Imagine what it means that, in certain circumstances, Nintendo hardware could demand fewer sacrifices to visual fidelity than the most recent consoles from Sony and Microsoft (with the obvious counterpoint that the PS5 and XBSX will always be able to push more geometry, of course).
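Here's illustrative frame-budget arithmetic for that "pick two of three" trade-off. Every millisecond figure is invented for the example; the point is the shape of the trade, not the exact numbers:

```python
# Illustrative frame-budget math for the DMC5:SE trade-off above.
# All millisecond costs are hypothetical.

BUDGET_60 = 1000 / 60   # ~16.7 ms per frame at 60fps
BUDGET_30 = 1000 / 30   # ~33.3 ms per frame at 30fps

raster_1080_ms = 6.0    # hypothetical cost of the base 1080p raster pass
rt_4k_ms       = 6.0    # hypothetical cost of ray tracing at 4K ray counts
PIXEL_SCALE    = 4.0    # 4K is 4x the pixels of 1080p

# Native path: rasterize AND trace at 4K. Lands over the 60fps budget but
# under the 30fps one -- hence Capcom's choice of two out of three.
native_ms = raster_1080_ms * PIXEL_SCALE + rt_4k_ms

# Accelerated path: raster and trace at 1080p (RT cores, fewer rays), then
# let Tensor cores denoise + DLSS to 4K for the post's ~10% overhead.
accel_ms = (raster_1080_ms + rt_4k_ms / PIXEL_SCALE) * 1.10

print(f"native 4K + RT: {native_ms:.1f} ms -> over {BUDGET_60:.1f} ms, 30fps")
print(f"DLSS 4K + RT:   {accel_ms:.1f} ms -> under {BUDGET_60:.1f} ms, 60fps")
```

With these made-up costs, the native path lands at 30 ms (a 30fps game) while the accelerated path lands around 8 ms, with room to spare inside the 60fps budget.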
I'm not here to make people overly hopeful, but it's still worth factoring in: when we talk about raw base performance alone, we may be effectively underselling what this hardware can do.