Question to those who know about how well DLSS works in different configurations:
What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?
Before anyone says "what's a decent looking image?": I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here.
If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples
Short answer is that, with a 1080p screen, in my experience 540p should be fine most of the time, and below that is when you start getting more noticeable artefacting. Blurriness isn't necessarily the worst thing, as DLSS is in general going to be a lot less blurry than any other upscaling method, but visual artefacts are going to bother you before the blurriness does.
The long answer is that it depends. Partly on art style, but also on a lot of other things. One simple one is how quickly the camera (and objects within the scene) are moving. DLSS, like FSR 2, XeSS, etc., is a temporal upscaling algorithm, which means it combines data from several previous frames to produce the one you see. If the camera is completely stationary it can continually extract more data for every pixel: the renderer jitters the sample position by a little bit each frame to take in more detail, and as each pixel in this frame is perfectly aligned with the same pixel in the previous frame, it can use all this accumulated data to build a really sharp image. For this reason temporal upscalers always look best when the camera isn't moving.
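To make the static-scene case concrete, here's a toy back-of-the-envelope sketch in Python (not real DLSS code, and real upscalers use a Halton jitter sequence rather than the plain cycling assumed here). With a 2x upscale like 540p to 1080p, each low-res sample covers a 2x2 block of output pixels, so jittering the sample grid means every output pixel gets its own sample within a few frames:

```python
# Toy illustration: why a static scene converges to a sharp image.
# Assumptions: 2x upscale per axis (540p -> 1080p), four jitter offsets
# cycled in order (real upscalers use a pseudo-random Halton sequence).

scale = 2                                      # output pixels per internal pixel, per axis
jitters = [(0, 0), (1, 0), (0, 1), (1, 1)]     # sub-pixel offsets, in output pixels

covered = set()
for frame, (jx, jy) in enumerate(jitters, start=1):
    # which output pixels does the low-res sample grid hit this frame?
    for x in range(0, 8, scale):               # tiny 8x8 output region for illustration
        for y in range(0, 8, scale):
            covered.add((x + jx, y + jy))
    print(f"after frame {frame}: {len(covered)}/64 output pixels sampled")
# prints 16, 32, 48, 64 -- full coverage after four stationary frames
```

As soon as the camera moves, that neat alignment breaks, which is where the next part comes in.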
The more movement in a scene, the more temporal upscalers struggle. They rely on motion vectors (which describe where each pixel has moved since the last frame) to reuse data from previous frames, but motion vectors are only estimates, so data is lost (or the wrong data is used) when a vector points at the wrong pixel. If there's only a small amount of movement between frames (e.g. the camera is panning slowly), then it's easier for the temporal upscaler to do its job, but if there's a lot of movement between frames, for example with lots of different objects moving quickly in opposite directions across each other, then it's more difficult, and you're more likely to get artefacts.
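A heavily simplified toy version of that reprojection step, in Python (again, not real upscaler code; the buffers, motion values, and blend factor are all made up for illustration). The core idea: fetch last frame's colour from wherever the motion vector says the surface was, and blend it with the new sample. Where no vector covers the old position, stale history gets blended in, which is exactly where ghosting comes from:

```python
# Toy 1D reprojection: a bright detail moves from x=2 to x=1 between frames.
history = [0.0, 0.0, 1.0, 0.0]   # last frame's accumulated colour buffer
current = [0.0, 1.0, 0.0, 0.0]   # this frame's (low-res) new samples
motion  = [0, +1, 0, 0]          # per-pixel offset back into the history buffer

blend = 0.9                      # how much we trust accumulated history
output = []
for x in range(4):
    prev = history[x + motion[x]]            # reproject: sample history at the old position
    output.append(blend * prev + (1 - blend) * current[x])

print(output)
# output[1] stays bright (correct reprojection), but output[2] is still ~0.9:
# nothing told the upscaler the detail left x=2, so a ghost trail remains.
# Real upscalers detect and clamp/reject history like this -- imperfectly.
```

DLSS's advantage, as below, is largely in how well it decides when to reject that history.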
DLSS is particularly good at handling these motion cases because it's not just relying on motion vectors. It does take them in, but as it's a machine learning solution it can also do pretty advanced pattern recognition to allow it to more accurately map data from one frame to the next. That said, it's still not magic. If there's a lot of movement in the scene, and it has very little data to work with (eg because it's relying on a 360p input image to reconstruct 1080p) it's going to struggle and you're going to get artefacts.
As an example, I played a good bit of Starfield on a Series X recently, which uses FSR 2 from 1440p internal to a 4K output resolution, and on static scenes it looks very good, basically indistinguishable from native 4K in many cases. When things move, though, it starts showing noticeable artefacts, with things like shimmering aliased edges, incorrect motion compensation (eg stars moving with the camera when they shouldn't) and other issues. None of these are massively detrimental, but they are noticeable. I've played a fair few PC games with DLSS with a 1440p internal resolution and a 4K output, and artefacting is almost never noticeable, even under fast motion, as DLSS just handles it much better than FSR 2 does.
The other major factor in how well temporal upscalers hold up is how much fine detail there is in the scene. Basically, if you take a game with the polygon count and texture detail of Mario 64 and use a temporal upscaler to get it from 1080p to 4K, it's not going to have much trouble. It's easy to track shapes and objects from one frame to the next when they're big and simple, but it becomes more difficult the smaller the details are. This is a particular issue for particle effects, which may only be a couple of pixels in size. For example, if you've got an object that is two pixels in size at the 1080p output resolution, but your internal resolution is 540p, your temporal upscaling solution is only going to actually see that object once every two frames on average, which makes it difficult to reconstruct any information about it.
This is also why I think using DLSS in Ultra Performance mode, where it uses a 33% scaled input resolution, is going to be much more feasible in docked mode than portable mode, because if you have the same scene on a high res screen and a low res screen, there's inherently going to be more fine detail on the low res screen, relative to its resolution. As an example, let's say there's an object that takes up 9 pixels on a 4K screen. If you're running DLSS with a 720p input here, DLSS will get on average one pixel's worth of data about it each frame, just enough to track it and extract some information. If you take the same scene and scale it down to a 1080p screen, the object is now about 2.25 pixels in size, and if you wanted to use DLSS from 360p up to that 1080p screen, then DLSS is only going to get a single pixel of information to work with once every four frames on average. That's not a lot to work with.
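The coverage arithmetic from the last two paragraphs, written out in Python (using the numbers from the example above; the function name is just for illustration):

```python
# How many internal-resolution samples land on a small object each frame?
# It's the object's size in output pixels times the internal/output area ratio.

def samples_per_frame(object_px_output, output_px, internal_px):
    area_ratio = internal_px / output_px       # fraction of output pixels sampled per frame
    return object_px_output * area_ratio

# Docked: 9-pixel object on a 4K screen, 720p internal (1/3 scale per axis = 1/9 area)
docked = samples_per_frame(9, 3840 * 2160, 1280 * 720)
print(f"docked, 720p -> 4K: {docked:.2f} samples/frame")        # 1.00

# Handheld: same scene on a 1080p screen shrinks the object to 9 * (1/4) = 2.25 px,
# and 360p internal is again 1/9 the output area
handheld = samples_per_frame(2.25, 1920 * 1080, 640 * 360)
print(f"handheld, 360p -> 1080p: {handheld:.2f} samples/frame")  # 0.25, i.e. one sample every 4 frames
```

The same formula gives the 540p-to-1080p case above: a two-pixel object times a 1/4 area ratio is 0.5 samples per frame, or one sighting every two frames.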
For a particularly extreme example of what happens when you've got way too much fine detail on screen for a temporal upscaler to keep up with, take a look at some footage of Immortals of Aveum on consoles. They use FSR 2, with an input resolution of 720p and an output of 4K on PS5 and XBSX, and an even lower internal res on XBSS. They're already stretching FSR 2 to breaking point, but they also go incredibly heavy on particle effects, which are both fine detail and fast moving, the worst-case scenario for temporal upscaling. Unsurprisingly, FSR 2 can't keep up at all, and the game looks like a mess. I'm sure DLSS would do a better job here, but it would still struggle.
Immortals of Aveum leads me to my last point, which is that Switch 2 games are in the relatively unique position of being made for a console designed around temporal upscaling. This means developers can and will tweak their games to avoid the kind of cases where DLSS struggles to keep up, as it's the only way players will ever see the game. Immortals of Aveum is actually a curious case here, because it depends so heavily on FSR 2 on consoles, yet it seems to have been made without the limitations of FSR 2 remotely in mind. It really feels like the art team was working on high-end PCs running everything natively, and the entire production was too focussed on ticking off graphical features in UE5 to care about the image quality turning to hot garbage.
So, presuming Switch 2 devs don't make the same mistakes the Immortals of Aveum devs did, there's a chance we could see some developers push internal resolutions below 540p in handheld mode and still get good results, because they're careful to avoid the situations where DLSS would have trouble. Still, not all games are going to be able to do that, because of art direction or other reasons (you can't exactly limit fast movement in an F-Zero game), and those will have to maintain a decent internal resolution to keep image quality from falling apart.