• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

My expectation for handheld?:
[image: 5935ji4k0p8a1.png]

That would be fantastic. I'm keeping my expectations tempered, though; I play docked 99% of the time.
 
The [AMD Ryzen] Z1 Extreme, the Most Powerful Handheld APU On The Market Today(tm), has only 768 GPU cores.
How do AMD's RDNA 3 cores compare to Nvidia's CUDA cores at compute capability 8.7 (assuming that an Orin-based chip would use Orin's Ampere cores rather than the later Lovelace 8.9 and Hopper 9.0 cores)? Obviously clock speeds and TDP will significantly affect the final performance, but it seems like the Switch 2 may be more graphically capable than the ROG Ally, which would be insane if true.
 
Citation needed? I'm detailing my argument for how 8nm may have come about, if it is 8nm. The Switch 2 was designed when all of NVIDIA's products were selling out at ridiculous prices. It's possible they expected this to continue and thus charged a huge premium for their cutting-edge node due to the projected opportunity cost.
And we know that at some point Nintendo unexpectedly recalled dev kits from third parties. An easy explanation would be that the rapidly dropping price of chips meant they could suddenly afford to go with 4nm instead of 8nm, and the performance improvement and, more importantly, the battery life gain were enough to justify the last-minute change.
 
So 900p? That was the case for a lot of UE4 games on base ps4
I'd argue that 1080p was far more common on the PS4 than 900p, at least for games targeting 30fps (and that does include most UE4 titles). 900p was far, far more common on the XB1.

Resolutions on the pro consoles were pretty all over the place but the base consoles were weirdly consistent throughout the gen, with games running at other resolutions being in the minority. If I took a shot every time a DF analysis came out and said the PS4 version ran at 1080p and the XB1 version ran at 900p, I'd probably have liver damage by now.

If the Switch 2 really can match/exceed PS4 performance in handheld mode, then hitting native 1080p in last gen ports shouldn't be too much trouble.
 
So in your (now revised, it sounds like) thinking, do you think Switch 2 will also see similar behavior? (Slightly better FPS for SR+RR over SR alone.)
Not enough data. Right now we have benchmarks for one game, in a mode that is so intense it's only really viable on top tier GPUs. That game is built in a custom engine with a custom ray tracing engine. What we don't know:

  • How well RR scales with GPU performance.
    • We need a wide range of GPUs tested to see what that looks like, and right now pathtracing is so intense there is no viable way to get good data out of a midrange card
  • What RR scales with - is it input resolution? Output resolution? Number of rays cast?
    • We need a lot of game modes tested to find this out, and we just have the one in Cyberpunk.
    • Alternatively, Nvidia could just tell us, but they haven't yet (though they've hinted it's output resolution)
  • How do other denoisers perform?
    • Cyberpunk is an in-house engine, and CDPR have already said they're retiring it.
    • Hard to know how well CDPR's denoisers compare to UE5's, or RE Engine's, or Northlight's, or whatever custom solution Nintendo may or may not be working on

Imagine this: let's say RR scales with output resolution, meaning the higher the resolution that actually makes it to your TV, the slower RR performs. But Bob's Best Engine(tm) has a denoiser that scales with the number of rays cast - meaning the more detailed you want your RT effects to be, the harder the denoiser has to work.

Bob has two sliders - one for output resolution, and one for number of rays cast - and a toggle for which denoiser to use. As Bob tries to dial in a good balance of frame rate and quality visuals, adjusting those two sliders even a tiny bit might wildly change which denoiser is faster.

In the real world, Bob doesn't have a denoiser - he's got like five. It's standard to have a different denoiser for each kind of RT effect you have. So it may be that RR basically always wins if you have every RT effect on, but if you only use a few, RR always loses.
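To make Bob's dilemma concrete, here's a minimal sketch with made-up cost models - the scaling constants are hypothetical, not measured RR or CDPR numbers - showing how nudging either slider can flip which denoiser wins:

```python
# Toy cost models for Bob's two denoisers (all constants are hypothetical).

def rr_cost_ms(output_pixels: int) -> float:
    # Pretend RR's cost scales with output resolution.
    return output_pixels * 2.0e-6

def engine_denoiser_cost_ms(rays_per_pixel: float, output_pixels: int) -> float:
    # Pretend the in-engine denoiser's cost scales with rays cast.
    return rays_per_pixel * output_pixels * 1.2e-6

def pick_denoiser(width: int, height: int, rays_per_pixel: float) -> str:
    pixels = width * height
    rr = rr_cost_ms(pixels)
    custom = engine_denoiser_cost_ms(rays_per_pixel, pixels)
    return "RR" if rr < custom else "custom"

print(pick_denoiser(1920, 1080, 1.0))  # "custom" wins at low ray counts
print(pick_denoiser(1920, 1080, 4.0))  # "RR" wins once rays dominate
```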

This is why I'm waiting for RR in Fortnite. Lumen is an RT engine that ought to scale really well down to Switch 2 level hardware, and is likely to be the basis of the majority of RT titles in the next gen. Lumen actually changes its denoiser and RT solver on the fly to keep quality and performance high. If RR performs well on a mid-to-low range card in Fortnite, then we're almost definitely in business. If not, then RR may be as rare as FSR2 on Switch (i.e., technically possible, but only one game uses it and it's heavily customized).

Can you expand on what you mean by "CPU component", which one?
Benchmarkers are coming back with varying results despite the fact they're basically all on a 4090 with identical game settings. There could be a lot of reasons for this, but the two most likely are 1) the benchmarks aren't identical or 2) the rest of their gaming rig isn't the same. Since Cyberpunk includes a benchmark, the most likely answer is 2.

Either RR or Cyberpunk's existing denoisers (or both) have some CPU component, and all these YouTubers with different CPUs in their testing rigs are getting slightly different results because of that.
 
Might have no way of knowing, but I wonder if the PS4/Xbox One was referring to handheld performance? And higher (like PS4 Pro/XBS - not XBX btw) for docked performance? They might very well be using the pessimistic performance number (so undocked).

At least the speculated undocked performance would put the Switch 2 roughly on the same level as the PS4/Xbox One.

Someone else probably answered it for you, but when you consider that handheld performance is the floor of the capabilities of the hardware itself, that's extremely helpful for devs.

Similar to the Series S vs Series X situation, developers need to be able to design the game to run on the lowest possible profile, and scale it from there.

Handheld is the floor, while docked is the ceiling.

And I’m willing to bet with Drake, PS4/Xbone performance is the floor here, but with additional architectural improvements, plus optimizations. Not to mention they’ll have low-level API access in the form of NVN2, whose predecessor API made many “impossible” ports possible on the Switch.
 
That iPhone game still looks hollow and washed out, and has lower polygon counts, as phone games tend to, while being sharper than the Switch version. It’s sharper, but the fidelity doesn’t look right. This is quite old, but this is what I mean:
[image: AndroidIOSSwitch.png]


My expectation for handheld?:
[image: 5935ji4k0p8a1.png]

[image: ks28reakyo8a1.png]
If Nintendo can get Red Dead Redemption II, Elden Ring and Cyberpunk 2077 2.0 all out in the first six months of Switch 2’s life, they will help immensely to sell the console as “with the times” to more core gamers, even though two of those games were last gen and, well, one tried to be last gen and completely failed lol.

DLSS Performance/60fps or DLSS Quality/30fps options would also be delicious.

I wonder if certain Switch ports will lead to DLSS being added to the PC version too, like Elden Ring for instance. Would be a good bonus for PC players!

@ILikeFeet

I’m not sure. It could well be Unity as it’s an indie game. It doesn’t happen every run but usually if I’ve gotten a burn effect or there’s just far too many enemies on screen. Memory bandwidth maybe?
 
@ILikeFeet

I’m not sure. It could well be Unity as it’s an indie game. It doesn’t happen every run but usually if I’ve gotten a burn effect or there’s just far too many enemies on screen. Memory bandwidth maybe?
looking it up, the original version was made in Godot while the mobile port used Unity. it's possible the console version also uses Unity, as Godot can't have closed-source code in it. so maybe the porting process was rough on the switch version
 

We are seeking a Sr Data Scientist to assist with the development of deep learning neural networks including, but not limited to, audio enhancement and computer vision. The role focuses on iterating over the training, quantization, and evaluation of neural networks implemented in PyTorch and/or TensorFlow.
 
I know nothing about the Switch 2 outside of what was leaked and what the generous individuals who explain how things work have shared, but I am curious about something.

Hidden content is only available for registered users. Sharing it outside of Famiboards is subject to moderation.
 
Computer vision? Could this be for biometric input for passkey usage, which Nintendo recently introduced?

Passkeys require biometric authentication on a second device, not the one you're signing in on. So if you use passkeys with the Switch, you would use biometric authentication on your phone (or hypothetically even another device like a laptop with a fingerprint reader).
 
That's a bingo.

There's also another reason: the bottom line. Samsung 8N would have been more expensive to go with once you look at the effective cost per chip rather than just the per-wafer cost. A major point that I think ItWasMeantToBe19 is conveniently glossing over.

On a per-wafer basis, TSMC 4N is more expensive, but it's denser, which gives TSMC 4N "more bang for the buck". And it is an option that was available to Nintendo/nvidia at the time.

nvidia products are at present predominantly made on either SEC8N or TSMC 4N. Older ones are made on SEC8N, newer ones on TSMC 4N.

I think some people aren't understanding the "denser" part and what that actually means.

Would it be simpler to say that getting more completed chips out of every TSMC 4N wafer is a big part of it?

The reason being the TSMC chip is going to be half the size of the Samsung chip, which means you can fit approximately 2x as many chips on each wafer at TSMC. So even if they charge you 2x as much, you could in theory be getting 2x as many chips per wafer. I think some people don't understand that part.

If you need 200 hamburgers for a catering event and one baker charges $40 per tray of burgers and the second guy charges $80 per tray ... well clearly the $40 guy is the better deal ... until you find out he can only bake half the number of burgers per tray. So you're really not saving any money at all.

And then you have the yield issue on top of that where a larger chip (Samsung) has more defect problems.
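A back-of-the-envelope way to see how die size, wafer price, and yield interact - using the standard dies-per-wafer approximation and a simple Poisson yield model, with all numbers below made up purely for illustration:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    # Common approximation for how many whole dies fit on a circular wafer.
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2) / die_area_mm2 - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

def yield_rate(die_area_mm2: float, defects_per_mm2: float) -> float:
    # Simple Poisson yield model: bigger dies collect more defects.
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(wafer_cost: float, die_area_mm2: float, defects_per_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2, defects_per_mm2)
    return wafer_cost / good_dies

# Hypothetical: a 200 mm^2 die on the older node vs a 100 mm^2 die on the
# newer node, with the newer wafer costing twice as much.
print(round(cost_per_good_die(5000, 200, 0.001), 2))   # older node, ~19.94 per good die
print(round(cost_per_good_die(10000, 100, 0.001), 2))  # newer node, ~17.26 per good die
```

Even at double the wafer price, the smaller die can come out cheaper per good chip, which is the whole "denser node" argument in one number.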
 
Listen, console generations are gonna get longer, as long as sales permit it. If a current gen console, even with a refresh, is lacking, the console maker is gonna release the successor sooner rather than later. And it’s not an argument for immediately releasing it, but rather for allocating resources to hurry it up a bit. Companies do forecasts. If current gen sales are slipping and will slip harder, the console maker will adapt.

That said, Nintendo can use the Switch as a roadmap for their next gen successor:
• Release first SKU;
>> Two years later
• Release a cheaper, limited SKU (Lite 2);
>> Two years later
• Release a premium SKU at a ~$50 higher price that serves as a refresh to revitalize sales (OLED/Micro LED);
>> Three years later
• Release successor console.

If this lifecycle ends up working with the Switch 2, Nintendo has a recipe for success for life
 
Passkeys require biometric authentication on a second device, not the one you're signing in on. So if you use passkeys with the Switch, you would use biometric authentication on your phone (or hypothetically even another device like a laptop with a fingerprint reader).
I use passkeys currently. Using a 2nd device to authenticate is an option in some cases (e.g., if I'm using a public computer, I can opt to authenticate via my iPhone).

For the most part, I use biometric input on the actual device I'm signing in on (e.g., Touch ID on my laptop for logins via Chrome, and Face ID for logins via the browser on my iPhone directly).

I guess I was thinking of a camera somewhere on Switch 2, used for biometric input and for logging in to the eShop, that kind of thing. That was something I suggested, but it was then pointed out to me that this forum has seen something that makes us think there's not going to be a camera on Switch 2 (my understanding). Also, the job posting says "audio enhancement", so I think it's unlikely to be anything to do with passkeys.
 
RingFit 2 using computer vision to judge your form is basically my dream.

(This would require a camera, either bundled with the Switch 2 or the ability to use a phone's camera, which I don't know is viable)
1. It would be viable to use your phone camera, yes.

2. Nintendo Switch Joy-Con (R) has a camera Ring Fit uses to get your pulse.
 
1. It would be viable to use your phone camera, yes.

2. Nintendo Switch Joy-Con (R) has a camera Ring Fit uses to get your pulse.

I'm extremely doubtful these cameras are good enough to get the visual data required to judge push-up form.

You need actually good cameras. The cheap method would be having Ring Fit use visual data recorded by a phone, transmitted to the Switch 2, and then analyzed by its machine learning algorithms. That seems technically very, very hard, and I'm not sure it's been done before.
 
We've talked about this particular subject a lot over the past few months, but I think this video hits the nail on the head in terms of where Nintendo is at and when we can potentially expect Redrakted NG to release.



Edit: My stupid ass forgot to post the link 🫠

Edit 2: Typo 🥴
 
Anyway, if Nintendo has good computer vision stuff for RingFit 2, I will be extremely excited.

Advanced computer vision algorithms to do foveated rendering in handheld mode would be much more useful for Nintendo's future though. It's never been done, but... It seems like maybe it could be done.

It's currently in VR and it's a game changer in terms of boosting performance. Obviously the eyes are further away and less constrained with a handheld... We'll see if it can happen though.
 
We've talked about this particular subject a lot over the past few months, but I think this video hits the nail on the head in terms of where Nintendo is at and when we can potentially expect Redrakted NG to release.
What video
 
I hope drake can match PS4 resolutions in handheld on gen 8 games
So 900p? That was the case for a lot of UE4 games on base ps4
Worried it will run into limitations when it comes to quicker ports from those platforms that don’t necessarily utilize ampere’s feature set
I'd argue that 1080p was far more common on the PS4 than 900p, at least for games targeting 30fps (and that does include most UE4 titles). 900p was far, far more common on the XB1.

Resolutions on the pro consoles were pretty all over the place but the base consoles were weirdly consistent throughout the gen, with games running at other resolutions being in the minority. If I took a shot every time a DF analysis came out and said the PS4 version ran at 1080p and the XB1 version ran at 900p, I'd probably have liver damage by now.

If the Switch 2 really can match/exceed PS4 performance in handheld mode, then hitting native 1080p in last gen ports shouldn't be too much trouble.
There are 900p games on PS4, but almost all of them (Watch Dogs being the only exception I am aware of) are late cross-gen titles.

Will handheld be able to do PS4 resolutions consistently? I think @AshiodyneFX is probably right. I will spare you the OldPuck Spreadsheets on this one, but basically I have a range for my expectations of performance, and the bottom of that range is "Almost every PS4 game will need a little DLSS to get to 1080p in handheld" and the tippy-top of that expectation is "Almost zero PS4 games will need DLSS to get to 1080p".

This is one of the reasons I argued for a 720p screen for so long. I've become more chill about it, but if it's a real concern for you, consider that there are only six games on this list that are cross platform and don't have DLSS support, and only three of those don't have Switch and/or 360 versions already. I'm not too worried that we'll get a bunch of trash PS4 ports when the optimizations to support Switch NG are the same things that cross-platform games need to support PC well.
 
It's also the first version of a new technique, so I'd imagine it will improve quite a lot over subsequent versions, much like DLSS 2.0 did. One thing I've noticed is that it does a very good job handling how light and shadow respond to moving objects or moving lights, but falls over a bit on the moving objects themselves, with occasionally noticeable trails, or the phantasmal objects shifting in and out of existence in Alex's video. I feel like they really focussed on the former when creating and training the model, because it was one of the main problems DLSS RR was designed to solve, but may have taken the latter for granted a bit.
I somehow missed this post the first time through the thread, and it is a better answer than mine to how the CPU could affect the RR numbers we have, and a really good post on RR generally.

This particular paragraph pulls out both what's cool about RR and where its (current) limitations lie. RT is really good at making quick moving lights look accurate. RT is also really good at making indirect lighting - like bounce light illuminating an area under a bridge - look good. But the denoisers want to do the exact opposite thing in these two cases. For a moving light, you want to see the change quickly without ghosting, so you should err on the side of throwing out older rays instead of keeping them, even if it makes the image fuzzier. For indirect illumination, there isn't a way to cast enough rays to figure out how to light an area correctly in just one frame; it'll be too dark. So you should default to keeping the data from old rays around, even if it means a little ghosting.

One of RR's magic tricks is that instead of using separate denoisers for each of these situations, you just feed a lot of info into the AI model and it "does the right thing" in each case. Where it falls down is when it guesses wrong. RR's devs said they biased hard toward temporal stability, because fizzing and popping is really distracting, but I think it resulted in a lot of ghosting. It should be noted that DLSS 2.0 Super Resolution was exactly the same at launch, and rapidly improved. So there is room to grow here.
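The keep-old-rays vs trust-new-rays tradeoff is basically a temporal accumulation blend. Here's a toy sketch of that idea - a generic exponential moving average, not CDPR's or Nvidia's actual denoiser - where the blend factor decides between responsiveness (noisy) and stability (ghosting):

```python
# Toy temporal accumulation: blend this frame's noisy ray result with history.

def accumulate(history: float, new_sample: float, alpha: float) -> float:
    # alpha near 1.0: trust new rays -> reacts fast to moving lights, but noisy.
    # alpha near 0.0: keep old rays  -> smooth indirect lighting, but ghosting.
    return alpha * new_sample + (1.0 - alpha) * history

pixel = 0.0  # starts dark; the true radiance here is ~1.0
for frame_sample in [1.0, 0.9, 1.1, 1.0]:  # noisy per-frame estimates
    pixel = accumulate(pixel, frame_sample, alpha=0.2)
    print(round(pixel, 3))  # 0.2, 0.34, 0.492, 0.594 - converges slowly but smoothly
```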
 
Computer vision you say? AR's back on the menu, boys!
Not quite up to Thraktor standards … here let me help

Ah, the reinvigorated realm of computer vision, a domain ceaselessly evolving at the intersection of artificial intelligence and visual perception. With advancements in deep learning architectures and the proliferation of high-resolution imaging sensors, the stage is set for a resurgence of Augmented Reality (AR) applications. This resurgence is underscored by the paradigm-shifting potential offered by the integration of computer vision algorithms, which empower AR systems to discern, interpret, and interact with the real-world environment in an unprecedented manner. As the precision and accuracy of object recognition, semantic segmentation, and pose estimation algorithms continue their exponential ascent, the resurgence of AR beckons with renewed vigor. It is a testament to the symbiotic relationship between computer vision and augmented reality, wherein the former furnishes the latter with a robust foundation of perceptual acumen, enabling a seamless fusion of the digital and physical realms. Thus, one might exclaim, "Computer vision, you say? AR's back on the menu, boys!" with an air of anticipation and exhilaration for the untapped vistas this convergence promises to unlock.
 
There are 900p games on PS4, but almost all of them (Watch Dogs being the only exception I am aware of) are late cross-gen titles.
A lot of early/mid gen big UE4 PS4 titles (especially JP ones) were 900p or a bit lower like DQXI (not S), Tekken 7 and KH3.
 
With regards to DLSS from really low resolutions, having tried multiple PC games on multiple monitors including a large screen TV ... my impression is even 720p-to-4K (Ultra Performance mode) is going to be good enough for most normal gamers.

Is it as good as native 4K? No. If you have a 150-200 watt 20, 30, or 40 series should you use Quality DLSS or native instead? Sure.

But we are talking about a 5-15 watt piece of hardware here. Instead of looking at 4K native in comparison, look at how much shittier 720p or 540p native looks. Then understand even Ultra Performance DLSS looks waaaaaay better. It looks way closer to 4K than it does to a 720p image.

And that IMO is going to be good enough for most people. Do some power lines shimmer in Death Stranding at ultra low resolution like 480p ... yeah but even these kinds of issues can be improved as the algorithm gets better and really even with that, the image quality is still way more pleasant compared to like Bayonetta 3 on the Switch which will make your eyes bleed because the image quality is so bad.

It's honestly so good as is that, for a demanding port, I don't even know why a developer would bother using an internal resolution higher than 540-720p undocked.
 
Computer vision you say? AR's back on the menu, boys!
more interesting is audio enhancement. maybe nintendo is experimenting with low resolution audio for smaller file sizes

Audio enhancement? Could they use the Tensor Cores for something like 3D Audio or maybe compression stuff like that Meta codec?
audio super resolution


[image: spectrogram.png]


* Hidden text: cannot be quoted. *
the problem here is time for development. moving nodes is not a quick and easy process. the TX1 moving to 12nm was simpler because 12nm is a further refinement of the original 20nm. here we're talking about moving to a completely different company and IP. that's pretty much the same as starting from zero
 
If I may add a new point to this current discussion. I think a lot of people are obsessing with Nintendo going for something that would save them money because "they never sell their consoles at a loss."

This is simply untrue. The Wii U initially sold at a loss and it was widely publicized, so I don't get where the amnesia of that precedent being set is coming from. It made more business sense prior to the Switch for them to sell their systems at a profit because they weren't backed by large conglomerates like PlayStation and Xbox were, but the Switch has been their most successful platform ever and they're richer now than they've ever been. They also have a more diverse revenue stream than they did 10 years ago (Switch Online subscriptions, mobile game transactions, other endeavors such as their partnership with Universal) and that's important to keep in mind as well.

Sony and Microsoft sell their consoles at a loss at launch because they're expecting software sales and their online subscription to pick up the slack while they wait for manufacturing costs to drop below MSRP. Nintendo, now more than ever, can comfortably afford to do this (obviously within reason, I'm not insinuating they'd be okay with taking a sizable loss on consoles at launch) and so I don't think it's unrealistic to expect them to go against their instinct and pay a little more for something more premium.
I don't think anyone is forgetting this. Rather, it's because we remember this that a lot of us think that Nintendo wouldn't be willing to make such a gamble again, because doing it with the Wii U completely blew up in their face. Yeah, the Switch is in a much healthier place than the Wii was, which combined with the current software lineup should make for a far better transition to the NG. But I think it's fair for some of us to question if Nintendo would take the risk, even if it would turn out much better this time.

I agree that it is a possibility, but like you said, it's one that they most likely won't enjoy taking, and would probably be a last resort if they can't get into their desired price range for the device otherwise.
 
Power consumption is a non-concern. I've been using HDR displays on mobile phones for years, as have many people, and not had any trouble with battery life.

People need to remember High Dynamic Range is just that: a RANGE. 400+nit peaks will be small sections of the screen for short moments of time, in most applications.

The important thing with HDR isn't peak brightness anyway. It's COLOUR DEPTH.

Plus if you find those bright peaks are causing you conniptions? Turn the screen brightness down. You don't need to sacrifice colour depth to do that.
It’s true that HDR isn’t just about peak brightness, but it’s not directly about color gamut either, which is a different but related idea. HDR is about the ratio of brightest parts of the image to the darkest parts.

Perceptually, human vision is logarithmic. The dynamic range of an image is measured in stops, which is the log2 of the ratio between the intensity of the brightest part and the darkest part. Human vision is estimated to have around 14 stops of dynamic range. An SDR display typically has around 6 stops. But, for example, an HDR display with a peak brightness of 2000 nits and a black level of 0.01 nits would have log2(2000/0.01) = 17.6 stops of dynamic range. That’s what really defines HDR.
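As a quick sanity check of that arithmetic (the SDR panel numbers below are just illustrative):

```python
import math

def stops(peak_nits: float, black_nits: float) -> float:
    # Dynamic range in stops = log2 of the contrast ratio.
    return math.log2(peak_nits / black_nits)

print(round(stops(2000, 0.01), 1))  # 17.6 - the HDR example above
print(round(stops(250, 3.9), 1))    # ~6.0 - roughly the SDR figure quoted above
```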

When you’re encoding a digital signal, you quantize the intensities to integer values, like 8-bit. When the quantization is poor, you perceive the individual steps as banding. One naive way to avoid banding is to keep increasing the bit depth, going to 10-bit or 12-bit signals, but that’s bandwidth intensive. With HDR, the problem is even worse, because the range over which you need to avoid banding is much larger than SDR.

Here’s the trick: since perceptual vision is logarithmic, people have realized that it doesn’t make sense to use a linear quantization. What we need is for quantization points to be closer together in the darker parts of the image than in the brighter parts.

That’s where the transfer function comes in. A transfer function nonlinearly maps the camera signal into a space where quantization is well-distributed over the whole range (i.e., it can allocate more of the quantized values to darker parts of the image). Then, the display can apply the inverse transfer function to show the correct image.

The standard transfer function for SDR was gamma, a power law function. Gamma works well over a limited range, but for an HDR display, it’s not optimal, even if you allocate 10 bits. Instead, the PQ and HLG transfer functions are used for HDR. They are two different approaches:
  • PQ directly specifies the display brightness. However, the raw signal often needs to be tone-mapped by the display; for example, if the display can’t reach the specified peak brightness, then you want a nice roll-off in the upper part of the range. PQ is used with static (HDR10) or dynamic (HDR10+, Dolby Vision) metadata which specify how to do the tone-mapping.
  • HLG is hybrid log-gamma. Since human vision is perceptually logarithmic, log2 would be the intuitive ideal transfer function. The problem is that log isn’t well behaved near 0. HLG uses gamma near 0 and log over the rest of its range to solve this problem. Unlike PQ, it’s display independent; it doesn’t directly specify how bright the display should be.
So to recap, HDR is purely about the ratio of brightness to darkness that a display can show. It’s not directly about the color gamut or the bit depth; however, it does enable a wider color gamut and it requires a higher bit depth to avoid banding. Choosing a good transfer function helps avoid banding without requiring a large bit depth.
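For the curious, here's a minimal sketch of the PQ inverse EOTF described above (the constants are the published SMPTE ST 2084 values; the sample luminance points are just illustrative), showing how the nonlinear curve spends most of its 10-bit code values on the darker end of the range:

```python
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance -> 0..1 signal value.
M1 = 2610 / 16384          # 0.1593...
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0   # normalize to the 10,000-nit PQ range
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

for nits in (0.01, 1, 100, 1000, 10000):
    code = round(pq_encode(nits) * 1023)   # quantize to 10 bits
    print(f"{nits:>8} nits -> 10-bit code {code}")
```

Note how 100 nits (roughly SDR reference white) already lands around code 520 of 1023: about half the code values cover the bottom 1% of the luminance range, which is exactly the "more quantization points in the dark parts" idea.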
 
This is gonna sound like unrelated shitpost, but:

The Good Burger is getting a sequel

More proof that companies are moving towards expanding existing properties, rather than taking risks on new ones.

What does this have to do with Switch 2?

Just that: the successor console is gonna be called the Switch 2
 
Fami didn't need to tape out a whole SoC to guesstimate that those specs on SEC8N would be overly power hungry, so why would Nvidia/Nintendo? It makes a whole lot more sense to me that they were initially thinking of 8N, which is what their initial information would show because they were testing it out. But they would have cancelled that (hence a rumored cancelled SoC) when their much more in-depth pre-testing showed them exactly what Fami thinks, and switched over to TSMC 4N to fully work towards a tapeout. Is this not a possibility?
 
I'd argue that 1080p was far more common on the PS4 than 900p, at least for games targeting 30fps (and that does include most UE4 titles). 900p was far, far more common on the XB1.

Resolutions on the pro consoles were pretty all over the place but the base consoles were weirdly consistent throughout the gen, with games running at other resolutions being in the minority. If I took a shot every time a DF analysis came out and said the PS4 version ran at 1080p and the XB1 version ran at 900p, I'd probably have liver damage by now.

If the Switch 2 really can match/exceed PS4 performance in handheld mode, then hitting native 1080p in last gen ports shouldn't be too much trouble.
I imagine we'll get a lot of native 1440p games for PS4 ports instead of 4K like PS4 Pro (checkerboard rendering), because we won't have the raw speed and mixed precision like Polaris and Maxwell to help ... but upscaled to 4K with DLSS anyway.
There are 900p games on PS4, but almost all of them (Watch Dogs being the only exception I am aware of) are late cross-gen titles.

Will handheld be able to do PS4 resolutions consistently? I think @AshiodyneFX is probably right. I will spare you the OldPuck Spreadsheets on this one, but basically I have a range for my expectations of performance, and the bottom of that range is "Almost every PS4 game will need a little DLSS to get to 1080p in handheld" and the tippy-top of that expectation is "Almost zero PS4 games will need DLSS to get to 1080p".

This is one of the reasons I argued for a 720p screen for so long. I've become more chill about it, but if it's a real concern for you, consider that there are only six games on this list that are cross platform and don't have DLSS support, and only three of those don't have Switch and/or 360 versions already. I'm not too worried that we'll get a bunch of trash PS4 ports when the optimizations to support Switch NG are the same things that cross-platform games need to support PC well.
This got me thinking about RE Village on iPhone 15 Pro/Pro Max (the video that was posted yesterday). Even if handheld mode on Switch 2 could run PS4 ports at 1080p natively without any compromise, I imagine a lot of devs might just render at 720p internally and scale it to 1080p with DLSS to help with power draw and battery life, while adding extra fidelity every now and then.

I don't mind a 720p screen too much (it won't kill me), but considering 1080p is more future-proof and also complements non-gaming stuff like streaming video, coupled with the fact that DLSS exists to upscale to 1080p anyway, I think the benefits of a 1080p screen outweigh the negatives.
 
more interesting is audio enhancement. maybe nintendo is experimenting with low resolution audio for smaller file sizes


audio super resolution


[image: spectrogram.png]



the problem here is time for development. moving nodes is not a quick and easy process. the TX1 moving to 12nm was simpler because 12nm is a further refinement of the original 20nm. here we're talking about moving to a completely different company and IP. that's pretty much the same as starting from zero

Isn't this just compression?

Oh, it is.


This is good, but it's way too hardware intensive currently to be useful.

These new near-lossless compression techniques are interesting, but just not close to console viable now, or maybe even for a while. This seems more like PS7 stuff, for when decompression is so cheap (due to processors being more powerful) that they can afford to use it.
 
I was team 720p for the screen because I think 1080p with PS4-ish power will mean a lot of games will be 1080/30, while with 720p the extra power could be used to run games at 60fps (DLSS in both cases).
If it were a pure handheld, 720p would be best, but Nintendo wants something close to visual parity for both modes outside of resolution. 720p vs 4K (9x the resolution) might have been too much.
 
I was team 720p for the screen because I think 1080p with PS4-ish power will mean a lot of games will be 1080/30, while with 720p the extra power could be used to run games at 60fps (DLSS in both cases).
1080p after DLSS produces comparable or better image quality than native 720p with lower SoC power consumption, while targeting 720p after DLSS gives generally unsatisfactory image quality. When DLSS is in the picture, 1080p is a pretty good sweet spot balancing battery life and image quality.
 


This Death Stranding video is a good example of DLSS at low resolution. It's cool, but it can't go as far as you want on low resolution.

The best thing to do for most games is probably:

540p internal DLSS'd to 1080p in handheld mode
810p internal DLSS'd to 4K in docked mode

Which works well.

DLSS 360p to 720p doesn't work as well as DLSS 540p to 1080p and DLSS 540p to 4K is not amazing.

So a 1080p handheld screen makes sense.

A 7.91 inch screen would also only have a pixel density of 185.66 PPI at 720p, whereas it's 278.5 PPI at 1080p. Close to 300 is pretty much what you ideally want. The Switch base model was 237, the OLED was about 210.
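Quick check of those pixels-per-inch figures (PPI = diagonal pixel count divided by the diagonal size in inches; the 7.91" size is the rumored figure from the post above, not a confirmed spec):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    # Pixel density = pixel diagonal length / physical diagonal length.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1280, 720, 7.91), 2))   # ~185.66
print(round(ppi(1920, 1080, 7.91), 2))  # ~278.5
print(round(ppi(1280, 720, 6.2), 0))    # ~237 (original Switch)
print(round(ppi(1280, 720, 7.0), 0))    # ~210 (Switch OLED)
```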
 


This Death Stranding video is a good example of DLSS at low resolution. It's cool, but it can't go as far as you want on low resolution.

The best thing to do for most games is probably:

540p internal DLSS'd to 1080p in handheld mode
810p internal DLSS'd to 4K in docked mode

Which works well.

DLSS 360p to 720p doesn't work as well as DLSS 540p to 1080p and DLSS 540p to 4K is not amazing.

So a 1080p handheld screen makes sense.

A 7.91 inch screen would also only have a pixel density of 185.66 PPI at 720p, whereas it's 278.5 PPI at 1080p. Close to 300 is pretty much what you ideally want. The Switch base model was 237, the OLED was about 210.

It would be interesting to see an updated version of this video, as it's already 2 years old. DLSS has come quite a way since then, but has it improved on the lower resolutions?
 
If it were a pure handheld, 720p would be best, but Nintendo wants something close to visual parity for both modes outside of resolution. 720p vs 4K (9x the resolution) might have been too much.
I still don't think 4K output will be common enough to matter. 1440p output might be the de facto standard after upscaling. 1800p might come second.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.