
StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Changing of console branding on packaging is only indicative of Nintendo realising the 3DS and Wii U are dead and buried.
It's an assumption though. There's no reason to change the branding even if the 3DS and Wii U are dead and buried; newly bought amiibos aren't going to suddenly stop working for the 3DS and Wii U systems still in operation out there.
 
I'm still unclear on what the benefits of mesh shaders are (in CPU/GPU performance or in detail), but they do seem to be real benefits.
According to Capcom: mesh data compression of 40% or higher and finer-grained geometry culling. Other benefits are hardware geometry generation for primitives (like grass) and LOD blending.
 
It's an assumption though. There's no reason to change the branding even if the 3DS and Wii U are dead and buried; newly bought amiibos aren't going to suddenly stop working for the 3DS and Wii U systems still in operation out there.
The 3DS and Wii U being dead is a reason to change the branding. They don't need to specify that amiibos work on Wii U on the back of the box until the end of time.
 
Because Nintendo and the Switch are now synonymous. When there's only one relevant platform, nobody is going to question whether the amiibo works for it.
Still doesn't explain the reasoning for why they need to make the change NOW. This isn't something corporations/marketing departments decide on a whim: "You know what, I ain't feeling it... let's just blank out the back of amiibo packages, Switch and Nintendo are synonymous!"

There's a better explanation: futureproofing. By removing console-specific branding, they don't have to change the back of amiibo packages again once Switch 2 is released, avoiding a repeat of the situation that happened with the Switch and amiibo between March 2017 and June 2017.

No, it's not suggestive of Switch 2 being announced soon or anything (it could be months away), in case anyone wants to use that type of retort with me. It's an observation, and we can all make of it what we will.
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples :)
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples :)

540p for handheld.
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

540p would be my guess strictly for handheld.
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples :)
360p on handheld, 540p in docked mode

because we've got games that low on Switch now with worse TAA solutions (like Xenoblade 3)
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples :)
I'd say 360p is the minimum at which DLSS can still present a coherent image without glaring IQ issues. DLSS Ultra Performance from 360p to 1080p for handheld might be a possibility for the heaviest games. Here are some examples:

From @Alovon11: DLSS 3.5 (video embed); DLSS 2.1 (video embed)

From ResetEra user Enkidu: (video embeds)

General DLSS Testing (old video embed)
 
360p on handheld, 540p in docked mode

because we've got games that low on Switch now with worse TAA solutions (like Xenoblade 3)
OK thanks! So here's a question, should every game that can hit 540p in docked be able to hit 360p in handheld?

If no, then does that mean some games theoretically could be too demanding for handheld mode but playable in docked, and therefore we could potentially see some games only support docked mode?
 
Why are we talking about 540p in 2023
We're talking handheld/handheld mode right?

540p on an 8-inch or smaller screen looks decent to me. The question was what is the lowest resolution one would consider "decent" on a handheld. The answers are going to be subjective. I don't think I'm going to care too much about any differences between 540p and 720p on a smaller-than-8-inch screen, if I'm even able to tell them apart.
 
The numbers are not universally agreed upon and just broadly don't work super well. It's best to just be explicit about which systems you're referring to.

I would say yes and no.

It is widely understood (to the point of near-universal acceptance) that Nintendo's entry into the home console market in 1983, literally right after the infamous gaming crash, marked the beginning of the 3rd Generation of video game systems.

I do agree that saying which platforms were part of said generation is useful to the average user. Also, in fairness, since Sony is the only one consistent on this, PlayStation consoles are the easiest means of conveying which generation of consoles people are referring to.

Since I brought up Sony, PSX came out during the 5th Generation of consoles, and we are currently in the 9th Generation with the PS5.

Again, I agree it's easiest to refer to the individual consoles when talking about the different generations, but that doesn't mean we shouldn't also include which numerical generation we're referring to.
 
I'd say 360p is the minimum at which DLSS can still present a coherent image without glaring IQ issues. DLSS Ultra Performance from 360p to 1080p for handheld might be a possibility for the heaviest games. Here are some examples:

From @Alovon11: (video embeds)

From ResetEra user Enkidu: (video embeds)

General DLSS Testing (old video embed)

Excellent, thanks! These videos are really useful. It's pretty mind-blowing how solid these output images look based on a 360p input!

So yeah, the reason I ask is because I'm wondering whether it's possible that a game could run docked at, let's say, 540p, which will result in a good image for a TV, but that same game can't run at the minimum 360p in handheld? Is that scenario possible, and would that then mean that we could see docked-only games? 🤔
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?
This is just pure preference + game specifics. You take a PS2 game, leave it at its native SD resolution and throw DLSS ultra performance at it and it will look fantastic. Do the same thing for a AAA downport, and people will say it looks like hot garbage.

It's also a question of "what settings is the game running." 60fps is going to look a lot better than 30fps, and not just because of smoothness, but because it increases the amount of information that DLSS has to work with.

I've seen Death Stranding running at 30fps, upscaled to 1080p from 360p, and it looked fine-if-not-great on a 14 inch screen at laptop viewing distance. With the same settings, I imagine it looking even better at 8 inches in size, to hide some of the upscaling artifacts.

I've seen Control at 720p upscaled from a mere 240p, both at 60fps, and it looked tolerable, but Control was running on max settings including RT. The opening cutscene looked positively gorgeous, but then Jesse came on and her hair looked like a mess; again, though, I can imagine a lot of that vanishing at 8 inches of screen size. Though the sample-and-hold behavior of an LCD is a bit of a double-edged sword there.

So yeah, 240p is at least functional, at least for me, on some games, with otherwise high settings and 60fps. If you can stand that many grains of salt.

NB: I've not actually played through either of these games, though I really want to set aside time for Control - this was purely playing through the opening for testing purposes.
 
Why are we talking about 540p in 2023
We're talking about games being rendered internally at 360p, 540p, 720p, etc. and then being upsampled (DLSS) to 1080p, 1440p, 4K, etc.

And that's just the reality of things. There are games on PS5 that are being rendered at 540p, 720p, 867p, etc. and then upsampled. Switch 2 will be a far cry from PS5 horsepower, as it's way weaker. Thus, games that run on PS5 will need to be rendered at lower resolutions on Switch 2, but DLSS will save the day for the Nintendo machine.

Heavy games running at 360p, 540p, 720p, etc on Switch 2 will be common. Switch 2 is still a tablet portable console, heavily constrained by the necessity of running on battery.

T239 as a tablet GPU/SoC is insanely strong and a generational leap for Nintendo. But compared to PS5 or PC GPUs, it's a severely weak chip that would have trouble running most modern games at decent framerates or resolutions, much less with ray tracing. The reason why it will be so strong in Switch 2 is that it will be used to its fullest, on a dedicated platform and with DLSS magic to improve image quality and resolution.

Context matters.
 
About the patent from a few days ago; while it's almost certainly for a product that will not come to market, it could still be somewhat relevant to the Switch 2, if that patent was for an original Switch successor design that was replaced with the current one, like how INDY was replaced with NX. The fact that they were considering not having Joy-Cons anymore means there's a possibility the current Switch 2 might not have them either.
 
It's an assumption though. There's no reason to change the branding even if the 3DS and Wii U are dead and buried; newly bought amiibos aren't going to suddenly stop working for the 3DS and Wii U systems still in operation out there.
This is to future-proof Nintendo's future consoles; in a way, Nintendo is preparing itself in case the Switch successor's successor features amiibo support.
 
About the patent from a few days ago; while it's almost certainly for a product that will not come to market, it could still be somewhat relevant to the Switch 2, if that patent was for an original Switch successor design that was replaced with the current one, like how INDY was replaced with NX. The fact that they were considering not having Joy-Cons anymore means there's a possibility the current Switch 2 might not have them either.
what patent? what did it describe?
 
So yeah, the reason I ask is because I'm wondering whether it's possible that a game could run docked at, let's say, 540p, which will result in a good image for a TV, but that same game can't run at the minimum 360p in handheld? Is that scenario possible, and would that then mean that we could see docked-only games? 🤔
Yes, it's entirely possible. And 540p <-> 360p fits within the performance difference between portable and docked, which will probably be around 2x. Developers will optimize their games so that they can reach their targets, be it performance or resolution. We already have a vast history of games on Switch, even from Nintendo, that exhibit vast differences between portable and docked so as to reach the desired framerate and/or resolution.

As for your second point, yes, we'll see portable-only or docked-only games. That's something that already happens with the current Switch. But unlike what you expect, it won't be due to performance but rather platform characteristics, be they input type, accessories, etc.
Games that don't require a specific platform characteristic will not skip one or the other mode of play. Developers will have a broad range of optimization paths to tune performance for each profile.
 
I'd say 360p is the minimum at which DLSS can still present a coherent image without glaring IQ issues. DLSS Ultra Performance from 360p to 1080p for handheld might be a possibility for the heaviest games. Here are some examples:

From @Alovon11: DLSS 3.5 (video embed); DLSS 2.1 (video embed)

From ResetEra user Enkidu: (video embeds)

General DLSS Testing (old video embed)

Yeah, I feel people really underestimate DLSS, or are extremely particular about how low the internal resolution is without considering how well it can resolve.

Like sure, implementation matters (IMHO Control has a better implementation of DLSS than Cyberpunk, which is why Control on DLSS 2.1 is more stable than Cyberpunk running 3.5, at the cost of sharpness due to the older version of the algorithm)

But this is a console, so NVIDIA and Nintendo are going to have documentation on the best practices for implementing DLSS to ensure the highest degree of Performance/Quality.

The main sticking point I hope will get ironed out is developers not properly managing LOD biases based on output resolution, which is a common issue in some DLSS implementations on PC (FSR too, but still)
 
Foldable Switch? or an Ayaneo Flip?

(image: ayaneo_flip_ds.jpg)

Dual Screen gaming is dead imo, and Nintendo will not revisit it.

But even in the case of a Switch 2 SKU with a folding hinge, I don't see how that'd work with, say, tabletop gaming, or even with removable Joy-Cons.

Which makes me think that if there is a folding hinge built into the system, it would be for a Switch 2 Lite, which wouldn't have the issues I mentioned above.

That could then mean that at launch, Nintendo may provide both the hybrid SKU and a handheld-only SKU with a folding hinge to protect the screen.
 
Question to those who know about how well DLSS works in different configurations:

What's the lowest native resolution a game can be rendered at to then produce a decent looking final image for handheld mode?

Before anyone says "what's a decent looking image", I mean something you think most people would say is playable, enjoyable, and not intrusively blurry. Use your reasonable judgement here 👍

If the answer is: depends on the game and its art style and gameplay style then please give a couple of examples :)

Short answer is that, with a 1080p screen, from my experience 540p should be fine most of the time, and below that is when you start getting more noticeable artefacting. Being blurry isn't necessarily the worst thing, as it's in general going to be a lot less blurry than any other upscaling method, but visual artefacts are going to bother you before the blurriness does.

The long answer is that it depends. On art style, partly, but also on a lot of other things. One simple one is how quickly the camera (and objects within the scene) are moving. DLSS, like FSR 2, XeSS, etc, is a temporal upscaling algorithm, which means it takes data from several previous frames to produce the one you see. If the screen is completely stationary it can continually extract more data to work with for every pixel, as it jitters every pixel by a little bit each frame to take in more detail, and as each pixel in this frame is perfectly aligned with the same pixel in the previous frame, it can use all this data to build a really sharp image. For this reason temporal upscalers always look best when the camera isn't moving.

The more movement in a scene the more temporal upscalers struggle. They use motion vectors (which are used to tell where each pixel should be based on movement from the last frame), which allow them to use data from previous frames, but these motion vectors are only estimates, so sometimes data is lost (or the wrong data is used) if the motion vector points to the wrong pixel. If there's only a small amount of movement between each frame (eg the camera is panning slowly), then it's easier for the temporal upscaler to do its job, but if there's a lot of movement between frames, for example with lots of different objects moving quickly in opposite directions across each other, then it's more difficult, and you're more likely to get artefacts.

DLSS is particularly good at handling these motion cases because it's not just relying on motion vectors. It does take them in, but as it's a machine learning solution it can also do pretty advanced pattern recognition to allow it to more accurately map data from one frame to the next. That said, it's still not magic. If there's a lot of movement in the scene, and it has very little data to work with (eg because it's relying on a 360p input image to reconstruct 1080p) it's going to struggle and you're going to get artefacts.
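
As a rough illustration of the mechanism described above (not DLSS itself, which replaces the hand-tuned blend with a neural network, but the generic jitter-plus-motion-vector accumulation that all temporal upscalers share), here's a minimal Python sketch; the array shapes, nearest-neighbour sampling and blend factor are simplifying assumptions, not taken from any real implementation:

Code:
import numpy as np

def temporal_upscale(curr_lr, motion_vectors, history_hr, jitter, blend=0.1):
    """Toy temporal accumulation step (illustrative only, not DLSS itself).

    curr_lr        : (h, w) current frame rendered at low resolution
    motion_vectors : (H, W, 2) per-pixel motion in output-resolution pixel units
    history_hr     : (H, W) accumulated high-resolution result from previous frames
    jitter         : (2,) sub-pixel camera offset applied this frame
    blend          : how much of the new frame gets mixed in each step
    """
    H, W = history_hr.shape
    h, w = curr_lr.shape
    ys, xs = np.mgrid[0:H, 0:W]

    # 1. Reproject history: for each output pixel, look up where it came from last frame.
    #    A wrong motion vector here is exactly where ghosting/smearing artefacts come from.
    prev_x = np.clip(xs - motion_vectors[..., 0], 0, W - 1).astype(int)
    prev_y = np.clip(ys - motion_vectors[..., 1], 0, H - 1).astype(int)
    reprojected = history_hr[prev_y, prev_x]

    # 2. Upsample the jittered low-res frame to output resolution (nearest neighbour here;
    #    a real upscaler resolves the per-frame jitter into extra sub-pixel detail).
    src_x = np.clip(((xs + jitter[0]) * w / W).astype(int), 0, w - 1)
    src_y = np.clip(((ys + jitter[1]) * h / H).astype(int), 0, h - 1)
    upsampled = curr_lr[src_y, src_x]

    # 3. Blend: keep most of the accumulated history, mix in a little new data each frame.
    return (1.0 - blend) * reprojected + blend * upsampled

The failure mode described above maps onto step 1: when the scene is static the reprojection is trivially correct and the history keeps getting sharper, but when a motion vector points at the wrong pixel, the history being blended in is wrong, and that's when artefacts appear.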

As an example, I played a good bit of Starfield on a Series X recently, which uses FSR 2 from 1440p internal to a 4K output resolution, and on static scenes it looks very good, basically indistinguishable from native 4K in many cases. When things move, though, it starts showing noticeable artefacts, with things like shimmering aliased edges, incorrect motion compensation (eg stars moving with the camera when they shouldn't) and other issues. None of these are massively detrimental, but they are noticeable. I've played a fair few PC games with DLSS with a 1440p internal resolution and a 4K output, and artefacting is almost never noticeable, even under fast motion, as DLSS just handles it much better than FSR 2 does.

The other major factor in how well temporal upscalers hold up is how much fine detail there is in the scene. Basically, if you take a game with the polygon count and texture detail of Mario 64 and use a temporal upscaler to get it from 1080p to 4K, it's not going to have much trouble. It's easy to track shapes and objects from one frame to the next when they're big and simple, but it becomes more difficult the smaller the details are. This is a particular issue for particle effects which may only be a couple of pixels in size. For example, if you've got an object that is two pixels in size at the 1080p output resolution, but your internal resolution is 540p, your temporal upscaling solution is only going to actually see that object once every two frames, which makes it difficult to reconstruct any information about it.

This is also why I think using DLSS in "ultra performance" mode, where it uses a 33% scaled input resolution, is going to be much more feasible in docked mode than portable mode, because if you have the same scene on a high res screen and a low res screen, there's inherently going to be more fine detail on the low res screen, relative to resolution. As an example, let's say there's an object that takes up 9 pixels on a 4K screen. If you're running DLSS with a 720p input here, DLSS will get on average one pixel's worth of data about it each frame, just enough to track it and extract some information. If you take the same scene and scale it down to a 1080p screen, the object is now about 2.25 pixels in size, and if you wanted to use DLSS from 360p up to that 1080p screen, then DLSS is only going to get a single pixel of information to work with once every four frames on average. That's not a lot to work with.
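
To make the pixel-coverage arithmetic above explicit, here's the same calculation as a small Python snippet (the helper name is just for illustration; the resolutions and the 9-pixel object are the figures from the paragraph above):

Code:
# How many input-resolution pixels a small object covers, and roughly how many frames
# pass between the upscaler getting one pixel's worth of data about it.
def coverage(object_px_at_4k, out_w, out_h, in_w, in_h):
    px_at_output = object_px_at_4k * (out_w * out_h) / (3840 * 2160)
    px_at_input = px_at_output * (in_w * in_h) / (out_w * out_h)
    frames_per_sample = max(1.0, 1.0 / px_at_input)
    return px_at_output, px_at_input, frames_per_sample

# Docked: 4K output from a 720p input -> about one pixel of data every frame.
print(coverage(9, 3840, 2160, 1280, 720))   # (9.0, 1.0, 1.0)

# Handheld: 1080p output from a 360p input -> ~0.25 pixels, i.e. one sample every ~4 frames.
print(coverage(9, 1920, 1080, 640, 360))    # (2.25, 0.25, 4.0)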

For a particularly extreme example of what happens when you've got way too much fine detail on screen for a temporal upscaler to keep up with, take a look at some footage of Immortals of Aveum on consoles. They use FSR 2, with an input resolution of 720p and an output of 4K on PS5 and XBSX, and even lower internal res on XBSS. They're already stretching FSR2 to the breaking point, but they also go incredibly heavy on particle effects, which are both fine detail and quickly moving, which is the worst case scenario for temporal upscaling. Unsurprisingly, FSR 2 can't keep up at all, and the game looks like a mess. I'm sure DLSS would do a better job here, but it would still struggle.

Immortals of Aveum leads me to my last point, which is that Switch 2 games are in the relatively unique position of being made for a console designed around temporal upscaling. This means developers can and will tweak their games to avoid the kind of cases where DLSS struggles to keep up, as it's the only way players will ever see the game. Immortals of Aveum is actually a curious case here, because it depends so heavily on FSR 2 on consoles, but at the same time they seem to have made a game which wasn't remotely made with the limitations of FSR 2 in mind. It really feels like the art team was working on high-end PCs running everything natively, and the entire production was too focussed on ticking off graphical features in UE5 to care about the image quality turning to hot garbage.

So, presuming Switch 2 devs don't make the same mistakes the Immortals of Aveum devs did, there's a chance we could see some developers push internal resolutions below 540p in handheld mode and still get good results, because they're careful to avoid the situations where DLSS would have trouble. Still, not all games are going to be able to do that, because of art direction or other reasons (you can't exactly limit fast movement in an F-Zero game), and are going to have to maintain a decent internal resolution to keep image quality from falling apart.
 
I know there has been a lot of speculation with regards to new Joy-Con, but what about a new Pro Controller?

The current pro controller we have for Switch is almost perfect in my eyes; great battery life, comfortable and fairly robust all round, so I can't really see what they can do to improve it. Unless they don't make a new one, and continue producing the current one we have?
 
I know there has been a lot of speculation with regards to new Joy-Con, but what about a new Pro Controller?

The current pro controller we have for Switch is almost perfect in my eyes; great battery life, comfortable and fairly robust all round, so I can't really see what they can do to improve it. Unless they don't make a new one, and continue producing the current one we have?
A D-pad that isn't terrible and analogue triggers

Also a headphone jack, as the person above me mentioned.
 
I know there has been a lot of speculation with regards to new Joy-Con, but what about a new Pro Controller?

The current pro controller we have for Switch is almost perfect in my eyes; great battery life, comfortable and fairly robust all round, so I can't really see what they can do to improve it. Unless they don't make a new one, and continue producing the current one we have?

Weak rumble, bad d-pad and lack of headphone jack would be my complaints about the current one. Remedy those and I'm pretty happy.

Analog triggers would be another popular ask.
 
I'd only ask for a headphone jack if Nintendo still can't be bothered to pay the AptX Bluetooth codec licensing for wireless headphones. At least the wifi latency for sound wouldn't be nearly as bad as SBC currently is
 
Weak rumble, bad d-pad and lack of headphone jack would be my complaints about the current one. Remedy those and I'm pretty happy.

Analog triggers would be another popular ask.
Oh yeah, I completely forgot about the D Pad. Having to use tape to try and fix it was terrible 😭
 
Excellent thanks these videos are really useful! It's pretty mind blowing how solid these output images look based on 360p input!
I would warn against drawing too solid a conclusion from YouTube here. YouTube compression can remove detail, which can make these images look worse, but can also cover up some instability artifacts, as they disappear into the compression noise.

So yeah, the reason I ask is because I'm wondering whether it's possible that a game could run docked at, let's say, 540p, which will result in a good image for a TV, but that same game can't run at the minimum 360p in handheld? Is that scenario possible,
Anything is possible, but it's not very likely. I'm assuming you're talking about input resolutions (ie, before DLSS is applied)? 360p is less than half the pixels of 540p. It would have to be a very unusual game to not get down there, even at half the GPU power.
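
For reference, the raw pixel counts behind "less than half the pixels":

Code:
print(640 * 360)               # 230,400 pixels at 360p
print(960 * 540)               # 518,400 pixels at 540p
print(640 * 360 / (960 * 540)) # ~0.44, so roughly 44% of the 540p pixel count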

There may be games that look like garbage for reasons other than resolution. RT effects often happen at lower resolution/fidelity than the game itself, for example. There can be a drop-off point where dropping the resolution has knock-on problems like ghosting/poor light responsiveness/blurry reflections and shadows, which might make the game unacceptable down there.

The reverse can also be true. The exact same artifact on an 8-inch screen will look very different on a 60-inch TV. LCDs introduce some motion blur which can smooth over low framerates that would look much worse on an OLED.

and would that then mean that we could see docked-only games? 🤔
We may get games that are limited to one or the other (or have game modes on one or the other) but not for power reasons, but control reasons - the touchscreen in Mario Party or the motion controls in Skyward Sword. I would assume that Nintendo would shut down any games which only operate in one mode or the other for graphics reasons, as it would be damaging to the whole platform.

I would also expect games which only support one mode or the other to sell pretty poorly.
 
Introducing the DRAKE DLSS ESTIMATOR 6000:



This new version has some massive changes from the last one, and the results are quite different.
It's significantly more accurate (probably) and significantly more optimistic than the last one.
For 4K, the new calculator usually indicates around a 35% shorter time than the old one. The reasons for that are detailed in the technical section below.

Let's go quickly over the main changes before going into the details.

Patch notes:
-Complete replacement of all DF data with official Nvidia data
-Fewer variables, which means a smaller margin of error
-Introduced 1080p, removed 1440p
-The linear function used previously replaced by a 3rd-degree polynomial (most important)
-Making it look slightly cooler

First thing I want to talk about is a correction: I claimed in a "technical stuff" section of a previous post that DLSS cost was not just linked to output resolution; I was wrong. This assumption came from the DF data in their second video, but the data in that video turned out to be... meh. The main issue is that the methodology worked perfectly for the purpose of the video, which was comparing FSR to DLSS, but the data is just not good for our use case.

The old calculator was entirely based on that DF data, and that data has been completely phased out of the calculator. Now the calculator uses the data from the document I linked and called "The One document to rule them all". The first advantage is, as said previously, that fewer variables are involved, reducing the number of ways this can go wrong. But that's not really what's important. That document is from Nvidia and has data for a lot more GPUs, although only Ampere has been used here for reasons I'll detail later. This means the data is more accurate, but most important is that the larger number of GPUs tested allows us to verify whether the cost of DLSS scales linearly with performance, as assumed until now.
It does not.
It's hard to come to a firm conclusion, as there still isn't as much data as I'd like, but using Excel we can see the curves look polynomial. I have therefore asked Excel to fit polynomials to the curves. They ended up being third degree, simply because there are only 4 GPUs whose data I can use.
Here are the curves, with the X axis being tensor performance and the Y axis the speed of the DLSS calculations (not how long it takes, but the speed; a higher value means faster, so a lower ms count).
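
For anyone who wants to reproduce that fitting step outside of Excel, here's roughly what it looks like in Python. The numbers below are placeholders rather than the actual Nvidia measurements; you'd substitute the real tensor-performance and DLSS-speed values for the four Ampere cards:

Code:
import numpy as np

# Placeholder data: tensor performance (normalised so the 3080 = 1.0) and DLSS "speed"
# (1 / execution time) for the four Ampere GPUs. Replace with the values from the
# Nvidia document before drawing any conclusions.
tensor_perf = np.array([0.35, 0.60, 1.00, 1.20])   # illustrative x values
dlss_speed  = np.array([0.30, 0.55, 1.00, 1.15])   # illustrative y values

# Third-degree fit: the highest degree that four data points can pin down exactly.
coeffs = np.polyfit(tensor_perf, dlss_speed, deg=3)
fit = np.poly1d(coeffs)

# Evaluate where Drake is expected to sit on the X axis (~0.07 per this post).
# Note this is an extrapolation below the fitted range, which is fragile,
# as discussed further down.
print(coeffs, fit(0.07))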



That change in how scalability is calculated makes a MASSIVE difference to the results, and is the main factor of change compared to the old calculator.
Here is an illustration: the intersection is the 3080; the X coordinate is tensor performance, the Y coordinate is the speed of the DLSS calculations.
The orange curve is how I previously thought it would scale, the blue curve is how it actually scales.




To give you an idea, Drake would be around 0.07 on the X axis. We see that this new method of calculation heavily benefits Drake.

Using this data, I discovered a lot of things. First of all, the scaling of DLSS speed depending on resolution. It scales kind of linearly, but very loosely. I'll first talk about the 1080p to 4K difference. I have noticed that the more powerful a GPU is, the smaller the difference between 4K and 1080p speed is; and the less powerful a GPU is, the more that difference tends towards the difference in the number of pixels (so 4x). For example, for Drake, the difference between 1080p and 4K speed is very close to 4x because Drake is not very powerful, but for a 3090 it's more like 2.9x.
4K to 8K is significantly weirder. It starts at around 4.3x, then goes up, then down. At first I thought this was proof the Excel predictions just weren't working, but after looking at the Nvidia data, we do see 4K to 8K going up then down.
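
For context, the raw pixel ratios those multipliers are being compared against are plain arithmetic:

Code:
px_1080p = 1920 * 1080
px_4k    = 3840 * 2160
px_8k    = 7680 * 4320

print(px_4k / px_1080p)  # 4.0 -> the ceiling a weak GPU like Drake trends towards
print(px_8k / px_4k)     # 4.0 -> yet the measured 4K-to-8K cost starts around 4.3x, which is the oddity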






It isn't shown in this graph, but I also did 1080p to 1440p, which also looks very weird: it starts at a 2.2x difference, goes down to 1.5x, then starts going up again. So 1080p to 4K looks normal and makes sense, but 4K to 8K and 1080p to 1440p are much weirder for no apparent reason. By the way, talking about 1440p…


I removed it because I'm not sure how accurate it is. Even just looking at the raw data through a graph, the 1440p speed as a function of tensor performance looks... wacky (cf. the first graph, the orange curve). It looks like the other curves but on steroids. And the fact that the 1080p to 1440p difference curve also looks wacky does not convince me of the validity of my results. I do think I might add 1440p back in the future, but I want to be more sure of what's going on first, and I want to first put in place the enhancements I mention towards the end.

Now is the time to explain why I didn't use Turing to improve the prediction. I mean, I say now's the time, but any point could be the time; it's not linked to any of the other points. In the document, 3 Turing GPUs are benched: 2060 Super, 2080 Laptop and 2080 Ti. First problem: laptop GPUs have special power-limiting software that makes it probable the laptop GPU was not running at full speed. And we don't know what speed it was running at. This is also reflected in the graphs below, where we see the 2080 Laptop is way closer to the 2060 Super than it should be.





But that means for Turing we only have 2 GPUs left. How do you infer a curve out of that? You don't, really. That's not enough. 4 is already limiting, so 2? Don't even think about it. And that is why I didn't take Turing GPUs into account in the calculator.

I'll now talk about the sparsity improvement. You may recall that earlier we used DF's data with a 2060 to estimate the improvement from sparsity. It turns out that improvement massively depends on the card: the faster the card, the smaller the difference. The resolution also plays a role, but a minor one. As such, at the 2060 performance level, we see a 25% improvement from sparsity. 25% is what was found with the DF data on a 2060, so that was close. On the other hand, at the 2080 Ti performance level, the improvement is merely 9%.
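
If you wanted to fold that behaviour into an estimate, one hypothetical approach (my assumption, not necessarily how the calculator handles it) is to interpolate the sparsity speed-up between the two known points and apply it as a divisor on the dense-model time:

Code:
import numpy as np

# Anchor points: +25% speed from sparsity around 2060-level tensor performance,
# +9% around 2080 Ti-level. The x values (1.0 and 2.0) are placeholders for
# relative tensor performance, not measured figures.
perf_points   = np.array([1.0, 2.0])
sparsity_gain = np.array([0.25, 0.09])

def dlss_time_with_sparsity(dense_time_ms, rel_tensor_perf):
    gain = np.interp(rel_tensor_perf, perf_points, sparsity_gain)
    return dense_time_ms / (1.0 + gain)  # faster execution -> shorter per-frame cost

print(dlss_time_with_sparsity(2.0, 1.0))  # a hypothetical 2.0 ms dense cost drops to 1.6 ms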


Now let's talk about the limitations of my simulations.


First of all, because of the data itself. We are talking about official data from Nvidia, but they said they measured it experimentally by running a command prompt. But anyone who has ever done physics knows that experimental measurements will always be imperfect. Who knows if the GPUs were boosting as they should. Who knows if the antivirus was running in the background. We can never rule out the possibility that the data isn't perfect, even if it comes from Nvidia.



This problem may be the reason why some of the behaviours in the graphs are really weird. Or it's not. I don't know. Maybe the wackiness of the 1440p results comes from there. Maybe the weird 4K to 8K speed difference is also caused by that. We've talked about how the weird 4K to 8K curve isn't some product of Excel hallucination, but is actually in accordance with what the data shows, and maybe that's how it behaves in real life. Or maybe there is some slight error in the data that snowballed into those curves. That's ultimately something we can't really know, unless we get extensive testing from several sources.



The second problem is simply that the Excel predictions take only 4 GPUs into account, limiting the precision. We can already see it in the first picture: the predictions don't completely line up. This is even more apparent if I show you what the predicted curves look like when extended further:





As you can see, it goes to shit very, VERY quickly. At around 1.6 it starts becoming unrealistic; and to give you an idea, on this scale 1 = RTX 3080 tensor performance, and the highest actual data point from Nvidia we have is the 3090 at around 1.2. So it goes to shit real quick. I don't think that means that what comes before in the curve is useless; the curve still has to pass through the origin. 0 tensor performance means 0 speed. And there aren't infinite ways to get there (I mean, yes, there are, but shut up, you get the point); meanwhile, on the right side of the curve Excel doesn't have any indication of where to go, and this explains how the curve can become so unrealistic.



Can this be improved? Yes.

What we need is more GPUs. Ideally, both less powerful and more powerful than what we have now. I have no clue where to find info about slower GPUs, but I do know where to start with higher-end GPUs: I know TechPowerUp has done some DLSS testing on a 4080. This would give us an actual data point at around 1.6, which would go a long way towards improving the predictions. An important thing to note is that we can't rely on Turing to extend the graph, because Turing doesn't benefit from sparsity, and the gains from that are unpredictable, as we've seen before; on the other hand, Lovelace tensor cores are basically Ampere tensor cores but with FP8. Considering DLSS most likely does not use that, we can use Lovelace cards to extend the graph. Although what would be best would be a lower-end card as, you know, we're trying to predict the performance of Drake. 4080 results may help with that, but 3050 results would be far more appreciated, as they would be much more relevant.

In the next version of the calculator, I will try adding 4080 data to make the predictions more accurate, and I will also continue searching for more data that could help us in our quest of predicting Drake DLSS performance.



You have finally reached the end of the technical stuff. I hope you enjoyed it, and if you have any questions, don't hesitate to ask. There are many things I glossed over, and I'd be happy to give you more details if you want.

I have a few suggestions to improve this:
1. Rather than having sliders, have the numbers be entered via a text box, allowing specific input values.
2. While a 1080p benchmark is definitely more useful than 1440p, it would be nice to still have it available, especially if it’s in tandem with FSR1 to get it up to 4K, as some have suggested.
Other than that, amazing job.
 
Portable console similar in form factor to the Switch Lite, but with no directional buttons and only 1 control stick. Still had a cartridge slot, vents, etc.
If someone could cite the patent I could take a look, but my first impression is that Nintendo, like most companies, uses generic images when describing the "embodiment" of the potential product, so the focus can be on the "claim", or idea, that is being patented.

So we would need to know what the patent covers to see if the image has any relevance beyond "this idea would work on a device like this one, which happens to look like an OG Neo Geo Pocket."
 
I was re-reading Dakhil's OP and just had a question about the camera sensor rumour from Era; was there anything to suggest (or has anyone else brought up the idea) that it was simply referring to the IR sensor on the bottom of the right Joy-Con? Given the success of RFA, the only game to successfully use it (outside of 1-2 Switch entries), it makes sense it would be brought back for the sequel, but it seems strange as hell that they keep trying to bring back yet another wasted 0.3MP camera sensor, as if it wasn't a bad enough idea on the DSi/3DS/Wii U.
 
I would say yes and no.

It is widely understood (to the point of near-universal acceptance) that Nintendo's entry into the home console market in 1983, literally right after the infamous gaming crash, marked the beginning of the 3rd Generation of video game systems.

I do agree that saying which platforms were part of said generation is useful to the average user. Also, in fairness, since Sony is the only one consistent on this, PlayStation consoles are the easiest means of conveying which generation of consoles people are referring to.

Since I brought up Sony, PSX came out during the 5th Generation of consoles, and we are currently in the 9th Generation with the PS5.

Again, I agree it's easiest to refer to the individual consoles when talking about the different generations, but that doesn't mean we shouldn't also include which numerical generation we're referring to.
It may surprise you to learn that even the NES being 3rd generation is somewhat controversial because the earlier generations are subject to Wikipedia fudging details to make a cleaner narrative, like how they squish the Atari 2600 and 5200 into a single generation somehow.

The reality is that the numbered console generations are one of the more egregious examples of citogenesis in Wikipedia's history, and the fundamental concept of them doesn't make a lot of sense. It is not reasonable to expect consoles to cleanly sort themselves into universal, cross-vendor generations, because they just don't, and any attempt to do so is going to have to fudge a lot of details and erase a lot of nuance. The Switch is a perfect example of this, since the overall landscape around it has been continuously shifting since the system launched. There was a period of about 15 years or so where the numbers worked out sort of cleanly, but that's over and the numbers were having a negative effect on the overall discourse even before that.
 
Dual Screen gaming is dead imo, and Nintendo will not revisit it.

Dual screen gaming started and ended with Nintendo being the sole company utilising it (?)

Still... I wouldn't call it dead forever; only recently I've seen a lot of development being done around dual-screen laptops and phones, and now a handheld again as well. I don't think they'll really take off, but eh.

If Nintendo went the dual-screen route again, it could add a distinction to future third- and first-party games that could nudge customers to that specific version.

The DS was a great success (not necessarily because of the dual screen), but I don't see a reason why they wouldn't do it again and make their games more unique again; personally, I thought they struck gold with it.

The only big reason against a dual screen imo would be that it becomes more work creating a layer for single screen docked gameplay vs handheld dual screen gameplay.
 
I think Tales of Arise's graphics are thanks to the use of UE4 and the huge budget invested in it for a Japanese game.
To bring Xenoblade's graphics up to the same level, Nintendo would need to invest a huge budget.

With Switch 2, Monolith Soft should either revamp their engine or move to UE5.
No, Xenoblade 3 and Tales of Arise had practically the same team size.
 
Still doesn't explain the reasoning for why they need to make the change NOW. This isn't something corporations/marketing departments decide on a whim: "You know what, I ain't feeling it... let's just blank out the back of amiibo packages, Switch and Nintendo are synonymous!"

There's a better explanation: futureproofing. By removing console-specific branding, they don't have to change the back of amiibo packages again once Switch 2 is released, avoiding a repeat of the situation that happened with the Switch and amiibo between March 2017 and June 2017.

No, it's not suggestive of Switch 2 being announced soon or anything (it could be months away), in case anyone wants to use that type of retort with me. It's an observation, and we can all make of it what we will.
Future proofing really seems like the best answer. It is a little odd that there are no images on the back, but if anything that reinforces future proofing. Drake is coming, so the instructions to marketing could have been "remove specific device branding".

I'll go check a 2022 case shortly, but having marketing on the back referencing the Wii U and 3DS then would be weird.
 
The DS was a great success (not necessarily because of the dual screen), but I don't see a reason why they wouldn't do it again and make their games more unique again; personally, I thought they struck gold with it.

The only big reason against a dual screen imo would be that it becomes more work creating a layer for single screen docked gameplay vs handheld dual screen gameplay.
You said it.
More work for developers (especially third parties), higher costs and so on
 
Dual screen gaming started and ended with Nintendo being the sole company utilising it (?)

Still... I wouldn't call it dead forever; only recently I've seen a lot of development being done around dual-screen laptops and phones, and now a handheld again as well. I don't think they'll really take off, but eh.

If Nintendo went the dual-screen route again, it could add a distinction to future third- and first-party games that could nudge customers to that specific version.

The DS was a great success (not necessarily because of the dual screen), but I don't see a reason why they wouldn't do it again and make their games more unique again; personally, I thought they struck gold with it.

The only big reason against a dual screen imo would be that it becomes more work creating a layer for single screen docked gameplay vs handheld dual screen gameplay.
The big reason you cite really is a BIG reason. Assuming the first principle behind the Switch was to merge development teams/methods, and the Switch was the device to champion that approach, dual screens were by necessity out. A Switch with the same chipset it has today, but with a second screen, would be a monstrosity.

Perhaps it returns in a future Switch device using Miracast (which the Wii U used) and a WiFi/Miracast-enabled dock. I would personally love that, because I REALLY loved the asynchronous gameplay concept from the Wii U.
 