• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?
I think they'd prefer to use the term "Ultra HD gaming", or some variation thereupon.

For the first time since the GameCube, Nintendo is launching a successor whose appeal rests almost solely on being "That Nintendo Thing But More Powerful", and calling the Nintendo Switch "HD gaming" and the Super Nintendo Switch "Ultra HD gaming" gets the point across in a way that's friendly to consumers and hardcore types alike. It also alleviates some of the problems with using a purely technical specification like "4K": if their games only push above 1080p in some instances, hey, it's still MORE than full HD!

In short, I don't think they'll out and out say "4K", but the increased output resolution will definitely be referenced.
 
I think subsequent reporting on that from Nate suggested the 4K 60fps was a bonus, and that the point of the BOTW demo was the seamless loading.

And this does lead me to an obvious point. 4K 60fps for BOTW may be very doable for the hardware, but how would 4K 60fps work for a modern 9th gen game?
There is no universal answer to this question, since DLSS can be scaled appropriately for quality (visual accuracy) vs. performance (frame throughput). There are numerous adjustable factors between the execution modes and source inputs that the developer is responsible for managing, and they all affect the final outcome. I expect the target of "4K 60FPS" to be reasonably achievable if the appropriate adjustments are made, but whether it is actually achieved depends on what the developer prioritizes.
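To put rough numbers on those execution modes: here's a quick sketch of the internal resolutions implied by the commonly cited per-axis scale factors for a 4K output. Purely illustrative; a real integration can differ or use dynamic resolution, and the factors below are the publicly documented defaults, not anything specific to Nintendo's hardware.

```python
# Internal render resolutions implied by the usual per-axis DLSS mode scale
# factors for a 3840x2160 output. Commonly cited defaults; actual integrations
# may differ or use dynamic resolution.
OUTPUT_W, OUTPUT_H = 3840, 2160

mode_scale = {
    "Quality": 1 / 1.5,            # ~0.667 per axis
    "Balanced": 1 / 1.724,         # ~0.580 per axis
    "Performance": 1 / 2.0,        # 0.500 per axis
    "Ultra Performance": 1 / 3.0,  # ~0.333 per axis
}

for mode, s in mode_scale.items():
    w, h = round(OUTPUT_W * s), round(OUTPUT_H * s)
    pixel_fraction = (w * h) / (OUTPUT_W * OUTPUT_H)
    print(f"{mode:>17}: {w}x{h} internal (~{pixel_fraction:.0%} of output pixels)")
```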
 
I'm thinking: we know that the cost of DLSS scales with the output resolution, so for a given output resolution the cost is the same regardless of the internal resolution.
Basically, using Ultra Performance mode from 720p > 2160p costs the same as upscaling from 1800p > 2160p.
But why does it have to be like this?
OK, on PC the algorithm is standardized. But wouldn't it be possible, on customized hardware, to have an algorithm where the developer could choose the quality of the upscaling, in order to better balance and find the sweet spot between internal resolution, output resolution, and cost in milliseconds?
Let's say, in a hypothetical scenario with a lighter game, a developer is getting a consistent 1440p@70 before DLSS, but the cost of upscaling to 4K with DLSS would mean a performance loss; at the same time, you'd basically be leaving part of the console's silicon unused.
Why not have the option of spending those extra 2.38 ms of headroom to reach 4K, even with an upscaler that isn't as good as the standard Performance mode, but which at least delivers extra quality practically for free?
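(For what it's worth, the 2.38 ms figure is just the frame-time arithmetic for the assumed 70 fps pre-upscaling, 60 fps target scenario above; a quick sketch:)

```python
# Back-of-the-envelope check on the hypothetical scenario above
# (assumed numbers, not measured DLSS costs).
pre_dlss_fps = 70   # assumed frame rate before any upscaling
target_fps = 60     # desired frame rate after upscaling

pre_dlss_frame_ms = 1000 / pre_dlss_fps  # ~14.29 ms spent rendering per frame
target_frame_ms = 1000 / target_fps      # 16.67 ms budget at 60 fps

headroom_ms = target_frame_ms - pre_dlss_frame_ms
print(f"Headroom available for upscaling: {headroom_ms:.2f} ms")  # ~2.38 ms
```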
 
After looking at that Digital Foundry video that talked about how Hello Games used a heavily-customised version of FSR2 for Switch to improve NMS, it makes me wonder what Nintendo, Nvidia and third-party developers could do to customise DLSS in similar fashion to improve performance. After all, FSR2 is generally heavier than DLSS, so if FSR2 can work on Switch...
 
Crush the internal render resolution so DLSS can do 4K. Optimise further; crush assets. Render at 1080p and do a computationally cheap upscale. DLSS to 1440p then FSR1.0 to 4K.
I would not use FSR 1.0 in conjunction with DLSS.

In fact, I would rather render at 1080p and just use DLSS to scale straight to 4K with no additional upsampling as I've mentioned. It doesn't matter if Ultra Performance has to be used to get this done. The results would be far cheaper computationally than your method with better image quality to boot.
 
After looking at that Digital Foundry video that talked about how Hello Games used a heavily-customised version of FSR2 for Switch to improve NMS, it makes me wonder what Nintendo, Nvidia and third-party developers could do to customise DLSS in similar fashion to improve performance. After all, FSR2 is generally heavier than DLSS, so if FSR2 can work on Switch...
Third parties can't customize DLSS like they can FSR. Nvidia has to be the one to do that.
 
Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?
Absolutely. 4K isn't exactly a niche technology anymore; it's a pretty easy way to sell a product as being high resolution. Television adverts and packaging use it as a selling point, mobile phones use it as a cool new feature, and YouTube and other websites offer it as a super high quality option. Most people at this point are aware of it, and it wouldn't fly over heads the way something like "uses NVIDIA DLSS tech!" would. I sincerely doubt most Switch 2 titles will actually reach 4K, but it's a good marketing point, especially for something that isn't a traditional home console.
 
I'm thinking: we know that the cost of DLSS scales with the output resolution, so for a given output resolution the cost is the same regardless of the internal resolution.
Basically, using Ultra Performance mode from 720p > 2160p costs the same as upscaling from 1800p > 2160p.
But why does it have to be like this?
OK, on PC the algorithm is standardized. But wouldn't it be possible, on customized hardware, to have an algorithm where the developer could choose the quality of the upscaling, in order to better balance and find the sweet spot between internal resolution, output resolution, and cost in milliseconds?
Let's say, in a hypothetical scenario with a lighter game, a developer is getting a consistent 1440p@70 before DLSS, but the cost of upscaling to 4K with DLSS would mean a performance loss; at the same time, you'd basically be leaving part of the console's silicon unused.
Why not have the option of spending those extra 2.38 ms of headroom to reach 4K, even with an upscaler that isn't as good as the standard Performance mode, but which at least delivers extra quality practically for free?
The reason is that any machine learning model (whether used for DLSS or for ChatGPT) ultimately boils down to a pre-defined sequence of linear algebra problems, so any considerations for optimizing the execution of the model need to be defined in the model itself before it is executed. Execution time is inherently non-deterministic because there are so many uncontrollable factors that can affect it, and this effectively introduces an undesirable random factor into an algorithm that wants to be as predictable and consistent as possible. DLSS supports multiple execution modes that correspond to desired performance characteristics, but there isn't going to be a bespoke mode tailored to a specific game unless the developer specifically asks Nvidia to make one.
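To make the "pre-defined sequence of linear algebra problems" point concrete, here's a toy estimate of a single convolutional layer's per-frame cost. The count is fixed entirely by the layer's shape and the resolution it runs at, never by the image content; the layer sizes below are made up for illustration and are not DLSS's actual architecture.

```python
# Toy cost model for one 2D convolution layer: multiply-accumulates per frame.
# The total is determined by the architecture and the resolution the layer
# runs at; nothing about the image content (or the game) changes it.
def conv_macs(width, height, in_channels, out_channels, kernel=3):
    return width * height * in_channels * out_channels * kernel * kernel

# Hypothetical 32->32 channel layer evaluated at two output resolutions.
for w, h in [(3840, 2160), (2560, 1440)]:
    macs = conv_macs(w, h, in_channels=32, out_channels=32)
    print(f"{w}x{h}: {macs / 1e9:.1f} GMACs for this one layer, every frame")
```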
 
Third parties can't customize DLSS like they can FSR. Nvidia has to be the one to do that.

Fair enough. In that case, I hope Nintendo and Nvidia have been working on exactly that - a version of DLSS customised around the Switch 2 hardware, if not around individual games.
 
I have a question for you guys: Are you two aware of DLSS being able to upsample on a single axis rather than both? I remember some PS4 titles checkboarded on a single axis to save performance. Could DLSS do the same?
No, DLSS needs a real pixel grid. But single-axis is "standard" for checkerboarding; it only saves performance because you're rendering half the pixels. DLSS Quality also renders roughly half the pixels, just as a real pixel grid.
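Rough numbers for a 4K output, just to put that comparison in perspective (assumed resolutions, not taken from any specific game):

```python
# Pixels shaded per frame for a 3840x2160 output under different schemes.
full_4k = 3840 * 2160                # native 4K
single_axis_cb = (3840 // 2) * 2160  # single-axis checkerboard: half the columns
dlss_quality = 2560 * 1440           # typical DLSS Quality internal res for 4K

for name, px in [("Native 4K", full_4k),
                 ("Single-axis checkerboard", single_axis_cb),
                 ("DLSS Quality (2560x1440)", dlss_quality)]:
    print(f"{name:>26}: {px / 1e6:.2f} MP ({px / full_4k:.0%} of native)")
```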

Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?
The Switch reveal only had two words in it: "Nintendo" and "Switch." I do not expect "4K" to be mentioned.
 
DLSS as we know it has to work on a variety of graphics cards with numerous hardware variables. Why are we expecting the SNG implementation of DLSS to be the off-the-shelf PC implementation? I find it hard to believe that Nintendo has image reconstruction technology on Switch that reconstructs 540p to 1080p in Xenoblade Chronicles 3 (Dying Light uses something similar), and yet DLSS can't be customized for Drake. It's insulting to assume Nvidia, the current leader in reconstruction technology, wouldn't be able to optimize DLSS for a single piece of hardware where they know all the variables. It's going to be integrated into the API at the lowest possible level. This is still a console, and there are efficiencies to be found compared to PC, where they cannot target bespoke hardware.
 
I have a question for you guys: Are you two aware of DLSS being able to upsample on a single axis rather than both? I remember some PS4 titles checkboarded on a single axis to save performance. Could DLSS do the same?
Checkerboarding a single axis? Wouldn't that just be interlacing?
 
Power has nothing to do with it. The eShop is web-based, so any content has to be downloaded. I'd rather not spend the bandwidth on something that isn't relevant to shopping.
[gif reaction]
 
Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?
In terms of a possible reveal where it's just the OG Switch's reveal again, probably not. But in an overview trailer/presentation after a tease, they'll kind of have to show that it can support 4k. Probably won't be the biggest marketing push like how Sony and Microsoft do it, but them talking about it is kind of needed.
 
DLSS as we know it has to work on a variety of graphics cards with numerous hardware variables. Why are we expecting the SNG implementation of DLSS to be the off-the-shelf PC implementation?
DLSS is a neural network, which is designed for one piece of hardware - the tensor core. For the most part, you don't tune neural networks for different hardware; you make a new neural network if you want different performance characteristics. And in the case of the one in DLSS, it represents hundreds of thousands of hours of compute time. Not only would it cost as much to build a customized DLSS as it did to make DLSS in the first place, but there is also no reason to believe that there are any optimizations to be made for Drake specifically.

The value of DLSS is that it shares one model that is trained on truly massive quantities of data. Forking DLSS would effectively lock Nintendo off from DLSS development going forward.
 
Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?

Yes. "4K" is general nomenclature everyone knows now, and I expect Nintendo to throw everything and the kitchen sink at differentiation bullet points between the OG and next-gen Switch.
 
DLSS is a neural network, which is designed for one piece of hardware - the tensor core. For the most part, you don't tune neural networks for different hardware; you make a new neural network if you want different performance characteristics. And in the case of the one in DLSS, it represents hundreds of thousands of hours of compute time. Not only would it cost as much to build a customized DLSS as it did to make DLSS in the first place, but there is also no reason to believe that there are any optimizations to be made for Drake specifically.

The value of DLSS is that it shares one model that is trained on truly massive quantities of data. Forking DLSS would effectively lock Nintendo off from DLSS development going forward.
Interesting. I was also wondering if it was possible for Switch 2 to have a more efficient, customized version of DLSS. But it sounds like NVIDIA already engineered a design that is platform-agnostic with respect to performance. Great work by them, but a slight bummer for my decades-old "code-to-the-metal" hopes.
 
Hi all! First post since signing up!

Just a question in regards to process size.
Would 8nm even be cost-effective going forward over 6-10 years? Otherwise, what factors would cause using 8nm over 4nm to be more or less expensive?

cheers
 
As if we weren't sure enough, Doctre knocks it out of the park once again! The "introduce new features while maintaining all existing functionality" line makes me wonder about BC too. Does this also confirm backwards compatibility?
"Existing project" means whatever software NTD was working on for the new hardware. Maintaining existing functionality just means not breaking the existing features of that project when it underwent this supposed big performance-improving refactor at the hands of the profile author.

The mentions of TensorRT (no relation to ray tracing, btw) and TensorFlow probably indicate that at least one of the bullet points in that profile was a project for R&D into how ML inference can be used in games, beyond DLSS which is pretty much pre-packaged and doesn't need R&D outside of its integration with NVN2.
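For context, TensorRT is Nvidia's inference runtime; the typical workflow is to take an already-trained model (say, exported to ONNX) and build an engine optimized for a specific GPU. Here's a minimal sketch of that standard workflow with the TensorRT Python API, using hypothetical file names; this is purely illustrative and not anything taken from the profile.

```python
# Minimal sketch of the usual TensorRT workflow: build an optimized inference
# engine from a trained model exported to ONNX. File names are hypothetical.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels where supported

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```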
 


What it do bruh?

Took ownership of an existing project and redesigned to reduce complexity, vastly increase performance and introduce new features while maintaining all existing functionality. Quickly adapted to new technologies and domains, such as graphics and machine learning, to produce high-performance interactive experiences for research and development purposes. Analyzed machine code for security, performance, and debugging purposes. Developed low-level platform security solutions.

Skills: TensorRT · OpenGL Shading Language (GLSL) · Machine Learning · CUDA · TensorFlow · Debugging · C++ · C# · ARM Assembly · Computer Graphics · IDA Pro
Interesting. So this mentions an "existing project" and that this employee "vastly increased performance and introduced new features while maintaining all existing functionality". Since he mentions TensorRT, could he possibly be talking about work on a customized version of DLSS that's vastly faster while maintaining all existing functionality?

Either that or maybe he worked on porting the Matrix Demo using some AI magic that, according to the leaks, made it look comparable to PS5?
 
Interesting. So this mentions an "existing project" and that this employee "vastly increased performance and introduced new features while maintaining all existing functionality". Since he mentions TensorRT, could he possibly be talking about work on a customized version of DLSS that's vastly faster while maintaining all existing functionality?

Either that or maybe he worked on porting the Matrix Demo using some AI magic that, according to the leaks, made it look comparable to PS5?
I wonder if that was the special redone version of Breath of the Wild that was shown off.
 
Interesting. So this mentions an "existing project" and that this employee "vastly increased performance and introduced new features while maintaining all existing functionality".
I will gently break it to the room that there are massive projects that game developers work on that aren't games. Making development tools run much faster, with more functionality is a huge benefit, and one of the things that NERD and NST spend most of their energy on. There is a not-insignificant chance this person is working on tools to make Nintendo games, not on anything that actually runs on the Switch NG.

In fact, considering the mention of TensorRT there, I would almost guarantee this is the case. You can infer exactly nothing about the hardware or software of the Switch NG from this profile.
 
I think it’s time to treat the hypothetical Nintendo-customized DLSS as a myth.

The customizations that you could feasibly make, like reducing the number of channels in each layer of the architecture, would only make a marginal performance difference and will always penalize image quality. (And for anyone who’s read my older posts, I no longer believe that decreasing the total number of layers in the network is a good way to decrease the cost, for reasons that I may get into some other time).

The “optimized hardware” customizations that people keep dreaming up simply don’t exist; the tensor cores are the hardware optimization, and we know those are just Ampere tensor cores in T239. So I can only conclude:

Custom DLSS is dead, and we have killed him.
 
I will gently break it to the room that there are massive projects that game developers work on that aren't games. Making development tools run much faster, with more functionality is a huge benefit, and one of the things that NERD and NST spend most of their energy on. There is a not-insignificant chance this person is working on tools to make Nintendo games, not on anything that actually runs on the Switch NG.

In fact, considering the mention of TensorRT there, I would almost guarantee this is the case. You can infer exactly nothing about the hardware or software of the Switch NG from this profile.
NTD doesn't make games, but they do research how different technologies (particularly at the hardware level) can be used within games. Since the profile mentions "produce high-performance interactive experiences for research and development purposes," I'd say that's probably what was going on in this case, rather than it being related to toolchain development (though they certainly do a lot of that).

It's not all that exciting, any more than it would be if they made a sample interactive renderer that utilizes variable rate shading or something. But rather than saying you can't infer anything about the hardware from this profile, I'd say the profile reflects a set of features we already know the hardware has.
 
I think it’s time to treat the hypothetical Nintendo-customized DLSS as a myth.

The customizations that you could feasibly make, like reducing the number of channels in each layer of the architecture, would only make a marginal performance difference and will always penalize image quality. (And for anyone who’s read my older posts, I no longer believe that decreasing the total number of layers in the network is a good way to decrease the cost, for reasons that I may get into some other time).

The “optimized hardware” customizations that people keep dreaming up simply don’t exist; the tensor cores are the hardware optimization, and we know those are just Ampere tensor cores in T239. So I can only conclude:

Custom DLSS is dead, and we have killed him.
Who's we
 
As if we weren't sure enough, Doctre knocks it out of the park once again! The "introduce new features while maintaining all existing functionality" line makes me wonder about BC too. Does this also confirm backwards compatibility?
It doesn't confirm anything, but it does sound really good :D
 
Hi all! First post since signing up!

Just a question in regards to process size.
Would 8nm even be cost-effective going forward over 6-10 years? Otherwise, what factors would cause using 8nm over 4nm to be more or less expensive?

cheers
No one knows exactly, but for me it isn't off the table.
 
Power has nothing to do with it. The eShop is web-based, so any content has to be downloaded. I'd rather not spend the bandwidth on something that isn't relevant to shopping.
Not necessarily. The music could be stored locally on the Switch and just play when you open the eShop. I imagine that was the case for the Wii U too. I mean, the short jingle at the beginning and the sound effects when navigating the eShop are surely not web-based either.
 
Here’s a question for you: do you think Nintendo will mention at all the word “4K” on Switch NG reveal?
I think Nintendo has been pretty careful about how they market their consoles for a while now. Advertising 4K without context is something I don't think they will do in a general sense. I imagine they could advertise it for backwards-compatible Switch games and/or on a game-by-game basis, or say something like "Up to 4K" or "Ultra HD."
 


[embedded video] Possible evidence of 8nm

[embedded video] Further confirmation of what we already know.

ok so i'm a little new to the convo but, why are we listening/linking to youtube vids as proof/confirmation of anything? are some of the people you linked to known for having good takes or actual info?

also, aren't the screenshots the second video is showing from linkedin? i thought it was talked about earlier how LinkedIn is not a real source for like... any sorta real info lol
 
ok so i'm a little new to the convo but, why are we listening/linking to youtube vids as proof/confirmation of anything? are some of the people you linked to known for having good takes or actual info?

also, aren't the screenshots the second video is showing from linkedin? i thought it was talked about earlier how LinkedIn is not a real source for like... any sorta real info lol
1. Yeah, Doctre is known for digging up some good info, mostly from job listings/LinkedIn profiles. Nobody is taking this as confirmation of anything, including him. Just something to discuss/analyse.

2. Why should we disregard everything from LinkedIn?
 
1. Yeah, Doctre is known for digging up some good info, mostly from job listings/LinkedIn profiles. Nobody is taking this as confirmation of anything, including him. Just something to discuss/analyse.

2. Why should we disregard everything from LinkedIn?
1. ok, good to know!

2. i didn't say everything should be disregarded but, to take anything from LinkedIn as an actual source of info seems odd to me given that it's basically the facebook equivalent of a job search board. i could edit my own LinkedIn to say whatever i wanted it to say but obvi that doesn't mean it's true.
that's all i'm saying - i wouldn't go to LinkedIn to try to corroborate things.
 
ok so i'm a little new to the convo but, why are we listening/linking to youtube vids as proof/confirmation of anything? are some of the people you linked to known for having good takes or actual info?

also, aren't the screenshots the second video is showing from linkedin? i thought it was talked about earlier how LinkedIn is not a real source for like... any sorta real info lol
Doctre was the one who first found the info about T239 being taped out in 2022 by looking around LinkedIn.

That being said, you are right that it's been established that people can just make a fake job profile on LinkedIn, as was likely the case with that obviously fake Nintendo/Epic guy.
 
1. ok, good to know!

2. i didn't say everything should be disregarded but, to take anything from LinkedIn as an actual source of info seems odd to me given that it's basically the facebook equivalent of a job search board. i could edit my own LinkedIn to say whatever i wanted it to say but obvi that doesn't mean it's true.
that's all i'm saying - i wouldn't go to LinkedIn to try to corroborate things.
- Currently working on memory subsystems physical layout for Tegra soc on 8nm

is a weird thing to make up, though. But it could be for Orin; maybe we shouldn't take the word "currently" too literally. Even if it's for Drake, it wouldn't be that current.

Edit: or wait. Can memory subsystems be optimised in software after tapeout?
 
It’s an allusion, dearest, and a tongue in cheek one at that. But I talked to the boss, and they’re giving you an extra special exemption. You, specifically, are allowed to believe whatever you want, regardless of what I think is correct. Congratulations! 🎉 🎊 We’ll have cake in the break room. It’s your favorite kind, too.

I hope everyone else finds these claims credible: that the mathematical cost of executing a neural network with a given architecture is fixed and cannot be reduced; that the changes you can make to DLSS reconstruction network are limited in scope because of its convolutional autoencoder architecture and will always come with a tradeoff in quality; that the type of reconstruction DLSS is doing is independent of art style, so there wouldn’t be any special utility in training specifically on Nintendo games; that the different modes of DLSS, with the exception of Ultra Performance and DLAA, share the same model; and that T239 will use an identical hardware accelerator to the desktop GPUs and will therefore execute a neural network with the same performance per SM per clock.

I hope most people in the forum read this and agree that all of these reasons make it unlikely that a Switch-specific version of DLSS would be useful (not you though; remember, you have an exemption 🥳). Maybe I’ll be wrong. It would be nice to be wrong! And by the way, if anyone disagrees with any of these points and genuinely responds instead of just trying to get a gibe in, I’d be happy to have the conversation.
 
Imagine if the first reveal trailer featured off-screen footage exclusively (similar to the Switch 1 reveal) and was uploaded at 1080p; chaos would ensue.
Not downplaying 4K or 1440p or anything, but simply showing a new Mario with higher polygon density for Peach's castle, Cyberpunk 2077 running on the kit, a new overhead Zelda with Link's Awakening HD-level visuals at a higher framerate, and a new Xenoblade with polygon-dense visuals that could give FF7 Remake Part 2 a run for its money would have more people talking than whether or not the YouTube video showcasing it should have been running at 4K.
 
Not downplaying 4K or 1440p or anything, but simply showing a new Mario with higher polygon density for Peach's castle, Cyberpunk 2077 running on the kit, a new overhead Zelda with Link's Awakening HD-level visuals at a higher framerate, and a new Xenoblade with polygon-dense visuals that could give FF7 Remake Part 2 a run for its money would have more people talking than whether or not the YouTube video showcasing it should have been running at 4K.
Now watch Nintendo do none of that
 
I'm all for more power, but I like the uniqueness of Nintendo. Switch, or hybrid, is just a given now; this is their preferred format and it works well. Obviously more power, but I also think there will be something else with Switch 2: some USP, call it a gimmick or whatever, but something gameplay-wise you don't get on anything else!
I'll also say that I think Switch 2 will be released no later than summer 2024.
 
Please read this staff post before posting.

Furthermore, according to this follow-up post, all off-topic chat will be moderated.