Here's something kinda interesting. The previous two Directs are starting to specify that they are for "Nintendo Switch", while the ones before that never had a moniker designating the system they're for. Very inch resting...
Interesting in the same way removing all branding from Amiibo was. Whatever Nintendo has planned, we can rest assured they've at least had 7 months to prepare lol. That's at least enough to get Snipperclips 2 off the ground if their launch lineup is poor, right?
Huh, interesting... Guess this is the point of no return?
Nintendo patents that are published before we see them in public are typically stuff they don't intend to actually use.

Part of Nintendo's own upsampling patent (and I think the ultimately biggest reason for said patent to exist, instead of it being for a bespoke upsampling solution as suggested) is the method by which neural networks are addressed, delivered, used and updated by console hardware. That patent describes neural networks being stored separate from game software (perhaps at the OS level) and passive neural network updates being distributed via install from Game Cards (among other options), so any games calling to the same neural network to use the hardware to upsample can theoretically benefit from iterative improvement to that neural network. It also seems to specify the ability to add to the neural network by collecting frame data on consumer devices passively during play to further train the neural network across the hundreds of millions of devices they intend to sell. Lastly, the patent mentions the ability for games to specify the neural network they intend to use to process the upsampled output, which indicates that Nintendo will be building their own neural network in addition to having the option to utilize the one Nvidia has been using.
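For anyone who wants the delivery scheme spelled out concretely, here's a minimal sketch of the mechanism as described above: networks live in an OS-level library, Game Cards can carry newer copies that install passively, and games request a network by name. Every name, the version-number scheme, and the class itself are my own illustration, not anything from the patent text.

```python
class NeuralNetworkLibrary:
    """Hypothetical OS-level store of upsampling networks (illustrative only)."""

    def __init__(self):
        self._networks = {}  # name -> (version, weights blob)

    def install_if_newer(self, name, version, weights):
        """Called on Game Card insert: update only if the card's copy is newer."""
        current = self._networks.get(name)
        if current is None or version > current[0]:
            self._networks[name] = (version, weights)
            return True   # library was passively updated
        return False      # card's copy is the same or older; nothing happens

    def get(self, name):
        """Called by a game that specifies which network it wants to upsample with."""
        return self._networks[name]


lib = NeuralNetworkLibrary()
lib.install_if_newer("upsampler-general", 3, b"v3-weights")
lib.install_if_newer("upsampler-general", 2, b"v2-weights")  # older card: ignored
```

The point of the version check is that inserting an older Game Card never rolls the library backwards, which is what makes the update "passive" rather than something a game or user has to manage.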
DLSS as we understand it relies on a neural network to function, it's useless for doing on-the-fly upsampling without it.
And I'm going to borrow something I wrote on IB to explain why this is an important distinction in case someone wants to know:
Basically, how this works is you give a supercomputer (with similar but far more powerful capability to perform specific math, in this case tensor math) a series of low-res images, then give it matching high-res "target images" that you want the low-res images to look like, and the supercomputer is tasked with finding the most mathematically efficient way to transform the low-res image into the target image. Now do that ad infinitum, pick out the methods that produced the results closest to the target images, and repeat with slight variations on the successful methods to try and improve the result. Then do that with thousands upon thousands (perhaps upon millions) of low-res and target images.
To describe it a bit like CGP Grey does but in context, a supercomputer is doing millions of practice reproductions from worse-quality images and ideal-quality images for reference with a bunch of variations of "artist bots". What it spits out is a "neural network", which is basically a bot or bots with the computer equivalent of muscle memory and pattern recognition, selecting only those bots that created images that near-flawlessly resembled the specified target images given to it in the least amount of time. The more you train, the better the bots. It's more involved than that, but you get the idea.
"DLSS" takes a neural network created over time using Tensor cores in a supercomputer environment and applies it to on-the-fly reproduction: creating a brand-new image from recognizable-but-likely-different new low-res images, in a much more time-constrained environment but using the same tools (in Nvidia's case, Tensor cores on a lower-scale GPU), and likely getting the best possible replication of what the "target image" would have been if it had been created beforehand as a reference.
But without that first step of creating the neural network, you could never achieve the frame upsampling in a tiny fraction of a second that DLSS provides. So long as you have all of the data used to train a neural network on upsampling using Tensor cores and you retain control of the most efficient training methods to achieve the desired result, you can re-create that neural network for ANY purpose-built math accelerator like Intel's XMX cores or whatever comes next with very little fuss (by replacing some server blades and re-training a new one with the same data, basically).
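The train-offline-then-apply-on-the-fly split described above can be sketched in a few lines. This toy stand-in uses nearest-neighbour 2x upsampling as the hidden "ideal" transform and fits a single linear map by gradient descent on the reproduction error; a real DLSS-style network is a deep model trained on real frames on real Tensor hardware, so treat this as the shape of the idea, nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)

# The (hidden) ideal transform: each low-res pixel becomes a 2x2 block.
def nearest_2x(img):
    return np.kron(img, np.ones((2, 2)))

low = rng.random((200, 4, 4))                     # low-res inputs
high = np.stack([nearest_2x(im) for im in low])   # matching hi-res "target images"

X = low.reshape(200, -1)                  # (200, 16)
Y = high.reshape(200, -1)                 # (200, 64)
W = rng.normal(scale=0.1, size=(16, 64))  # the "network": one linear layer

def loss(W):
    return np.mean((X @ W - Y) ** 2)      # how far off the reproductions are

loss_before = loss(W)
for _ in range(500):                      # "now do that ad infinitum"
    grad = 2 * X.T @ (X @ W - Y) / len(X)
    W -= 0.05 * grad
loss_after = loss(W)

# On-the-fly reproduction: apply the trained map to a new low-res image
# it has never seen, with no hi-res reference available at runtime.
new_low = rng.random((4, 4))
upsampled = (new_low.reshape(-1) @ W).reshape(8, 8)
```

The expensive part (the loop) happens once, offline; the cheap part (one matrix multiply) is all that runs per frame, which is the whole reason the pre-built network is non-negotiable.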
This allows Nintendo to solve a (potential) future problem of uniform backwards compatibility even if they change SoC providers, because the training data for any neural network they'll need to use means it can be reconstructed for/added to whatever hardware environment they proceed to.
But yeah, TL;DR: read the patent; it absolutely indicates that Nintendo will not be packaging neural networks into the games themselves but making them accessible as a library that is frequently updated on the device itself.
Here's a playlist with all past directs.
Another thing of note. The "Indie World" Showcases did have the "Nintendo Switch" branding on them as far back as when they started. Directs, however, never did up until the Direct of Early 2023.
What you say has definitely been true of patents that involve trade secrets/user-facing features. This is not such a patent, especially if the patent is not about the detailed upsampling method but the method of deployment on consoles, and could have been published in advance to stop Sony and MS from utilizing future advancements in FSR in a similar fashion.
The whole point of a console is that it's supposed to behave very predictably. This sort of system would violate that predictability in a very big way.
DLSS is a black box, but it's still deterministic. More importantly, whatever it's doing, game developers can look at it in action and decide if it's working the way they want it to before they ship. If the console updates the neural network independently later, and that introduces ghosting or something in their game, now they'd have to figure out and push a new patch to fix something Nintendo broke outside of their control. Nintendo isn't going to be willing or able to actually test neural network updates for a hundred third party games to guard against regressions.
The raw rendering does behave predictably, none of that changes. If you want there to be absolutely no variability and full predictability, however, then you're advocating against DLSS, it's that simple, that cut-and-dry.
Who said anything about independently updating a neural network? I know I didn't and I know the patent didn't.
In the past I mused about an automatic roll-forward option for DLSS versions as an alternative to linking in specific versions with the game. Nobody seemed to agree with the idea, and I think it makes less sense now that the upscaler isn't seeing frequent big improvements like in 2.x, but if something like that existed, it would have to be an opt-in feature. And it would probably see limited use, while the more extreme version outlined in the patent -- rather than just including new 2.x or 3.x versions in firmware updates as I imagined -- would see even less use and probably not be worth it from a maintenance perspective for Nintendo.
That is exactly what you said.
That patent describes neural networks being stored separate from game software (perhaps at the OS level) and passive neural network updates being distributed via install from Game Cards (among other options), so any games calling to the same neural network to use the hardware to upsample can theoretically benefit from iterative improvement to that neural network.
Yeah, I said that Game Cards would be supplied with a neural network update that is performed passively (meaning not a hard firmware update, just updating the library when the Game Card is inserted if the library contained an older version). That's not "just randomly inserting shit into the neural network willy-nilly".
Changing the model makes the output pixels unpredictable. The code would obviously still run, but there's no guarantee that the final picture would still be as desired. "DLSS Super Resolution" isn't even a single model to begin with. Developers are given multiple options to choose from so that they can tune the output for their specific game. Even on PC, the default state of DLSS integrations is that updates are delivered via patches to the game. It's the sort of thing that most developers would probably want to run through QA before changing.
The raw rendering and code base do behave predictably, none of that changes. If you want there to be absolutely no variability and full predictability, however, then you're advocating against DLSS, it's that simple, that cut-and-dry, because even Nvidia's solution (with their broad-use universal neural network) is constantly evolving. But Nintendo would never put forth a neural network update that broke compatibility in the first place, so it's a concern over a hypothetical that they'd never permit to happen.
Technically the PC SDK gained an opt-in "I like to live dangerously" mode recently, but I'd be surprised if that had any significant adoption, and even that won't change the model if you explicitly selected a preset. Perhaps that will persist to the Nintendo version, but I think it will probably be stubbed out, tbh. Either that or some regression will go viral and everyone will subsequently make sure it is firmly disabled.
Can we stop a second to appreciate how much better DLSS is compared to FSR 2 at very small resolutions?
The video below shows DLSS scaling at different quality modes for 1080p output:
- Quality: 1280x720
- Balanced: 1114x626
- Performance: 960x540
- Ultra Performance: 640x360
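Those numbers aren't arbitrary; each quality mode renders at a fixed fraction of the output resolution (2/3, 0.58, 1/2, 1/3), and rounding a 1080p output through those factors reproduces the list above. A quick sketch, with the dict name being my own:

```python
# Scale factors DLSS quality modes apply to the output resolution.
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

def render_resolution(out_w, out_h, mode):
    """Internal render resolution for a given output resolution and mode."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)
```

The same factors apply at any output resolution, which is why "Performance" at 4K renders at 1080p but at a 1080p output drops all the way to 540p.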
But the gap isn't the same. How do I say this? In terms of cutting-edgeness it's similar, but ARM has progressed and caught up to x86 processors, giving Nintendo a huge advantage. The gap between the X1 and a traditional home console in 2017 was much bigger, despite Nintendo opting for the "latest" mobile technology. The same latest mobile tech NOW has a much smaller gap. Also, this time the chip seems much more customized and specifically made for Nintendo than the X1 was.

Totally. But to a lesser extent, so was the TX1. Yes, it had already been released for 2 years, but as far as I know there wasn't anything publicly available that could beat it for the price. Maybe a custom chip would have fared better, but nothing that would make the heavily downgraded ports not heavily downgraded. Most people are just understandably ignorant on tech. There are plenty of people complaining about how the Switch chip is downclocked and hoping it will be "fixed" with the successor. That makes no sense, since any chip clocked for a handheld form factor can always clock higher in a bigger form with better cooling and additional power. I do wonder what a Mariko Switch at the very beginning would have cost.
err...
to be fair, the studio behind this game are crazy, in a good way
That comparison is old. Use something more recent:
Assume that anything you hear pre-hardware will have to be tuned for form factor, heat, and battery before it is in your hands.
As I brought up a bit earlier, ray tracing itself, like traditional rasterization (drawing pixels via triangles), scales with resolution. Just as rasterizing half as many pixels should in theory halve your GPU load, shooting half as many rays (i.e., the same number of rays per pixel but half as many pixels) scales the ray tracing load accordingly, outside of BVH tree building (setting up a tree structure to massively speed up finding which triangles intersect with a given ray in the world) and maaaybe denoising (a part of ray tracing I'm sadly not very familiar with, though I've got to assume it scales with resolution as well). Given BVH tree building presumably is still done on the CPU in practice (in theory it could be done on the GPU, but I don't think that's too common in games), and CPU speeds likely won't change in handheld mode, that shouldn't affect feasibility either. So while it's possible there will be minor performance differences between the two, as long as you're scaling down your resolution to match handheld mode, I can't see why ray tracing wouldn't work almost the same in handheld mode.

I do have to wonder how the hell ray tracing will work on handheld, will it just be absolute wizardry from nvidia or not there?
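The scaling argument above is just arithmetic: with a fixed rays-per-pixel budget, the primary-ray count is pixel count times that budget, so dropping from 1080p to 720p cuts the rays traced to 4/9 (with BVH building and denoising excluded, as noted). A tiny sketch, where the 2 rays/pixel budget is purely illustrative:

```python
def rays_per_frame(width, height, rays_per_pixel):
    """Primary rays traced per frame at a fixed per-pixel ray budget."""
    return width * height * rays_per_pixel

docked = rays_per_frame(1920, 1080, 2)    # hypothetical docked output
handheld = rays_per_frame(1280, 720, 2)   # same rays/pixel, fewer pixels
```

So handheld mode doesn't need any extra wizardry for the tracing itself; the same resolution drop that saves rasterization work saves ray work in the same proportion.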
Yep, I fail to see the point of such a demo if true.

Imran posted this on the VGC/Eurogamer thread on Era:
I’m wondering, if the above is true, what’s the point of the tech demo shown to the developers?
That post seems more like he's trying to temper expectations (not based on any truth whatsoever).
In all fairness, tempering expectations is a healthy thing to do even at the best of times. In this case however, I sincerely doubt that the demo isn't taken from at least a prototype of the system. Hell, it's probably taken from a finalised model of the Sw2.
That's how it worked with DLSS 1, which used neural networks trained for each individual game, and that was the version of DLSS everyone agreed was worse than the current one.
It's frankly not a comment of merit, console games are already "tuned for form factor, heat and [when relevant] battery", even PS5 and XBS when that same demo was built to run on them.
Tempering expectations is not a bad thing, we already have posts stating that PS5 and Series X are completely redundant. However, this one seems a bit pointless.
Granted, this is something we don't really know until we see the device proper (either through a big stealth-drop trailer or Nintendo Switch Presentation 2024), but... c'mon there's so little chance that the real system wouldn't be able to run the Matrix demo if Nintendo wasn't confident in their hardware.
Probe-based RTGI has been touted as the cheapest method of ray traced global illumination. RT is as scalable as you want it to be. It's on phones, it's on hardware older than the Switch, it's literally in games now without acceleration.
I mostly see ray-tracing as a graphical hog but if some dev team actually managed to get it to run well in a multi-player setting in the year of our lord 2023, fair enough.
It's to show that the features of the Matrix demo are running on Drake. If Lumen, Nanite, and VSM were removed, then you're right, there would be no point. That's like taking a Prius and making it as efficient as a Hummer just so you can say, "see, we have a Prius too!" The demo's very existence implies the main UE5 features are there.
"Hey guys, that's what our machine would do if it was something else! Not bad, huh? Obviously, it won't run as well on the new Switch, but wouldn't it be cool?"
The hardware is a year out, they may still tune how Unreal runs (and I don't see them worsening the performance), but can we really not expect the hardware being pretty much final at this point?
Out of the game or not, as long as you still believe that RTax Wave Race is possible on Switch 2, that's all that matters.

look man I'm kind of out of the game at this point lol
I don't fully remember what's expected of sd2
Steam Deck also has much higher power consumption. It's more comparable to docked mode.

Switch 2 will exceed Steam Deck 2 even in 2027/28/29. Switch 2 uses a heavily customised SoC with advanced Nvidia technologies, unlike the non-customised SoC in the current Switch.
Hit the nail on the head right there.
Frankly the most terrifying part of this statement is that console wars between all three are probably going to ramp up due to idiotic "console warriors". Damn...

By this point Xbox/PlayStation/Nintendo will not be redundant any time soon; they all have separate ecosystems with different selling points and exclusives. Nintendo does have some overlap, though, with a lot of people in the US having Switch as their secondary console while having Xbox/PlayStation as their main one. So maybe some people would no longer use a Switch 2 as their secondary console if the upgrades over Switch are big enough.
OT I apologize but real quick…where do you sign up to get these surveys?
It is. IMO, I expect the exact same power supply too.

Hey guys, long-time lurker of this thread, I was wondering if given all this new power the same TDP (10W) as the 2017 Switch is anticipated?
Correctamundo! See, this is why I think 1080p is a very smart option. Upscaled 1080p can provide higher IQ at the same or less power than native 720p, so it just makes sense from a power management angle.

Recently I have been thinking about how DLSS could be a swag battery life saver in handheld mode, especially if the recent morning-bird chirping about a 1080p panel is true.
Games can run at 540p or 720p natively, then, with pure black magic, BAM, upressed to 1080p. Which, according to tests done on DLSS before, would consume noticeably less power than native 1080p.
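A quick back-of-envelope for that battery claim: shading work scales roughly with pixels rendered, so a 540p native render that gets upscaled to 1080p shades only a quarter of the pixels of native 1080p (this ignores the fixed cost of the upscale pass itself, so the real saving is smaller). A sketch of the arithmetic:

```python
def pixel_fraction(render, output):
    """Fraction of the output's pixel count actually shaded natively."""
    rw, rh = render
    ow, oh = output
    return (rw * rh) / (ow * oh)

from_540p = pixel_fraction((960, 540), (1920, 1080))    # "Performance"-style input
from_720p = pixel_fraction((1280, 720), (1920, 1080))   # "Quality"-style input
```

So even the conservative 720p-to-1080p case shades under half the pixels, which is where the power headroom in handheld mode would come from.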
What backs this up is that the Matrix demo was running on hardware with Switch 2 specs, so it's hard, imo at least, not to take it seriously.
As long as they hit stable performance, it can be as heavy as they want.

Idk but even if UE5 was tuned for the portable device, UE5 is still really heavy.
Just my one cent
Usually this is the same as saying it's running on devkits or a non-final commercial version, but without letting them try and play it.
so it doesn't necessarily 100% matter for the retail version?
It's normal and common, with the purpose of avoiding leaks.
Considering Nintendo is on high-alert, I'm not surprised. They're basically trying everything and anything to avoid leaks getting out about the system.
This tells devs that Switch 2 hardware is modern and can run UE5.
Correctamundo!
No, it isn't.

It's "correctamente"
It's a bit of a puzzling statement, because the tuning for form factor etc. might be the main point of focus for developing a Switch. So of course he's right, but if they're releasing next year, that tuning has largely already happened. They're not showing this off to developers without being confident that it's representative.