Alright you Nintendo employees masquerading as ignorant forum users, tell me when this thing is releasing so I can decide on whether or not to keep my Zelda OLED order.
Send the Ninjas.
> This seems extremely computationally expensive currently for something (file size) that isn't a serious concern for a lot of devs right now.
> This only seems viable for God of War because it barely uses any of the PS5's power.
> When you can run these algorithms with like 1% of the power of a console, then yeah, they'll probably be used, but that seems like a PS7 thing.

good news! this is one of the benefits of having tensor cores
> To follow up on this, it's probably worth making it clear that this drop in memory demand and prices didn't start this year, and has been ongoing for around a year. We're just seeing the greatest impact on manufacturers' balance sheets in the first half of this year.
> Back in September, it was reported that mobile RAM prices dropped by 10-15% in Q3 2022, and would drop by 13-18% in Q4. SK Hynix reported an "around 20%" QoQ drop in DRAM ASP in Q3, but didn't comment on ASP at all in Q4.

Is this around the same time Nintendo started saying the hardware shortages were easing up?
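For a rough sense of scale, those quarterly drops compound; a back-of-the-envelope sketch using the endpoints of the reported ranges (my own arithmetic, not a figure from the reports):

```python
# Rough compounding of the reported mobile DRAM price drops:
# Q3 2022: 10-15% drop; Q4 2022: 13-18% drop.
def cumulative_drop(q3, q4):
    """Fraction of the original price shaved off after both quarterly drops."""
    return 1 - (1 - q3) * (1 - q4)

low = cumulative_drop(0.10, 0.13)    # shallow end of both ranges
high = cumulative_drop(0.15, 0.18)   # steep end of both ranges
print(f"cumulative drop: {low:.1%} to {high:.1%}")  # -> cumulative drop: 21.7% to 30.3%
```

So by the end of 2022, mobile RAM would already have shed somewhere around a fifth to a third of its mid-2022 price.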
> RAM capacity can change at a relatively late stage, but not the speed or type of RAM (as those are limited by the memory interface on the SoC). The interface of a memory chip doesn't change with capacity, so they could easily change, say, a pair of 4GB LPDDR5 modules to a pair of 6GB LPDDR5 modules right up to the moment manufacturing starts; it's just a matter of swapping in the higher capacity parts. There have been quite a few instances where memory capacity has changed at a relatively late stage in a console's development; the PS4 went from 4GB to 8GB of RAM quite late on, and the Switch reportedly changed from 3GB to 4GB of RAM.

One thing though... According to Wikipedia, the X1 supports LPDDR3 and LPDDR4, and the X1+ supports LPDDR4 and LPDDR4X. If that's true, then is it possible that Drake supports LPDDR5 and LPDDR5X? Or since Drake is likely solely for Nintendo, does it not make sense to spend extra money on such a memory interface?
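For reference on what LPDDR5X would actually buy: peak theoretical bandwidth scales linearly with transfer rate. A quick sketch (the 128-bit bus is an assumption on my part, chosen because it matches the 102.4GB/s figure floated elsewhere in the thread):

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes).
# LPDDR5 tops out at 6400 MT/s; LPDDR5X extends this to 8533 MT/s.
def peak_bandwidth_gbs(mt_per_s, bus_bits=128):
    return mt_per_s * (bus_bits // 8) / 1000  # GB/s

print(peak_bandwidth_gbs(6400))  # LPDDR5  -> 102.4
print(peak_bandwidth_gbs(8533))  # LPDDR5X -> 136.528
```

So on the same bus width, 5X would be roughly a one-third bandwidth bump, which is why the question of out-of-the-box support keeps coming up.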
> One thing though... According to Wikipedia, the X1 supports LPDDR3 and LPDDR4, and the X1+ supports LPDDR4 and LPDDR4X. If that's true, then is it possible that Drake supports LPDDR5 and LPDDR5X? Or since Drake is likely solely for Nintendo, does it not make sense to spend extra money on such a memory interface?

Depends on what RAM memory controller Nintendo and Nvidia decide to use for Drake before having Drake taped out, I think.
> Depends on what RAM memory controller Nintendo and Nvidia decide to use for Drake before having Drake taped out, I think.

I know it's possible and that it's set in stone if Drake taped out last year, but I would like to have a better idea of how likely or unlikely it is for Drake to be designed with supporting both in mind. For example, to use LPDDR5 at launch and replace it with 5X in a future revision by the time 5X becomes cheaper, without changing the chip, but also in a situation where they weren't sure if 5X would fit in their budget by launch.
> I know it's possible and that it's set in stone if Drake taped out last year, but I would like to have a better idea of how likely or unlikely it is for Drake to be designed with supporting both in mind. For example, to use LPDDR5 at launch and replace it with 5X in a future revision by the time 5X becomes cheaper, without changing the chip, but also in a situation where they weren't sure if 5X would fit in their budget by launch.
> Of course, that depends on how cheap/expensive it is to have this flexibility and how often SoC manufacturers adopt it, to which I have no idea.

I think it's 70/30 in favor of no out-of-the-box LPDDR5X support. There are licensable IPs from Synopsys that were available in late 2021 that supported LPDDR4/5/5X, and memory to test with, but considering we have hints of verification starting in April 2022, that might not be enough time to integrate a new MC with unknown power consumption properties.
> Today I bought a Zoled Switch. So from tomorrow any day can be the day of the announcement of the new console.

Thank you for your sacrifice.
This screenshot is so relaxing for some reason
> Today I bought a Zoled Switch. So from tomorrow any day can be the day of the announcement of the new console.

Don’t tempt me :<
> One thing though... According to Wikipedia, the X1 supports LPDDR3 and LPDDR4, and the X1+ supports LPDDR4 and LPDDR4X. If that's true, then is it possible that Drake supports LPDDR5 and LPDDR5X? Or since Drake is likely solely for Nintendo, does it not make sense to spend extra money on such a memory interface?

If 5X doesn't make it to Switch 2, I think it's likely gonna be a shoo-in for the refresh revision in 2 years, which will likely be on a smaller node as well (such as 3nm). There should be some wiggle room for a newer node.
> If 5X doesn't make it to Switch 2, I think it's likely gonna be a shoo-in for the refresh revision in 2 years, which will likely be on a smaller node as well (such as 3nm). There should be some wiggle room for a newer node.

kopite7kimi mentioned that Nvidia won't use TSMC's 3 nm process node for fabricating Blackwell GPUs. So TSMC's 3 nm process node isn't guaranteed.
> Today I bought a Zoled Switch. So from tomorrow any day can be the day of the announcement of the new console.

o7
> Yeah, AI upscaling of textures takes a lot of time in general so I'm doubtful 64 tensor cores will be able to do it in near real time for the Switch 2.

and SSM did it in real time on a system that lacks dedicated hardware. same for Nvidia doing it for Super Resolution and Frame Generation. even FSR as an example, without using inference. I'm saying we can't make definitive statements like this without an actual demonstration of it, given the theory is fairly sound
> Yeah, AI upscaling of textures takes a lot of time in general so I'm doubtful 64 tensor cores will be able to do it in near real time for the Switch 2.

Especially when presumably the same tensor cores will be busy with DLSS.
> SSM also did it while barely using any of the PS5's power for important things, this was a PS4 game with minimal enhancements.

not sure what this is supposed to prove. PS5 had a lot of power on tap for the game? so does Drake thanks to its tensor cores. how well it works is the question, but the use case and theory is sound
> We are talking about a 70 billion USD deal here. And MS knows that they would burn bridges with Nintendo if anything were traceable back to MS as the source of the leak. Nintendo would likely even cancel the COD deal. It would be much easier to convince them to reveal the REDACTED way too early and pay for any projected loss Nintendo has because of that. I am sure MS can cover the billions more.

Nintendo can survive without COD, unlike Sony, a substantial amount of Sony reno
> the whole point was to prevent people and animals from eating the carts, not encourage it!

you know that chocolate can be fatal for diabetics and animals
on the topic of file sizes, I wonder how far AI texture upscaling can be pushed. on B3D, a user took a look at Naughty Dog's games through Nvidia Inspector and showed that a great many textures are actually 512x512 and 1024x1024, with some being 2048x2048. just how much can those 512 and 1024 textures be pushed to mimic a higher quality?
Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]
forum.beyond3d.com
in addition, Sony Santa Monica has had a presentation on realtime upscaling that I haven't watched yet. don't even know if it went up
EDIT: the slides are up, going through them now
Machine Learning Summit: Real-Time Neural Texture Upsampling in 'God of War Ragnarok'
This talk gives a detailed breakdown of the run-time neural texture upsampling system in God of War Ragnarök, including how the networks are designed to directly output BC compressed image blocks, how the process is optimized to take full advantage... www.gdcvault.com
> I mean.. does it even need to be real time?
> As long as the AI upscaling is on par with normal high res textures, even just an "install textures for 4K displays" option could bring quite a few benefits to Nintendo (eShop storage), big devs (smaller carts), small devs (4K textures "for free"), handheld/1080p players (no wasted space) and players who want 4K (smaller downloads, shorter loading times, "everything on cart", etc).

my prompt is based on the idea that Nintendo could use AI texture upscaling to decrease storage size while maintaining good texture quality at high resolutions. this is the exact same goal that Sony Santa Monica was aiming to solve. having a high res texture pack available for download defeats the whole idea of needing to upscale when you can just source the master textures.
> I'm saying that the games that could benefit from this additional compression are games that will likely be pushing the Switch 2 to its limits and therefore won't be able to use this additional compression. God of War Ragnarok does nothing to push the PS5 so they can play around with this stuff as the file size is so big, but like... Nintendo isn't going to be that fussed about the next Mario Party being 7 GBs when they could compress it down to 3 GBs (this is a much more extreme example about a potential future algorithm, unlike the one used in God of War that seems to apply mostly to normal maps) by semi-real time upscaling the textures to 4K.

you're making assumptions about future games based on nothing though. they can just as easily use this for Tears of the Kingdom as part of an update. they can even design games around it. any game can benefit from it, not just games that push the limit. from the AAA to the indie title
> having a high res texture pack available for download

Just to be clear, I'm talking about the console itself doing the upscaling, not downloading it.
> Yeah, I don't think this tech (which sounds like a super specialized and hacked up application of a standard Python package) will be generalized in the near future to Unity or Unreal or other commonly used engines so that most devs can use it, lol. This is very demanding and in its infancy for semi-real time work.
> Maybe in a decade, but almost no dev cares about file size right now, it's basically just 3-5 Nintendo studios that are super invested in this.

on Unity and PC, I can see it in the next couple of years. we're already seeing machine learning in those tools, just look at UE5's ML Deformer or Unity's ML trainers. a texture upscaler built in isn't so ludicrous. and it's not demanding at all, again, PS5 is doing this in real time. a quick google search shows TensorFlow being usable in UE and Unity as well. I feel like you're still ignoring this point. as an initial implementation, there will be limits, but they already have ideas for future additions, as shown in their presentation. Microsoft has also touted ML format acceleration for the Series duo, so one of their studios using this isn't off the cards either thanks to DirectML.
> I mean.. does it even need to be real time?
> As long as the AI upscaling is on par with normal high res textures, even just an "install textures for 4K displays" option could bring quite a few benefits to Nintendo (eShop storage), big devs (smaller carts), small devs (4K textures "for free"), handheld/1080p players (no wasted space) and players who want 4K (smaller downloads, shorter loading times, "everything on cart", etc).

At that point, though, why not just "download actual high quality textures for 4K displays" rather than making your system work on a compromise version? Unless a person's still on dialup or whatever.
these aren't "super specialized" anything. there are real examples being used right now. my ultimate point is that Nintendo is in the best position to leverage them (in theory) thanks to their partnership with Nvidia. everyone else has to deal with AMD's half-hearted efforts
> Yeah, I don't see Nintendo utilizing their efforts to compress Zelda and Smash a little more instead of utilizing that power to do something else.
> In 15 years when it's incredibly cheap to do, sure.

it's not compression, it's upscaling textures. they literally have a team working on machine-learned upscaling that they can apply this to. it is cheap to do (in theory) because of the tensor cores
you're still ignoring the actual examples I'm giving you so you can stick your head in the sand
> Looking at it as just file size is missing the forest for the trees.
> There are multiple benefits to this.

Such as?
> First, real time upscaling of textures in these situations is functionally equivalent to lossy compression where the loss is hopefully minimal
> Second, I'm pretty sure the five Nintendo games that struggle with file sizes will use the tensor cores to do something other than reducing file size. Zelda and other EPD games will always need to be fighting against the constraints of mobile hardware and will likely be focused on things other than file size.

file size is already a problem because of the expensive costs of the media the games ship on. there are so few 32GB cards being used because of the price. with Drake targeting higher fidelity, assets will only get larger for higher end games, like the next Zelda. it won't be long until Nintendo would brush up against the limits of a 32GB card just by increasing the textures by one step or two. looking at Pyra's textures from Xenoblade Chronicles 2, she has a couple of 1024 textures already and a total texture size of around 5MB. just moving a step up, we'll be closer to 20MB for one character. it doesn't take long for the idea of real-time upscaling of textures to come into play. I mean, SSM didn't have to worry about this because they have big-ass SSDs they can fill, but they did, with the intent of keeping the file size down.
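The 5MB-to-20MB jump follows from simple texel math: doubling a texture's resolution quadruples its texel count. A quick sketch (the per-texel byte costs are the standard figures for block-compressed formats, not measurements from the game):

```python
# Doubling resolution quadruples texel count, so ~5 MB of character textures
# lands near ~20 MB one resolution step up, as claimed above.
# Common block-compressed formats cost 0.5 B/texel (BC1) or 1 B/texel (BC7).
def bc_size_mb(width, height, bytes_per_texel):
    return width * height * bytes_per_texel / (1024 * 1024)

print(bc_size_mb(1024, 1024, 1.0))  # BC7 1024x1024 -> 1.0 MB
print(bc_size_mb(2048, 2048, 1.0))  # BC7 2048x2048 -> 4.0 MB, 4x per step
```

That 4x-per-step growth is why even one texture bump across a whole cast of characters eats into a 32GB card quickly.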
> Such as?

I thought it would be obvious: such as reducing the memory requirement for the system, or better streaming, where instead of streaming fewer, larger textures at a time from a slower storage medium, you stream more, smaller textures and scale them up.
> Seeing everyone get their new ZOLEDs is causing an intense FOMO that I haven't experienced in a while. I'm just sitting here like, tempted... real tempted...
> -flicks myself to get back to my senses-
> -attempts really hard to justify how I mainly play docked, I can't keep spending money on TOTK stuff-
> -gets allured to the ZOLED yet again, repeating the cycle- (ouroboros pun was not intended)

Just got mine!!!!!!!
> I mean, I think this was just an experiment because they had tons of excess power that they barely used.
> If Nintendo could use little computational power to massively lower the file size of Zelda, of course they would, but I think they might focus on pushing graphical power or scale more, as texture upscaling will be computationally expensive for a while.

I don't think it's gonna be a one-time tool if the benefits are apparent. They did have two studios work on it, after all. They could have it be used in a limited capacity in the future. Given how AI is a growing field of research in game development, it's only natural that companies leverage it to decrease workloads or file sizes when they can.
> Jack Matthews (lead engineer of the MP Trilogy) talks about how Retro used a technique that only came around during the eighth gen called physically based rendering. He says it's a very heavy thing that shouldn't be possible on the Switch, much less at 60fps.
> Can anyone give me some more info about it? It sounds incredible that Retro managed to use it like this.

Physically Based Rendering, or PBR for short, is a technique that uses the physics involved in how light hits an object in the real world to give it a certain look that we associate with realism. So, what we associate with metal and a metallic look is replicated with PBR. The best example of this is simply to look at Magnemite or Magneton in Scarlet and Violet and compare it to the previous generations; you would have an idea of what PBR is.
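As a toy illustration of the metallic/roughness inputs PBR works from (this is my own simplified sketch of the general idea, not Retro's or anyone's actual shader):

```python
import math

# Toy sketch of the metallic-roughness idea behind PBR: lighting driven by
# physically meaningful inputs (albedo, metallic, roughness) instead of
# hand-tuned per-material specular values.

def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation: reflectance rises toward 1 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade(albedo, metallic, roughness, n_dot_l, n_dot_v):
    # Metals reflect with their base color (high F0); dielectrics use ~0.04.
    f0 = 0.04 * (1 - metallic) + albedo * metallic
    # Metals have essentially no diffuse term.
    diffuse = albedo * (1 - metallic) * max(n_dot_l, 0.0) / math.pi
    spec = schlick_fresnel(f0, max(n_dot_v, 0.0)) * max(n_dot_l, 0.0)
    # A crude roughness attenuation stands in for a real microfacet
    # distribution term here.
    spec *= (1 - roughness) ** 2
    return diffuse + spec

# A shiny metal surface (think Magnemite's body) vs. a rough matte one:
print(shade(albedo=0.9, metallic=1.0, roughness=0.2, n_dot_l=0.8, n_dot_v=0.7))
print(shade(albedo=0.9, metallic=0.0, roughness=0.9, n_dot_l=0.8, n_dot_v=0.7))
```

The point is that the "metal look" falls out of the material parameters rather than being painted into the texture, which is why PBR materials hold up under any lighting.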
> As for Nintendo, again, this is the benefit of having tensor cores. Unlike Sony or MS, they aren't sacrificing compute cores to do this, as tensor cores work concurrently with compute cores. It's already been shown to be not that computationally expensive for some systems, so designing a model for Drake in docked mode isn't as crazy as you think it is. This isn't that different than using DLSS to upscale a frame, and they'd be doing this on images smaller than a frame, like 256, 512, or 1024

But aren't those Tensor Cores presumably going to be busy doing DLSS, especially in docked mode?
> I don’t think it’s right to rule out something quite yet, this is all for speculation after all; the tech exists and Sony FP used it, which means it is possible. No one is saying Nintendo is going to adopt it right now, it is still a speculation thread where we can speculate. PS5 lacks ML inference capabilities, everything would need to be done in the shaders. PS5 can top out at 20TF of FP16 that can be used for this
> Drake, let’s give it a theoretical 2TF docked and assume it can only deliver 1TF in portable mode. It can still achieve 16TF (with sparsity, which is a hardware feature on Ampere and later) of FP16 thanks to those tensor cores. Not that far off from the PS5’s peak FP16 capability.
> With that in mind, let’s assume that Drake is 1TF portable, 2TF docked at best. CPU is clocked to no better than 800MHz and it only has 8GB of RAM. Let’s also assume that it has really slow storage speed, storage speed that is actually not at all different from what the switch has right now; let’s put it at 25MB/s for simplicity’s sake.
> A feature like this that can upscale textures in real time would be pretty useful for such a limited device. Sure it has a decompression engine, but we can assume for this example that it operates at only a factor of 2x. So, the storage speed operates better than what it does right now by 2.
> Ok, we have the device in mind, right? It’s an upgrade from the switch, but a far cry from the PS4 Pro, Xbox One X and Series S.
> Of that 8GB of memory, let’s reserve 1-1.5GB for OS related tasks.
> So here is what we have in essence:
> -2TF TV mode/1TF HH mode
> -8 A7x cores clocked to 800MHz
> -8GB of RAM, 6.5-7GB available for game usage
> -Bandwidth of 102.4GB/s at the highest peak
> -64GB of eMMC storage that delivers 25MB/s of read throughput, but an FDE that would bring this to more like 50MB/s.
> -and for good measure, all of this only delivers at best 4.75H of battery life in portable mode and 1.75H at the lowest
> These stats help us with quantifying the purpose or use case that this can have if it was implemented into Nintendo’s own first party tools, and even more for third party tools. The low memory amount and the slow speed of the storage help to allow for lower textures to be scaled in real time to higher quality ones. Not only that, but it can help alleviate a cart issue quite a bit, I think: rather than having to ship with large textures, ship with a step below and allow the system to scale them up.
> So, maybe a game that is, say, 50GB would need a 64GB cart to fit, but maybe with smart utilization they can squeeze it onto a 32GB one and have it all included.
> On top of all that, you can get faster load times for this device, and it can have competitive load times if devs accommodate for the system and its limitations.
> But this would require a lot of work from NV and Nintendo to make it part of its tools for devs to use and make proper use of.
> In a different world, there would be a 4GB, 8GB, 16GB, 20GB, 24GB, 32GB, 40GB, 48GB, 64GB and an elusive but expensive 128GB cart for devs to choose from, with the first 6 being relatively cheap.

I know you’ve chosen conservative specs to illustrate a point, but wouldn’t the 8GB RAM be limiting if using 4K textures in a game?
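For what it's worth, the arithmetic in the quoted hypothetical checks out; a quick sketch (the 4x dense FP16 tensor rate and 2x structured-sparsity factor are standard Ampere figures, everything else is the post's own assumed numbers, not real Drake specs):

```python
# Ampere tensor cores run dense FP16 at ~4x the FP32 shader rate, and
# structured sparsity doubles that again (~8x total), hence 2 TF -> 16 TF.
def tensor_fp16_tflops(fp32_tflops, sparsity=True):
    return fp32_tflops * 4 * (2 if sparsity else 1)

print(tensor_fp16_tflops(2.0))  # docked:   16.0 TF, per the quoted post
print(tensor_fp16_tflops(1.0))  # portable:  8.0 TF

# Storage side of the same hypothetical: a 2x decompression factor turns
# 25 MB/s of raw eMMC throughput into 50 MB/s effective, and a ~2x asset
# shrink is what would let a 50 GB game target a 32 GB cart.
raw_mb_s, decomp_factor = 25, 2
print(raw_mb_s * decomp_factor)  # -> 50 MB/s effective
print(50 / decomp_factor <= 32)  # -> True: 25 GB fits on a 32 GB cart
```

The FP16 figure is a theoretical peak; real texture-upsampling workloads would share those tensor cores with DLSS, which is exactly the contention raised above.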