> No, have a nice weekend

Tell him to do the math
> Oh, there's a little of exaggeration in a completely harmless post about how much I want Switch 2 to be announced? That surely means I have no life outside of it and I don't play videogames. Very smart on your part.
>
> Really, why this rudeness suddenly?

I think you are reading way too much into the post. I can't speak for the poster's intent, but to me it never suggested you have no life; it was a playful suggestion to play some video games.
> No, have a nice weekend

Have a 16 GB weekend
> I think you are reading way too much into the post. I can't speak for the poster's intent, but to me it never suggested you have no life; it was a playful suggestion to play some video games.

I don't want to go so off-topic and derail the thread, so this is the last thing I'm going to say about this topic.
> It may be the case that building a stepwise rendering pipeline consisting of these three components (base image rendering, DLSS upscaling, higher-res post-processing, where step 1 of frame 2 can be launched while steps 2 and 3 of frame 1 have not yet finished) requires changes to the game engines that so far have not been identified as worthwhile investments, considering the games run well without this significant optimisation.

To my knowledge, most game engines on PC don't do even the non-DLSS equivalent of this, which is rendering in parallel to game logic. Unsurprising, as there are latency costs, and it makes implementing a functional frame limiter really hard.
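To make the trade-off concrete, here's a toy model of that three-stage pipeline (the stage costs in ms are invented for illustration, not measured from any game): overlapping stages improves throughput, but each frame still pays the full sum of the stages as latency, which is the latency cost mentioned above.

```python
# Toy model of a 3-stage frame pipeline: base render -> DLSS upscale ->
# post-process. Stage costs in ms are made up for illustration.
stages = {"base_render": 8.0, "dlss_upscale": 2.0, "post_process": 3.0}

# Serial: a new frame starts only after the previous one fully finishes.
serial_frame_time = sum(stages.values())            # 13.0 ms per frame

# Pipelined: frame N+1 enters stage 1 while frame N is in stages 2/3,
# so throughput is limited only by the slowest stage...
pipelined_frame_time = max(stages.values())         # 8.0 ms per frame

# ...but each individual frame still takes the full 13.0 ms from start
# to display, which is why input latency doesn't improve.
pipelined_latency = sum(stages.values())

print(serial_frame_time, pipelined_frame_time, pipelined_latency)
```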
> Interesting, thanks for mentioning this.

Moving post-processing after DLSS, and doing it at the output resolution, is one of the requirements of a "good" DLSS implementation from Nvidia's point of view. Post-processing can remove data that DLSS wants for a good upscale, especially something like motion blur or depth-of-field. Also, texture resolution/mip-maps should be set as if the game were running at the output resolution. None of these things are required, but if Nvidia is featuring the implementation, you can be sure they're doing it, and all those add costs.
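For a rough sense of why running post-processing at output resolution adds cost (the resolutions here are just an example, not confirmed Switch 2 figures): a per-pixel pass scales roughly with pixel count, so moving it after the upscale multiplies its cost accordingly.

```python
# Example resolutions: internal render target vs. DLSS output target.
render_w, render_h = 1280, 720
output_w, output_h = 3840, 2160

# A per-pixel post-processing pass costs roughly in proportion to the
# number of pixels it touches, so doing it after the upscale instead of
# before multiplies its cost by the resolution ratio.
cost_ratio = (output_w * output_h) / (render_w * render_h)
print(cost_ratio)  # 9.0
```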
> I don't want to go so off-topic and derail the thread, so this is the last thing I'm going to say about this topic.
>
> A joke is all about context. We've made, collectively, lots of jokes about how people in this forum love to talk about videogames but not play them, and stuff like that. That's fine. Going up to a stranger and telling him to go touch some grass is not a thing I can interpret as a friendly joke. Maybe I'm weird, but I think that is a basic social skill.

Didn't see the grass comment, with you on that one. That was a separate poster.
> 16GB? 12GB? 8?

2
Nah. 6GB all the way
2x16GB
> To my knowledge, most game engines on PC don't do even the non-DLSS equivalent of this, which is rendering in parallel to game logic. Unsurprising, as there are latency costs, and it makes implementing a functional frame limiter really hard.
>
> Nintendo's high-level graphics libraries have this built in, and can hide it from the engine developer. Nintendo recommends rendering in parallel with CPU as the default, but encourages you to turn that off if you're building something like a fighting game or a rhythm game, as the latency cost is pretty detectable.

Is that why I notice horrendous analog stick input lag on the Switch, especially in 30 FPS first-person shooters and especially when using it in handheld mode?
When is Nate's podcast dropping again?
> Moving post-processing after DLSS, and doing it at the output resolution, is one of the requirements of a "good" DLSS implementation from Nvidia's point of view. Post-processing can remove data that DLSS wants for a good upscale, especially something like motion blur or depth-of-field. Also, texture resolution/mip-maps should be set as if the game were running at the output resolution. None of these things are required, but if Nvidia is featuring the implementation, you can be sure they're doing it, and all those add costs.

Great post as usual! You are great at explaining things, didn't think of it that way before.
I would say that it's worth noting that if these things are required for a "good" DLSS implementation, then DF may not have found exactly what the chip is and is not capable of, but it was a decent insight into what we're likely to get from high-quality games.
> Have a 16 GB weekend

I laughed. Sounds extremely weird in French, as bite means ...
> Originally I was gonna ask if 12GB was gonna be enough to last the entire gen, but I realize I don't really know what the next big push in video game tech (like the next 10 years) is gonna be, so what do you all think it's gonna be? Just more refinement on current RT techniques?

I'll bite.
> Oh, there's a little of exaggeration in a completely harmless post about how much I want Switch 2 to be announced? That surely means I have no life outside of it and I don't play videogames. Very smart on your part.
>
> Really, why this rudeness suddenly?

Don't worry, we're good. It was my turn to have a go at the meme this time, I am happy I could finally contribute.
You just up and decided to fill this place with my favorite posts for some reason
> late January...but temper your expectations

you're saying we should curb our enthusiasm?
> Oh, there's a little of exaggeration in a completely harmless post about how much I want Switch 2 to be announced? That surely means I have no life outside of it and I don't play videogames. Very smart on your part.
>
> Really, why this rudeness suddenly?

I apologize for the dumb post! It was meant in jest.
> Great post as usual! You are great at explaining things, didn't think of it that way before.

Requirements might have been a strong word. It's what Nvidia recommends as best practice in their programming guide, but in some developer talks they've mentioned that, even when Nvidia is working with a developer directly, sometimes their engine (or their deadline) makes getting "good enough" a priority over "best possible." They've specifically mentioned that there are games they've worked on that need to do DLSS after post-processing, and it still works.
I'm curious if you think Nvidia/Ninty's requirements for a "good" DLSS implementation will be different than on PC, where frametime cost is a much smaller concern.
> Do you think devs will target 40fps on Switch 2 for games that target 60fps on Series S/X/PS5 for their performance mode, or just stick to 30fps? I think 40/45 fps being more common on Switch 2 depends on whether the portable screen can support it or not.

Given that it's probably not a 120 Hz screen, they'd probably target 30 fps as usual.
> 2x16GB

no… only 2
> Do you think devs will target 40fps on Switch 2 for games that target 60fps on Series S/X/PS5 for their performance mode, or just stick to 30fps? I think 40/45 fps being more common on Switch 2 depends on whether the portable screen can support it or not.

I think Nintendo values parity and wants games to run at the same frame rate in both modes (yeah, I know Bowser's Fury, that was the exception). The majority of TVs are 60 Hz and only support 30 and 60 fps. So those will remain the standard framerate targets.
> you're saying we should curb our enthusiasm?

at this point it's not like we're going to wait much longer. The thing is coming out this year and due for an announcement either this fiscal quarter or next.
> Do you think devs will target 40fps on Switch 2 for games that target 60fps on Series S/X/PS5 for their performance mode, or just stick to 30fps? I think 40/45 fps being more common on Switch 2 depends on whether the portable screen can support it or not.

you need a VRR screen for that, which not many people have. and Nintendo most likely won't put one into their screen, especially if they're going LCD for the sake of costs
> at this point it's not like we're going to wait much longer. The thing is coming out this year and due for an announcement either this fiscal quarter or next.

I think we're probably 2.5 months maximum away from the reveal of this thing. We're getting to the doorstep. We'll get our Direct early February, and after that the next thing is the announcement of Switch 2, is how I see it.
It's not like we need to temper expectations either. The specs of this thing are pretty much well-known bar the CPU clocks, so we can deduce easily enough how it'll perform and how powerful it is in general.
The current question is more about if it'll have a new gimmick, however small it may be.
> you need a VRR screen for that, which not many people have. and Nintendo most likely won't put one into their screen, especially if they're going LCD for the sake of costs

Even the LCD SD supports 40 Hz. I don't think the cost would be prohibitive if Nintendo really wanted to have the feature.
> Out of curiosity, is it likely future DLSS versions will require beefier tensor cores? Or could they be optimized for previous hardware with existing tensor cores?

Depends on what the versions add. Different parts of DLSS 3.2 only work with new hardware, while optimizations and some new features still work with older hardware. It really depends on what Nvidia is trying to accomplish and if that needs more hardware.
> I think we're probably 2.5 months maximum away from the reveal of this thing. We're getting to the doorstep. We'll get our Direct early February, and after that the next thing is the announcement of Switch 2, is how I see it.

I think we would be hearing a lot more by now, right? If Nintendo doesn't do a Super Bowl ad for the Switch 2, then they won't rush.
> Is that why I notice horrendous analog stick input lag on the Switch, especially in 30 FPS first-person shooters and especially when using it in handheld mode?

It's gonna vary game to game, but if this is the issue it'll affect all inputs, not just the stick. So if you're seeing similar lag with button presses, very possibly.
I can see Nintendo eventually evolving the Switch to do something like the Ayaneo Slide, but with a second screen instead of a keyboard.
Maybe by Switch 3 they can get everything refined enough to be able to dock, with detachable controllers and great performance...
> I think we would be hearing a lot more by now, right? If Nintendo doesn't do a Super Bowl ad for the Switch 2, then they won't rush.

they would need to reveal it by mid-February for that to happen. I don't think next week is plausible for it, since Peach is most likely gonna get dived into in this one (there was a Nintendo Live Tokyo featuring it that was cancelled around then, so the marketing push will probably happen pretty soon)
I'll bite.
The big one is obviously neural rendering, which we're going to get from both sides. On the right we're going to get a continuation of what DLSS does, where machine learning is used to quickly guess what would have been rendered if you'd thrown a lot of GPU power at the problem. Even if DLSS upscaling, frame generation, and ray reconstruction are all very different technologies, they all fall under that umbrella.
@Thraktor and I have tried to figure out what's next for DLSS itself, in version 4. I see two strong possibilities. The first is just to uplift existing post processing effects into DLSS. DLSS motion blur, DLSS depth-of-field, DLSS film grain. One of DLSS's hangups is that good implementations need to do these things after DLSS runs, at a higher resolution. To me, these things seem like prime candidates for an AI to handle anyway, and putting them into the DLSS core would offer performance advantages and developer simplicity.
The second is for AI to pick up more of the ray tracing pipeline. Right now DLSS handles denoising, which was the obvious one, and even some of us amateurs called it as the next step well before it was announced. That's still in the maturation phase, but last year Nvidia released a paper on neural BRDFs. I don't think the solution presented there is viable as-is, but moving the BRDF into the tensor cores opens up some interesting possibilities.
This isn't so much a "refinement" of current RT techniques as it is totally new techniques designed to get the results that older techniques get on much more powerful hardware.
On the left side, though, neural rendering can also mean copyright erasing generative AI. And I'm sure we're going to see plenty of that, but it's going to come on the tools side, not in real time (I think).
Past rendering, there is interesting work in neural physics, neural animation, neural shading as a more generalized tool.
Past AI, I think game engines need to pay down their technical debt on threading (lots of work), and provide a model to developers that lets them scale across threads without pushing the entire cognitive load of threading onto game devs' heads.
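One common shape for that kind of model is a job system: gameplay code submits independent work items and an engine-owned pool decides how they map to threads. A minimal sketch of the idea (the names here are mine, not from any particular engine):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-entity update: a pure function of its input, so the
# scheduler is free to run these on any thread, in any order.
def update_entity(entity_id: int) -> int:
    return entity_id * 2  # stand-in for real game logic

# The "engine" owns the pool; the game dev submits jobs and never
# touches threads, locks, or scheduling directly.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(update_entity, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The key design point is that the jobs are independent, so the cognitive load of threading (ordering, data races) stays inside the scheduler rather than in gameplay code.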
We're going to need to start scaling more smartly around memory, because it just isn't growing like it used to. That will require new data formats (oh hey, neural compression) and new approaches to geometry. But the upside of that, potentially: the end of pop-in, the end of muddy close-up textures, the end of obvious tessellation far away.
> Nintendo kinda did something similar in the past: they released the Game Boy Advance, then a couple years after it released the DS.

They released the DS because Sony released the PSP. Don't forget that Nintendo had lost the console market to Sony and was afraid of losing the portable market too. The way they found to fight back was creating a "third pillar".
> I think we would be hearing a lot more by now, right? If Nintendo doesn't do a Super Bowl ad for the Switch 2, then they won't rush.

2 months away is enough time to not hear anything until February
Regarding joy-con comfort, the Steam Deck has spoiled me immensely in terms of ergonomics.
I've tested perhaps too many grip and controller alternatives for the Switch, including the Hori split pad pro / fit and the Satisfye grip.
Funnily enough this has been the most comfortable to hold, the smallest option:
It's all because of the butt. My hand wants something to wrap around. I don't think they need to make the console significantly 'wider' than the original Switch for more ergonomic controls, but adding enough depth would help.
I was prompted by the AYANEO Lite here ("the tasteful thickness of it...").
I know some don't like how the buttons and sticks are oriented vertically on the existing joy-con for comfort but I am skeptical they would move away from that positioning. We already expect the console to be larger and I'm not sure they'd want to make it wider like the Deck.
> you need a VRR screen for that, which not many people have. and Nintendo most likely won't put one into their screen, especially if they're going LCD for the sake of costs

Sorry to post at you twice, but is that really true, though? VRR is a screen that syncs output with the source. A panel with multiple fixed Hz modes, which I assume is what the SD has, is not VRR?
> To my knowledge, most game engines on PC don't do even the non-DLSS equivalent of this, which is rendering in parallel to game logic. Unsurprising, as there are latency costs, and it makes implementing a functional frame limiter really hard.

I could be completely wrong since you have much more experience than me in this regard. But I have the real impression that most games today already run with the CPU and GPU working in parallel on different frames.
Out of curiosity, is it likely future DLSS versions will require beefier tensor cores? Or could they be optimized for previous hardware with existing tensor cores?
> Depends on what the versions add. Different parts of DLSS 3.2 only work with new hardware, while optimizations and some new features still work with older hardware. It really depends on what Nvidia is trying to accomplish and if that needs more hardware.

I wrote a long technical response here, but I don't think the technology actually matters that much.
> Sorry to post at you twice, but is that really true, though? VRR is a screen that syncs output with the source. A panel with multiple fixed Hz modes, which I assume is what the SD has, is not VRR?

Right, none of the Deck screens support VRR, but they have fixed Hz values they can switch to, including 40, 60, 90 and some in between, which can reduce input lag in low-frame-rate, high-refresh-rate pairings like 45 fps @ 90 Hz.

My old 1080p TV supports a 24 Hz mode for Blu-rays. It certainly isn't VRR.
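The arithmetic behind those fixed-Hz modes: a frame rate paces evenly only when the refresh rate is an integer multiple of it, which is why 40 fps wants 120 Hz (or the Deck's 40 Hz mode) rather than a 60 Hz panel.

```python
def refreshes_per_frame(fps: float, hz: float) -> float:
    """How many panel refreshes each game frame is held for."""
    return hz / fps

print(refreshes_per_frame(40, 120))  # 3.0 -> every frame held 3 refreshes: even pacing
print(refreshes_per_frame(45, 90))   # 2.0 -> even pacing
print(refreshes_per_frame(40, 60))   # 1.5 -> frames alternate between 1 and 2 refreshes: judder
```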
> I could be completely wrong since you have much more experience than me in this regard. But I have the real impression that most games today already run with the CPU and GPU working in parallel on different frames.

I may be incorrect on the architecture, but in terms of load, if you benchmark a game you can see alternating periods of CPU load and GPU load, so at the very least there is inefficient use of these resources. Which is understandable; parallelism is hard.
> That's why if a CPU today runs a game at a maximum of 40 FPS, adding a better GPU won't improve performance.

I suspect this is just because of the CPU not being able to keep the GPU fed. You can be CPU-bound regardless of how threaded your implementation is.
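With toy numbers (invented for illustration), both points fall out of the same model: if the CPU and GPU work serially within a frame, each idles while the other works, and even with perfect overlap the slower side caps the frame rate.

```python
cpu_ms, gpu_ms = 25.0, 11.1   # made-up per-frame costs: a CPU-bound case

# Serial: the GPU idles while the CPU works and vice versa, which is the
# alternating load pattern you see in benchmarks.
serial_fps = 1000.0 / (cpu_ms + gpu_ms)       # ~27.7 fps

# Perfectly overlapped: limited only by the slower of the two.
pipelined_fps = 1000.0 / max(cpu_ms, gpu_ms)  # 40.0 fps

# A faster GPU only lowers gpu_ms; as long as cpu_ms is the max, the
# result stays 40 fps, which is why a better GPU doesn't help here.
print(round(serial_fps, 1), pipelined_fps)
```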