• Hey everyone, staff have documented a list of banned content and subject matter that we feel are not consistent with site values, and don't make sense to host discussion of on Famiboards. This list (and the relevant reasoning per item) is viewable here.

StarTopic Future Nintendo Hardware & Technology Speculation & Discussion |ST| (Read the staff posts before commenting!)

Nothing new?

Nah, but we are expecting an early 2024 reveal with a September release date.
Just gotta hold out a bit more.
 
Which means, Fami…

Go and play your backlog of games! By the time you’re done, Switch 2 should be ready for launch!!!
I appreciate the vote of confidence, but my backlog is horrendous. It would take me years of only playing my backlog to complete it. A lot of my backlog is comprised of games I missed out on when I was younger and poorer that I picked up before they got even more elusive and expensive. I’m trying to be more responsible by waiting for sales of new games because for a while:

 
Interestingly (unsurprisingly?), MetalFX is just a custom AMD FSR by Apple: FSR 1 for MetalFX Performance, FSR 2 for MetalFX Quality
Which makes me wonder if the supposed "DLSS-like" upscaling solution for PS5 Pro is just a custom FSR meant to run on whatever ML accelerator tech it has.
 
Interestingly (unsurprisingly?), MetalFX is just a custom AMD FSR by Apple: FSR 1 for MetalFX Performance, FSR 2 for MetalFX Quality
Maybe. Notebookcheck's conclusions aren't the best. The presence of AMD's FSR trademarks could mean there's some basis, or it could be some Ship of Theseus shit like with Apple's GPU.
 
Which makes me wonder if the supposed "DLSS-like" upscaling solution for PS5 Pro is just a custom FSR meant to run on whatever ML accelerator tech it has.
Does MetalFX upscaling or FSR even USE machine learning hardware? Other than frame generation using AI in FSR 3, would AI even help? I think Sony will probably have a "proprietary" solution developed alongside AMD, rather than an implementation of FSR 1/2 as we understand it.

With access to both DLSS and FSR, I think the NG Switch will still have the advantage in the upres race. I think people will be shocked by how high the output resolution is kept, since the whole system is designed around upscaling (and indeed, multiple passes of upscaling).

That's not to say it will win the sharpness race; I doubt it beats even the Series S in sharpness. But resolution, I think, will impress more than people expect.

On that matter, the current Switch has a moderately competent 1080p upscaler. The NG Switch will surely have a 4K upscaler for games that don't hit that number, and I think there's reason to believe this scaler will be better than the existing one, or the one in the Shield TV. My two concerns: developers likely won't be able to turn this on or off, and to prevent it from smearing, say, lovely pixel art, they'll need to present their image to the output at "4K" so it doesn't get flubbed.
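To make the pixel-art worry concrete, here's a toy sketch in Python (invented helper names, nothing to do with the actual Switch or Shield scaler) of why integer nearest-neighbour scaling keeps pixel art crisp while a blending scaler smears it:

```python
# Toy illustration only: integer nearest-neighbour vs. a linear
# (blending) upscale. Hard pixel-art edges survive the first and
# get smeared by the second.

def upscale_nearest(img, factor):
    """Each source pixel becomes a factor x factor block, so hard
    pixel-art edges stay hard."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in img
        for _ in range(factor)
    ]

def upscale_linear_1d(row, factor):
    """Linear interpolation along one axis: neighbouring pixels get
    blended together, which is exactly the smearing you don't want."""
    n = len(row)
    out = []
    for x in range(n * factor):
        src = min(x / factor, n - 1)   # position in source coordinates
        i = min(int(src), n - 2)       # left neighbour index
        t = src - i                    # blend weight
        out.append(row[i] * (1 - t) + row[i + 1] * t)
    return out

art = [[0, 255], [255, 0]]  # tiny 2x2 "pixel art" checkerboard
print(upscale_nearest(art, 2))         # crisp blocks, no in-between values
print(upscale_linear_1d([0, 255], 2))  # introduces a blended 127.5
```

Presenting the image at "4K" before the system scaler sees it keeps everything on the nearest-neighbour path above.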
 
Does MetalFX upscaling or FSR even USE machine learning hardware? Other than frame generation using AI in FSR 3, would AI even help? I think Sony will probably have a "proprietary" solution developed alongside AMD, rather than an implementation of FSR 1/2 as we understand it.

With access to both DLSS and FSR, I think the NG Switch will still have the advantage in the upres race. I think people will be shocked by how high the output resolution is kept, since the whole system is designed around upscaling (and indeed, multiple passes of upscaling).

That's not to say it will win the sharpness race; I doubt it beats even the Series S in sharpness. But resolution, I think, will impress more than people expect.

On that matter, the current Switch has a moderately competent 1080p upscaler. The NG Switch will surely have a 4K upscaler for games that don't hit that number, and I think there's reason to believe this scaler will be better than the existing one, or the one in the Shield TV. My two concerns: developers likely won't be able to turn this on or off, and to prevent it from smearing, say, lovely pixel art, they'll need to present their image to the output at "4K" so it doesn't get flubbed.
MetalFX temporal upscaling uses machine learning hardware; FSR doesn't.

ML helps in generating better upscaling algorithms, as we've seen with DLSS and XeSS. Since FSR is open source, it's not out of the question for someone (like Apple, Sony, or Nintendo) to take it and use ML to enhance the quality.
 
ML helps in generating better upscaling algorithms, as we've seen with DLSS and XeSS.
Of course, but that's different from ML hardware being used to RUN upscaling algorithms. Judging from MetalFX, it does seem FSR can be melded to ML hardware for better results. I'd have to assume Sony's implementation will do much the same, though with closer collaboration with AMD.

I wonder how Nintendo's patents on ML applications in game devices affect Sony in this case.
 
I’m confused as to what relevance Zelda remasters and Mario Kart have here. I’m sorry if that’s rude; I just don’t follow what’s going on.
It's part of the Drive the Hardware Thread Off Topic Initiative. Every two or three pages, they hold a meeting at 3 AM.

In all seriousness though, as I understand it, conversation about gimmicks -> dual screen gaming -> Wii U flop era -> Discussion about good Wii U games -> You Are Here. Definitely off topic, but it at least stemmed from somewhere on topic.
 
I appreciate the vote of confidence, but my backlog is horrendous. It would take me years of only playing my backlog to complete it. A lot of my backlog is comprised of games I missed out on when I was younger and poorer that I picked up before they got even more elusive and expensive. I’m trying to be more responsible by waiting for sales of new games because for a while:


You and me both. There are many titles I missed out on, whether because I was poor or just didn’t get around to them. If it's any consolation, your games aren’t going anywhere anytime soon, so if it takes you another 5 years before you actually get to a game, that's alright.

But also remember there will be at least one game where you go, “I will never get to playing this.” And that, too, is also alright. Ask yourself what games are worth your time, and even if you think you’re wasting time playing all these games, just remember:

“Time you enjoy wasting is not wasted time.”
 
Of course, but that's different from ML hardware being used to RUN upscaling algorithms. Judging from MetalFX, it does seem FSR can be melded to ML hardware for better results. I'd have to assume Sony's implementation will do much the same, though with closer collaboration with AMD.

I wonder how Nintendo's patents on ML applications in game devices affect Sony in this case.
I'm not sure what you mean here. DLSS and XeSS are done via matrix math accelerators. There's nothing stopping FSR from using those if it's modified to use an algorithm that comes from ML training rather than hand-tuning.
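To illustrate the matrix-accelerator point with a toy, pure-Python sketch (invented function names, not any real FSR or DLSS code): whether a filter's weights are hand-tuned or come out of ML training, applying it is the same matrix multiply, and that multiply is exactly the workload tensor cores and other matrix engines speed up.

```python
# Sketch only: a small convolution rewritten as one matrix multiply
# (the "im2col" trick). Hand-tuned weights and ML-trained weights go
# through identical math; only where the numbers come from differs.

def matmul(a, b):
    """Naive matrix multiply: the operation matrix accelerators run fast."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def conv1d_as_matmul(signal, kernel):
    """Unroll the signal into overlapping windows, then the whole
    convolution is a single matmul against the kernel."""
    k = len(kernel)
    windows = [signal[i:i + k] for i in range(len(signal) - k + 1)]
    return [row[0] for row in matmul(windows, [[w] for w in kernel])]

# A simple moving-average kernel stands in for "the filter", however
# its weights were obtained.
print(conv1d_as_matmul([1, 2, 3, 4], [0.5, 0.5]))  # [1.5, 2.5, 3.5]
```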
 
I'm not sure what you mean here. DLSS and XeSS are done via matrix math accelerators. There's nothing stopping FSR from using those if it's modified to use an algorithm that comes from ML training rather than hand-tuning.
*waves hand side to side* So, yes, if they make something that's a lot more like DLSS and brand it as a newer version of FSR, then FSR can be done that way.
 
It's neat that frame extrapolation is actually happening this time, after last year's false positive when DLSS 3 first released. I like the footnote in the first page of the paper Dakhil linked:
Specifically note that DLSS 3’s details are not released to the general public. For completeness, we still discuss with DLSS 3 in our work, treating it as spatial supersampling plus optical flow interpolation as generally believed.
And then later on:
Note that NVIDIA DLSS 3 is a combination of super sampling and interpolation since it generates intermediate frames. The interpolation based method will introduce extra latency for the rendering pipeline, so it requires an additional modules NVIDIA Reflex to decrease the latency. However, NVIDIA Reflex decreases the latency by reducing the bottleneck between CPU and GPU, and it doesn’t eliminate the latency from the frame interpolation
It tickles me that the Intel researchers are having the same conversations we are, trying to figure out how DLSS works. And the first author on this paper appears to be a second-year (?) PhD student at UCSB too, which is cool to see.
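To spell out the latency point the paper is making (the numbers below are mine, a simplified back-of-the-envelope, not from the paper): an interpolated frame can't be generated until the next real frame exists, so interpolation adds roughly one rendered frame-time of delay before display, while extrapolation predicts forward from past frames and doesn't have to wait.

```python
# Rough model, my assumption: interpolation buffers one future frame,
# extrapolation needs only past frames. Reflex-style CPU/GPU pipeline
# tweaks are ignored here, matching the paper's point that they don't
# remove the interpolation delay itself.

def added_latency_ms(render_fps, method):
    """Extra display latency introduced by the frame-generation method."""
    frame_time = 1000.0 / render_fps
    if method == "interpolation":
        return frame_time   # must wait for frame N+1 before showing N+0.5
    if method == "extrapolation":
        return 0.0          # generated frame uses only frames already rendered
    raise ValueError(method)

print(added_latency_ms(60, "interpolation"))   # ~16.7 ms extra at 60 fps base
print(added_latency_ms(60, "extrapolation"))   # ~0 ms extra
```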
 
It's part of the Drive the Hardware Thread Off Topic Initiative. Every two or three pages, they hold a meeting at 3 AM.

In all seriousness though, as I understand it, conversation about gimmicks -> dual screen gaming -> Wii U flop era -> Discussion about good Wii U games -> You Are Here. Definitely off topic, but it at least stemmed from somewhere on topic.
….ah… I see.

I like the dual screen gimmick. It was cool. Please return at some point.
 
I hated the dual screen gimmick. Let us never return to it.

I can't be bothered with Nintendo's lack of QC on their dual-screened devices. Having one screen be a shade of pink and the other be green or blue or something, because they couldn't be bothered to calibrate them anywhere close to a 6500K white point, drove me insane.
 
I'm not sure I see that being the goal.

I think all consoles are supplementary to one another depending on what you buy them for but Nintendo consoles will almost always be bought for two major reasons: software and appealing hardware. For me, all consoles are supplementary to my PC, but there are times when one console or platform might be providing me with a much better experience than the others due to a game I discovered or maybe a strong year content-wise.

The bump we're seeing in power is the bump they have to make to keep up with the market. If you release something way too underpowered, then you probably miss out on ports. Release something too powerful at that form factor and you run into price, battery, and cooling issues that would jack up the price even more for them to solve. I think they acknowledge that docked play might've been one of the Switch's weak points compared to other consoles and are improving it.

Typically, owning their consoles just meant that you missed out on some massive 3rd-party releases; it's been that way for a while now. The Wii was the last console to really receive its own ports (some, funnily enough, were better than their PS360 counterparts imo), but those days are over. For the games that didn't bother doing that, you simply missed out on them if you didn't have the other consoles.

I think Nintendo's goal here is to transition as many Switch owners as possible to this new platform, push whatever new features are baked in there along with strong branding, and continue their now-consolidated development on one flagship platform with some continued support for Switch. For the Switch-only user, it will be much more capable and run more games up to standard. As for the PS5, PC, or XSX owner, you now have a better docked mode than before, and some games you might prefer to play portably will run even better.

It's a much tougher sell as the primary home console you hook up to your fancy monitor or TV. It will do that much better than the Switch, but I still think the novelty here is portability, or just the mere option for it. With that said, there is no such thing as too many good ports or games on the Switch; seeing GTA VI or even Madden on there for people who like those games would be awesome. I believe many of the big 3rd-party ports to Switch tend to do very well.
Nintendo very much is keen for their console to have parity with the other consoles with regard to third-party content. Games like GTA, COD, Elden Ring, etc. represent millions in revenue per release for every console manufacturer due to royalty share; basically a goldmine. The fact that Nintendo has been proudly showing that the Switch is a very good platform for third parties, and that 3P software represents 50% of game sales on the platform, says it all.

And the idea of supplemental hardware used to hold true in the past. But not anymore. You're fighting for customers' limited time and money, especially in a challenging macroeconomic scenario. You want customers to be engaged with your platform rather than the others. Hence why all 3 manufacturers always boast MAU numbers. A platform where customers only engage from time to time, and which is supplementary to others, isn't worth investment from 3P. And this goes back to the above argument: for customers to treat your platform as their primary platform of choice, you need content parity with other platforms plus exclusive content to differentiate your platform, be it in the form of hardware or software.

Nintendo wasn't able to do that with the Switch and Tegra X1 due to vast differences in performance compared to the other platforms. But they were able to lay the groundwork and create a solid foundation to build and improve upon. From everything we know about T239, it's a far bigger leap than what was expected, and the closest Nintendo hardware has been to a competitor console since the GameCube. Nintendo will very much try to push for 3P to also support their machine with new AAA releases and have content parity. And we might even be seeing Nintendo's 3P relations group already doing some work to secure content for Switch 2, like with the Nintendo employee in the Baldur's Gate 3 credits.
 
I hope they can bring back dual-screen gaming as an option somehow for specific games. Games like Mario Maker just work so much better with two screens.
 
Although not necessarily related to dual displays, I don't think flexible displays are a viable option for Nintendo in the foreseeable future. As shown with JerryRigEverything's teardown of the OnePlus Open from almost a week ago, flexible displays are still extremely fragile, probably too fragile for Nintendo's comfort.
 
Although not necessarily related to dual displays, I don't think flexible displays are a viable option for Nintendo in the foreseeable future. As shown with JerryRigEverything's teardown of the OnePlus Open from almost a week ago, flexible displays are still extremely fragile, probably too fragile for Nintendo's comfort.

Totally agree. I think it's too much to account for in terms of warranty and RMA. I purchased a Pixel Fold and the screen failed in ~3 months despite babying the device. The failure rate for foldables is just too high in their current state/form. The failure rate likely increases exponentially when you put such devices in the hands of children and individuals who do not treat their gadgets well.
 
I’m still holding out hope and checking back in here and there for a Christmas miracle: a funcle finally releases some details about manufacturing beginning before the year’s end.
 
I'm giving anyone who has multiple monitors AND is throwing shade at dual screens a big side-eye. The DS is a legend (I mean, look at them sales). It should be treated as such.

Imo the killer feature of the DS wasn't the dual screen, it was the touch screen. It was the first touch screen for many, many people.
Now it's something everyone is used to (even small children are surprised when a screen isn't touchable), but it was amazing at the time.
 
Imo the killer feature of the DS wasn't the dual screen, it was the touch screen. It was the first touch screen for many, many people.
Now it's something everyone is used to (even small children are surprised when a screen isn't touchable), but it was amazing at the time.
Yeah, this. The dual-screen thing is a hurdle for development in a closed environment, but touch was a true revolution that stuck around. The PC multi-monitor thing isn't really the same; it's not like I have any games that use both of my screens. In that way, for future consoles it probably makes a lot more sense to try to integrate touch into a home console in the most comfortable way possible (which you can tell they're already trying to work out).
 
Imo the killer feature of the DS wasn't the dual screen, it was the touch screen. It was the first touch screen for many, many people.
Now it's something everyone is used to (even small children are surprised when a screen isn't touchable), but it was amazing at the time.

It had an impact, for sure. I'd argue that any other variation would not have caught on (single touchscreen, dual screens without touch, etc.). One thing I've noticed as a tech investor is that tech aiming to assist multi-tasking in any kind of way can often catch fire sales-wise compared to competitors.
 
Game Freak can barely handle Gen 9 on Switch. What makes you think they will adapt to Nintendo's next hardware this easily? They usually adapt to new consoles later.
Game Freak needs more time. To optimize games AND to repair its completely broken engine. It has nothing to do with the hardware at this stage. Given just one more year, they can do better even on Switch 1. And by do better I simply mean release something that isn't appallingly shameful and disrespectful. It has nothing to do with hardware.
 
Game Freak needs more time. To optimize games AND to repair its completely broken engine. It has nothing to do with the hardware at this stage. Given just one more year, they can do better even on Switch 1. And by do better I simply mean release something that isn't appallingly shameful and disrespectful. It has nothing to do with hardware.
Without having detailed knowledge of the situation, I suspect going Unity or Unreal would be the more viable option.
 
Without having detailed knowledge of the situation, I suspect going Unity or Unreal would be the more viable option.
I agree. However, giving developers time, in general, seems to be the most important thing. The Gen 4 remakes were developed in Unity, and yet they too would have benefited from more development time. Everyone talked about the graphical style, but the most annoying thing, in my opinion, was the games' numerous technical problems, despite the use of a standard, proven engine.
 
the digital foundry interview about Avatar went up and there's a lot of little details, especially about optimizing for consoles

Ray Tracing scalability
Digital Foundry: I've only been playing on PC, but I'm very curious actually how you scale this to get the GI to run well on Xbox Series X, Series S and PlayStation 5, because there are obviously limitations to how much you can push a certain amount of hardware.

Oleksandr Koshlo: It's been challenging, I'd say, but there are a bunch of knobs to crank to scale it across different quality levels and hardware. The number of rays, the resolution of the result, the quality of the denoising, the precision of the results, and the length of rays can all vary. We have certain trade-offs. We can trace this faster if it's less precise, so let's use that one. The tweaks we have range from large things, such as resolution, to very small things.

Nikolay Stefanov: I would also say that besides performance on the GPU, one of the things where we've had to scale has been memory, especially on Series S where there is less memory available than on the other target platforms. So for example, we load the ray tracing world at a short distance, so some of the distant shadows are not going to be as accurate as they are on the other platforms. Some of the geometry that we use for the BVH, for the ray tracing, it is at a lower LOD (level of detail) than it is on other platforms. Things like that.
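As a toy illustration of the knobs Koshlo and Stefanov describe (every number below is invented; this is not Massive's actual configuration), a preset table plus a crude cost proxy shows how ray count and GI resolution multiply together:

```python
# Hypothetical RT quality presets, numbers made up for illustration.
# The knobs mirror the interview: rays per pixel, GI resolution scale,
# max ray length, and denoiser quality.

RT_PRESETS = {
    "series_s": {"rays_per_pixel": 0.5, "gi_res_scale": 0.5,
                 "max_ray_length": 50.0,  "denoise_passes": 1},
    "ps5":      {"rays_per_pixel": 1.0, "gi_res_scale": 0.75,
                 "max_ray_length": 100.0, "denoise_passes": 2},
    "pc_ultra": {"rays_per_pixel": 2.0, "gi_res_scale": 1.0,
                 "max_ray_length": 200.0, "denoise_passes": 3},
}

def relative_ray_cost(preset, width=1920, height=1080):
    """Very rough cost proxy: rays traced per frame. Resolution scale is
    squared because it applies to both axes; ray length and denoising
    cost are ignored for simplicity."""
    p = RT_PRESETS[preset]
    return width * height * p["gi_res_scale"] ** 2 * p["rays_per_pixel"]

# The low-end preset traces a small fraction of the rays the top one does.
print(relative_ray_cost("series_s") / relative_ray_cost("pc_ultra"))
```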

Mesh Shading usage
Digital Foundry: One thing I noticed is that the world density is incredibly high in terms of just how much vegetation there is. Did you leverage any of the newer DX12 features and/or things brought about by RDNA, like primitive shading or mesh shading?

Oleksandr Koshlo: We do ship with mesh shading on consoles. So, there are two things that contribute to the high density of our geometry in the world. One is the GPU geometry pipeline, which is new to Avatar, and it supports our procedural placement pipeline. So this brings a lot of geometry instances and we use the GPU to cull them away, to only render what is on the screen. And then we also chunk geometry into what we call meshlets, and we use native hardware features like primitive shading and mesh shading to then render them on screen. We use an additional culling pass to discard the meshes that aren't on screen. These things really improve performance for geometry rendering.
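The meshlet-culling idea can be sketched in a few lines. This is a toy, not Snowdrop's code: real engines test full frustum planes and cluster cone visibility, while here it's a single camera plane against per-meshlet bounding spheres.

```python
# Toy illustration: discard whole meshlets before any shading happens.
# Each meshlet carries a bounding sphere (center, radius) plus a
# triangle count; a meshlet survives if its sphere is at least partly
# in front of the plane: dot(center, n) + d >= -radius.

def visible_meshlets(meshlets, plane_normal, plane_dist):
    kept = []
    for center, radius, tri_count in meshlets:
        signed = sum(c * n for c, n in zip(center, plane_normal)) + plane_dist
        if signed >= -radius:  # not fully behind the plane
            kept.append((center, radius, tri_count))
    return kept

scene = [
    ((0, 0, 5), 1.0, 124),     # in front of the camera: kept
    ((0, 0, -5), 1.0, 124),    # fully behind: culled, never shaded
    ((0, 0, -0.5), 1.0, 124),  # straddling the plane: kept
]
# Camera looks down +z, so the near plane is z >= 0.
print(len(visible_meshlets(scene, (0, 0, 1), 0.0)))  # 2 survive
```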

CPU utilization
Digital Foundry: You mentioned as part of the benchmark that you are recording data on CPU usage that you are exposing to the users. Can you go into how you're taking advantage of multi-core CPUs and multi-threading in a good way? Because it's something that is still a big, big problem area in PC games.

Nikolay Stefanov: Absolutely, we can definitely go into a little more detail. So with Snowdrop and Avatar, we work with something that's called a task graph. Rather than having a more traditional single gameplay thread, we actually split the work up into individual tasks that have dependencies, and that allows us to use multi-core CPUs in a much more efficient way. Actually, the game doesn't run that well if you don't have many cores.

The way we do it is that we utilise all of the cores except one, which we leave for the operating system. For the rest, we run a bunch of tasks on them, depending on the load. One of the good things about Snowdrop is that it allows us the flexibility to run this kind of stuff and one of the things that we spend a lot of time on is just breaking up dependencies to make sure that for example, the NPCs can update in parallel, that the UI can update in parallel, that the physics can update in parallel as well. So hopefully you'll see good CPU optimisation.
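The task-graph model Stefanov describes can be sketched with the Python standard library (invented task names, a toy illustration, not Snowdrop code): tasks declare their dependencies, and anything whose inputs are ready runs in parallel on a worker pool.

```python
# Toy task graph: NPCs, UI, and physics all depend only on input
# sampling, so they can run in parallel; rendering waits for all three.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run_task_graph(graph, workers):
    """graph maps task -> set of tasks it depends on; returns the
    order tasks completed in."""
    order = []
    ts = TopologicalSorter(graph)
    ts.prepare()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while ts.is_active():
            ready = ts.get_ready()  # every task whose deps are all done
            # Placeholder "work": a real engine would update NPCs/UI/
            # physics here; they execute concurrently on the pool.
            for name in pool.map(lambda t: t, ready):
                order.append(name)
                ts.done(name)
    return order

frame = {
    "input": set(),
    "npcs": {"input"},      # independent of UI and physics...
    "ui": {"input"},
    "physics": {"input"},
    "render": {"npcs", "ui", "physics"},  # ...but render waits for all
}
print(run_task_graph(frame, workers=3))
```

Leaving one core for the OS, as in the interview, just means sizing `workers` to `core_count - 1`.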

There's a lot more, and in more depth, here.



The low number of CPU cores would explain why the Steam Deck has problems. And it also sounds like this game really likes having as much bandwidth as possible, which also explains the other handhelds having some issues. Considering those handhelds are maxing out memory bandwidth, a theoretical Drake version is gonna need more nips and tucks to get running.
 